DPDK patches and discussions
* [v1 00/43] DPAA2 specific patches
@ 2024-09-13  5:59 vanshika.shukla
  2024-09-13  5:59 ` [v1 01/43] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
                   ` (43 more replies)
  0 siblings, 44 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This series includes:
-> Fixes and enhancements for NXP DPAA2 drivers
-> Upgrade with MC version 10.37
-> Enhancements in DPDMUX code
-> Fixes for reported Coverity issues

Apeksha Gupta (2):
  net/dpaa2: add proper MTU debugging print
  net/dpaa2: store drop priority in mbuf

Brick Yang (1):
  net/dpaa2: update DPNI link status method

Gagandeep Singh (3):
  bus/fslmc: upgrade with MC version 10.37
  net/dpaa2: fix memory corruption in TM
  net/dpaa2: support software taildrop

Hemant Agrawal (2):
  net/dpaa2: add support to dump dpdmux counters
  bus/fslmc: change dpcon close as internal symbol

Jun Yang (23):
  net/dpaa2: enhance Tx scatter-gather mempool
  net/dpaa2: add new PMD API to check dpaa platform version
  bus/fslmc: improve BMAN buffer acquire
  bus/fslmc: get MC VFIO group FD directly
  bus/fslmc: enhance MC VFIO multiprocess support
  bus/fslmc: dynamic IOVA mode configuration
  bus/fslmc: remove VFIO IRQ mapping
  bus/fslmc: create dpaa2 device with its object
  bus/fslmc: introduce VFIO DMA mapping API for fslmc
  net/dpaa2: flow API refactor
  net/dpaa2: dump Rx parser result
  net/dpaa2: enhancement of raw flow extract
  net/dpaa2: frame attribute flags parser
  net/dpaa2: add VXLAN distribution support
  net/dpaa2: protocol inside tunnel distribution
  net/dpaa2: eCPRI support by parser result
  net/dpaa2: add GTP flow support
  net/dpaa2: check if Soft parser is loaded
  net/dpaa2: soft parser flow verification
  net/dpaa2: add flow support for IPsec AH and ESP
  net/dpaa2: check IOVA before sending MC command
  net/dpaa2: add API to get endpoint name
  net/dpaa2: dpdmux single flow/multiple rules support

Rohit Raj (7):
  bus/fslmc: add close API to close DPAA2 device
  net/dpaa2: support link state for eth interfaces
  bus/fslmc: free VFIO group FD in case of add group failure
  bus/fslmc: fix coverity issue
  bus/fslmc: fix invalid error FD code
  bus/fslmc: change qbman eq desc from d to desc
  net/dpaa2: change miss flow ID macro name

Sachin Saxena (1):
  net/dpaa2: improve DPDMUX error behavior settings

Vanshika Shukla (4):
  net/dpaa2: support PTP packet one-step timestamp
  net/dpaa2: dpdmux: add support for CVLAN
  net/dpaa2: support VLAN traffic splitting
  net/dpaa2: add support for C-VLAN and MAC

 doc/guides/platform/dpaa2.rst                 |    4 +-
 drivers/bus/fslmc/bus_fslmc_driver.h          |   72 +-
 drivers/bus/fslmc/fslmc_bus.c                 |   62 +-
 drivers/bus/fslmc/fslmc_logs.h                |    5 +-
 drivers/bus/fslmc/fslmc_vfio.c                | 1627 +++-
 drivers/bus/fslmc/fslmc_vfio.h                |   39 +-
 drivers/bus/fslmc/mc/dpio.c                   |   94 +-
 drivers/bus/fslmc/mc/fsl_dpcon.h              |    6 +-
 drivers/bus/fslmc/mc/fsl_dpio.h               |   21 +-
 drivers/bus/fslmc/mc/fsl_dpio_cmd.h           |   13 +-
 drivers/bus/fslmc/mc/fsl_dpmng.h              |    4 +-
 drivers/bus/fslmc/mc/fsl_dprc_cmd.h           |    8 +-
 drivers/bus/fslmc/meson.build                 |    3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |   38 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |   38 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |   50 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h      |    3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dprc.c      |    8 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |  114 +-
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |   12 +-
 drivers/bus/fslmc/qbman/qbman_debug.c         |   49 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        |   30 +-
 drivers/bus/fslmc/version.map                 |   16 +-
 drivers/crypto/dpaa2_sec/mc/dpseci.c          |   91 +-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h      |   47 +-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h  |   19 +-
 drivers/dma/dpaa2/dpaa2_qdma.c                |    1 +
 drivers/event/dpaa2/dpaa2_hw_dpcon.c          |   38 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |    2 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |   63 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  597 +-
 drivers/net/dpaa2/dpaa2_ethdev.h              |  225 +-
 drivers/net/dpaa2/dpaa2_flow.c                | 7062 ++++++++++-------
 drivers/net/dpaa2/dpaa2_mux.c                 |  543 +-
 drivers/net/dpaa2/dpaa2_parse_dump.h          |  250 +
 drivers/net/dpaa2/dpaa2_ptp.c                 |    8 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |   32 +-
 drivers/net/dpaa2/dpaa2_sparser.c             |   27 +-
 drivers/net/dpaa2/dpaa2_tm.c                  |   72 +-
 drivers/net/dpaa2/mc/dpdmux.c                 |  205 +-
 drivers/net/dpaa2/mc/dpkg.c                   |   12 +-
 drivers/net/dpaa2/mc/dpni.c                   |  383 +-
 drivers/net/dpaa2/mc/fsl_dpdmux.h             |   99 +-
 drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h         |   83 +-
 drivers/net/dpaa2/mc/fsl_dpkg.h               |    7 +-
 drivers/net/dpaa2/mc/fsl_dpni.h               |  176 +-
 drivers/net/dpaa2/mc/fsl_dpni_cmd.h           |  125 +-
 drivers/net/dpaa2/rte_pmd_dpaa2.h             |   51 +-
 drivers/net/dpaa2/version.map                 |    6 +
 49 files changed, 8295 insertions(+), 4245 deletions(-)
 create mode 100644 drivers/net/dpaa2/dpaa2_parse_dump.h

-- 
2.25.1



* [v1 01/43] net/dpaa2: enhance Tx scatter-gather mempool
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 02/43] net/dpaa2: support PTP packet one-step timestamp vanshika.shukla
                   ` (42 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Create the Tx SG pool only in the primary process and look up
this pool in the secondary process.
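
For reference, the create-or-lookup pattern applied here, as a minimal
standalone sketch (the pool name and sizes are illustrative, not the
driver's actual values):

    #include <errno.h>

    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static struct rte_mempool *sg_pool;

    static int
    sg_pool_init(void)
    {
        const char *name = "example_sg_pool";

        if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
            /* The primary process owns and creates the pool. */
            sg_pool = rte_pktmbuf_pool_create(name, 2048, 256, 0,
                RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        } else {
            /* Secondary processes find the shared pool by name. */
            sg_pool = rte_mempool_lookup(name);
        }

        return sg_pool ? 0 : -ENOMEM;
    }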

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 46 +++++++++++++++++++++++---------
 1 file changed, 33 insertions(+), 13 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 449bbda7ca..238533f439 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2867,6 +2867,35 @@ int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+static int dpaa2_tx_sg_pool_init(void)
+{
+	char name[RTE_MEMZONE_NAMESIZE];
+
+	if (dpaa2_tx_sg_pool)
+		return 0;
+
+	sprintf(name, "dpaa2_mbuf_tx_sg_pool");
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		dpaa2_tx_sg_pool = rte_pktmbuf_pool_create(name,
+			DPAA2_POOL_SIZE,
+			DPAA2_POOL_CACHE_SIZE, 0,
+			DPAA2_MAX_SGS * sizeof(struct qbman_sge),
+			rte_socket_id());
+		if (!dpaa2_tx_sg_pool) {
+			DPAA2_PMD_ERR("SG pool creation failed\n");
+			return -ENOMEM;
+		}
+	} else {
+		dpaa2_tx_sg_pool = rte_mempool_lookup(name);
+		if (!dpaa2_tx_sg_pool) {
+			DPAA2_PMD_ERR("SG pool lookup failed\n");
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
 static int
 rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 		struct rte_dpaa2_device *dpaa2_dev)
@@ -2921,19 +2950,10 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 
 	/* Invoke PMD device initialization function */
 	diag = dpaa2_dev_init(eth_dev);
-	if (diag == 0) {
-		if (!dpaa2_tx_sg_pool) {
-			dpaa2_tx_sg_pool =
-				rte_pktmbuf_pool_create("dpaa2_mbuf_tx_sg_pool",
-				DPAA2_POOL_SIZE,
-				DPAA2_POOL_CACHE_SIZE, 0,
-				DPAA2_MAX_SGS * sizeof(struct qbman_sge),
-				rte_socket_id());
-			if (dpaa2_tx_sg_pool == NULL) {
-				DPAA2_PMD_ERR("SG pool creation failed\n");
-				return -ENOMEM;
-			}
-		}
+	if (!diag) {
+		diag = dpaa2_tx_sg_pool_init();
+		if (diag)
+			return diag;
 		rte_eth_dev_probing_finish(eth_dev);
 		dpaa2_valid_dev++;
 		return 0;
-- 
2.25.1



* [v1 02/43] net/dpaa2: support PTP packet one-step timestamp
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
  2024-09-13  5:59 ` [v1 01/43] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 03/43] net/dpaa2: add proper MTU debugging print vanshika.shukla
                   ` (41 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds PTP one-step timestamping support.
The dpni_set_single_step_cfg() MC API is used with the provided
offset to insert the correction time into the frame.
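
A minimal usage sketch of the new experimental API (requires a build
with RTE_LIBRTE_IEEE1588; the helper name and error handling are
illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #include <rte_ether.h>
    #include <rte_pmd_dpaa2.h>

    static int
    enable_one_step_ts(uint16_t port_id)
    {
        /* Ethernet header (14 bytes) plus 8 bytes into the PTP
         * header, i.e. the correctionField of an untagged L2 SYNC
         * packet -- the same default this patch configures.
         */
        uint16_t offset = RTE_ETHER_HDR_LEN + 8;
        int ret;

        ret = rte_pmd_dpaa2_set_one_step_ts(port_id, offset, 0);
        if (ret)
            return ret;

        /* mc_query=true reads the offset back from the MC. */
        ret = rte_pmd_dpaa2_get_one_step_ts(port_id, true);
        if (ret < 0)
            return ret;

        return 0;
    }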

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 61 +++++++++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h  |  3 ++
 drivers/net/dpaa2/rte_pmd_dpaa2.h | 10 +++++
 drivers/net/dpaa2/version.map     |  3 ++
 4 files changed, 77 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 238533f439..596f1b4f61 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -548,6 +548,9 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	int tx_l4_csum_offload = false;
 	int ret, tc_index;
 	uint32_t max_rx_pktlen;
+#if defined(RTE_LIBRTE_IEEE1588)
+	uint16_t ptp_correction_offset;
+#endif
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -632,6 +635,11 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		dpaa2_enable_ts[dev->data->port_id] = true;
 	}
 
+#if defined(RTE_LIBRTE_IEEE1588)
+	/* By default setting ptp correction offset for Ethernet SYNC packets */
+	ptp_correction_offset = RTE_ETHER_HDR_LEN + 8;
+	rte_pmd_dpaa2_set_one_step_ts(dev->data->port_id, ptp_correction_offset, 0);
+#endif
 	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		tx_l3_csum_offload = true;
 
@@ -2867,6 +2875,59 @@ int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+#if defined(RTE_LIBRTE_IEEE1588)
+int
+rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpni = priv->eth_dev->process_private;
+	struct dpni_single_step_cfg ptp_cfg;
+	int err;
+
+	if (!mc_query)
+		return priv->ptp_correction_offset;
+
+	err = dpni_get_single_step_cfg(dpni, CMD_PRI_LOW, priv->token, &ptp_cfg);
+	if (err) {
+		DPAA2_PMD_ERR("Failed to retrieve onestep configuration");
+		return err;
+	}
+
+	if (!ptp_cfg.ptp_onestep_reg_base) {
+		DPAA2_PMD_ERR("1588 onestep reg not available");
+		return -1;
+	}
+
+	priv->ptp_correction_offset = ptp_cfg.offset;
+
+	return priv->ptp_correction_offset;
+}
+
+int
+rte_pmd_dpaa2_set_one_step_ts(uint16_t port_id, uint16_t offset, uint8_t ch_update)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpni = dev->process_private;
+	struct dpni_single_step_cfg cfg;
+	int err;
+
+	cfg.en = 1;
+	cfg.ch_update = ch_update;
+	cfg.offset = offset;
+	cfg.peer_delay = 0;
+
+	err = dpni_set_single_step_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
+	if (err)
+		return err;
+
+	priv->ptp_correction_offset = offset;
+
+	return 0;
+}
+#endif
+
 static int dpaa2_tx_sg_pool_init(void)
 {
 	char name[RTE_MEMZONE_NAMESIZE];
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 9feb631d5f..6625afaba3 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -230,6 +230,9 @@ struct dpaa2_dev_priv {
 	rte_spinlock_t lpbk_qp_lock;
 
 	uint8_t channel_inuse;
+	/* Stores correction offset for one step timestamping */
+	uint16_t ptp_correction_offset;
+
 	LIST_HEAD(, rte_flow) flows; /**< Configured flow rule handles. */
 	LIST_HEAD(nodes, dpaa2_tm_node) nodes;
 	LIST_HEAD(shaper_profiles, dpaa2_tm_shaper_profile) shaper_profiles;
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index a1152eb717..aea9bae905 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -102,4 +102,14 @@ rte_pmd_dpaa2_thread_init(void);
 __rte_experimental
 uint32_t
 rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
+
+#if defined(RTE_LIBRTE_IEEE1588)
+__rte_experimental
+int
+rte_pmd_dpaa2_set_one_step_ts(uint16_t port_id, uint16_t offset, uint8_t ch_update);
+
+__rte_experimental
+int
+rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query);
+#endif
 #endif /* _RTE_PMD_DPAA2_H */
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index ba756d26bd..2d95303e27 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -16,6 +16,9 @@ EXPERIMENTAL {
 	rte_pmd_dpaa2_thread_init;
 	# added in 21.11
 	rte_pmd_dpaa2_get_tlu_hash;
+	# added in 24.11
+	rte_pmd_dpaa2_set_one_step_ts;
+	rte_pmd_dpaa2_get_one_step_ts;
 };
 
 INTERNAL {
-- 
2.25.1



* [v1 03/43] net/dpaa2: add proper MTU debugging print
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
  2024-09-13  5:59 ` [v1 01/43] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
  2024-09-13  5:59 ` [v1 02/43] net/dpaa2: support PTP packet one-step timestamp vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 04/43] net/dpaa2: add support to dump dpdmux counters vanshika.shukla
                   ` (40 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Apeksha Gupta, Jun Yang

From: Apeksha Gupta <apeksha.gupta@nxp.com>

This patch adds a proper debug print to check the max-pkt-len
and configured parameters.

It also stores the MTU.
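
For context, the generic ethdev call whose result the driver now records
(a minimal sketch; the helper name is illustrative):

    #include <rte_ethdev.h>

    static int
    configure_mtu(uint16_t port_id)
    {
        /* With this patch the PMD also stores the value in
         * dev->data->mtu, so later queries reflect it.
         */
        return rte_eth_dev_set_mtu(port_id, RTE_ETHER_MTU);
    }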

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 596f1b4f61..efba9ef286 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -579,9 +579,11 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 			DPAA2_PMD_ERR("Unable to set mtu. check config");
 			return ret;
 		}
-		DPAA2_PMD_INFO("MTU configured for the device: %d",
+		DPAA2_PMD_DEBUG("MTU configured for the device: %d",
 				dev->data->mtu);
 	} else {
+		DPAA2_PMD_ERR("Configured mtu %d and calculated max-pkt-len is %d which should be <= %d",
+			eth_conf->rxmode.mtu, max_rx_pktlen, DPAA2_MAX_RX_PKT_LEN);
 		return -1;
 	}
 
@@ -1534,6 +1536,7 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 		DPAA2_PMD_ERR("Setting the max frame length failed");
 		return -1;
 	}
+	dev->data->mtu = mtu;
 	DPAA2_PMD_INFO("MTU configured for the device: %d", mtu);
 	return 0;
 }
@@ -2836,6 +2839,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		DPAA2_PMD_ERR("Unable to set mtu. check config");
 		goto init_err;
 	}
+	eth_dev->data->mtu = RTE_ETHER_MTU;
 
 	/*TODO To enable soft parser support DPAA2 driver needs to integrate
 	 * with external entity to receive byte code for software sequence
-- 
2.25.1



* [v1 04/43] net/dpaa2: add support to dump dpdmux counters
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (2 preceding siblings ...)
  2024-09-13  5:59 ` [v1 03/43] net/dpaa2: add proper MTU debugging print vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 05/43] bus/fslmc: change dpcon close as internal symbol vanshika.shukla
                   ` (39 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Hemant Agrawal <hemant.agrawal@nxp.com>

This patch adds support to dump dpdmux counters, as they are required
to identify the reasons for packet drops in dpdmux.
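
A minimal usage sketch of the new API (the dpdmux object id and
interface count are illustrative):

    #include <stdio.h>

    #include <rte_pmd_dpaa2.h>

    static void
    dump_mux_stats(void)
    {
        /* Dump counters for interfaces 0 and 1 of dpdmux object 0. */
        rte_pmd_dpaa2_mux_dump_counter(stdout, 0, 2);
    }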

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     | 84 +++++++++++++++++++++++++++++++
 drivers/net/dpaa2/rte_pmd_dpaa2.h | 18 +++++++
 drivers/net/dpaa2/version.map     |  1 +
 3 files changed, 103 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 2ff1a98fda..d682a61e52 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -259,6 +259,90 @@ rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 	return ret;
 }
 
+/* dump the status of the dpaa2_mux counters on the console */
+void
+rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if)
+{
+	struct dpaa2_dpdmux_dev *dpdmux;
+	uint64_t counter;
+	int ret;
+	int if_id;
+
+	/* Find the DPDMUX from dpdmux_id in our list */
+	dpdmux = get_dpdmux_from_id(dpdmux_id);
+	if (!dpdmux) {
+		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
+		return;
+	}
+
+	for (if_id = 0; if_id < num_if; if_id++) {
+		fprintf(f, "dpdmux.%d\n", if_id);
+
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FRAME, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BYTE, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FLTR_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FLTR_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FRAME_DISCARD,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FRAME_DISCARD %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_MCAST_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_MCAST_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_MCAST_BYTE,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_MCAST_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BCAST_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BCAST_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BCAST_BYTES,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BCAST_BYTES %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_FRAME, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_BYTE, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_FRAME_DISCARD,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_FRAME_DISCARD %" PRIu64 "\n",
+				counter);
+	}
+}
+
 static int
 dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 			   struct vfio_device_info *obj_info __rte_unused,
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index aea9bae905..fd9acd841b 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -33,6 +33,24 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 			      struct rte_flow_item *pattern[],
 			      struct rte_flow_action *actions[]);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Dump demultiplex ethernet traffic counters
+ *
+ * @param f
+ *    output stream
+ * @param dpdmux_id
+ *    ID of the DPDMUX MC object.
+ * @param num_if
+ *    number of interface in dpdmux object
+ *
+ */
+__rte_experimental
+void
+rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 2d95303e27..7323fc8869 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	# added in 24.11
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
+	rte_pmd_dpaa2_mux_dump_counter;
 };
 
 INTERNAL {
-- 
2.25.1



* [v1 05/43] bus/fslmc: change dpcon close as internal symbol
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (3 preceding siblings ...)
  2024-09-13  5:59 ` [v1 04/43] net/dpaa2: add support to dump dpdmux counters vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 06/43] bus/fslmc: add close API to close DPAA2 device vanshika.shukla
                   ` (38 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena

From: Hemant Agrawal <hemant.agrawal@nxp.com>

This patch marks the dpcon_close API as an internal symbol and
also adds it to the version map file.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/mc/fsl_dpcon.h | 3 ++-
 drivers/bus/fslmc/version.map    | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index db72477c8a..34b30d15c2 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2021 NXP
  *
  */
 #ifndef __FSL_DPCON_H
@@ -28,6 +28,7 @@ int dpcon_open(struct fsl_mc_io *mc_io,
 	       int dpcon_id,
 	       uint16_t *token);
 
+__rte_internal
 int dpcon_close(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index e19b8d1f6b..01e28c6625 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -36,6 +36,7 @@ INTERNAL {
 	dpci_set_rx_queue;
 	dpcon_get_attributes;
 	dpcon_open;
+	dpcon_close;
 	dpdmai_close;
 	dpdmai_disable;
 	dpdmai_enable;
-- 
2.25.1



* [v1 06/43] bus/fslmc: add close API to close DPAA2 device
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (4 preceding siblings ...)
  2024-09-13  5:59 ` [v1 05/43] bus/fslmc: change dpcon close as internal symbol vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 07/43] net/dpaa2: dpdmux: add support for CVLAN vanshika.shukla
                   ` (37 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Add the rte_fslmc_close API to close all DPAA2 devices when
closing the DPDK application.
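
For context, a sketch of when the new hook runs, assuming the EAL drives
registered bus cleanup callbacks from rte_eal_cleanup() (illustrative
application skeleton):

    #include <rte_eal.h>

    int
    main(int argc, char **argv)
    {
        int ret = rte_eal_init(argc, argv);

        if (ret < 0)
            return -1;

        /* ... device setup and datapath ... */

        /* Bus cleanup (here rte_fslmc_close, which calls
         * fslmc_vfio_close_group) runs from EAL cleanup.
         */
        return rte_eal_cleanup();
    }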

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     |  3 +
 drivers/bus/fslmc/fslmc_bus.c            | 13 ++++
 drivers/bus/fslmc/fslmc_vfio.c           | 87 ++++++++++++++++++++++++
 drivers/bus/fslmc/fslmc_vfio.h           |  3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c | 31 ++++++++-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c | 32 ++++++++-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 34 +++++++++
 drivers/event/dpaa2/dpaa2_hw_dpcon.c     | 32 ++++++++-
 drivers/net/dpaa2/dpaa2_mux.c            | 18 ++++-
 drivers/net/dpaa2/rte_pmd_dpaa2.h        |  5 +-
 10 files changed, 252 insertions(+), 6 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index 7ac5fe6ff1..dc2f395f60 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -98,6 +98,8 @@ typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
 				      struct vfio_device_info *obj_info,
 				      int object_id);
 
+typedef void (*rte_dpaa2_obj_close_t)(int object_id);
+
 /**
  * A structure describing a DPAA2 object.
  */
@@ -106,6 +108,7 @@ struct rte_dpaa2_object {
 	const char *name;                   /**< Name of Object. */
 	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
 	rte_dpaa2_obj_create_t create;
+	rte_dpaa2_obj_close_t close;
 };
 
 /**
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index c155f4a2fd..7baadf99b9 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -384,6 +384,18 @@ rte_fslmc_match(struct rte_dpaa2_driver *dpaa2_drv,
 	return 1;
 }
 
+static int
+rte_fslmc_close(void)
+{
+	int ret = 0;
+
+	ret = fslmc_vfio_close_group();
+	if (ret)
+		DPAA2_BUS_ERR("Unable to close devices %d", ret);
+
+	return 0;
+}
+
 static int
 rte_fslmc_probe(void)
 {
@@ -664,6 +676,7 @@ struct rte_fslmc_bus rte_fslmc_bus = {
 	.bus = {
 		.scan = rte_fslmc_scan,
 		.probe = rte_fslmc_probe,
+		.cleanup = rte_fslmc_close,
 		.parse = rte_fslmc_parse,
 		.find_device = rte_fslmc_find_device,
 		.get_iommu_class = rte_dpaa2_get_iommu_class,
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index e12fd62f34..17163333af 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -702,6 +702,54 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 	return -1;
 }
 
+static void
+fslmc_close_iodevices(struct rte_dpaa2_device *dev)
+{
+	struct rte_dpaa2_object *object = NULL;
+	struct rte_dpaa2_driver *drv;
+	int ret, probe_all;
+
+	switch (dev->dev_type) {
+	case DPAA2_IO:
+	case DPAA2_CON:
+	case DPAA2_CI:
+	case DPAA2_BPOOL:
+	case DPAA2_MUX:
+		TAILQ_FOREACH(object, &dpaa2_obj_list, next) {
+			if (dev->dev_type == object->dev_type)
+				object->close(dev->object_id);
+			else
+				continue;
+		}
+		break;
+	case DPAA2_ETH:
+	case DPAA2_CRYPTO:
+	case DPAA2_QDMA:
+		probe_all = rte_fslmc_bus.bus.conf.scan_mode !=
+			    RTE_BUS_SCAN_ALLOWLIST;
+		TAILQ_FOREACH(drv, &rte_fslmc_bus.driver_list, next) {
+			if (drv->drv_type != dev->dev_type)
+				continue;
+			if (rte_dev_is_probed(&dev->device))
+				continue;
+			if (probe_all ||
+			    (dev->device.devargs &&
+			     dev->device.devargs->policy ==
+			     RTE_DEV_ALLOWED)) {
+				ret = drv->remove(dev);
+				if (ret)
+					DPAA2_BUS_ERR("Unable to remove");
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	DPAA2_BUS_LOG(DEBUG, "Device (%s) Closed",
+		      dev->device.name);
+}
+
 /*
  * fslmc_process_iodevices for processing only IO (ETH, CRYPTO, and possibly
  * EVENT) devices.
@@ -807,6 +855,45 @@ fslmc_process_mcp(struct rte_dpaa2_device *dev)
 	return ret;
 }
 
+int
+fslmc_vfio_close_group(void)
+{
+	struct rte_dpaa2_device *dev, *dev_temp;
+
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+		if (dev->device.devargs &&
+		    dev->device.devargs->policy == RTE_DEV_BLOCKED) {
+			DPAA2_BUS_LOG(DEBUG, "%s Blacklisted, skipping",
+				      dev->device.name);
+			TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+				continue;
+		}
+		switch (dev->dev_type) {
+		case DPAA2_ETH:
+		case DPAA2_CRYPTO:
+		case DPAA2_QDMA:
+		case DPAA2_IO:
+			fslmc_close_iodevices(dev);
+			break;
+		case DPAA2_CON:
+		case DPAA2_CI:
+		case DPAA2_BPOOL:
+		case DPAA2_MUX:
+			if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+				continue;
+
+			fslmc_close_iodevices(dev);
+			break;
+		case DPAA2_DPRTC:
+		default:
+			DPAA2_BUS_DEBUG("Device cannot be closed: Not supported (%s)",
+					dev->device.name);
+		}
+	}
+
+	return 0;
+}
+
 int
 fslmc_vfio_process_group(void)
 {
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 133606a9fd..b6677bdd18 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2019 NXP
+ *   Copyright 2016,2019-2020 NXP
  *
  */
 
@@ -55,6 +55,7 @@ int rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 
 int fslmc_vfio_setup_group(void);
 int fslmc_vfio_process_group(void);
+int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
 int fslmc_get_container_group(int *gropuid);
 int rte_fslmc_vfio_dmamap(void);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index d7f6e45b7d..bc36607e64 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016 NXP
+ *   Copyright 2016,2020 NXP
  *
  */
 
@@ -33,6 +33,19 @@ TAILQ_HEAD(dpbp_dev_list, dpaa2_dpbp_dev);
 static struct dpbp_dev_list dpbp_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpbp_dev_list); /*!< DPBP device list */
 
+static struct dpaa2_dpbp_dev *get_dpbp_from_id(uint32_t dpbp_id)
+{
+	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
+
+	/* Get DPBP dev handle from list using index */
+	TAILQ_FOREACH(dpbp_dev, &dpbp_dev_list, next) {
+		if (dpbp_dev->dpbp_id == dpbp_id)
+			break;
+	}
+
+	return dpbp_dev;
+}
+
 static int
 dpaa2_create_dpbp_device(int vdev_fd __rte_unused,
 			 struct vfio_device_info *obj_info __rte_unused,
@@ -116,9 +129,25 @@ int dpaa2_dpbp_supported(void)
 	return 0;
 }
 
+static void
+dpaa2_close_dpbp_device(int object_id)
+{
+	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
+
+	dpbp_dev = get_dpbp_from_id((uint32_t)object_id);
+
+	if (dpbp_dev) {
+		dpaa2_free_dpbp_dev(dpbp_dev);
+		dpbp_close(&dpbp_dev->dpbp, CMD_PRI_LOW, dpbp_dev->token);
+		TAILQ_REMOVE(&dpbp_dev_list, dpbp_dev, next);
+		rte_free(dpbp_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpbp_obj = {
 	.dev_type = DPAA2_BPOOL,
 	.create = dpaa2_create_dpbp_device,
+	.close = dpaa2_close_dpbp_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpbp, rte_dpaa2_dpbp_obj);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index 07256ed7ec..d7de2bca05 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017 NXP
+ *   Copyright 2017,2020 NXP
  *
  */
 
@@ -30,6 +30,19 @@ TAILQ_HEAD(dpci_dev_list, dpaa2_dpci_dev);
 static struct dpci_dev_list dpci_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpci_dev_list); /*!< DPCI device list */
 
+static struct dpaa2_dpci_dev *get_dpci_from_id(uint32_t dpci_id)
+{
+	struct dpaa2_dpci_dev *dpci_dev = NULL;
+
+	/* Get DPCI dev handle from list using index */
+	TAILQ_FOREACH(dpci_dev, &dpci_dev_list, next) {
+		if (dpci_dev->dpci_id == dpci_id)
+			break;
+	}
+
+	return dpci_dev;
+}
+
 static int
 rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
 			     struct vfio_device_info *obj_info __rte_unused,
@@ -179,9 +192,26 @@ void rte_dpaa2_free_dpci_dev(struct dpaa2_dpci_dev *dpci)
 	}
 }
 
+
+static void
+rte_dpaa2_close_dpci_device(int object_id)
+{
+	struct dpaa2_dpci_dev *dpci_dev = NULL;
+
+	dpci_dev = get_dpci_from_id((uint32_t)object_id);
+
+	if (dpci_dev) {
+		rte_dpaa2_free_dpci_dev(dpci_dev);
+		dpci_close(&dpci_dev->dpci, CMD_PRI_LOW, dpci_dev->token);
+		TAILQ_REMOVE(&dpci_dev_list, dpci_dev, next);
+		rte_free(dpci_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpci_obj = {
 	.dev_type = DPAA2_CI,
 	.create = rte_dpaa2_create_dpci_device,
+	.close = rte_dpaa2_close_dpci_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpci, rte_dpaa2_dpci_obj);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 4aec7b2cd8..8265fee497 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -86,6 +86,19 @@ static int dpaa2_cluster_sz = 2;
  * Cluster 4 (ID = x07) : CPU14, CPU15;
  */
 
+static struct dpaa2_dpio_dev *get_dpio_dev_from_id(int32_t dpio_id)
+{
+	struct dpaa2_dpio_dev *dpio_dev = NULL;
+
+	/* Get DPIO dev handle from list using index */
+	TAILQ_FOREACH(dpio_dev, &dpio_dev_list, next) {
+		if (dpio_dev->hw_id == dpio_id)
+			break;
+	}
+
+	return dpio_dev;
+}
+
 static int
 dpaa2_get_core_id(void)
 {
@@ -358,6 +371,26 @@ static void dpaa2_portal_finish(void *arg)
 	pthread_setspecific(dpaa2_portal_key, NULL);
 }
 
+static void
+dpaa2_close_dpio_device(int object_id)
+{
+	struct dpaa2_dpio_dev *dpio_dev = NULL;
+
+	dpio_dev = get_dpio_dev_from_id((int32_t)object_id);
+
+	if (dpio_dev) {
+		if (dpio_dev->dpio) {
+			dpio_disable(dpio_dev->dpio, CMD_PRI_LOW,
+				     dpio_dev->token);
+			dpio_close(dpio_dev->dpio, CMD_PRI_LOW,
+				   dpio_dev->token);
+			rte_free(dpio_dev->dpio);
+		}
+		TAILQ_REMOVE(&dpio_dev_list, dpio_dev, next);
+		rte_free(dpio_dev);
+	}
+}
+
 static int
 dpaa2_create_dpio_device(int vdev_fd,
 			 struct vfio_device_info *obj_info,
@@ -635,6 +668,7 @@ dpaa2_free_eq_descriptors(void)
 static struct rte_dpaa2_object rte_dpaa2_dpio_obj = {
 	.dev_type = DPAA2_IO,
 	.create = dpaa2_create_dpio_device,
+	.close = dpaa2_close_dpio_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpio, rte_dpaa2_dpio_obj);
diff --git a/drivers/event/dpaa2/dpaa2_hw_dpcon.c b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
index a68d3ac154..64b0136e24 100644
--- a/drivers/event/dpaa2/dpaa2_hw_dpcon.c
+++ b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017 NXP
+ *   Copyright 2017,2020 NXP
  *
  */
 
@@ -30,6 +30,19 @@ TAILQ_HEAD(dpcon_dev_list, dpaa2_dpcon_dev);
 static struct dpcon_dev_list dpcon_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpcon_dev_list); /*!< DPCON device list */
 
+static struct dpaa2_dpcon_dev *get_dpcon_from_id(uint32_t dpcon_id)
+{
+	struct dpaa2_dpcon_dev *dpcon_dev = NULL;
+
+	/* Get DPCONC dev handle from list using index */
+	TAILQ_FOREACH(dpcon_dev, &dpcon_dev_list, next) {
+		if (dpcon_dev->dpcon_id == dpcon_id)
+			break;
+	}
+
+	return dpcon_dev;
+}
+
 static int
 rte_dpaa2_create_dpcon_device(int dev_fd __rte_unused,
 			      struct vfio_device_info *obj_info __rte_unused,
@@ -105,9 +118,26 @@ void rte_dpaa2_free_dpcon_dev(struct dpaa2_dpcon_dev *dpcon)
 	}
 }
 
+
+static void
+rte_dpaa2_close_dpcon_device(int object_id)
+{
+	struct dpaa2_dpcon_dev *dpcon_dev = NULL;
+
+	dpcon_dev = get_dpcon_from_id((uint32_t)object_id);
+
+	if (dpcon_dev) {
+		rte_dpaa2_free_dpcon_dev(dpcon_dev);
+		dpcon_close(&dpcon_dev->dpcon, CMD_PRI_LOW, dpcon_dev->token);
+		TAILQ_REMOVE(&dpcon_dev_list, dpcon_dev, next);
+		rte_free(dpcon_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpcon_obj = {
 	.dev_type = DPAA2_CON,
 	.create = rte_dpaa2_create_dpcon_device,
+	.close = rte_dpaa2_close_dpcon_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpcon, rte_dpaa2_dpcon_obj);
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index d682a61e52..fa3659e452 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -44,7 +44,7 @@ static struct dpaa2_dpdmux_dev *get_dpdmux_from_id(uint32_t dpdmux_id)
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev = NULL;
 
-	/* Get DPBP dev handle from list using index */
+	/* Get DPDMUX dev handle from list using index */
 	TAILQ_FOREACH(dpdmux_dev, &dpdmux_dev_list, next) {
 		if (dpdmux_dev->dpdmux_id == dpdmux_id)
 			break;
@@ -442,9 +442,25 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	return -1;
 }
 
+static void
+dpaa2_close_dpdmux_device(int object_id)
+{
+	struct dpaa2_dpdmux_dev *dpdmux_dev;
+
+	dpdmux_dev = get_dpdmux_from_id((uint32_t)object_id);
+
+	if (dpdmux_dev) {
+		dpdmux_close(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+			     dpdmux_dev->token);
+		TAILQ_REMOVE(&dpdmux_dev_list, dpdmux_dev, next);
+		rte_free(dpdmux_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpdmux_obj = {
 	.dev_type = DPAA2_MUX,
 	.create = dpaa2_create_dpdmux_device,
+	.close = dpaa2_close_dpdmux_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpdmux, rte_dpaa2_dpdmux_obj);
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index fd9acd841b..80e5e3298b 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2024 NXP
  */
 
 #ifndef _RTE_PMD_DPAA2_H
@@ -32,6 +32,9 @@ struct rte_flow *
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 			      struct rte_flow_item *pattern[],
 			      struct rte_flow_action *actions[]);
+int
+rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
+	uint16_t entry_index);
 
 /**
  * @warning
-- 
2.25.1



* [v1 07/43] net/dpaa2: dpdmux: add support for CVLAN
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (5 preceding siblings ...)
  2024-09-13  5:59 ` [v1 06/43] bus/fslmc: add close API to close DPAA2 device vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 08/43] bus/fslmc: upgrade with MC version 10.37 vanshika.shukla
                   ` (36 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for the DPDMUX_METHOD_C_VLAN_MAC method,
which implements DPDMUX demultiplexing based on C-VLAN and MAC address.
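
A minimal usage sketch of the rte_pmd_dpaa2_mux_flow_l2() API added
below (the MAC address, VLAN id and interface ids are illustrative):

    #include <stdint.h>

    #include <rte_pmd_dpaa2.h>

    static int
    steer_cvlan_mac(void)
    {
        /* Steer frames matching this MAC and C-VLAN id 100 to
         * interface 1 of dpdmux object 0.
         */
        uint8_t mac[6] = {0x00, 0x04, 0x9f, 0x01, 0x02, 0x03};

        return rte_pmd_dpaa2_mux_flow_l2(0, mac, 100, 1);
    }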

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     | 59 +++++++++++++++++++++++++------
 drivers/net/dpaa2/mc/fsl_dpdmux.h | 18 +++++++++-
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  3 ++
 3 files changed, 68 insertions(+), 12 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index fa3659e452..53020e9302 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -233,6 +233,35 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	return NULL;
 }
 
+int
+rte_pmd_dpaa2_mux_flow_l2(uint32_t dpdmux_id,
+	uint8_t mac_addr[6], uint16_t vlan_id, int dest_if)
+{
+	struct dpaa2_dpdmux_dev *dpdmux_dev;
+	struct dpdmux_l2_rule rule;
+	int ret, i;
+
+	/* Find the DPDMUX from dpdmux_id in our list */
+	dpdmux_dev = get_dpdmux_from_id(dpdmux_id);
+	if (!dpdmux_dev) {
+		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
+		return -ENODEV;
+	}
+
+	for (i = 0; i < 6; i++)
+		rule.mac_addr[i] = mac_addr[i];
+	rule.vlan_id = vlan_id;
+
+	ret = dpdmux_if_add_l2_rule(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+			dpdmux_dev->token, dest_if, &rule);
+	if (ret) {
+		DPAA2_PMD_ERR("dpdmux_if_add_l2_rule failed:err(%d)", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
 int
 rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 {
@@ -353,6 +382,7 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	int ret;
 	uint16_t maj_ver;
 	uint16_t min_ver;
+	uint8_t skip_reset_flags;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -379,12 +409,18 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 		goto init_err;
 	}
 
-	ret = dpdmux_if_set_default(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-				    dpdmux_dev->token, attr.default_if);
-	if (ret) {
-		DPAA2_PMD_ERR("setting default interface failed in %s",
-			      __func__);
-		goto init_err;
+	if (attr.method != DPDMUX_METHOD_C_VLAN_MAC) {
+		ret = dpdmux_if_set_default(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+				dpdmux_dev->token, attr.default_if);
+		if (ret) {
+			DPAA2_PMD_ERR("setting default interface failed in %s",
+				      __func__);
+			goto init_err;
+		}
+		skip_reset_flags = DPDMUX_SKIP_DEFAULT_INTERFACE
+			| DPDMUX_SKIP_UNICAST_RULES | DPDMUX_SKIP_MULTICAST_RULES;
+	} else {
+		skip_reset_flags = DPDMUX_SKIP_DEFAULT_INTERFACE;
 	}
 
 	ret = dpdmux_get_api_version(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
@@ -400,10 +436,7 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	 */
 	if (maj_ver >= 6 && min_ver >= 6) {
 		ret = dpdmux_set_resetable(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-				dpdmux_dev->token,
-				DPDMUX_SKIP_DEFAULT_INTERFACE |
-				DPDMUX_SKIP_UNICAST_RULES |
-				DPDMUX_SKIP_MULTICAST_RULES);
+				dpdmux_dev->token, skip_reset_flags);
 		if (ret) {
 			DPAA2_PMD_ERR("setting default interface failed in %s",
 				      __func__);
@@ -416,7 +449,11 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 
 		memset(&mux_err_cfg, 0, sizeof(mux_err_cfg));
 		mux_err_cfg.error_action = DPDMUX_ERROR_ACTION_CONTINUE;
-		mux_err_cfg.errors = DPDMUX_ERROR_DISC;
+
+		if (attr.method != DPDMUX_METHOD_C_VLAN_MAC)
+			mux_err_cfg.errors = DPDMUX_ERROR_DISC;
+		else
+			mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
 
 		ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
 				CMD_PRI_LOW,
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 4600ea94d4..9bbac44219 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2022 NXP
  *
  */
 #ifndef __FSL_DPDMUX_H
@@ -549,6 +549,22 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
  */
 #define DPDMUX__ERROR_L4CE			0x00000001
 
+#define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
+				 DPDMUX__ERROR_L4CV | \
+				 DPDMUX__ERROR_L3CE | \
+				 DPDMUX__ERROR_L3CV | \
+				 DPDMUX_ERROR_BLE | \
+				 DPDMUX_ERROR_PHE | \
+				 DPDMUX_ERROR_ISP | \
+				 DPDMUX_ERROR_PTE | \
+				 DPDMUX_ERROR_FPE | \
+				 DPDMUX_ERROR_FLE | \
+				 DPDMUX_ERROR_PIEE | \
+				 DPDMUX_ERROR_TIDE | \
+				 DPDMUX_ERROR_MNLE | \
+				 DPDMUX_ERROR_EOFHE | \
+				 DPDMUX_ERROR_KSE)
+
 enum dpdmux_error_action {
 	DPDMUX_ERROR_ACTION_DISCARD = 0,
 	DPDMUX_ERROR_ACTION_CONTINUE = 1
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index 80e5e3298b..bebebcacdc 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -35,6 +35,9 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 int
 rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
 	uint16_t entry_index);
+int
+rte_pmd_dpaa2_mux_flow_l2(uint32_t dpdmux_id,
+	uint8_t mac_addr[6], uint16_t vlan_id, int dest_if);
 
 /**
  * @warning
-- 
2.25.1



* [v1 08/43] bus/fslmc: upgrade with MC version 10.37
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (6 preceding siblings ...)
  2024-09-13  5:59 ` [v1 07/43] net/dpaa2: dpdmux: add support for CVLAN vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 09/43] net/dpaa2: support link state for eth interfaces vanshika.shukla
                   ` (35 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh; +Cc: Apeksha Gupta

From: Gagandeep Singh <g.singh@nxp.com>

This patch upgrades the MC version compatibility to 10.37.
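
Among the additions are new DPIO stashing controls; a driver-internal
sketch of how they might be called (mc_io, token and ids are
illustrative; these are internal bus symbols, not application APIs):

    #include <fsl_mc_sys.h>
    #include <fsl_dpio.h>

    static int
    config_stashing(struct fsl_mc_io *mc_io, uint16_t token)
    {
        uint8_t ss;
        int err;

        /* Select the manual (0) stashing destination source. */
        err = dpio_set_stashing_destination_source(mc_io, CMD_PRI_LOW,
                                                   token, 0);
        if (err)
            return err;

        /* Stash to the destination serving core 1. */
        err = dpio_set_stashing_destination_by_core_id(mc_io,
                                                       CMD_PRI_LOW,
                                                       token, 1);
        if (err)
            return err;

        /* Read the source setting back. */
        return dpio_get_stashing_destination_source(mc_io, CMD_PRI_LOW,
                                                    token, &ss);
    }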

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 doc/guides/platform/dpaa2.rst                 |   4 +-
 drivers/bus/fslmc/mc/dpio.c                   |  94 ++++-
 drivers/bus/fslmc/mc/fsl_dpcon.h              |   5 +-
 drivers/bus/fslmc/mc/fsl_dpio.h               |  21 +-
 drivers/bus/fslmc/mc/fsl_dpio_cmd.h           |  13 +-
 drivers/bus/fslmc/mc/fsl_dpmng.h              |   4 +-
 drivers/bus/fslmc/mc/fsl_dprc_cmd.h           |   8 +-
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |  12 +-
 drivers/bus/fslmc/version.map                 |   7 +
 drivers/crypto/dpaa2_sec/mc/dpseci.c          |  91 ++++-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h      |  47 ++-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h  |  19 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  36 +-
 drivers/net/dpaa2/mc/dpdmux.c                 | 205 +++++++++-
 drivers/net/dpaa2/mc/dpkg.c                   |  12 +-
 drivers/net/dpaa2/mc/dpni.c                   | 383 +++++++++++++++++-
 drivers/net/dpaa2/mc/fsl_dpdmux.h             |  67 ++-
 drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h         |  83 +++-
 drivers/net/dpaa2/mc/fsl_dpkg.h               |   7 +-
 drivers/net/dpaa2/mc/fsl_dpni.h               | 176 +++++---
 drivers/net/dpaa2/mc/fsl_dpni_cmd.h           | 125 ++++--
 21 files changed, 1267 insertions(+), 152 deletions(-)

diff --git a/doc/guides/platform/dpaa2.rst b/doc/guides/platform/dpaa2.rst
index 2b0d93a976..c9ec21334f 100644
--- a/doc/guides/platform/dpaa2.rst
+++ b/doc/guides/platform/dpaa2.rst
@@ -105,8 +105,8 @@ separately:
 
 Currently supported by DPDK:
 
-- NXP SDK **LSDK 19.09++**.
-- MC Firmware version **10.18.0** and higher.
+- NXP SDK **LSDK 21.08++**.
+- MC Firmware version **10.37.0** and higher.
 - Supported architectures:  **arm64 LE**.
 
 - Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
diff --git a/drivers/bus/fslmc/mc/dpio.c b/drivers/bus/fslmc/mc/dpio.c
index a3382ed142..97c08fa713 100644
--- a/drivers/bus/fslmc/mc/dpio.c
+++ b/drivers/bus/fslmc/mc/dpio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -376,6 +376,98 @@ int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpio_set_stashing_destination_by_core_id() - Set the stashing destination source
+ * using the core id.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @core_id:	Core id stashing destination
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
+					uint32_t cmd_flags,
+					uint16_t token,
+					uint8_t core_id)
+{
+	struct dpio_stashing_dest_by_core_id *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_STASHING_DEST_BY_CORE_ID,
+										cmd_flags,
+										token);
+	cmd_params = (struct dpio_stashing_dest_by_core_id  *)cmd.params;
+	cmd_params->core_id = core_id;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpio_set_stashing_destination_source() - Set the stashing destination source.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @ss:		Stashing destination source (0 manual/1 automatic)
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ss)
+{
+	struct dpio_stashing_dest_source *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_STASHING_DEST_SOURCE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpio_stashing_dest_source *)cmd.params;
+	cmd_params->ss = ss;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpio_get_stashing_destination_source() - Get the stashing destination source.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @ss:		Returns the stashing destination source (0 manual/1 automatic)
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t *ss)
+{
+	struct dpio_stashing_dest_source *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_STASHING_DEST_SOURCE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpio_stashing_dest_source *)cmd.params;
+	*ss = rsp_params->ss;
+
+	return 0;
+}
+
 /**
  * dpio_add_static_dequeue_channel() - Add a static dequeue channel.
  * @mc_io:		Pointer to MC portal's I/O object
diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index 34b30d15c2..e3a626077e 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2021, 2024 NXP
  *
  */
 #ifndef __FSL_DPCON_H
@@ -52,10 +52,12 @@ int dpcon_destroy(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint32_t obj_id);
 
+__rte_internal
 int dpcon_enable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
 
+__rte_internal
 int dpcon_disable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token);
@@ -65,6 +67,7 @@ int dpcon_is_enabled(struct fsl_mc_io *mc_io,
 		     uint16_t token,
 		     int *en);
 
+__rte_internal
 int dpcon_reset(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
diff --git a/drivers/bus/fslmc/mc/fsl_dpio.h b/drivers/bus/fslmc/mc/fsl_dpio.h
index c2db76bdf8..eddce58a5f 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPIO_H
@@ -87,11 +87,30 @@ int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint16_t token,
 				  uint8_t sdest);
 
+__rte_internal
 int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
 				  uint8_t *sdest);
 
+__rte_internal
+int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t core_id);
+
+__rte_internal
+int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ss);
+
+__rte_internal
+int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t *ss);
+
 __rte_internal
 int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				    uint32_t cmd_flags,
diff --git a/drivers/bus/fslmc/mc/fsl_dpio_cmd.h b/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
index 45ed01f809..360c68eaa5 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2019 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPIO_CMD_H
@@ -40,6 +40,9 @@
 #define DPIO_CMDID_GET_STASHING_DEST			DPIO_CMD(0x121)
 #define DPIO_CMDID_ADD_STATIC_DEQUEUE_CHANNEL		DPIO_CMD(0x122)
 #define DPIO_CMDID_REMOVE_STATIC_DEQUEUE_CHANNEL	DPIO_CMD(0x123)
+#define DPIO_CMDID_SET_STASHING_DEST_SOURCE		DPIO_CMD(0x124)
+#define DPIO_CMDID_GET_STASHING_DEST_SOURCE		DPIO_CMD(0x125)
+#define DPIO_CMDID_SET_STASHING_DEST_BY_CORE_ID		DPIO_CMD(0x126)
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPIO_MASK(field)        \
@@ -98,6 +101,14 @@ struct dpio_stashing_dest {
 	uint8_t sdest;
 };
 
+struct dpio_stashing_dest_source {
+	uint8_t ss;
+};
+
+struct dpio_stashing_dest_by_core_id {
+	uint8_t core_id;
+};
+
 struct dpio_cmd_static_dequeue_channel {
 	uint32_t dpcon_id;
 };
diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
index c6ea220df7..dfa51b3a86 100644
--- a/drivers/bus/fslmc/mc/fsl_dpmng.h
+++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2017-2022 NXP
+ * Copyright 2017-2023 NXP
  *
  */
 #ifndef __FSL_DPMNG_H
@@ -20,7 +20,7 @@ struct fsl_mc_io;
  * Management Complex firmware version information
  */
 #define MC_VER_MAJOR 10
-#define MC_VER_MINOR 32
+#define MC_VER_MINOR 37
 
 /**
  * struct mc_version
diff --git a/drivers/bus/fslmc/mc/fsl_dprc_cmd.h b/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
index 6efa5634d2..d5ba35b5f0 100644
--- a/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 
@@ -10,13 +10,17 @@
 
 /* Minimal supported DPRC Version */
 #define DPRC_VER_MAJOR			6
-#define DPRC_VER_MINOR			6
+#define DPRC_VER_MINOR			7
 
 /* Command versioning */
 #define DPRC_CMD_BASE_VERSION			1
+#define DPRC_CMD_VERSION_2			2
+#define DPRC_CMD_VERSION_3			3
 #define DPRC_CMD_ID_OFFSET			4
 
 #define DPRC_CMD(id)	((id << DPRC_CMD_ID_OFFSET) | DPRC_CMD_BASE_VERSION)
+#define DPRC_CMD_V2(id)	(((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_VERSION_2)
+#define DPRC_CMD_V3(id)	(((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_VERSION_3)
 
 /* Command IDs */
 #define DPRC_CMDID_CLOSE                        DPRC_CMD(0x800)
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 18b6a3c2e4..297d4ed4fc 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2023 NXP
  */
 #ifndef _FSL_QBMAN_DEBUG_H
 #define _FSL_QBMAN_DEBUG_H
@@ -105,16 +105,6 @@ uint32_t qbman_fq_attr_get_vfqid(struct qbman_fq_query_rslt *r);
 uint32_t qbman_fq_attr_get_erfqid(struct qbman_fq_query_rslt *r);
 uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r);
 
-/* FQ query command for non-programmable fields*/
-enum qbman_fq_schedstate_e {
-	qbman_fq_schedstate_oos = 0,
-	qbman_fq_schedstate_retired,
-	qbman_fq_schedstate_tentatively_scheduled,
-	qbman_fq_schedstate_truly_scheduled,
-	qbman_fq_schedstate_parked,
-	qbman_fq_schedstate_held_active,
-};
-
 struct qbman_fq_query_np_rslt {
 uint8_t verb;
 	uint8_t rslt;
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index 01e28c6625..df1143733d 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -37,6 +37,9 @@ INTERNAL {
 	dpcon_get_attributes;
 	dpcon_open;
 	dpcon_close;
+	dpcon_reset;
+	dpcon_enable;
+	dpcon_disable;
 	dpdmai_close;
 	dpdmai_disable;
 	dpdmai_enable;
@@ -53,7 +56,11 @@ INTERNAL {
 	dpio_open;
 	dpio_remove_static_dequeue_channel;
 	dpio_reset;
+	dpio_get_stashing_destination;
+	dpio_get_stashing_destination_source;
 	dpio_set_stashing_destination;
+	dpio_set_stashing_destination_by_core_id;
+	dpio_set_stashing_destination_source;
 	mc_get_soc_version;
 	mc_get_version;
 	mc_send_command;
diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c b/drivers/crypto/dpaa2_sec/mc/dpseci.c
index 87e0defdc6..773b4648e0 100644
--- a/drivers/crypto/dpaa2_sec/mc/dpseci.c
+++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -763,3 +763,92 @@ int dpseci_get_congestion_notification(
 
 	return 0;
 }
+
+
+/**
+ * dpseci_get_rx_queue_status() - Get queue status attributes
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue_index:	Select the queue_index
+ * @attr:	Returned queue status attributes
+ *
+ * Return:	'0' on success, error code otherwise
+ */
+int dpseci_get_rx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr)
+{
+	struct dpseci_rsp_get_queue_status *rsp_params;
+	struct dpseci_cmd_get_queue_status *cmd_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_RX_QUEUE_STATUS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpseci_cmd_get_queue_status *)cmd.params;
+	cmd_params->queue_index = cpu_to_le32(queue_index);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpseci_rsp_get_queue_status *)cmd.params;
+	attr->fqid = le32_to_cpu(rsp_params->fqid);
+	attr->schedstate = (enum qbman_fq_schedstate_e)(le16_to_cpu(rsp_params->schedstate));
+	attr->state_flags = le16_to_cpu(rsp_params->state_flags);
+	attr->frame_count = le32_to_cpu(rsp_params->frame_count);
+	attr->byte_count = le32_to_cpu(rsp_params->byte_count);
+
+	return 0;
+}
+
+/**
+ * dpseci_get_tx_queue_status() - Get queue status attributes
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue_index:	Select the queue_index
+ * @attr:	Returned queue status attributes
+ *
+ * Return:	'0' on success, error code otherwise
+ */
+int dpseci_get_tx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr)
+{
+	struct dpseci_rsp_get_queue_status *rsp_params;
+	struct dpseci_cmd_get_queue_status *cmd_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_TX_QUEUE_STATUS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpseci_cmd_get_queue_status *)cmd.params;
+	cmd_params->queue_index = cpu_to_le32(queue_index);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpseci_rsp_get_queue_status *)cmd.params;
+	attr->fqid = le32_to_cpu(rsp_params->fqid);
+	attr->schedstate = (enum qbman_fq_schedstate_e)(le16_to_cpu(rsp_params->schedstate));
+	attr->state_flags = le16_to_cpu(rsp_params->state_flags);
+	attr->frame_count = le32_to_cpu(rsp_params->frame_count);
+	attr->byte_count = le32_to_cpu(rsp_params->byte_count);
+
+	return 0;
+}
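
Usage sketch for the new queue-status command (illustration, not part of the
patch; assumes an opened MC portal mc_io and a DPSECI token, error handling
trimmed):

	struct dpseci_queue_status st = { 0 };
	int err;

	err = dpseci_get_rx_queue_status(mc_io, CMD_PRI_LOW, token, 0, &st);
	if (!err)
		printf("fqid %u: %u frames, %u bytes pending\n",
		       st.fqid, st.frame_count, st.byte_count);
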
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
index c295c04f24..e371abdd64 100644
--- a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPSECI_H
@@ -429,4 +429,49 @@ int dpseci_get_congestion_notification(
 			uint16_t token,
 			struct dpseci_congestion_notification_cfg *cfg);
 
+/* Available FQ's scheduling states */
+enum qbman_fq_schedstate_e {
+	qbman_fq_schedstate_oos = 0,
+	qbman_fq_schedstate_retired,
+	qbman_fq_schedstate_tentatively_scheduled,
+	qbman_fq_schedstate_truly_scheduled,
+	qbman_fq_schedstate_parked,
+	qbman_fq_schedstate_held_active,
+};
+
+/* FQ's force eligible pending bit */
+#define DPSECI_FQ_STATE_FORCE_ELIGIBLE			0x00000001
+/* FQ's XON/XOFF state, 0: XON, 1: XOFF */
+#define DPSECI_FQ_STATE_XOFF					0x00000002
+/* FQ's retirement pending bit */
+#define DPSECI_FQ_STATE_RETIREMENT_PENDING		0x00000004
+/* FQ's overflow error bit */
+#define DPSECI_FQ_STATE_OVERFLOW_ERROR			0x00000008
+
+struct dpseci_queue_status {
+	uint32_t fqid;
+	/* FQ's scheduling states
+	 * (available scheduling states are defined in qbman_fq_schedstate_e)
+	 */
+	enum qbman_fq_schedstate_e schedstate;
+	/* FQ's state flags (available flags are defined above) */
+	uint16_t state_flags;
+	/* FQ's frame count */
+	uint32_t frame_count;
+	/* FQ's byte count */
+	uint32_t byte_count;
+};
+
+int dpseci_get_rx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr);
+
+int dpseci_get_tx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr);
+
 #endif /* __FSL_DPSECI_H */
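
The returned state_flags field can be tested against the new
DPSECI_FQ_STATE_* bits, e.g. (sketch continuing from the status call above):

	if (st.state_flags & DPSECI_FQ_STATE_XOFF)
		printf("fqid %u is flow controlled (XOFF)\n", st.fqid);
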
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
index af3518a0f3..065464b701 100644
--- a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPSECI_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPSECI Version */
 #define DPSECI_VER_MAJOR		5
-#define DPSECI_VER_MINOR		3
+#define DPSECI_VER_MINOR		4
 
 /* Command versioning */
 #define DPSECI_CMD_BASE_VERSION		1
@@ -46,6 +46,9 @@
 #define DPSECI_CMDID_GET_OPR		DPSECI_CMD_V1(0x19B)
 #define DPSECI_CMDID_SET_CONGESTION_NOTIFICATION	DPSECI_CMD_V1(0x170)
 #define DPSECI_CMDID_GET_CONGESTION_NOTIFICATION	DPSECI_CMD_V1(0x171)
+#define DPSECI_CMDID_GET_RX_QUEUE_STATUS	DPSECI_CMD_V1(0x172)
+#define DPSECI_CMDID_GET_TX_QUEUE_STATUS	DPSECI_CMD_V1(0x173)
+
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPSECI_MASK(field)        \
@@ -251,5 +254,17 @@ struct dpseci_cmd_set_congestion_notification {
 	uint32_t threshold_exit;
 };
 
+struct dpseci_cmd_get_queue_status {
+	uint32_t queue_index;
+};
+
+struct dpseci_rsp_get_queue_status {
+	uint32_t fqid;
+	uint16_t schedstate;
+	uint16_t state_flags;
+	uint32_t frame_count;
+	uint32_t byte_count;
+};
+
 #pragma pack(pop)
 #endif /* _FSL_DPSECI_CMD_H */
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index efba9ef286..4dc7a82b47 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -896,6 +896,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	struct dpni_queue tx_conf_cfg;
 	struct dpni_queue tx_flow_cfg;
 	uint8_t options = 0, flow_id;
+	uint8_t ceetm_ch_idx;
 	uint16_t channel_id;
 	struct dpni_queue_id qid;
 	uint32_t tc_id;
@@ -922,20 +923,27 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	memset(&tx_conf_cfg, 0, sizeof(struct dpni_queue));
 	memset(&tx_flow_cfg, 0, sizeof(struct dpni_queue));
 
-	if (tx_queue_id == 0) {
-		/*Set tx-conf and error configuration*/
-		if (priv->flags & DPAA2_TX_CONF_ENABLE)
-			ret = dpni_set_tx_confirmation_mode(dpni, CMD_PRI_LOW,
-							    priv->token,
-							    DPNI_CONF_AFFINE);
-		else
-			ret = dpni_set_tx_confirmation_mode(dpni, CMD_PRI_LOW,
-							    priv->token,
-							    DPNI_CONF_DISABLE);
-		if (ret) {
-			DPAA2_PMD_ERR("Error in set tx conf mode settings: "
-				      "err=%d", ret);
-			return -1;
+	if (!tx_queue_id) {
+		for (ceetm_ch_idx = 0;
+			ceetm_ch_idx <= (priv->num_channels - 1);
+			ceetm_ch_idx++) {
+			/*Set tx-conf and error configuration*/
+			if (priv->flags & DPAA2_TX_CONF_ENABLE) {
+				ret = dpni_set_tx_confirmation_mode(dpni,
+						CMD_PRI_LOW, priv->token,
+						ceetm_ch_idx,
+						DPNI_CONF_AFFINE);
+			} else {
+				ret = dpni_set_tx_confirmation_mode(dpni,
+						CMD_PRI_LOW, priv->token,
+						ceetm_ch_idx,
+						DPNI_CONF_DISABLE);
+			}
+			if (ret) {
+				DPAA2_PMD_ERR("Error(%d) in tx conf setting",
+					ret);
+				return ret;
+			}
 		}
 	}
 
diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
index 1bb153cad7..f4feef3840 100644
--- a/drivers/net/dpaa2/mc/dpdmux.c
+++ b/drivers/net/dpaa2/mc/dpdmux.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -287,15 +287,19 @@ int dpdmux_reset(struct fsl_mc_io *mc_io,
  * @token:	Token of DPDMUX object
  * @skip_reset_flags:	By default all are 0.
  *			By setting 1 will deactivate the reset.
- *	The flags are:
- *			DPDMUX_SKIP_DEFAULT_INTERFACE  0x01
- *			DPDMUX_SKIP_UNICAST_RULES      0x02
- *			DPDMUX_SKIP_MULTICAST_RULES    0x04
+ * The flags are:
+ *			DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE  0x01
+ *			DPDMUX_SKIP_UNICAST_RULES             0x02
+ *			DPDMUX_SKIP_MULTICAST_RULES           0x04
+ *			DPDMUX_SKIP_RESET_DEFAULT_INTERFACE   0x08
  *
  * For example, by default, through DPDMUX_RESET the default
  * interface will be restored with the one from create.
- * By setting DPDMUX_SKIP_DEFAULT_INTERFACE flag,
- * through DPDMUX_RESET the default interface will not be modified.
+ * By setting DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE flag,
+ * through DPDMUX_RESET the default interface will not be modified after reset.
+ * By setting DPDMUX_SKIP_RESET_DEFAULT_INTERFACE flag,
+ * through DPDMUX_RESET the default interface will not be reset
+ * and will continue to be functional during the reset procedure.
  *
  * Return:	'0' on Success; Error code otherwise.
  */
@@ -327,10 +331,11 @@ int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
  * @token:	Token of DPDMUX object
  * @skip_reset_flags:	Get the reset flags.
  *
- *	The flags are:
- *			DPDMUX_SKIP_DEFAULT_INTERFACE  0x01
- *			DPDMUX_SKIP_UNICAST_RULES      0x02
- *			DPDMUX_SKIP_MULTICAST_RULES    0x04
+ * The flags are:
+ *			DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE  0x01
+ *			DPDMUX_SKIP_UNICAST_RULES             0x02
+ *			DPDMUX_SKIP_MULTICAST_RULES           0x04
+ *			DPDMUX_SKIP_RESET_DEFAULT_INTERFACE   0x08
  *
  * Return:	'0' on Success; Error code otherwise.
  */
@@ -1064,6 +1069,127 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpdmux_if_set_taildrop() - enable taildrop for egress interface queues.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id:	Interface Identifier
+ * @cfg: Taildrop configuration
+ */
+int dpdmux_if_set_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg)
+{
+	struct mc_command cmd = { 0 };
+	struct dpdmux_cmd_set_taildrop *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_SET_TAILDROP,
+			cmd_flags,
+			token);
+	cmd_params = (struct dpdmux_cmd_set_taildrop *)cmd.params;
+	cmd_params->if_id		= cpu_to_le16(if_id);
+	cmd_params->units		= cfg->units;
+	cmd_params->threshold	= cpu_to_le32(cfg->threshold);
+	dpdmux_set_field(cmd_params->oal_en, ENABLE, (!!cfg->enable));
+
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpdmux_if_get_taildrop() - get current taildrop configuration.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id:	Interface Identifier
+ * @cfg: Taildrop configuration
+ */
+int dpdmux_if_get_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg)
+{
+	struct mc_command cmd = {0};
+	struct dpdmux_cmd_get_taildrop *cmd_params;
+	struct dpdmux_rsp_get_taildrop *rsp_params;
+	int err = 0;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_GET_TAILDROP,
+			cmd_flags,
+			token);
+	cmd_params = (struct dpdmux_cmd_get_taildrop *)cmd.params;
+	cmd_params->if_id	= cpu_to_le16(if_id);
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpdmux_rsp_get_taildrop *)cmd.params;
+	cfg->threshold = le32_to_cpu(rsp_params->threshold);
+	cfg->units = rsp_params->units;
+	cfg->enable = dpdmux_get_field(rsp_params->oal_en, ENABLE);
+
+	return err;
+}
+
+/**
+ * dpdmux_dump_table() - Dump the content of table_type table into memory.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPSW object
+ * @table_type: The type of the table to dump
+ *	- DPDMUX_DMAT_TABLE
+ *	- DPDMUX_MISS_TABLE
+ *	- DPDMUX_PRUNE_TABLE
+ * @table_index: The index of the table to dump in case of more than one table
+ *	if table_type == DPDMUX_DMAT_TABLE
+ *		- DPDMUX_HMAP_UNICAST
+ *		- DPDMUX_HMAP_MULTICAST
+ *	else 0
+ * @iova_addr: The snapshot will be stored in this variable as a header of struct dump_table_header
+ *             followed by an array of struct dump_table_entry
+ * @iova_size: Memory size allocated for iova_addr
+ * @num_entries: Number of entries written in iova_addr
+ *
+ * Return: Completion status. '0' on Success; Error code otherwise.
+ *
+ * The memory allocated at iova_addr must be zeroed before command execution.
+ * If the table content exceeds the memory size provided the dump will be truncated.
+ */
+int dpdmux_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	struct dpdmux_cmd_dump_table *cmd_params;
+	struct dpdmux_rsp_dump_table *rsp_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_DUMP_TABLE, cmd_flags, token);
+	cmd_params = (struct dpdmux_cmd_dump_table *)cmd.params;
+	cmd_params->table_type = cpu_to_le16(table_type);
+	cmd_params->table_index = cpu_to_le16(table_index);
+	cmd_params->iova_addr = cpu_to_le64(iova_addr);
+	cmd_params->iova_size = cpu_to_le32(iova_size);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpdmux_rsp_dump_table *)cmd.params;
+	*num_entries = le16_to_cpu(rsp_params->num_entries);
+
+	return 0;
+}
+
+
 /**
  * dpdmux_if_set_errors_behavior() - Set errors behavior
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
@@ -1100,3 +1226,60 @@ int dpdmux_if_set_errors_behavior(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 	/* send command to mc*/
 	return mc_send_command(mc_io, &cmd);
 }
+
+/* Sets up a Soft Parser Profile on this DPDMUX
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @sp_profile: Soft Parser Profile name (must be a valid name for a defined profile)
+ *			Maximum allowed length for this string is 8 characters
+ *			If this parameter is an empty string (all zeros)
+ *			then the Default SP Profile is set on this dpdmux
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ */
+int dpdmux_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type)
+{
+	struct dpdmux_cmd_set_sp_profile *cmd_params;
+	struct mc_command cmd = { 0 };
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SET_SP_PROFILE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpdmux_cmd_set_sp_profile *)cmd.params;
+	for (i = 0; i < MAX_SP_PROFILE_ID_SIZE && sp_profile[i]; i++)
+		cmd_params->sp_profile[i] = sp_profile[i];
+	cmd_params->type = type;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/* Enable/Disable Soft Parser on this DPDMUX interface
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id: interface id
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ * @en: 1 to enable or 0 to disable
+ */
+int dpdmux_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t if_id, uint8_t type, uint8_t en)
+{
+	struct dpdmux_cmd_sp_enable *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SP_ENABLE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpdmux_cmd_sp_enable *)cmd.params;
+	cmd_params->if_id = if_id;
+	cmd_params->type = type;
+	cmd_params->en = en;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
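
Usage sketch for the dump command (illustration, not part of the patch; the
buffer must be zeroed and DMA-able, and the 64-entry cap is an arbitrary
assumption):

	uint16_t n = 0;
	uint32_t sz = sizeof(struct dpdmux_dump_table_header) +
		      64 * sizeof(struct dpdmux_dump_table_entry);
	void *buf = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE); /* zeroed */
	int err;

	err = dpdmux_dump_table(mc_io, CMD_PRI_LOW, token,
				DPDMUX_DMAT_TABLE, DPDMUX_HMAP_UNICAST,
				rte_malloc_virt2iova(buf), sz, &n);
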
diff --git a/drivers/net/dpaa2/mc/dpkg.c b/drivers/net/dpaa2/mc/dpkg.c
index 4789976b7d..5db3d092c1 100644
--- a/drivers/net/dpaa2/mc/dpkg.c
+++ b/drivers/net/dpaa2/mc/dpkg.c
@@ -1,16 +1,18 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2021, 2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
 #include <fsl_mc_cmd.h>
 #include <fsl_dpkg.h>
+#include <string.h>
 
 /**
  * dpkg_prepare_key_cfg() - function prepare extract parameters
  * @cfg: defining a full Key Generation profile (rule)
- * @key_cfg_buf: Zeroed 256 bytes of memory before mapping it to DMA
+ * @key_cfg_buf: Zeroed memory whose size is the size of
+ *		"struct dpni_ext_set_rx_tc_dist" before mapping it to DMA
  *
  * This function has to be called before the following functions:
  *	- dpni_set_rx_tc_dist()
@@ -18,7 +20,8 @@
  *	- dpkg_prepare_key_cfg()
  */
 int
-dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg, uint8_t *key_cfg_buf)
+dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
+	void *key_cfg_buf)
 {
 	int i, j;
 	struct dpni_ext_set_rx_tc_dist *dpni_ext;
@@ -27,11 +30,12 @@ dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg, uint8_t *key_cfg_buf)
 	if (cfg->num_extracts > DPKG_MAX_NUM_OF_EXTRACTS)
 		return -EINVAL;
 
-	dpni_ext = (struct dpni_ext_set_rx_tc_dist *)key_cfg_buf;
+	dpni_ext = key_cfg_buf;
 	dpni_ext->num_extracts = cfg->num_extracts;
 
 	for (i = 0; i < cfg->num_extracts; i++) {
 		extr = &dpni_ext->extracts[i];
+		memset(extr, 0, sizeof(struct dpni_dist_extract));
 
 		switch (cfg->extracts[i].type) {
 		case DPKG_EXTRACT_FROM_HDR:
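
With this change the caller sizes the key-cfg buffer from the structure
instead of a bare 256 bytes, e.g. (sketch, not part of the patch; kg_cfg
would be populated elsewhere):

	struct dpkg_profile_cfg kg_cfg = { .num_extracts = 0 };
	void *kcfg = rte_zmalloc(NULL, sizeof(struct dpni_ext_set_rx_tc_dist),
				 RTE_CACHE_LINE_SIZE);
	int err;

	err = dpkg_prepare_key_cfg(&kg_cfg, kcfg);
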
diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c
index 4d97b98939..558f08dc69 100644
--- a/drivers/net/dpaa2/mc/dpni.c
+++ b/drivers/net/dpaa2/mc/dpni.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2022 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -852,6 +852,92 @@ int dpni_get_qdid(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpni_get_qdid_ex() - Extension for the function to get the Queuing Destination ID (QDID)
+ *			that should be used for enqueue operations.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @qtype:	Type of queue to receive QDID for
+ * @qdid:	Array of virtual QDID value that should be used as an argument
+ *			in all enqueue operations.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * This function must be used when dpni is created using multiple Tx channels to return one
+ * qdid for each channel.
+ */
+int dpni_get_qdid_ex(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  enum dpni_queue_type qtype,
+		  uint16_t *qdid)
+{
+	struct mc_command cmd = { 0 };
+	struct dpni_cmd_get_qdid *cmd_params;
+	struct dpni_rsp_get_qdid_ex *rsp_params;
+	int i;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QDID_EX,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_qdid *)cmd.params;
+	cmd_params->qtype = qtype;
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_qdid_ex *)cmd.params;
+	for (i = 0; i < DPNI_MAX_CHANNELS; i++)
+		qdid[i] = le16_to_cpu(rsp_params->qdid[i]);
+
+	return 0;
+}
+
+/**
+ * dpni_get_sp_info() - Get the AIOP storage profile IDs associated
+ *			with the DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @sp_info:	Returned AIOP storage-profile information
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * @warning	Only relevant for DPNI that belongs to AIOP container.
+ */
+int dpni_get_sp_info(struct fsl_mc_io *mc_io,
+		     uint32_t cmd_flags,
+		     uint16_t token,
+		     struct dpni_sp_info *sp_info)
+{
+	struct dpni_rsp_get_sp_info *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err, i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SP_INFO,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_sp_info *)cmd.params;
+	for (i = 0; i < DPNI_MAX_SP; i++)
+		sp_info->spids[i] = le16_to_cpu(rsp_params->spids[i]);
+
+	return 0;
+}
+
 /**
  * dpni_get_tx_data_offset() - Get the Tx data offset (from start of buffer)
  * @mc_io:	Pointer to MC portal's I/O object
@@ -1684,6 +1770,7 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
  * @mc_io:	Pointer to MC portal's I/O object
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
  * @mode:	Tx confirmation mode
  *
  * This function is useful only when 'DPNI_OPT_TX_CONF_DISABLED' is not
@@ -1701,6 +1788,7 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
 int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode mode)
 {
 	struct dpni_tx_confirmation_mode *cmd_params;
@@ -1711,6 +1799,7 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 					  cmd_flags,
 					  token);
 	cmd_params = (struct dpni_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
 	cmd_params->confirmation_mode = mode;
 
 	/* send command to mc*/
@@ -1722,6 +1811,7 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
  * @mc_io:	Pointer to MC portal's I/O object
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
  * @mode:	Tx confirmation mode
  *
  * Return:  '0' on Success; Error code otherwise.
@@ -1729,8 +1819,10 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode *mode)
 {
+	struct dpni_tx_confirmation_mode *cmd_params;
 	struct dpni_tx_confirmation_mode *rsp_params;
 	struct mc_command cmd = { 0 };
 	int err;
@@ -1738,6 +1830,8 @@ int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_CONFIRMATION_MODE,
 					cmd_flags,
 					token);
+	cmd_params = (struct dpni_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
 
 	err = mc_send_command(mc_io, &cmd);
 	if (err)
@@ -1749,6 +1843,78 @@ int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpni_set_queue_tx_confirmation_mode() - Set Tx confirmation mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
+ * @index:	queue index
+ * @mode:	Tx confirmation mode
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode mode)
+{
+	struct dpni_queue_tx_confirmation_mode *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_QUEUE_TX_CONFIRMATION_MODE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
+	cmd_params->index = index;
+	cmd_params->confirmation_mode = mode;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_queue_tx_confirmation_mode() - Get Tx confirmation mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
+ * @index:	queue index
+ * @mode:	Tx confirmation mode
+ *
+ * Return:  '0' on Success; Error code otherwise.
+ */
+int dpni_get_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode *mode)
+{
+	struct dpni_queue_tx_confirmation_mode *cmd_params;
+	struct dpni_queue_tx_confirmation_mode *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QUEUE_TX_CONFIRMATION_MODE,
+					cmd_flags,
+					token);
+	cmd_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
+	cmd_params->index = index;
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	*mode =  rsp_params->confirmation_mode;
+
+	return 0;
+}
+
 /**
  * dpni_set_qos_table() - Set QoS mapping table
  * @mc_io:	Pointer to MC portal's I/O object
@@ -2291,8 +2457,7 @@ int dpni_set_congestion_notification(struct fsl_mc_io *mc_io,
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
  * @qtype:	Type of queue - Rx, Tx and Tx confirm types are supported
- * @param:	Traffic class and channel. Bits[0-7] contain traaffic class,
- *		byte[8-15] contains channel id
+ * @tc_id:	Traffic class selection (0-7)
  * @cfg:	congestion notification configuration
  *
  * Return:	'0' on Success; error code otherwise.
@@ -3114,8 +3279,216 @@ int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 
 	cmd_params = (struct dpni_cmd_set_port_cfg *)cmd.params;
 	cmd_params->flags = cpu_to_le32(flags);
-	dpni_set_field(cmd_params->bit_params,	PORT_LOOPBACK_EN,
-			!!port_cfg->loopback_en);
+	dpni_set_field(cmd_params->bit_params, PORT_LOOPBACK_EN, !!port_cfg->loopback_en);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_single_step_cfg() - return current configuration for single step PTP
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @ptp_cfg: ptp single step configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ */
+int dpni_get_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_single_step_cfg *ptp_cfg)
+{
+	struct dpni_rsp_single_step_cfg *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SINGLE_STEP_CFG,
+						cmd_flags,
+						token);
+	/* send command to mc*/
+	err =  mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* read command response */
+	rsp_params = (struct dpni_rsp_single_step_cfg *)cmd.params;
+	ptp_cfg->offset = le16_to_cpu(rsp_params->offset);
+	ptp_cfg->en = dpni_get_field(rsp_params->flags, PTP_ENABLE);
+	ptp_cfg->ch_update = dpni_get_field(rsp_params->flags, PTP_CH_UPDATE);
+	ptp_cfg->peer_delay = le32_to_cpu(rsp_params->peer_delay);
+	ptp_cfg->ptp_onestep_reg_base =
+				  le32_to_cpu(rsp_params->ptp_onestep_reg_base);
+
+	return err;
+}
+
+/**
+ * dpni_get_port_cfg() - return configuration from physical port. The command has effect only if
+ *			dpni is connected to a mac object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @port_cfg: Configuration data
+ * The command can be called only when the dpni is connected to a dpmac object.
+ * If the dpni is unconnected or the endpoint is not a dpmac it will return an error.
+ */
+int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_port_cfg *port_cfg)
+{
+	struct dpni_rsp_get_port_cfg *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_PORT_CFG,
+			cmd_flags, token);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* read command response */
+	rsp_params = (struct dpni_rsp_get_port_cfg *)cmd.params;
+	port_cfg->loopback_en = dpni_get_field(rsp_params->bit_params, PORT_LOOPBACK_EN);
+
+	return 0;
+}
+
+/**
+ * dpni_set_single_step_cfg() - enable/disable and configure single step PTP
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @ptp_cfg: ptp single step configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * The function has effect only when dpni object is connected to a dpmac object. If the
+ * dpni is not connected to a dpmac the configuration will be stored inside and applied
+ * when connection is made.
+ */
+int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_single_step_cfg *ptp_cfg)
+{
+	struct dpni_cmd_single_step_cfg *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_SINGLE_STEP_CFG,
+						cmd_flags,
+						token);
+	cmd_params = (struct dpni_cmd_single_step_cfg *)cmd.params;
+	cmd_params->offset = cpu_to_le16(ptp_cfg->offset);
+	cmd_params->peer_delay = cpu_to_le32(ptp_cfg->peer_delay);
+	dpni_set_field(cmd_params->flags, PTP_ENABLE, !!ptp_cfg->en);
+	dpni_set_field(cmd_params->flags, PTP_CH_UPDATE, !!ptp_cfg->ch_update);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_dump_table() - Dump the content of table_type table into memory.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPSW object
+ * @table_type: The type of the table to dump
+ * @table_index: The index of the table to dump in case of more than one table
+ * @iova_addr: The snapshot will be stored in this variable as a header of struct dump_table_header
+ *             followed by an array of struct dump_table_entry
+ * @iova_size: Memory size allocated for iova_addr
+ * @num_entries: Number of entries written in iova_addr
+ *
+ * Return: Completion status. '0' on Success; Error code otherwise.
+ *
+ * The memory allocated at iova_addr must be zeroed before command execution.
+ * If the table content exceeds the memory size provided the dump will be truncated.
+ */
+int dpni_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	struct dpni_cmd_dump_table *cmd_params;
+	struct dpni_rsp_dump_table *rsp_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_DUMP_TABLE, cmd_flags, token);
+	cmd_params = (struct dpni_cmd_dump_table *)cmd.params;
+	cmd_params->table_type = cpu_to_le16(table_type);
+	cmd_params->table_index = cpu_to_le16(table_index);
+	cmd_params->iova_addr = cpu_to_le64(iova_addr);
+	cmd_params->iova_size = cpu_to_le32(iova_size);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpni_rsp_dump_table *)cmd.params;
+	*num_entries = le16_to_cpu(rsp_params->num_entries);
+
+	return 0;
+}
+
+/* Sets up a Soft Parser Profile on this DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @sp_profile: Soft Parser Profile name (must be a valid name for a defined profile)
+ *			Maximum allowed length for this string is 8 characters
+ *			If this parameter is an empty string (all zeros)
+ *			then the Default SP Profile is set on this dpni
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ */
+int dpni_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type)
+{
+	struct dpni_cmd_set_sp_profile *cmd_params;
+	struct mc_command cmd = { 0 };
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_SP_PROFILE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpni_cmd_set_sp_profile *)cmd.params;
+	for (i = 0; i < MAX_SP_PROFILE_ID_SIZE && sp_profile[i]; i++)
+		cmd_params->sp_profile[i] = sp_profile[i];
+	cmd_params->type = type;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/* Enable/Disable Soft Parser on this DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ * @en: 1 to enable or 0 to disable
+ */
+int dpni_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t type, uint8_t en)
+{
+	struct dpni_cmd_sp_enable *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SP_ENABLE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpni_cmd_sp_enable *)cmd.params;
+	cmd_params->type = type;
+	cmd_params->en = en;
 
 	/* send command to MC */
 	return mc_send_command(mc_io, &cmd);
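
Caller sketch for the multi-channel QDID query (illustration, not part of the
patch; assumes a dpni created with multiple Tx channels, and DPNI_QUEUE_TX as
the queue-type enumerator):

	uint16_t qdid[DPNI_MAX_CHANNELS];
	int err;

	err = dpni_get_qdid_ex(dpni, CMD_PRI_LOW, priv->token,
			       DPNI_QUEUE_TX, qdid);
	/* qdid[ch] is then the enqueue QDID for Tx channel ch */
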
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 9bbac44219..97b09e59f9 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2022 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #ifndef __FSL_DPDMUX_H
@@ -154,6 +154,10 @@ int dpdmux_reset(struct fsl_mc_io *mc_io,
  *Setting 1 DPDMUX_RESET will not reset multicast rules
  */
 #define DPDMUX_SKIP_MULTICAST_RULES	0x04
+/**
+ *Setting 1 DPDMUX_RESET will not reset default interface
+ */
+#define DPDMUX_SKIP_RESET_DEFAULT_INTERFACE	0x08
 
 int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
@@ -464,10 +468,50 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
 			   uint16_t *major_ver,
 			   uint16_t *minor_ver);
 
+enum dpdmux_congestion_unit {
+	DPDMUX_TAIDLROP_DROP_UNIT_BYTE = 0,
+	DPDMUX_TAILDROP_DROP_UNIT_FRAMES,
+	DPDMUX_TAILDROP_DROP_UNIT_BUFFERS
+};
+
 /**
- * Discard bit. This bit must be used together with other bits in
- * DPDMUX_ERROR_ACTION_CONTINUE to disable discarding of frames containing
- * errors
+ * struct dpdmux_taildrop_cfg - interface taildrop configuration
+ * @enable - enable (1) or disable (0) taildrop
+ * @units - taildrop units
+ * @threshold - taildrop threshold
+ */
+struct dpdmux_taildrop_cfg {
+	char enable;
+	enum dpdmux_congestion_unit units;
+	uint32_t threshold;
+};
+
+int dpdmux_if_set_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg);
+
+int dpdmux_if_get_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg);
+
+#define DPDMUX_MAX_KEY_SIZE 56
+
+enum dpdmux_table_type {
+	DPDMUX_DMAT_TABLE = 1,
+	DPDMUX_MISS_TABLE = 2,
+	DPDMUX_PRUNE_TABLE = 3,
+};
+
+int dpdmux_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries);
+
+/**
+ * Discard bit. This bit must be used together with other bits in DPDMUX_ERROR_ACTION_CONTINUE
+ * to disable discarding of frames containing errors
  */
 #define DPDMUX_ERROR_DISC		0x80000000
 /**
@@ -583,4 +627,19 @@ struct dpdmux_error_cfg {
 int dpdmux_if_set_errors_behavior(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 		uint16_t token, uint16_t if_id, struct dpdmux_error_cfg *cfg);
 
+/**
+ * SP Profile on Ingress DPDMUX
+ */
+#define DPDMUX_SP_PROFILE_INGRESS 0x1
+/**
+ * SP Profile on Egress DPDMUX
+ */
+#define DPDMUX_SP_PROFILE_EGRESS	0x2
+
+int dpdmux_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type);
+
+int dpdmux_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t if_id, uint8_t type, uint8_t en);
+
 #endif /* __FSL_DPDMUX_H */
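
Taildrop configuration sketch for one DPDMUX interface (illustration, not
part of the patch; the threshold value is arbitrary):

	struct dpdmux_taildrop_cfg td = {
		.enable = 1,
		.units = DPDMUX_TAILDROP_DROP_UNIT_FRAMES,
		.threshold = 512,	/* illustrative threshold */
	};
	int err;

	err = dpdmux_if_set_taildrop(mc_io, CMD_PRI_LOW, token, if_id, &td);
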
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
index bf6b8a20d1..a94f1bf91a 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #ifndef _FSL_DPDMUX_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPDMUX Version */
 #define DPDMUX_VER_MAJOR		6
-#define DPDMUX_VER_MINOR		9
+#define DPDMUX_VER_MINOR		10
 
 #define DPDMUX_CMD_BASE_VERSION		1
 #define DPDMUX_CMD_VERSION_2		2
@@ -63,8 +63,17 @@
 
 #define DPDMUX_CMDID_SET_RESETABLE		DPDMUX_CMD(0x0ba)
 #define DPDMUX_CMDID_GET_RESETABLE		DPDMUX_CMD(0x0bb)
+
+#define DPDMUX_CMDID_IF_SET_TAILDROP		DPDMUX_CMD(0x0bc)
+#define DPDMUX_CMDID_IF_GET_TAILDROP		DPDMUX_CMD(0x0bd)
+
+#define DPDMUX_CMDID_DUMP_TABLE           DPDMUX_CMD(0x0be)
+
 #define DPDMUX_CMDID_SET_ERRORS_BEHAVIOR	DPDMUX_CMD(0x0bf)
 
+#define DPDMUX_CMDID_SET_SP_PROFILE			DPDMUX_CMD(0x0c0)
+#define DPDMUX_CMDID_SP_ENABLE				DPDMUX_CMD(0x0c1)
+
 #define DPDMUX_MASK(field)        \
 	GENMASK(DPDMUX_##field##_SHIFT + DPDMUX_##field##_SIZE - 1, \
 		DPDMUX_##field##_SHIFT)
@@ -241,7 +250,7 @@ struct dpdmux_cmd_remove_custom_cls_entry {
 };
 
 #define DPDMUX_SKIP_RESET_FLAGS_SHIFT    0
-#define DPDMUX_SKIP_RESET_FLAGS_SIZE     3
+#define DPDMUX_SKIP_RESET_FLAGS_SIZE     4
 
 struct dpdmux_cmd_set_skip_reset_flags {
 	uint8_t skip_reset_flags;
@@ -251,6 +260,61 @@ struct dpdmux_rsp_get_skip_reset_flags {
 	uint8_t skip_reset_flags;
 };
 
+struct dpdmux_cmd_set_taildrop {
+	uint32_t	pad1;
+	uint16_t	if_id;
+	uint16_t	pad2;
+	uint16_t	oal_en;
+	uint8_t		units;
+	uint8_t		pad3;
+	uint32_t	threshold;
+};
+
+struct dpdmux_cmd_get_taildrop {
+	uint32_t	pad1;
+	uint16_t	if_id;
+};
+
+struct dpdmux_rsp_get_taildrop {
+	uint16_t	pad1;
+	uint16_t	pad2;
+	uint16_t	if_id;
+	uint16_t	pad3;
+	uint16_t	oal_en;
+	uint8_t		units;
+	uint8_t		pad4;
+	uint32_t	threshold;
+};
+
+struct dpdmux_cmd_dump_table {
+	uint16_t table_type;
+	uint16_t table_index;
+	uint32_t pad0;
+	uint64_t iova_addr;
+	uint32_t iova_size;
+};
+
+struct dpdmux_rsp_dump_table {
+	uint16_t num_entries;
+};
+
+struct dpdmux_dump_table_header {
+	uint16_t table_type;
+	uint16_t table_num_entries;
+	uint16_t table_max_entries;
+	uint8_t default_action;
+	uint8_t match_type;
+	uint8_t reserved[24];
+};
+
+struct dpdmux_dump_table_entry {
+	uint8_t key[DPDMUX_MAX_KEY_SIZE];
+	uint8_t mask[DPDMUX_MAX_KEY_SIZE];
+	uint8_t key_action;
+	uint16_t result[3];
+	uint8_t reserved[21];
+};
+
 #define DPDMUX_ERROR_ACTION_SHIFT		0
 #define DPDMUX_ERROR_ACTION_SIZE		4
 
@@ -260,5 +324,18 @@ struct dpdmux_cmd_set_errors_behavior {
 	uint16_t if_id;
 };
 
+#define MAX_SP_PROFILE_ID_SIZE	8
+
+struct dpdmux_cmd_set_sp_profile {
+	uint8_t sp_profile[MAX_SP_PROFILE_ID_SIZE];
+	uint8_t type;
+};
+
+struct dpdmux_cmd_sp_enable {
+	uint16_t if_id;
+	uint8_t type;
+	uint8_t en;
+};
+
 #pragma pack(pop)
 #endif /* _FSL_DPDMUX_CMD_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dpkg.h b/drivers/net/dpaa2/mc/fsl_dpkg.h
index 70f2339ea5..834c765513 100644
--- a/drivers/net/dpaa2/mc/fsl_dpkg.h
+++ b/drivers/net/dpaa2/mc/fsl_dpkg.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  * Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPKG_H_
@@ -180,7 +180,8 @@ struct dpni_ext_set_rx_tc_dist {
 	struct dpni_dist_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
 };
 
-int dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
-			 uint8_t *key_cfg_buf);
+int
+dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
+	void *key_cfg_buf);
 
 #endif /* __FSL_DPKG_H_ */
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index ce84f4265e..3a5fcfa8a5 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPNI_H
@@ -116,6 +116,11 @@ struct fsl_mc_io;
  * Flow steering table is shared between all traffic classes
  */
 #define DPNI_OPT_SHARED_FS				0x001000
+/*
+ * Fq frame data, context and annotations stashing disable.
+ * The stashing is enabled by default.
+ */
+#define DPNI_OPT_STASHING_DIS			0x002000
 /**
  * Software sequence maximum layout size
  */
@@ -147,6 +152,7 @@ int dpni_close(struct fsl_mc_io *mc_io,
  *		DPNI_OPT_HAS_KEY_MASKING
  *		DPNI_OPT_NO_FS
  *		DPNI_OPT_SINGLE_SENDER
+ *		DPNI_OPT_STASHING_DIS
  * @fs_entries: Number of entries in the flow steering table.
  *		This table is used to select the ingress queue for
  *		ingress traffic, targeting a GPP core or another.
@@ -335,6 +341,7 @@ int dpni_clear_irq_status(struct fsl_mc_io *mc_io,
  *		DPNI_OPT_SHARED_CONGESTION
  *		DPNI_OPT_HAS_KEY_MASKING
  *		DPNI_OPT_NO_FS
+ *		DPNI_OPT_STASHING_DIS
  * @num_queues: Number of Tx and Rx queues used for traffic distribution.
  * @num_rx_tcs: Number of RX traffic classes (TCs), reserved for the DPNI.
  * @num_tx_tcs: Number of TX traffic classes (TCs), reserved for the DPNI.
@@ -394,7 +401,7 @@ int dpni_get_attributes(struct fsl_mc_io *mc_io,
  * error queue. To be used in dpni_set_errors_behavior() only if error_action
  * parameter is set to DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE.
  */
-#define DPNI_ERROR_DISC		0x80000000
+#define DPNI_ERROR_DISC			0x80000000
 
 /**
  * Extract out of frame header error
@@ -576,6 +583,8 @@ enum dpni_offload {
 	DPNI_OFF_TX_L3_CSUM,
 	DPNI_OFF_TX_L4_CSUM,
 	DPNI_FLCTYPE_HASH,
+	DPNI_HEADER_STASHING,
+	DPNI_PAYLOAD_STASHING,
 };
 
 int dpni_set_offload(struct fsl_mc_io *mc_io,
@@ -596,6 +605,26 @@ int dpni_get_qdid(struct fsl_mc_io *mc_io,
 		  enum dpni_queue_type qtype,
 		  uint16_t *qdid);
 
+int dpni_get_qdid_ex(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  enum dpni_queue_type qtype,
+		  uint16_t *qdid);
+
+/**
+ * struct dpni_sp_info - Structure representing DPNI storage-profile information
+ * (relevant only for DPNI owned by AIOP)
+ * @spids: array of storage-profiles
+ */
+struct dpni_sp_info {
+	uint16_t spids[DPNI_MAX_SP];
+};
+
+int dpni_get_sp_info(struct fsl_mc_io *mc_io,
+		     uint32_t cmd_flags,
+		     uint16_t token,
+		     struct dpni_sp_info *sp_info);
+
 int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
 			    uint32_t cmd_flags,
 			    uint16_t token,
@@ -1443,11 +1472,25 @@ enum dpni_confirmation_mode {
 int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode mode);
 
 int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
+				  enum dpni_confirmation_mode *mode);
+
+int dpni_set_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode mode);
+
+int dpni_get_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
 				  enum dpni_confirmation_mode *mode);
 
 /**
@@ -1841,6 +1884,60 @@ void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
 				     const uint8_t *sw_sequence_layout_buf);
 
 /**
+ * When used as the queue_idx in dpni_set_rx_dist_default_queue, this value signals
+ * the dpni to drop all unclassified frames
+ */
+#define DPNI_FS_MISS_DROP		((uint16_t)-1)
+
+/**
+ * struct dpni_rx_dist_cfg - distribution configuration
+ * @dist_size:	distribution size; supported values: 1,2,3,4,6,7,8,
+ *		12,14,16,24,28,32,48,56,64,96,112,128,192,224,256,384,448,
+ *		512,768,896,1024
+ * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
+ *		the extractions to be used for the distribution key by calling
+ *		dpkg_prepare_key_cfg(); relevant only when enable != 0, otherwise it can be '0'
+ * @enable: enable/disable the distribution.
+ * @tc: TC id for which distribution is set
+ * @fs_miss_flow_id: when a packet misses all rules from the flow steering table and hash is
+ *		disabled, it will be put into this queue id; use DPNI_FS_MISS_DROP to drop
+ *		frames. The value of this field is used only when flow steering distribution
+ *		is enabled and hash distribution is disabled
+ */
+struct dpni_rx_dist_cfg {
+	uint16_t dist_size;
+	uint64_t key_cfg_iova;
+	uint8_t enable;
+	uint8_t tc;
+	uint16_t fs_miss_flow_id;
+};
+
+int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		const struct dpni_rx_dist_cfg *cfg);
+
+int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		const struct dpni_rx_dist_cfg *cfg);
+
+int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t tpid);
+
+int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t tpid);
+
+/**
+ * struct dpni_custom_tpid_cfg - custom TPID configuration. Contains custom TPID values
+ *		used in the current dpni object to detect 802.1q frames.
+ *	@tpid1: first tag. Not used if zero.
+ *	@tpid2: second tag. Not used if zero.
+ */
+struct dpni_custom_tpid_cfg {
+	uint16_t tpid1;
+	uint16_t tpid2;
+};
+
+int dpni_get_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_custom_tpid_cfg *tpid);
+/**
  * struct dpni_ptp_cfg - configure single step PTP (IEEE 1588)
  *	@en: enable single step PTP. When enabled the PTPv1 functionality will
  *		not work. If the field is zero, offset and ch_update parameters
@@ -1858,6 +1955,7 @@ struct dpni_single_step_cfg {
 	uint8_t ch_update;
 	uint16_t offset;
 	uint32_t peer_delay;
+	uint32_t ptp_onestep_reg_base;
 };
 
 int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
@@ -1885,61 +1983,35 @@ int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 		uint16_t token, struct dpni_port_cfg *port_cfg);
 
-/**
- * When used for queue_idx in function dpni_set_rx_dist_default_queue will
- * signal to dpni to drop all unclassified frames
- */
-#define DPNI_FS_MISS_DROP		((uint16_t)-1)
-
-/**
- * struct dpni_rx_dist_cfg - distribution configuration
- * @dist_size:	distribution size; supported values: 1,2,3,4,6,7,8,
- *		12,14,16,24,28,32,48,56,64,96,112,128,192,224,256,384,448,
- *		512,768,896,1024
- * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
- *		the extractions to be used for the distribution key by calling
- *		dpkg_prepare_key_cfg() relevant only when enable!=0 otherwise
- *		it can be '0'
- * @enable: enable/disable the distribution.
- * @tc: TC id for which distribution is set
- * @fs_miss_flow_id: when packet misses all rules from flow steering table and
- *		hash is disabled it will be put into this queue id; use
- *		DPNI_FS_MISS_DROP to drop frames. The value of this field is
- *		used only when flow steering distribution is enabled and hash
- *		distribution is disabled
- */
-struct dpni_rx_dist_cfg {
-	uint16_t dist_size;
-	uint64_t key_cfg_iova;
-	uint8_t enable;
-	uint8_t tc;
-	uint16_t fs_miss_flow_id;
+enum dpni_table_type {
+	DPNI_FS_TABLE = 1,
+	DPNI_MAC_TABLE = 2,
+	DPNI_QOS_TABLE = 3,
+	DPNI_VLAN_TABLE = 4,
 };
 
-int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, const struct dpni_rx_dist_cfg *cfg);
-
-int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, const struct dpni_rx_dist_cfg *cfg);
-
-int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, uint16_t tpid);
-
-int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, uint16_t tpid);
+int dpni_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries);
 
 /**
- * struct dpni_custom_tpid_cfg - custom TPID configuration. Contains custom TPID
- *	values used in current dpni object to detect 802.1q frames.
- *	@tpid1: first tag. Not used if zero.
- *	@tpid2: second tag. Not used if zero.
+ * SP Profile on Ingress DPNI
  */
-struct dpni_custom_tpid_cfg {
-	uint16_t tpid1;
-	uint16_t tpid2;
-};
+#define DPNI_SP_PROFILE_INGRESS 0x1
+/**
+ * SP Profile on Egress DPNI
+ */
+#define DPNI_SP_PROFILE_EGRESS	0x2
+
+int dpni_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type);
 
-int dpni_get_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, struct dpni_custom_tpid_cfg *tpid);
+int dpni_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t type, uint8_t en);
 
 #endif /* __FSL_DPNI_H */
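
One-step PTP configuration sketch (illustration, not part of the patch; the
offset value is an assumption locating the PTP header within the frame):

	struct dpni_single_step_cfg ptp = {
		.en = 1,
		.offset = 14,	/* assumed: PTP header follows the Ethernet header */
		.peer_delay = 0,
	};
	int err;

	err = dpni_set_single_step_cfg(dpni, CMD_PRI_LOW, priv->token, &ptp);
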
diff --git a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
index 781f936add..1152182e34 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2022 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPNI_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPNI Version */
 #define DPNI_VER_MAJOR				8
-#define DPNI_VER_MINOR				2
+#define DPNI_VER_MINOR				4
 
 #define DPNI_CMD_BASE_VERSION			1
 #define DPNI_CMD_VERSION_2			2
@@ -108,8 +108,8 @@
 #define DPNI_CMDID_GET_EARLY_DROP		DPNI_CMD_V3(0x26A)
 #define DPNI_CMDID_GET_OFFLOAD			DPNI_CMD_V2(0x26B)
 #define DPNI_CMDID_SET_OFFLOAD			DPNI_CMD_V2(0x26C)
-#define DPNI_CMDID_SET_TX_CONFIRMATION_MODE	DPNI_CMD(0x266)
-#define DPNI_CMDID_GET_TX_CONFIRMATION_MODE	DPNI_CMD(0x26D)
+#define DPNI_CMDID_SET_TX_CONFIRMATION_MODE	DPNI_CMD_V2(0x266)
+#define DPNI_CMDID_GET_TX_CONFIRMATION_MODE	DPNI_CMD_V2(0x26D)
 #define DPNI_CMDID_SET_OPR			DPNI_CMD_V2(0x26e)
 #define DPNI_CMDID_GET_OPR			DPNI_CMD_V2(0x26f)
 #define DPNI_CMDID_LOAD_SW_SEQUENCE		DPNI_CMD(0x270)
@@ -121,7 +121,16 @@
 #define DPNI_CMDID_REMOVE_CUSTOM_TPID		DPNI_CMD(0x276)
 #define DPNI_CMDID_GET_CUSTOM_TPID		DPNI_CMD(0x277)
 #define DPNI_CMDID_GET_LINK_CFG			DPNI_CMD(0x278)
+#define DPNI_CMDID_SET_SINGLE_STEP_CFG			DPNI_CMD(0x279)
+#define DPNI_CMDID_GET_SINGLE_STEP_CFG		DPNI_CMD_V2(0x27a)
 #define DPNI_CMDID_SET_PORT_CFG			DPNI_CMD(0x27B)
+#define DPNI_CMDID_GET_PORT_CFG			DPNI_CMD(0x27C)
+#define DPNI_CMDID_DUMP_TABLE           DPNI_CMD(0x27D)
+#define DPNI_CMDID_SET_SP_PROFILE		DPNI_CMD(0x27E)
+#define DPNI_CMDID_GET_QDID_EX			DPNI_CMD(0x27F)
+#define DPNI_CMDID_SP_ENABLE		    DPNI_CMD(0x280)
+#define DPNI_CMDID_SET_QUEUE_TX_CONFIRMATION_MODE	DPNI_CMD(0x281)
+#define DPNI_CMDID_GET_QUEUE_TX_CONFIRMATION_MODE	DPNI_CMD(0x282)
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPNI_MASK(field)	\
@@ -329,6 +338,10 @@ struct dpni_rsp_get_qdid {
 	uint16_t qdid;
 };
 
+struct dpni_rsp_get_qdid_ex {
+	uint16_t qdid[16];
+};
+
 struct dpni_rsp_get_sp_info {
 	uint16_t spids[2];
 };
@@ -748,7 +761,16 @@ struct dpni_cmd_set_taildrop {
 };
 
 struct dpni_tx_confirmation_mode {
-	uint32_t pad;
+	uint8_t ceetm_ch_idx;
+	uint8_t pad1;
+	uint16_t pad2;
+	uint8_t confirmation_mode;
+};
+
+struct dpni_queue_tx_confirmation_mode {
+	uint8_t ceetm_ch_idx;
+	uint8_t index;
+	uint16_t pad;
 	uint8_t confirmation_mode;
 };
 
@@ -894,6 +916,42 @@ struct dpni_sw_sequence_layout_entry {
 	uint16_t pad;
 };
 
+#define DPNI_RX_FS_DIST_ENABLE_SHIFT	0
+#define DPNI_RX_FS_DIST_ENABLE_SIZE		1
+struct dpni_cmd_set_rx_fs_dist {
+	uint16_t	dist_size;
+	uint8_t		enable;
+	uint8_t		tc;
+	uint16_t	miss_flow_id;
+	uint16_t	pad1;
+	uint64_t	key_cfg_iova;
+};
+
+#define DPNI_RX_HASH_DIST_ENABLE_SHIFT	0
+#define DPNI_RX_HASH_DIST_ENABLE_SIZE		1
+struct dpni_cmd_set_rx_hash_dist {
+	uint16_t	dist_size;
+	uint8_t		enable;
+	uint8_t		tc_id;
+	uint32_t	pad;
+	uint64_t	key_cfg_iova;
+};
+
+struct dpni_cmd_add_custom_tpid {
+	uint16_t	pad;
+	uint16_t	tpid;
+};
+
+struct dpni_cmd_remove_custom_tpid {
+	uint16_t	pad;
+	uint16_t	tpid;
+};
+
+struct dpni_rsp_get_custom_tpid {
+	uint16_t	tpid1;
+	uint16_t	tpid2;
+};
+
 #define DPNI_PTP_ENABLE_SHIFT			0
 #define DPNI_PTP_ENABLE_SIZE			1
 #define DPNI_PTP_CH_UPDATE_SHIFT		1
@@ -925,40 +983,45 @@ struct dpni_rsp_get_port_cfg {
 	uint32_t	bit_params;
 };
 
-#define DPNI_RX_FS_DIST_ENABLE_SHIFT	0
-#define DPNI_RX_FS_DIST_ENABLE_SIZE		1
-struct dpni_cmd_set_rx_fs_dist {
-	uint16_t	dist_size;
-	uint8_t		enable;
-	uint8_t		tc;
-	uint16_t	miss_flow_id;
-	uint16_t	pad1;
-	uint64_t	key_cfg_iova;
+struct dpni_cmd_dump_table {
+	uint16_t table_type;
+	uint16_t table_index;
+	uint32_t pad0;
+	uint64_t iova_addr;
+	uint32_t iova_size;
 };
 
-#define DPNI_RX_HASH_DIST_ENABLE_SHIFT	0
-#define DPNI_RX_HASH_DIST_ENABLE_SIZE		1
-struct dpni_cmd_set_rx_hash_dist {
-	uint16_t	dist_size;
-	uint8_t		enable;
-	uint8_t		tc_id;
-	uint32_t	pad;
-	uint64_t	key_cfg_iova;
+struct dpni_rsp_dump_table {
+	uint16_t num_entries;
 };
 
-struct dpni_cmd_add_custom_tpid {
-	uint16_t	pad;
-	uint16_t	tpid;
+struct dump_table_header {
+	uint16_t table_type;
+	uint16_t table_num_entries;
+	uint16_t table_max_entries;
+	uint8_t default_action;
+	uint8_t match_type;
+	uint8_t reserved[24];
 };
 
-struct dpni_cmd_remove_custom_tpid {
-	uint16_t	pad;
-	uint16_t	tpid;
+struct dump_table_entry {
+	uint8_t key[DPNI_MAX_KEY_SIZE];
+	uint8_t mask[DPNI_MAX_KEY_SIZE];
+	uint8_t key_action;
+	uint16_t result[3];
+	uint8_t reserved[21];
 };
 
-struct dpni_rsp_get_custom_tpid {
-	uint16_t	tpid1;
-	uint16_t	tpid2;
+#define MAX_SP_PROFILE_ID_SIZE	8
+
+struct dpni_cmd_set_sp_profile {
+	uint8_t sp_profile[MAX_SP_PROFILE_ID_SIZE];
+	uint8_t type;
+};
+
+struct dpni_cmd_sp_enable {
+	uint8_t type;
+	uint8_t en;
 };
 
 #pragma pack(pop)
-- 
2.25.1



* [v1 09/43] net/dpaa2: support link state for eth interfaces
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (7 preceding siblings ...)
  2024-09-13  5:59 ` [v1 08/43] bus/fslmc: upgrade with MC version 10.37 vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 10/43] net/dpaa2: update DPNI link status method vanshika.shukla
                   ` (34 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

This patch adds support to update the duplex value, along with the
link status and link speed, after setting the link UP.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
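With this change an application sees the duplex value through the standard
link query, e.g. (sketch, assuming a configured port_id):

	struct rte_eth_link link;

	rte_eth_link_get_nowait(port_id, &link);
	printf("speed %u Mbps, %s duplex\n", link.link_speed,
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ? "full" : "half");
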
 drivers/net/dpaa2/dpaa2_ethdev.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 4dc7a82b47..9172097abf 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1985,7 +1985,7 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	if (ret) {
 		/* Unable to obtain dpni status; Not continuing */
 		DPAA2_PMD_ERR("Interface Link UP failed (%d)", ret);
-		return -EINVAL;
+		return ret;
 	}
 
 	/* Enable link if not already enabled */
@@ -1993,13 +1993,13 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 		ret = dpni_enable(dpni, CMD_PRI_LOW, priv->token);
 		if (ret) {
 			DPAA2_PMD_ERR("Interface Link UP failed (%d)", ret);
-			return -EINVAL;
+			return ret;
 		}
 	}
 	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
 	if (ret < 0) {
 		DPAA2_PMD_DEBUG("Unable to get link state (%d)", ret);
-		return -1;
+		return ret;
 	}
 
 	/* changing tx burst function to start enqueues */
@@ -2007,10 +2007,15 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	dev->data->dev_link.link_status = state.up;
 	dev->data->dev_link.link_speed = state.rate;
 
+	if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
+		dev->data->dev_link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	else
+		dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+
 	if (state.up)
-		DPAA2_PMD_INFO("Port %d Link is Up", dev->data->port_id);
+		DPAA2_PMD_DEBUG("Port %d Link is Up", dev->data->port_id);
 	else
-		DPAA2_PMD_INFO("Port %d Link is Down", dev->data->port_id);
+		DPAA2_PMD_DEBUG("Port %d Link is Down", dev->data->port_id);
 	return ret;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 10/43] net/dpaa2: update DPNI link status method
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (8 preceding siblings ...)
  2024-09-13  5:59 ` [v1 09/43] net/dpaa2: support link state for eth interfaces vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 11/43] net/dpaa2: add new PMD API to check dpaa platform version vanshika.shukla
                   ` (33 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Brick Yang, Rohit Raj

From: Brick Yang <brick.yang@nxp.com>

If an SFP module is not connected to the port and flow control is
configured through the flow control API, the link shows DOWN even
after the SFP module and fiber cable are connected.

The issue cannot be reproduced if only the SFP module is connected
and the fiber cable is disconnected before configuring flow control,
even though the link is down in that case too.

This patch fixes the issue by reading the configuration from the
dpni_get_link_cfg API, which returns the static configuration data,
instead of the dpni_get_link_state API.
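
For reference, the PAUSE/ASYM_PAUSE decoding implemented below can be
summarized as (a sketch mirroring the logic in the diff):

/* DPNI_LINK_OPT_PAUSE  DPNI_LINK_OPT_ASYM_PAUSE  ->  fc_conf->mode
 *        set                  clear                  RTE_ETH_FC_FULL
 *        set                  set                    RTE_ETH_FC_RX_PAUSE
 *        clear                set                    RTE_ETH_FC_TX_PAUSE
 *        clear                clear                  RTE_ETH_FC_NONE
 */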

Signed-off-by: Brick Yang <brick.yang@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 9172097abf..2fb9b8ea95 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2084,7 +2084,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	int ret = -EINVAL;
 	struct dpaa2_dev_priv *priv;
 	struct fsl_mc_io *dpni;
-	struct dpni_link_state state = {0};
+	struct dpni_link_cfg cfg = {0};
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -2096,14 +2096,14 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
+	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("error: dpni_get_link_state %d", ret);
+		DPAA2_PMD_ERR("error: dpni_get_link_cfg %d", ret);
 		return ret;
 	}
 
 	memset(fc_conf, 0, sizeof(struct rte_eth_fc_conf));
-	if (state.options & DPNI_LINK_OPT_PAUSE) {
+	if (cfg.options & DPNI_LINK_OPT_PAUSE) {
 		/* DPNI_LINK_OPT_PAUSE set
 		 *  if ASYM_PAUSE not set,
 		 *	RX Side flow control (handle received Pause frame)
@@ -2112,7 +2112,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	RX Side flow control (handle received Pause frame)
 		 *	No TX side flow control (send Pause frame disabled)
 		 */
-		if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
+		if (!(cfg.options & DPNI_LINK_OPT_ASYM_PAUSE))
 			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
 			fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
@@ -2124,7 +2124,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *  if ASYM_PAUSE not set,
 		 *	Flow control disabled
 		 */
-		if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
+		if (cfg.options & DPNI_LINK_OPT_ASYM_PAUSE)
 			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
 			fc_conf->mode = RTE_ETH_FC_NONE;
@@ -2139,7 +2139,6 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	int ret = -EINVAL;
 	struct dpaa2_dev_priv *priv;
 	struct fsl_mc_io *dpni;
-	struct dpni_link_state state = {0};
 	struct dpni_link_cfg cfg = {0};
 
 	PMD_INIT_FUNC_TRACE();
@@ -2152,23 +2151,19 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	/* It is necessary to obtain the current state before setting fc_conf
+	/* It is necessary to obtain the current cfg before setting fc_conf
 	 * as MC would return error in case rate, autoneg or duplex values are
 	 * different.
 	 */
-	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
+	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("Unable to get link state (err=%d)", ret);
+		DPAA2_PMD_ERR("Unable to get link cfg (err=%d)", ret);
 		return -1;
 	}
 
 	/* Disable link before setting configuration */
 	dpaa2_dev_set_link_down(dev);
 
-	/* Based on fc_conf, update cfg */
-	cfg.rate = state.rate;
-	cfg.options = state.options;
-
 	/* update cfg with fc_conf */
 	switch (fc_conf->mode) {
 	case RTE_ETH_FC_FULL:
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 11/43] net/dpaa2: add new PMD API to check dpaa platform version
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (9 preceding siblings ...)
  2024-09-13  5:59 ` [v1 10/43] net/dpaa2: update DPNI link status method vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 12/43] bus/fslmc: improve BMAN buffer acquire vanshika.shukla
                   ` (32 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

This patch adds support for applications to check the DPAA platform
type of a given port.
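
A minimal usage sketch (illustrative, not part of this patch) of the
new API from an application:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_pmd_dpaa2.h>

static void
list_dpaa2_ports(void)
{
	uint16_t port_id;

	/* rte_pmd_dpaa2_dev_is_dpaa2() validates the port id and
	 * driver internally, so it is safe on any port number.
	 */
	RTE_ETH_FOREACH_DEV(port_id) {
		if (rte_pmd_dpaa2_dev_is_dpaa2(port_id))
			printf("port %u is a dpaa2 port\n", port_id);
	}
}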

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 16 +++++++++++++---
 drivers/net/dpaa2/dpaa2_flow.c    |  5 ++---
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  4 ++++
 drivers/net/dpaa2/version.map     |  1 +
 4 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 2fb9b8ea95..f0b4843472 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2158,7 +2158,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
 		DPAA2_PMD_ERR("Unable to get link cfg (err=%d)", ret);
-		return -1;
+		return ret;
 	}
 
 	/* Disable link before setting configuration */
@@ -2200,7 +2200,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	default:
 		DPAA2_PMD_ERR("Incorrect Flow control flag (%d)",
 			      fc_conf->mode);
-		return -1;
+		return -EINVAL;
 	}
 
 	ret = dpni_set_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
@@ -2882,8 +2882,18 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
-int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
+int
+rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 {
+	struct rte_eth_dev *dev;
+
+	if (eth_id >= RTE_MAX_ETHPORTS)
+		return false;
+
+	dev = &rte_eth_devices[eth_id];
+	if (!dev->device)
+		return false;
+
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 6c7bac4d48..15f3343db4 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -3300,14 +3300,13 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 	if (idx >= 0) {
 		if (!rte_eth_dev_is_valid_port(idx))
 			return NULL;
+		if (!rte_pmd_dpaa2_dev_is_dpaa2(idx))
+			return NULL;
 		dest_dev = &rte_eth_devices[idx];
 	} else {
 		dest_dev = priv->eth_dev;
 	}
 
-	if (!dpaa2_dev_is_dpaa2(dest_dev))
-		return NULL;
-
 	return dest_dev;
 }
 
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index bebebcacdc..fc52a9218e 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -127,6 +127,10 @@ __rte_experimental
 uint32_t
 rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
 
+__rte_experimental
+int
+rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id);
+
 #if defined(RTE_LIBRTE_IEEE1588)
 __rte_experimental
 int
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 7323fc8869..233c6e6b2c 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -17,6 +17,7 @@ EXPERIMENTAL {
 	# added in 21.11
 	rte_pmd_dpaa2_get_tlu_hash;
 	# added in 24.11
+	rte_pmd_dpaa2_dev_is_dpaa2;
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
 	rte_pmd_dpaa2_mux_dump_counter;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 12/43] bus/fslmc: improve BMAN buffer acquire
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (10 preceding siblings ...)
  2024-09-13  5:59 ` [v1 11/43] net/dpaa2: add new PMD API to check dpaa platform version vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 13/43] bus/fslmc: get MC VFIO group FD directly vanshika.shukla
                   ` (31 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Ignore the reserved bits of the BMan acquire response count.
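
The response byte carries the valid buffer count only in its low
three bits; a small sketch with illustrative values:

#define BMAN_VALID_RSLT_NUM_MASK 0x7

uint8_t raw = 0xfb;	/* hardware may set the reserved high bits */
int num = raw & BMAN_VALID_RSLT_NUM_MASK;	/* num == 3 valid buffers */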

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 1f24cdce7e..3fdca9761d 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2020,2023-2024 NXP
  *
  */
 
@@ -42,6 +42,8 @@
 /* opaque token for static dequeues */
 #define QMAN_SDQCR_TOKEN    0xbb
 
+#define BMAN_VALID_RSLT_NUM_MASK 0x7
+
 enum qbman_sdqcr_dct {
 	qbman_sdqcr_dct_null = 0,
 	qbman_sdqcr_dct_prio_ics,
@@ -2628,7 +2630,7 @@ struct qbman_acquire_rslt {
 	uint16_t reserved;
 	uint8_t num;
 	uint8_t reserved2[3];
-	uint64_t buf[7];
+	uint64_t buf[BMAN_VALID_RSLT_NUM_MASK];
 };
 
 static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
@@ -2636,8 +2638,9 @@ static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
 {
 	struct qbman_acquire_desc *p;
 	struct qbman_acquire_rslt *r;
+	int num;
 
-	if (!num_buffers || (num_buffers > 7))
+	if (!num_buffers || (num_buffers > BMAN_VALID_RSLT_NUM_MASK))
 		return -EINVAL;
 
 	/* Start the management command */
@@ -2668,12 +2671,13 @@ static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
 		return -EIO;
 	}
 
-	QBMAN_BUG_ON(r->num > num_buffers);
+	num = r->num & BMAN_VALID_RSLT_NUM_MASK;
+	QBMAN_BUG_ON(num > num_buffers);
 
 	/* Copy the acquired buffers to the caller's array */
-	u64_from_le32_copy(buffers, &r->buf[0], r->num);
+	u64_from_le32_copy(buffers, &r->buf[0], num);
 
-	return (int)r->num;
+	return num;
 }
 
 static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
@@ -2681,8 +2685,9 @@ static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
 {
 	struct qbman_acquire_desc *p;
 	struct qbman_acquire_rslt *r;
+	int num;
 
-	if (!num_buffers || (num_buffers > 7))
+	if (!num_buffers || (num_buffers > BMAN_VALID_RSLT_NUM_MASK))
 		return -EINVAL;
 
 	/* Start the management command */
@@ -2713,12 +2718,13 @@ static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
 		return -EIO;
 	}
 
-	QBMAN_BUG_ON(r->num > num_buffers);
+	num = r->num & BMAN_VALID_RSLT_NUM_MASK;
+	QBMAN_BUG_ON(num > num_buffers);
 
 	/* Copy the acquired buffers to the caller's array */
-	u64_from_le32_copy(buffers, &r->buf[0], r->num);
+	u64_from_le32_copy(buffers, &r->buf[0], num);
 
-	return (int)r->num;
+	return num;
 }
 
 int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 13/43] bus/fslmc: get MC VFIO group FD directly
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (11 preceding siblings ...)
  2024-09-13  5:59 ` [v1 12/43] bus/fslmc: improve BMAN buffer acquire vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 14/43] bus/fslmc: enhance MC VFIO multiprocess support vanshika.shukla
                   ` (30 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Get the VFIO group fd directly from the file system instead of from
the RTE API to avoid conflicts with PCIe VFIO.
FSL MC VFIO should have its own logic which does NOT depend on
RTE VFIO.
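
A sketch of the direct open performed by the primary process; the
"/dev/vfio/%d" path format and error handling here are assumptions
based on the standard VFIO group naming:

#include <stdio.h>
#include <fcntl.h>
#include <limits.h>

static int
open_group_fd(int iommu_group_num)
{
	char path[PATH_MAX];

	snprintf(path, sizeof(path), "/dev/vfio/%d", iommu_group_num);
	/* a negative fd means the group is not bound to VFIO */
	return open(path, O_RDWR);
}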

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 88 ++++++++++++++++++++++++++--------
 drivers/bus/fslmc/meson.build  |  3 +-
 2 files changed, 71 insertions(+), 20 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 17163333af..1cc256f849 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2021 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -30,6 +30,7 @@
 #include <rte_kvargs.h>
 #include <dev_driver.h>
 #include <rte_eal_memconfig.h>
+#include <eal_vfio.h>
 
 #include "private.h"
 #include "fslmc_vfio.h"
@@ -440,6 +441,59 @@ int rte_fslmc_vfio_dmamap(void)
 	return 0;
 }
 
+static int
+fslmc_vfio_open_group_fd(int iommu_group_num)
+{
+	int vfio_group_fd;
+	char filename[PATH_MAX];
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
+
+	/* if primary, try to open the group */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		/* try regular group format */
+		snprintf(filename, sizeof(filename),
+			VFIO_GROUP_FMT, iommu_group_num);
+		vfio_group_fd = open(filename, O_RDWR);
+		if (vfio_group_fd <= 0) {
+			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
+				filename, vfio_group_fd);
+		}
+
+		return vfio_group_fd;
+	}
+	/* if we're in a secondary process, request group fd from the primary
+	 * process via mp channel.
+	 */
+	p->req = SOCKET_REQ_GROUP;
+	p->group_num = iommu_group_num;
+	strcpy(mp_req.name, EAL_VFIO_MP);
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_group_fd = -1;
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+	    mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		p = (struct vfio_mp_param *)mp_rep->param;
+		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+			vfio_group_fd = mp_rep->fds[0];
+		} else if (p->result == SOCKET_NO_FD) {
+			DPAA2_BUS_ERR("Bad VFIO group fd");
+			vfio_group_fd = 0;
+		}
+	}
+
+	free(mp_reply.msgs);
+	if (vfio_group_fd < 0) {
+		DPAA2_BUS_ERR("Cannot request group fd(%d)",
+			vfio_group_fd);
+	}
+	return vfio_group_fd;
+}
+
 static int
 fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		int *vfio_dev_fd, struct vfio_device_info *device_info)
@@ -455,7 +509,7 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		return -1;
 
 	/* get the actual group fd */
-	vfio_group_fd = rte_vfio_get_group_fd(iommu_group_no);
+	vfio_group_fd = vfio_group.fd;
 	if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT)
 		return -1;
 
@@ -891,6 +945,11 @@ fslmc_vfio_close_group(void)
 		}
 	}
 
+	if (vfio_group.fd > 0) {
+		close(vfio_group.fd);
+		vfio_group.fd = 0;
+	}
+
 	return 0;
 }
 
@@ -1081,7 +1140,6 @@ fslmc_vfio_setup_group(void)
 {
 	int groupid;
 	int ret;
-	int vfio_container_fd;
 	struct vfio_group_status status = { .argsz = sizeof(status) };
 
 	/* if already done once */
@@ -1100,16 +1158,9 @@ fslmc_vfio_setup_group(void)
 		return 0;
 	}
 
-	ret = rte_vfio_container_create();
-	if (ret < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return ret;
-	}
-	vfio_container_fd = ret;
-
 	/* Get the actual group fd */
-	ret = rte_vfio_container_group_bind(vfio_container_fd, groupid);
-	if (ret < 0)
+	ret = fslmc_vfio_open_group_fd(groupid);
+	if (ret <= 0)
 		return ret;
 	vfio_group.fd = ret;
 
@@ -1118,14 +1169,14 @@ fslmc_vfio_setup_group(void)
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO error getting group status");
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return ret;
 	}
 
 	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
 		DPAA2_BUS_ERR("VFIO group not viable");
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return -EPERM;
 	}
 	/* Since Group is VIABLE, Store the groupid */
@@ -1136,11 +1187,10 @@ fslmc_vfio_setup_group(void)
 		/* Now connect this IOMMU group to given container */
 		ret = vfio_connect_container();
 		if (ret) {
-			DPAA2_BUS_ERR(
-				"Error connecting container with groupid %d",
-				groupid);
+			DPAA2_BUS_ERR("vfio group(%d) connect failed(%d)",
+				groupid, ret);
 			close(vfio_group.fd);
-			rte_vfio_clear_group(vfio_group.fd);
+			vfio_group.fd = 0;
 			return ret;
 		}
 	}
@@ -1151,7 +1201,7 @@ fslmc_vfio_setup_group(void)
 		DPAA2_BUS_ERR("Error getting device %s fd from group %d",
 			      fslmc_container, vfio_group.groupid);
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return ret;
 	}
 	container_device_fd = ret;
diff --git a/drivers/bus/fslmc/meson.build b/drivers/bus/fslmc/meson.build
index 162ca286fe..70098ad778 100644
--- a/drivers/bus/fslmc/meson.build
+++ b/drivers/bus/fslmc/meson.build
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2018,2021 NXP
+# Copyright 2018-2023 NXP
 
 if not is_linux
     build = false
@@ -27,3 +27,4 @@ sources = files(
 )
 
 includes += include_directories('mc', 'qbman/include', 'portal')
+includes += include_directories('../../../lib/eal/linux')
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 14/43] bus/fslmc: enhance MC VFIO multiprocess support
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (12 preceding siblings ...)
  2024-09-13  5:59 ` [v1 13/43] bus/fslmc: get MC VFIO group FD directly vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 15/43] bus/fslmc: free VFIO group FD in case of add group failure vanshika.shukla
                   ` (29 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

MC VFIO is not registered with RTE VFIO. The primary process
registers an MC VFIO mp action for secondary processes to request;
the VFIO group/container handles are passed via CMSG. The primary
process is responsible for connecting the MC VFIO group to the
container.

In addition, the MC VFIO code is refactored according to the
container/group logic. In general, a VFIO container can hold
multiple groups per process. For now only a single MC group (dprc.x)
is supported per process, but the logic to connect multiple MC
groups to a container is already in place.
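
The secondary-process handshake added below can be summarized as
follows (a sketch; the names are those introduced by this patch):

/*
 *   secondary process                  primary process
 *   -----------------                  ---------------
 *   rte_mp_request_sync(
 *     "fslmc_vfio_mp_sync",
 *     SOCKET_REQ_GROUP or
 *     SOCKET_REQ_CONTAINER)  ------->  fslmc_vfio_mp_primary()
 *                                      looks up the cached fd
 *   reply.fds[0] holds the   <-------  rte_mp_reply() passes the
 *   group/container fd                 fd over the socket (CMSG)
 */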

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/fslmc_bus.c  |  14 +-
 drivers/bus/fslmc/fslmc_vfio.c | 997 ++++++++++++++++++++++-----------
 drivers/bus/fslmc/fslmc_vfio.h |  35 +-
 drivers/bus/fslmc/version.map  |   1 +
 4 files changed, 695 insertions(+), 352 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 7baadf99b9..654726dbe6 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -318,6 +318,7 @@ rte_fslmc_scan(void)
 	struct dirent *entry;
 	static int process_once;
 	int groupid;
+	char *group_name;
 
 	if (process_once) {
 		DPAA2_BUS_DEBUG("Fslmc bus already scanned. Not rescanning");
@@ -325,12 +326,19 @@ rte_fslmc_scan(void)
 	}
 	process_once = 1;
 
-	ret = fslmc_get_container_group(&groupid);
+	/* Now we only support single group per process.*/
+	group_name = getenv("DPRC");
+	if (!group_name) {
+		DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
+		return -EINVAL;
+	}
+
+	ret = fslmc_get_container_group(group_name, &groupid);
 	if (ret != 0)
 		goto scan_fail;
 
 	/* Scan devices on the group */
-	sprintf(fslmc_dirpath, "%s/%s", SYSFS_FSL_MC_DEVICES, fslmc_container);
+	sprintf(fslmc_dirpath, "%s/%s", SYSFS_FSL_MC_DEVICES, group_name);
 	dir = opendir(fslmc_dirpath);
 	if (!dir) {
 		DPAA2_BUS_ERR("Unable to open VFIO group directory");
@@ -338,7 +346,7 @@ rte_fslmc_scan(void)
 	}
 
 	/* Scan the DPRC container object */
-	ret = scan_one_fslmc_device(fslmc_container);
+	ret = scan_one_fslmc_device(group_name);
 	if (ret != 0) {
 		/* Error in parsing directory - exit gracefully */
 		goto scan_fail_cleanup;
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 1cc256f849..c6a010922e 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -42,12 +42,14 @@
 
 #define FSLMC_CONTAINER_MAX_LEN 8 /**< Of the format dprc.XX */
 
-/* Number of VFIO containers & groups with in */
-static struct fslmc_vfio_group vfio_group;
-static struct fslmc_vfio_container vfio_container;
-static int container_device_fd;
-char *fslmc_container;
-static int fslmc_iommu_type;
+#define FSLMC_VFIO_MP "fslmc_vfio_mp_sync"
+
+/* Container is composed by multiple groups, however,
+ * now each process only supports single group with in container.
+ */
+static struct fslmc_vfio_container s_vfio_container;
+/* Currently we only support single group/process. */
+const char *fslmc_group; /* dprc.x*/
 static uint32_t *msi_intr_vaddr;
 void *(*rte_mcp_ptr_list);
 
@@ -72,108 +74,547 @@ rte_fslmc_object_register(struct rte_dpaa2_object *object)
 	TAILQ_INSERT_TAIL(&dpaa2_obj_list, object, next);
 }
 
-int
-fslmc_get_container_group(int *groupid)
+static const char *
+fslmc_vfio_get_group_name(void)
 {
-	int ret;
-	char *container;
+	return fslmc_group;
+}
+
+static void
+fslmc_vfio_set_group_name(const char *group_name)
+{
+	fslmc_group = group_name;
+}
+
+static int
+fslmc_vfio_add_group(int vfio_group_fd,
+	int iommu_group_num, const char *group_name)
+{
+	struct fslmc_vfio_group *group;
+
+	group = rte_zmalloc(NULL, sizeof(struct fslmc_vfio_group), 0);
+	if (!group)
+		return -ENOMEM;
+	group->fd = vfio_group_fd;
+	group->groupid = iommu_group_num;
+	strcpy(group->group_name, group_name);
+	if (rte_vfio_noiommu_is_enabled() > 0)
+		group->iommu_type = RTE_VFIO_NOIOMMU;
+	else
+		group->iommu_type = VFIO_TYPE1_IOMMU;
+	LIST_INSERT_HEAD(&s_vfio_container.groups, group, next);
+
+	return 0;
+}
+
+static int
+fslmc_vfio_clear_group(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+	struct fslmc_vfio_device *dev;
+	int clear = 0;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			LIST_FOREACH(dev, &group->vfio_devices, next)
+				LIST_REMOVE(dev, next);
+
+			close(vfio_group_fd);
+			LIST_REMOVE(group, next);
+			rte_free(group);
+			clear = 1;
 
-	if (!fslmc_container) {
-		container = getenv("DPRC");
-		if (container == NULL) {
-			DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
-			return -EINVAL;
+			break;
 		}
+	}
 
-		if (strlen(container) >= FSLMC_CONTAINER_MAX_LEN) {
-			DPAA2_BUS_ERR("Invalid container name: %s", container);
-			return -1;
+	if (LIST_EMPTY(&s_vfio_container.groups)) {
+		if (s_vfio_container.fd > 0)
+			close(s_vfio_container.fd);
+
+		s_vfio_container.fd = -1;
+	}
+	if (clear)
+		return 0;
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_connect_container(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			group->connected = 1;
+
+			return 0;
+		}
+	}
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_container_connected(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			if (group->connected)
+				return 1;
+		}
+	}
+	return 0;
+}
+
+static int
+fslmc_vfio_iommu_type(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd)
+			return group->iommu_type;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_fd_by_name(const char *group_name)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (!strcmp(group->group_name, group_name))
+			return group->fd;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_fd_by_id(int group_id)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->groupid == group_id)
+			return group->fd;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_add_dev(int vfio_group_fd,
+	int dev_fd, const char *name)
+{
+	struct fslmc_vfio_group *group;
+	struct fslmc_vfio_device *dev;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			dev = rte_zmalloc(NULL,
+				sizeof(struct fslmc_vfio_device), 0);
+			dev->fd = dev_fd;
+			strcpy(dev->dev_name, name);
+			LIST_INSERT_HEAD(&group->vfio_devices, dev, next);
+			return 0;
 		}
+	}
+	return -ENODEV;
+}
 
-		fslmc_container = strdup(container);
-		if (!fslmc_container) {
-			DPAA2_BUS_ERR("Mem alloc failure; Container name");
-			return -ENOMEM;
+static int
+fslmc_vfio_group_remove_dev(int vfio_group_fd,
+	const char *name)
+{
+	struct fslmc_vfio_group *group = NULL;
+	struct fslmc_vfio_device *dev;
+	int removed = 0;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd)
+			break;
+	}
+
+	if (group) {
+		LIST_FOREACH(dev, &group->vfio_devices, next) {
+			if (!strcmp(dev->dev_name, name)) {
+				LIST_REMOVE(dev, next);
+				removed = 1;
+				break;
+			}
 		}
 	}
 
-	fslmc_iommu_type = (rte_vfio_noiommu_is_enabled() == 1) ?
-		RTE_VFIO_NOIOMMU : VFIO_TYPE1_IOMMU;
+	if (removed)
+		return 0;
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_container_fd(void)
+{
+	return s_vfio_container.fd;
+}
+
+static int
+fslmc_get_group_id(const char *group_name,
+	int *groupid)
+{
+	int ret;
 
 	/* get group number */
 	ret = rte_vfio_get_group_num(SYSFS_FSL_MC_DEVICES,
-				     fslmc_container, groupid);
+			group_name, groupid);
 	if (ret <= 0) {
-		DPAA2_BUS_ERR("Unable to find %s IOMMU group", fslmc_container);
-		return -1;
+		DPAA2_BUS_ERR("Unable to find %s IOMMU group", group_name);
+		if (ret < 0)
+			return ret;
+
+		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("Container: %s has VFIO iommu group id = %d",
-			fslmc_container, *groupid);
+	DPAA2_BUS_DEBUG("GROUP(%s) has VFIO iommu group id = %d",
+		group_name, *groupid);
 
 	return 0;
 }
 
 static int
-vfio_connect_container(void)
+fslmc_vfio_open_group_fd(const char *group_name)
 {
-	int fd, ret;
+	int vfio_group_fd;
+	char filename[PATH_MAX];
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
+	int iommu_group_num, ret;
 
-	if (vfio_container.used) {
-		DPAA2_BUS_DEBUG("No container available");
-		return -1;
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd > 0)
+		return vfio_group_fd;
+
+	ret = fslmc_get_group_id(group_name, &iommu_group_num);
+	if (ret)
+		return ret;
+	/* if primary, try to open the group */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		/* try regular group format */
+		snprintf(filename, sizeof(filename),
+			VFIO_GROUP_FMT, iommu_group_num);
+		vfio_group_fd = open(filename, O_RDWR);
+
+		goto add_vfio_group;
+	}
+	/* if we're in a secondary process, request group fd from the primary
+	 * process via mp channel.
+	 */
+	p->req = SOCKET_REQ_GROUP;
+	p->group_num = iommu_group_num;
+	strcpy(mp_req.name, FSLMC_VFIO_MP);
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_group_fd = -1;
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+	    mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		p = (struct vfio_mp_param *)mp_rep->param;
+		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+			vfio_group_fd = mp_rep->fds[0];
+		} else if (p->result == SOCKET_NO_FD) {
+			DPAA2_BUS_ERR("Bad VFIO group fd");
+			vfio_group_fd = 0;
+		}
 	}
 
-	/* Try connecting to vfio container if already created */
-	if (!ioctl(vfio_group.fd, VFIO_GROUP_SET_CONTAINER,
-		&vfio_container.fd)) {
-		DPAA2_BUS_DEBUG(
-		    "Container pre-exists with FD[0x%x] for this group",
-		    vfio_container.fd);
-		vfio_group.container = &vfio_container;
+	free(mp_reply.msgs);
+
+add_vfio_group:
+	if (vfio_group_fd <= 0) {
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
+				filename, vfio_group_fd);
+		} else {
+			DPAA2_BUS_ERR("Cannot request group fd(%d)",
+				vfio_group_fd);
+		}
+	} else {
+		ret = fslmc_vfio_add_group(vfio_group_fd, iommu_group_num,
+			group_name);
+		if (ret)
+			return ret;
+	}
+
+	return vfio_group_fd;
+}
+
+static int
+fslmc_vfio_check_extensions(int vfio_container_fd)
+{
+	int ret;
+	uint32_t idx, n_extensions = 0;
+	static const int type_id[] = {RTE_VFIO_TYPE1, RTE_VFIO_SPAPR,
+		RTE_VFIO_NOIOMMU};
+	static const char * const type_id_nm[] = {"Type 1",
+		"sPAPR", "No-IOMMU"};
+
+	for (idx = 0; idx < RTE_DIM(type_id); idx++) {
+		ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION,
+			type_id[idx]);
+		if (ret < 0) {
+			DPAA2_BUS_ERR("Could not get IOMMU type, error %i (%s)",
+				errno, strerror(errno));
+			close(vfio_container_fd);
+			return -errno;
+		} else if (ret == 1) {
+			/* we found a supported extension */
+			n_extensions++;
+		}
+		DPAA2_BUS_DEBUG("IOMMU type %d (%s) is %s",
+			type_id[idx], type_id_nm[idx],
+			ret ? "supported" : "not supported");
+	}
+
+	/* if we didn't find any supported IOMMU types, fail */
+	if (!n_extensions) {
+		close(vfio_container_fd);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static int
+fslmc_vfio_open_container_fd(void)
+{
+	int ret, vfio_container_fd;
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (void *)mp_req.param;
+
+	if (fslmc_vfio_container_fd() > 0)
+		return fslmc_vfio_container_fd();
+
+	/* if we're in a primary process, try to open the container */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR);
+		if (vfio_container_fd < 0) {
+			DPAA2_BUS_ERR("Cannot open VFIO container(%s), err(%d)",
+				VFIO_CONTAINER_PATH, vfio_container_fd);
+			ret = vfio_container_fd;
+			goto err_exit;
+		}
+
+		/* check VFIO API version */
+		ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION);
+		if (ret < 0) {
+			DPAA2_BUS_ERR("Could not get VFIO API version(%d)",
+				ret);
+		} else if (ret != VFIO_API_VERSION) {
+			DPAA2_BUS_ERR("Unsupported VFIO API version(%d)",
+				ret);
+			ret = -ENOTSUP;
+		}
+		if (ret < 0) {
+			close(vfio_container_fd);
+			goto err_exit;
+		}
+
+		ret = fslmc_vfio_check_extensions(vfio_container_fd);
+		if (ret) {
+			DPAA2_BUS_ERR("No supported IOMMU extensions found(%d)",
+				ret);
+			close(vfio_container_fd);
+			goto err_exit;
+		}
+
+		goto success_exit;
+	}
+	/*
+	 * if we're in a secondary process, request container fd from the
+	 * primary process via mp channel
+	 */
+	p->req = SOCKET_REQ_CONTAINER;
+	strcpy(mp_req.name, FSLMC_VFIO_MP);
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_container_fd = -1;
+	ret = rte_mp_request_sync(&mp_req, &mp_reply, &ts);
+	if (ret)
+		goto err_exit;
+
+	if (mp_reply.nb_received != 1) {
+		ret = -EIO;
+		goto err_exit;
+	}
+
+	mp_rep = &mp_reply.msgs[0];
+	p = (void *)mp_rep->param;
+	if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+		vfio_container_fd = mp_rep->fds[0];
+		free(mp_reply.msgs);
+	}
+
+success_exit:
+	s_vfio_container.fd = vfio_container_fd;
+
+	return vfio_container_fd;
+
+err_exit:
+	if (mp_reply.msgs)
+		free(mp_reply.msgs);
+	DPAA2_BUS_ERR("Cannot request container fd err(%d)", ret);
+	return ret;
+}
+
+int
+fslmc_get_container_group(const char *group_name,
+	int *groupid)
+{
+	int ret;
+
+	if (!group_name) {
+		DPAA2_BUS_ERR("No group name provided!");
+
+		return -EINVAL;
+	}
+	ret = fslmc_get_group_id(group_name, groupid);
+	if (ret)
+		return ret;
+
+	fslmc_vfio_set_group_name(group_name);
+
+	return 0;
+}
+
+static int
+fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
+	const void *peer)
+{
+	int fd = -1;
+	int ret;
+	struct rte_mp_msg reply;
+	struct vfio_mp_param *r = (void *)reply.param;
+	const struct vfio_mp_param *m = (const void *)msg->param;
+
+	if (msg->len_param != sizeof(*m)) {
+		DPAA2_BUS_ERR("fslmc vfio received invalid message!");
+		return -EINVAL;
+	}
+
+	memset(&reply, 0, sizeof(reply));
+
+	switch (m->req) {
+	case SOCKET_REQ_GROUP:
+		r->req = SOCKET_REQ_GROUP;
+		r->group_num = m->group_num;
+		fd = fslmc_vfio_group_fd_by_id(m->group_num);
+		if (fd < 0) {
+			r->result = SOCKET_ERR;
+		} else if (!fd) {
+			/* if group exists but isn't bound to VFIO driver */
+			r->result = SOCKET_NO_FD;
+		} else {
+			/* if group exists and is bound to VFIO driver */
+			r->result = SOCKET_OK;
+			reply.num_fds = 1;
+			reply.fds[0] = fd;
+		}
+		break;
+	case SOCKET_REQ_CONTAINER:
+		r->req = SOCKET_REQ_CONTAINER;
+		fd = fslmc_vfio_container_fd();
+		if (fd <= 0) {
+			r->result = SOCKET_ERR;
+		} else {
+			r->result = SOCKET_OK;
+			reply.num_fds = 1;
+			reply.fds[0] = fd;
+		}
+		break;
+	default:
+		DPAA2_BUS_ERR("fslmc vfio received invalid message(%08x)",
+			m->req);
+		return -ENOTSUP;
+	}
+
+	strcpy(reply.name, FSLMC_VFIO_MP);
+	reply.len_param = sizeof(*r);
+	ret = rte_mp_reply(&reply, peer);
+
+	return ret;
+}
+
+static int
+fslmc_vfio_mp_sync_setup(void)
+{
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		ret = rte_mp_action_register(FSLMC_VFIO_MP,
+			fslmc_vfio_mp_primary);
+		if (ret && rte_errno != ENOTSUP)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int
+vfio_connect_container(int vfio_container_fd,
+	int vfio_group_fd)
+{
+	int ret;
+	int iommu_type;
+
+	if (fslmc_vfio_container_connected(vfio_group_fd)) {
+		DPAA2_BUS_WARN("VFIO FD(%d) has connected to container",
+			vfio_group_fd);
 		return 0;
 	}
 
-	/* Opens main vfio file descriptor which represents the "container" */
-	fd = rte_vfio_get_container_fd();
-	if (fd < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return -errno;
+	iommu_type = fslmc_vfio_iommu_type(vfio_group_fd);
+	if (iommu_type < 0) {
+		DPAA2_BUS_ERR("Failed to get iommu type(%d)",
+			iommu_type);
+
+		return iommu_type;
 	}
 
 	/* Check whether support for SMMU type IOMMU present or not */
-	if (ioctl(fd, VFIO_CHECK_EXTENSION, fslmc_iommu_type)) {
+	if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type)) {
 		/* Connect group to container */
-		ret = ioctl(vfio_group.fd, VFIO_GROUP_SET_CONTAINER, &fd);
+		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
+			&vfio_container_fd);
 		if (ret) {
 			DPAA2_BUS_ERR("Failed to setup group container");
-			close(fd);
 			return -errno;
 		}
 
-		ret = ioctl(fd, VFIO_SET_IOMMU, fslmc_iommu_type);
+		ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
 		if (ret) {
 			DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-			close(fd);
 			return -errno;
 		}
 	} else {
 		DPAA2_BUS_ERR("No supported IOMMU available");
-		close(fd);
 		return -EINVAL;
 	}
 
-	vfio_container.used = 1;
-	vfio_container.fd = fd;
-	vfio_container.group = &vfio_group;
-	vfio_group.container = &vfio_container;
-
-	return 0;
+	return fslmc_vfio_connect_container(vfio_group_fd);
 }
 
-static int vfio_map_irq_region(struct fslmc_vfio_group *group)
+static int vfio_map_irq_region(void)
 {
-	int ret;
+	int ret, fd;
 	unsigned long *vaddr = NULL;
 	struct vfio_iommu_type1_dma_map map = {
 		.argsz = sizeof(map),
@@ -182,9 +623,23 @@ static int vfio_map_irq_region(struct fslmc_vfio_group *group)
 		.iova = 0x6030000,
 		.size = 0x1000,
 	};
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (!fslmc_vfio_container_connected(fd)) {
+		DPAA2_BUS_ERR("Container is not connected");
+		return -EIO;
+	}
 
 	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
-		PROT_READ, MAP_SHARED, container_device_fd, 0x6030000);
+		PROT_READ, MAP_SHARED, fd, 0x6030000);
 	if (vaddr == MAP_FAILED) {
 		DPAA2_BUS_INFO("Unable to map region (errno = %d)", errno);
 		return -errno;
@@ -192,8 +647,8 @@ static int vfio_map_irq_region(struct fslmc_vfio_group *group)
 
 	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
 	map.vaddr = (unsigned long)vaddr;
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA, &map);
-	if (ret == 0)
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA, &map);
+	if (!ret)
 		return 0;
 
 	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
@@ -204,8 +659,8 @@ static int fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
 static int fslmc_unmap_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
 
 static void
-fslmc_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
-		void *arg __rte_unused)
+fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
+	size_t len, void *arg __rte_unused)
 {
 	struct rte_memseg_list *msl;
 	struct rte_memseg *ms;
@@ -262,44 +717,54 @@ fslmc_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
 }
 
 static int
-fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr __rte_unused, size_t len)
+fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr,
+	size_t len)
 {
-	struct fslmc_vfio_group *group;
 	struct vfio_iommu_type1_dma_map dma_map = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
 		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
 	};
-	int ret;
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
+	int ret, fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
 		return 0;
 	}
 
 	dma_map.size = len;
 	dma_map.vaddr = vaddr;
-
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
 	dma_map.iova = iovaddr;
-#else
-	dma_map.iova = dma_map.vaddr;
+
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+	if (vaddr != iovaddr) {
+		DPAA2_BUS_WARN("vaddr(0x%lx) != iovaddr(0x%lx)",
+			vaddr, iovaddr);
+	}
 #endif
 
 	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-
-	if (!group->container) {
+	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected ");
-		return -1;
+		return -EIO;
 	}
 
 	DPAA2_BUS_DEBUG("--> Map address: 0x%"PRIx64", size: %"PRIu64"",
 			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA, &dma_map);
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA,
+		&dma_map);
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO_IOMMU_MAP_DMA API(errno = %d)",
 				errno);
-		return -1;
+		return ret;
 	}
 
 	return 0;
@@ -308,14 +773,22 @@ fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr __rte_unused, size_t len)
 static int
 fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 {
-	struct fslmc_vfio_group *group;
 	struct vfio_iommu_type1_dma_unmap dma_unmap = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_unmap),
 		.flags = 0,
 	};
-	int ret;
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
+	int ret, fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
 		return 0;
 	}
@@ -324,16 +797,15 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 	dma_unmap.iova = vaddr;
 
 	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-
-	if (!group->container) {
+	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected ");
-		return -1;
+		return -EIO;
 	}
 
 	DPAA2_BUS_DEBUG("--> Unmap address: 0x%"PRIx64", size: %"PRIu64"",
 			(uint64_t)dma_unmap.iova, (uint64_t)dma_unmap.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap);
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_UNMAP_DMA,
+		&dma_unmap);
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO_IOMMU_UNMAP_DMA API(errno = %d)",
 				errno);
@@ -367,41 +839,14 @@ fslmc_dmamap_seg(const struct rte_memseg_list *msl __rte_unused,
 int
 rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
 {
-	int ret;
-	struct fslmc_vfio_group *group;
-	struct vfio_iommu_type1_dma_map dma_map = {
-		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
-		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
-	};
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
-		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
-		return 0;
-	}
-
-	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-	if (!group->container) {
-		DPAA2_BUS_ERR("Container is not connected");
-		return -1;
-	}
-
-	dma_map.size = size;
-	dma_map.vaddr = vaddr;
-	dma_map.iova = iova;
-
-	DPAA2_BUS_DEBUG("VFIOdmamap 0x%"PRIx64":0x%"PRIx64",size 0x%"PRIx64"\n",
-			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.iova,
-			(uint64_t)dma_map.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA,
-		    &dma_map);
-	if (ret) {
-		DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)",
-			errno);
-		return ret;
-	}
+	return fslmc_map_dma(vaddr, iova, size);
+}
 
-	return 0;
+__rte_internal
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
+{
+	return fslmc_unmap_dma(iova, 0, size);
 }
 
 int rte_fslmc_vfio_dmamap(void)
@@ -431,7 +876,7 @@ int rte_fslmc_vfio_dmamap(void)
 	 * the interrupt region to SMMU. This should be removed once the
 	 * support is added in the Kernel.
 	 */
-	vfio_map_irq_region(&vfio_group);
+	vfio_map_irq_region();
 
 	/* Existing segments have been mapped and memory callback for hotplug
 	 * has been installed.
@@ -442,149 +887,19 @@ int rte_fslmc_vfio_dmamap(void)
 }
 
 static int
-fslmc_vfio_open_group_fd(int iommu_group_num)
-{
-	int vfio_group_fd;
-	char filename[PATH_MAX];
-	struct rte_mp_msg mp_req, *mp_rep;
-	struct rte_mp_reply mp_reply = {0};
-	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
-	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
-
-	/* if primary, try to open the group */
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		/* try regular group format */
-		snprintf(filename, sizeof(filename),
-			VFIO_GROUP_FMT, iommu_group_num);
-		vfio_group_fd = open(filename, O_RDWR);
-		if (vfio_group_fd <= 0) {
-			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
-				filename, vfio_group_fd);
-		}
-
-		return vfio_group_fd;
-	}
-	/* if we're in a secondary process, request group fd from the primary
-	 * process via mp channel.
-	 */
-	p->req = SOCKET_REQ_GROUP;
-	p->group_num = iommu_group_num;
-	strcpy(mp_req.name, EAL_VFIO_MP);
-	mp_req.len_param = sizeof(*p);
-	mp_req.num_fds = 0;
-
-	vfio_group_fd = -1;
-	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
-	    mp_reply.nb_received == 1) {
-		mp_rep = &mp_reply.msgs[0];
-		p = (struct vfio_mp_param *)mp_rep->param;
-		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
-			vfio_group_fd = mp_rep->fds[0];
-		} else if (p->result == SOCKET_NO_FD) {
-			DPAA2_BUS_ERR("Bad VFIO group fd");
-			vfio_group_fd = 0;
-		}
-	}
-
-	free(mp_reply.msgs);
-	if (vfio_group_fd < 0) {
-		DPAA2_BUS_ERR("Cannot request group fd(%d)",
-			vfio_group_fd);
-	}
-	return vfio_group_fd;
-}
-
-static int
-fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
-		int *vfio_dev_fd, struct vfio_device_info *device_info)
+fslmc_vfio_setup_device(const char *dev_addr,
+	int *vfio_dev_fd, struct vfio_device_info *device_info)
 {
 	struct vfio_group_status group_status = {
 			.argsz = sizeof(group_status)
 	};
-	int vfio_group_fd, vfio_container_fd, iommu_group_no, ret;
+	int vfio_group_fd, ret;
+	const char *group_name = fslmc_vfio_get_group_name();
 
-	/* get group number */
-	ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_no);
-	if (ret < 0)
-		return -1;
-
-	/* get the actual group fd */
-	vfio_group_fd = vfio_group.fd;
-	if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT)
-		return -1;
-
-	/*
-	 * if vfio_group_fd == -ENOENT, that means the device
-	 * isn't managed by VFIO
-	 */
-	if (vfio_group_fd == -ENOENT) {
-		DPAA2_BUS_WARN(" %s not managed by VFIO driver, skipping",
-				dev_addr);
-		return 1;
-	}
-
-	/* Opens main vfio file descriptor which represents the "container" */
-	vfio_container_fd = rte_vfio_get_container_fd();
-	if (vfio_container_fd < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return -errno;
-	}
-
-	/* check if the group is viable */
-	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status);
-	if (ret) {
-		DPAA2_BUS_ERR("  %s cannot get group status, "
-				"error %i (%s)\n", dev_addr,
-				errno, strerror(errno));
-		close(vfio_group_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
-	} else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
-		DPAA2_BUS_ERR("  %s VFIO group is not viable!\n", dev_addr);
-		close(vfio_group_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
-	}
-	/* At this point, we know that this group is viable (meaning,
-	 * all devices are either bound to VFIO or not bound to anything)
-	 */
-
-	/* check if group does not have a container yet */
-	if (!(group_status.flags & VFIO_GROUP_FLAGS_CONTAINER_SET)) {
-
-		/* add group to a container */
-		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
-				&vfio_container_fd);
-		if (ret) {
-			DPAA2_BUS_ERR("  %s cannot add VFIO group to container, "
-					"error %i (%s)\n", dev_addr,
-					errno, strerror(errno));
-			close(vfio_group_fd);
-			close(vfio_container_fd);
-			rte_vfio_clear_group(vfio_group_fd);
-			return -1;
-		}
-
-		/*
-		 * set an IOMMU type for container
-		 *
-		 */
-		if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION,
-			  fslmc_iommu_type)) {
-			ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU,
-				    fslmc_iommu_type);
-			if (ret) {
-				DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-				close(vfio_group_fd);
-				close(vfio_container_fd);
-				return -errno;
-			}
-		} else {
-			DPAA2_BUS_ERR("No supported IOMMU available");
-			close(vfio_group_fd);
-			close(vfio_container_fd);
-			return -EINVAL;
-		}
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (!fslmc_vfio_container_connected(vfio_group_fd)) {
+		DPAA2_BUS_ERR("Container is not connected");
+		return -EIO;
 	}
 
 	/* get a file descriptor for the device */
@@ -594,26 +909,21 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		 * the VFIO group or the container not having IOMMU configured.
 		 */
 
-		DPAA2_BUS_WARN("Getting a vfio_dev_fd for %s failed", dev_addr);
-		close(vfio_group_fd);
-		close(vfio_container_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
+		DPAA2_BUS_ERR("Getting a vfio_dev_fd for %s from %s failed",
+			dev_addr, group_name);
+		return -EIO;
 	}
 
 	/* test and setup the device */
 	ret = ioctl(*vfio_dev_fd, VFIO_DEVICE_GET_INFO, device_info);
 	if (ret) {
-		DPAA2_BUS_ERR("  %s cannot get device info, error %i (%s)",
-				dev_addr, errno, strerror(errno));
-		close(*vfio_dev_fd);
-		close(vfio_group_fd);
-		close(vfio_container_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
+		DPAA2_BUS_ERR("%s cannot get device info err(%d)(%s)",
+			dev_addr, errno, strerror(errno));
+		return ret;
 	}
 
-	return 0;
+	return fslmc_vfio_group_add_dev(vfio_group_fd, *vfio_dev_fd,
+			dev_addr);
 }
 
 static intptr_t vfio_map_mcp_obj(const char *mcp_obj)
@@ -625,8 +935,7 @@ static intptr_t vfio_map_mcp_obj(const char *mcp_obj)
 	struct vfio_device_info d_info = { .argsz = sizeof(d_info) };
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info) };
 
-	fslmc_vfio_setup_device(SYSFS_FSL_MC_DEVICES, mcp_obj,
-			&mc_fd, &d_info);
+	fslmc_vfio_setup_device(mcp_obj, &mc_fd, &d_info);
 
 	/* getting device region info*/
 	ret = ioctl(mc_fd, VFIO_DEVICE_GET_REGION_INFO, &reg_info);
@@ -757,7 +1066,8 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 }
 
 static void
-fslmc_close_iodevices(struct rte_dpaa2_device *dev)
+fslmc_close_iodevices(struct rte_dpaa2_device *dev,
+	int vfio_fd)
 {
 	struct rte_dpaa2_object *object = NULL;
 	struct rte_dpaa2_driver *drv;
@@ -800,6 +1110,11 @@ fslmc_close_iodevices(struct rte_dpaa2_device *dev)
 		break;
 	}
 
+	ret = fslmc_vfio_group_remove_dev(vfio_fd, dev->device.name);
+	if (ret) {
+		DPAA2_BUS_ERR("Failed to remove %s from vfio",
+			dev->device.name);
+	}
 	DPAA2_BUS_LOG(DEBUG, "Device (%s) Closed",
 		      dev->device.name);
 }
@@ -811,17 +1126,21 @@ fslmc_close_iodevices(struct rte_dpaa2_device *dev)
 static int
 fslmc_process_iodevices(struct rte_dpaa2_device *dev)
 {
-	int dev_fd;
+	int dev_fd, ret;
 	struct vfio_device_info device_info = { .argsz = sizeof(device_info) };
 	struct rte_dpaa2_object *object = NULL;
 
-	fslmc_vfio_setup_device(SYSFS_FSL_MC_DEVICES, dev->device.name,
-			&dev_fd, &device_info);
+	ret = fslmc_vfio_setup_device(dev->device.name, &dev_fd,
+			&device_info);
+	if (ret)
+		return ret;
 
 	switch (dev->dev_type) {
 	case DPAA2_ETH:
-		rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
-					  device_info.num_irqs);
+		ret = rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
+				device_info.num_irqs);
+		if (ret)
+			return ret;
 		break;
 	case DPAA2_CON:
 	case DPAA2_IO:
@@ -913,6 +1232,10 @@ int
 fslmc_vfio_close_group(void)
 {
 	struct rte_dpaa2_device *dev, *dev_temp;
+	int vfio_group_fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
 		if (dev->device.devargs &&
@@ -927,7 +1250,7 @@ fslmc_vfio_close_group(void)
 		case DPAA2_CRYPTO:
 		case DPAA2_QDMA:
 		case DPAA2_IO:
-			fslmc_close_iodevices(dev);
+			fslmc_close_iodevices(dev, vfio_group_fd);
 			break;
 		case DPAA2_CON:
 		case DPAA2_CI:
@@ -936,7 +1259,7 @@ fslmc_vfio_close_group(void)
 			if (rte_eal_process_type() == RTE_PROC_SECONDARY)
 				continue;
 
-			fslmc_close_iodevices(dev);
+			fslmc_close_iodevices(dev, vfio_group_fd);
 			break;
 		case DPAA2_DPRTC:
 		default:
@@ -945,10 +1268,7 @@ fslmc_vfio_close_group(void)
 		}
 	}
 
-	if (vfio_group.fd > 0) {
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
-	}
+	fslmc_vfio_clear_group(vfio_group_fd);
 
 	return 0;
 }
@@ -1138,75 +1458,84 @@ fslmc_vfio_process_group(void)
 int
 fslmc_vfio_setup_group(void)
 {
-	int groupid;
-	int ret;
+	int vfio_group_fd, vfio_container_fd, ret;
 	struct vfio_group_status status = { .argsz = sizeof(status) };
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	/* MC VFIO setup entry */
+	vfio_container_fd = fslmc_vfio_container_fd();
+	if (vfio_container_fd <= 0) {
+		vfio_container_fd = fslmc_vfio_open_container_fd();
+		if (vfio_container_fd <= 0) {
+			DPAA2_BUS_ERR("Failed to create MC VFIO container");
+			return -rte_errno;
+		}
+	}
 
-	/* if already done once */
-	if (container_device_fd)
-		return 0;
-
-	ret = fslmc_get_container_group(&groupid);
-	if (ret)
-		return ret;
-
-	/* In case this group was already opened, continue without any
-	 * processing.
-	 */
-	if (vfio_group.groupid == groupid) {
-		DPAA2_BUS_ERR("groupid already exists %d", groupid);
-		return 0;
+	if (!group_name) {
+		DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
+		return -EINVAL;
 	}
 
-	/* Get the actual group fd */
-	ret = fslmc_vfio_open_group_fd(groupid);
-	if (ret <= 0)
-		return ret;
-	vfio_group.fd = ret;
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd <= 0) {
+		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
+		if (vfio_group_fd <= 0) {
+			DPAA2_BUS_ERR("Failed to create MC VFIO group");
+			return -rte_errno;
+		}
+	}
 
 	/* Check group viability */
-	ret = ioctl(vfio_group.fd, VFIO_GROUP_GET_STATUS, &status);
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &status);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO error getting group status");
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		DPAA2_BUS_ERR("VFIO(%s:fd=%d) error getting group status(%d)",
+			group_name, vfio_group_fd, ret);
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return ret;
 	}
 
 	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
 		DPAA2_BUS_ERR("VFIO group not viable");
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return -EPERM;
 	}
-	/* Since Group is VIABLE, Store the groupid */
-	vfio_group.groupid = groupid;
 
 	/* check if group does not have a container yet */
 	if (!(status.flags & VFIO_GROUP_FLAGS_CONTAINER_SET)) {
 		/* Now connect this IOMMU group to given container */
-		ret = vfio_connect_container();
-		if (ret) {
-			DPAA2_BUS_ERR("vfio group(%d) connect failed(%d)",
-				groupid, ret);
-			close(vfio_group.fd);
-			vfio_group.fd = 0;
-			return ret;
-		}
+		ret = vfio_connect_container(vfio_container_fd,
+			vfio_group_fd);
+	} else {
+		/* Here is supposed in secondary process,
+		 * group has been set to container in primary process.
+		 */
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+			DPAA2_BUS_WARN("This group has been set container?");
+		ret = fslmc_vfio_connect_container(vfio_group_fd);
+	}
+	if (ret) {
+		DPAA2_BUS_ERR("vfio group connect failed(%d)", ret);
+		fslmc_vfio_clear_group(vfio_group_fd);
+		return ret;
 	}
 
 	/* Get Device information */
-	ret = ioctl(vfio_group.fd, VFIO_GROUP_GET_DEVICE_FD, fslmc_container);
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_DEVICE_FD, group_name);
 	if (ret < 0) {
-		DPAA2_BUS_ERR("Error getting device %s fd from group %d",
-			      fslmc_container, vfio_group.groupid);
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		DPAA2_BUS_ERR("Error getting device %s fd", group_name);
+		fslmc_vfio_clear_group(vfio_group_fd);
+		return ret;
+	}
+
+	ret = fslmc_vfio_mp_sync_setup();
+	if (ret) {
+		DPAA2_BUS_ERR("VFIO MP sync setup failed!");
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return ret;
 	}
-	container_device_fd = ret;
-	DPAA2_BUS_DEBUG("VFIO Container FD is [0x%X]",
-			container_device_fd);
+
+	DPAA2_BUS_DEBUG("VFIO GROUP FD is %d", vfio_group_fd);
 
 	return 0;
 }
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index b6677bdd18..1695b6c078 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2019-2020 NXP
+ *   Copyright 2016,2019-2023 NXP
  *
  */
 
@@ -20,26 +20,28 @@
 #define DPAA2_MC_DPBP_DEVID	10
 #define DPAA2_MC_DPCI_DEVID	11
 
-typedef struct fslmc_vfio_device {
+struct fslmc_vfio_device {
+	LIST_ENTRY(fslmc_vfio_device) next;
 	int fd; /* fslmc root container device ?? */
 	int index; /*index of child object */
+	char dev_name[64];
 	struct fslmc_vfio_device *child; /* Child object */
-} fslmc_vfio_device;
+};
 
-typedef struct fslmc_vfio_group {
+struct fslmc_vfio_group {
+	LIST_ENTRY(fslmc_vfio_group) next;
 	int fd; /* /dev/vfio/"groupid" */
 	int groupid;
-	struct fslmc_vfio_container *container;
-	int object_index;
-	struct fslmc_vfio_device *vfio_device;
-} fslmc_vfio_group;
+	int connected;
+	char group_name[64]; /* dprc.x*/
+	int iommu_type;
+	LIST_HEAD(, fslmc_vfio_device) vfio_devices;
+};
 
-typedef struct fslmc_vfio_container {
+struct fslmc_vfio_container {
 	int fd; /* /dev/vfio/vfio */
-	int used;
-	int index; /* index in group list */
-	struct fslmc_vfio_group *group;
-} fslmc_vfio_container;
+	LIST_HEAD(, fslmc_vfio_group) groups;
+};
 
 extern char *fslmc_container;
 
@@ -57,8 +59,11 @@ int fslmc_vfio_setup_group(void);
 int fslmc_vfio_process_group(void);
 int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
-int fslmc_get_container_group(int *gropuid);
+int fslmc_get_container_group(const char *group_name, int *groupid);
 int rte_fslmc_vfio_dmamap(void);
-int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova,
+		uint64_t size);
+int rte_fslmc_vfio_mem_dmaunmap(uint64_t iova,
+		uint64_t size);
 
 #endif /* _FSLMC_VFIO_H_ */
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index df1143733d..b49bc0a62c 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -118,6 +118,7 @@ INTERNAL {
 	rte_fslmc_get_device_count;
 	rte_fslmc_object_register;
 	rte_global_active_dqs_list;
+	rte_fslmc_vfio_mem_dmaunmap;
 
 	local: *;
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 15/43] bus/fslmc: free VFIO group FD in case of add group failure
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (13 preceding siblings ...)
  2024-09-13  5:59 ` [v1 14/43] bus/fslmc: enhance MC VFIO multiprocess support vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 16/43] bus/fslmc: dynamic IOVA mode configuration vanshika.shukla
                   ` (28 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Free vfio_group_fd if adding the group fails, to avoid a resource leak.
NXP coverity-id: 26661846
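
For reference, the change follows the usual acquire/release error path:
once open() has returned a descriptor, any later failure in the same
function must close it before returning. A minimal sketch of the pattern
(register_fd() is a hypothetical stand-in for fslmc_vfio_add_group(),
not the driver's API):

#include <fcntl.h>
#include <unistd.h>

extern int register_fd(int fd);	/* hypothetical bookkeeping step */

static int
open_and_register(const char *path)
{
	int fd = open(path, O_RDWR);

	if (fd < 0)
		return fd;	/* nothing acquired yet, nothing to free */
	if (register_fd(fd)) {
		close(fd);	/* release the fd on failure: no leak */
		return -1;
	}

	return fd;
}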

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index c6a010922e..b550066183 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -347,8 +347,10 @@ fslmc_vfio_open_group_fd(const char *group_name)
 	} else {
 		ret = fslmc_vfio_add_group(vfio_group_fd, iommu_group_num,
 			group_name);
-		if (ret)
+		if (ret) {
+			close(vfio_group_fd);
 			return ret;
+		}
 	}
 
 	return vfio_group_fd;
@@ -1481,6 +1483,8 @@ fslmc_vfio_setup_group(void)
 	if (vfio_group_fd <= 0) {
 		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
 		if (vfio_group_fd <= 0) {
+			if (!vfio_group_fd)
+				close(vfio_group_fd);
 			DPAA2_BUS_ERR("Failed to create MC VFIO group");
 			return -rte_errno;
 		}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 16/43] bus/fslmc: dynamic IOVA mode configuration
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (14 preceding siblings ...)
  2024-09-13  5:59 ` [v1 15/43] bus/fslmc: free VFIO group FD in case of add group failure vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 17/43] bus/fslmc: remove VFIO IRQ mapping vanshika.shukla
                   ` (27 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh
  Cc: Jun Yang, Vanshika Shukla

From: Jun Yang <jun.yang@nxp.com>

IOVA mode should not be fixed at build time with CFLAGS because
1) The user can select it at runtime with the "--iova-mode" EAL option.
2) IOVA mode is negotiated across all devices; EAL runs in VA mode
   only when every device supports VA mode.

Hence:
1) Remove the RTE_LIBRTE_DPAA2_USE_PHYS_IOVA cflag.
   Instead, use the rte_eal_iova_mode API to identify VA or PA mode.
2) Support memory IOMMU mapping and I/O IOMMU mapping (PCI space).
3) For memory IOMMU, IOVA:VA is 1:1 in VA mode and PA:VA in PA mode.
   The mapping policy is determined by the EAL memory driver.
4) For I/O IOMMU, IOVA:VA is up to the I/O driver configuration;
   in general it is aligned with the memory IOMMU mapping.
5) Memory and I/O IOVA tables are created and updated when DMA
   mappings are set up, replacing the dpaax IOVA table.
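
As a sketch of the resulting conversion policy (it mirrors the
dpaa2_mem_va_to_iova helper added below; the wrapper name va_to_iova is
illustrative only): in VA mode IOVA equals VA, so no lookup is needed,
while in PA mode the per-bus table built at DMA-map time is consulted.

#include <rte_eal.h>
#include <bus_fslmc_driver.h>

static inline uint64_t
va_to_iova(void *va)
{
	/* VA mode: IOVA:VA = 1:1, no table lookup needed. */
	if (rte_eal_iova_mode() == RTE_IOVA_VA)
		return (uint64_t)va;

	/* PA mode: consult the memory IOVA table built when the
	 * DMA mapping was set up.
	 */
	return rte_fslmc_mem_vaddr_to_iova(va);
}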

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     |  29 +-
 drivers/bus/fslmc/fslmc_bus.c            |  33 +-
 drivers/bus/fslmc/fslmc_logs.h           |   5 +-
 drivers/bus/fslmc/fslmc_vfio.c           | 666 ++++++++++++++++++-----
 drivers/bus/fslmc/fslmc_vfio.h           |   4 +
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c |   3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c |  10 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h |   3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h  | 111 ++--
 drivers/bus/fslmc/version.map            |   7 +-
 drivers/dma/dpaa2/dpaa2_qdma.c           |   1 +
 11 files changed, 617 insertions(+), 255 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index dc2f395f60..11eebd560c 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -37,9 +37,6 @@ extern "C" {
 
 #include <fslmc_vfio.h>
 
-#include "portal/dpaa2_hw_pvt.h"
-#include "portal/dpaa2_hw_dpio.h"
-
 #define FSLMC_OBJECT_MAX_LEN 32   /**< Length of each device on bus */
 
 #define DPAA2_INVALID_MBUF_SEQN        0
@@ -149,6 +146,32 @@ struct rte_dpaa2_driver {
 	rte_dpaa2_remove_t remove;
 };
 
+int
+rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+__rte_internal
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size);
+__rte_internal
+uint64_t
+rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
+	uint64_t size);
+__rte_internal
+void *
+rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
+	uint64_t size);
+__rte_internal
+__hot uint64_t
+rte_fslmc_mem_vaddr_to_iova(void *vaddr);
+__rte_internal
+__hot void *
+rte_fslmc_mem_iova_to_vaddr(uint64_t iova);
+__rte_internal
+uint64_t
+rte_fslmc_io_vaddr_to_iova(void *vaddr);
+__rte_internal
+void *
+rte_fslmc_io_iova_to_vaddr(uint64_t iova);
+
 /**
  * Register a DPAA2 driver.
  *
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 654726dbe6..ce87b4ddbd 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -27,7 +27,6 @@
 #define FSLMC_BUS_NAME	fslmc
 
 struct rte_fslmc_bus rte_fslmc_bus;
-uint8_t dpaa2_virt_mode;
 
 #define DPAA2_SEQN_DYNFIELD_NAME "dpaa2_seqn_dynfield"
 int dpaa2_seqn_dynfield_offset = -1;
@@ -457,22 +456,6 @@ rte_fslmc_probe(void)
 
 	probe_all = rte_fslmc_bus.bus.conf.scan_mode != RTE_BUS_SCAN_ALLOWLIST;
 
-	/* In case of PA, the FD addresses returned by qbman APIs are physical
-	 * addresses, which need conversion into equivalent VA address for
-	 * rte_mbuf. For that, a table (a serial array, in memory) is used to
-	 * increase translation efficiency.
-	 * This has to be done before probe as some device initialization
-	 * (during) probe allocate memory (dpaa2_sec) which needs to be pinned
-	 * to this table.
-	 *
-	 * Error is ignored as relevant logs are handled within dpaax and
-	 * handling for unavailable dpaax table too is transparent to caller.
-	 *
-	 * And, the IOVA table is only applicable in case of PA mode.
-	 */
-	if (rte_eal_iova_mode() == RTE_IOVA_PA)
-		dpaax_iova_table_populate();
-
 	TAILQ_FOREACH(dev, &rte_fslmc_bus.device_list, next) {
 		TAILQ_FOREACH(drv, &rte_fslmc_bus.driver_list, next) {
 			ret = rte_fslmc_match(drv, dev);
@@ -507,9 +490,6 @@ rte_fslmc_probe(void)
 		}
 	}
 
-	if (rte_eal_iova_mode() == RTE_IOVA_VA)
-		dpaa2_virt_mode = 1;
-
 	return 0;
 }
 
@@ -558,12 +538,6 @@ rte_fslmc_driver_register(struct rte_dpaa2_driver *driver)
 void
 rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver)
 {
-	/* Cleanup the PA->VA Translation table; From wherever this function
-	 * is called from.
-	 */
-	if (rte_eal_iova_mode() == RTE_IOVA_PA)
-		dpaax_iova_table_depopulate();
-
 	TAILQ_REMOVE(&rte_fslmc_bus.driver_list, driver, next);
 }
 
@@ -599,13 +573,12 @@ rte_dpaa2_get_iommu_class(void)
 	bool is_vfio_noiommu_enabled = 1;
 	bool has_iova_va;
 
+	if (rte_eal_iova_mode() == RTE_IOVA_PA)
+		return RTE_IOVA_PA;
+
 	if (TAILQ_EMPTY(&rte_fslmc_bus.device_list))
 		return RTE_IOVA_DC;
 
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-	return RTE_IOVA_PA;
-#endif
-
 	/* check if all devices on the bus support Virtual addressing or not */
 	has_iova_va = fslmc_all_device_support_iova();
 
diff --git a/drivers/bus/fslmc/fslmc_logs.h b/drivers/bus/fslmc/fslmc_logs.h
index e15c603426..d6abffc566 100644
--- a/drivers/bus/fslmc/fslmc_logs.h
+++ b/drivers/bus/fslmc/fslmc_logs.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2016 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -10,7 +10,8 @@
 extern int dpaa2_logtype_bus;
 
 #define DPAA2_BUS_LOG(level, fmt, args...) \
-	rte_log(RTE_LOG_ ## level, dpaa2_logtype_bus, "fslmc: " fmt "\n", \
+	rte_log(RTE_LOG_ ## level, dpaa2_logtype_bus, \
+		"fslmc " # level ": " fmt "\n", \
 		##args)
 
 /* Debug logs are with Function names */
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index b550066183..31011b8532 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -19,6 +19,7 @@
 #include <libgen.h>
 #include <dirent.h>
 #include <sys/eventfd.h>
+#include <ctype.h>
 
 #include <eal_filesystem.h>
 #include <rte_mbuf.h>
@@ -49,9 +50,41 @@
  */
 static struct fslmc_vfio_container s_vfio_container;
 /* Currently we only support single group/process. */
-const char *fslmc_group; /* dprc.x*/
+static const char *fslmc_group; /* dprc.x*/
 static uint32_t *msi_intr_vaddr;
-void *(*rte_mcp_ptr_list);
+static void *(*rte_mcp_ptr_list);
+
+struct fslmc_dmaseg {
+	uint64_t vaddr;
+	uint64_t iova;
+	uint64_t size;
+
+	TAILQ_ENTRY(fslmc_dmaseg) next;
+};
+
+TAILQ_HEAD(fslmc_dmaseg_list, fslmc_dmaseg);
+
+struct fslmc_dmaseg_list fslmc_memsegs =
+		TAILQ_HEAD_INITIALIZER(fslmc_memsegs);
+struct fslmc_dmaseg_list fslmc_iosegs =
+		TAILQ_HEAD_INITIALIZER(fslmc_iosegs);
+
+static uint64_t fslmc_mem_va2iova = RTE_BAD_IOVA;
+static int fslmc_mem_map_num;
+
+struct fslmc_mem_param {
+	struct vfio_mp_param mp_param;
+	struct fslmc_dmaseg_list memsegs;
+	struct fslmc_dmaseg_list iosegs;
+	uint64_t mem_va2iova;
+	int mem_map_num;
+};
+
+enum {
+	FSLMC_VFIO_SOCKET_REQ_CONTAINER = 0x100,
+	FSLMC_VFIO_SOCKET_REQ_GROUP,
+	FSLMC_VFIO_SOCKET_REQ_MEM
+};
 
 void *
 dpaa2_get_mcp_ptr(int portal_idx)
@@ -65,6 +98,64 @@ dpaa2_get_mcp_ptr(int portal_idx)
 static struct rte_dpaa2_object_list dpaa2_obj_list =
 	TAILQ_HEAD_INITIALIZER(dpaa2_obj_list);
 
+static uint64_t
+fslmc_io_virt2phy(const void *virtaddr)
+{
+	FILE *fp = fopen("/proc/self/maps", "r");
+	char *line = NULL;
+	size_t linesz;
+	uint64_t start, end, phy;
+	const uint64_t va = (const uint64_t)virtaddr;
+	char tmp[1024];
+	int ret;
+
+	if (!fp)
+		return RTE_BAD_IOVA;
+	while (getdelim(&line, &linesz, '\n', fp) > 0) {
+		char *ptr = line;
+		int n;
+
+		/** Parse virtual address range.*/
+		n = 0;
+		while (*ptr && !isspace(*ptr)) {
+			tmp[n] = *ptr;
+			ptr++;
+			n++;
+		}
+		tmp[n] = 0;
+		ret = sscanf(tmp, "%" SCNx64 "-%" SCNx64, &start, &end);
+		if (ret != 2)
+			continue;
+		if (va < start || va >= end)
+			continue;
+
+		/** This virtual address is in this segment.*/
+		while (*ptr == ' ' || *ptr == 'r' ||
+			*ptr == 'w' || *ptr == 's' ||
+			*ptr == 'p' || *ptr == 'x' ||
+			*ptr == '-')
+			ptr++;
+
+		/** Extract phy address*/
+		n = 0;
+		while (*ptr && !isspace(*ptr)) {
+			tmp[n] = *ptr;
+			ptr++;
+			n++;
+		}
+		tmp[n] = 0;
+		phy = strtoul(tmp, 0, 16);
+		if (!phy)
+			continue;
+
+		fclose(fp);
+		return phy + va - start;
+	}
+
+	fclose(fp);
+	return RTE_BAD_IOVA;
+}
+
 /*register a fslmc bus based dpaa2 driver */
 void
 rte_fslmc_object_register(struct rte_dpaa2_object *object)
@@ -271,7 +362,7 @@ fslmc_get_group_id(const char *group_name,
 	ret = rte_vfio_get_group_num(SYSFS_FSL_MC_DEVICES,
 			group_name, groupid);
 	if (ret <= 0) {
-		DPAA2_BUS_ERR("Unable to find %s IOMMU group", group_name);
+		DPAA2_BUS_ERR("Find %s IOMMU group", group_name);
 		if (ret < 0)
 			return ret;
 
@@ -314,7 +405,7 @@ fslmc_vfio_open_group_fd(const char *group_name)
 	/* if we're in a secondary process, request group fd from the primary
 	 * process via mp channel.
 	 */
-	p->req = SOCKET_REQ_GROUP;
+	p->req = FSLMC_VFIO_SOCKET_REQ_GROUP;
 	p->group_num = iommu_group_num;
 	strcpy(mp_req.name, FSLMC_VFIO_MP);
 	mp_req.len_param = sizeof(*p);
@@ -408,7 +499,7 @@ fslmc_vfio_open_container_fd(void)
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR);
 		if (vfio_container_fd < 0) {
-			DPAA2_BUS_ERR("Cannot open VFIO container(%s), err(%d)",
+			DPAA2_BUS_ERR("Open VFIO container(%s), err(%d)",
 				VFIO_CONTAINER_PATH, vfio_container_fd);
 			ret = vfio_container_fd;
 			goto err_exit;
@@ -417,7 +508,7 @@ fslmc_vfio_open_container_fd(void)
 		/* check VFIO API version */
 		ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION);
 		if (ret < 0) {
-			DPAA2_BUS_ERR("Could not get VFIO API version(%d)",
+			DPAA2_BUS_ERR("Get VFIO API version(%d)",
 				ret);
 		} else if (ret != VFIO_API_VERSION) {
 			DPAA2_BUS_ERR("Unsupported VFIO API version(%d)",
@@ -431,7 +522,7 @@ fslmc_vfio_open_container_fd(void)
 
 		ret = fslmc_vfio_check_extensions(vfio_container_fd);
 		if (ret) {
-			DPAA2_BUS_ERR("No supported IOMMU extensions found(%d)",
+			DPAA2_BUS_ERR("Unsupported IOMMU extensions found(%d)",
 				ret);
 			close(vfio_container_fd);
 			goto err_exit;
@@ -443,7 +534,7 @@ fslmc_vfio_open_container_fd(void)
 	 * if we're in a secondary process, request container fd from the
 	 * primary process via mp channel
 	 */
-	p->req = SOCKET_REQ_CONTAINER;
+	p->req = FSLMC_VFIO_SOCKET_REQ_CONTAINER;
 	strcpy(mp_req.name, FSLMC_VFIO_MP);
 	mp_req.len_param = sizeof(*p);
 	mp_req.num_fds = 0;
@@ -473,7 +564,7 @@ fslmc_vfio_open_container_fd(void)
 err_exit:
 	if (mp_reply.msgs)
 		free(mp_reply.msgs);
-	DPAA2_BUS_ERR("Cannot request container fd err(%d)", ret);
+	DPAA2_BUS_ERR("Open container fd err(%d)", ret);
 	return ret;
 }
 
@@ -506,17 +597,19 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 	struct rte_mp_msg reply;
 	struct vfio_mp_param *r = (void *)reply.param;
 	const struct vfio_mp_param *m = (const void *)msg->param;
+	struct fslmc_mem_param *map;
 
 	if (msg->len_param != sizeof(*m)) {
-		DPAA2_BUS_ERR("fslmc vfio received invalid message!");
+		DPAA2_BUS_ERR("Invalid msg size(%d) for req(%d)",
+			msg->len_param, m->req);
 		return -EINVAL;
 	}
 
 	memset(&reply, 0, sizeof(reply));
 
 	switch (m->req) {
-	case SOCKET_REQ_GROUP:
-		r->req = SOCKET_REQ_GROUP;
+	case FSLMC_VFIO_SOCKET_REQ_GROUP:
+		r->req = FSLMC_VFIO_SOCKET_REQ_GROUP;
 		r->group_num = m->group_num;
 		fd = fslmc_vfio_group_fd_by_id(m->group_num);
 		if (fd < 0) {
@@ -530,9 +623,10 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 			reply.num_fds = 1;
 			reply.fds[0] = fd;
 		}
+		reply.len_param = sizeof(*r);
 		break;
-	case SOCKET_REQ_CONTAINER:
-		r->req = SOCKET_REQ_CONTAINER;
+	case FSLMC_VFIO_SOCKET_REQ_CONTAINER:
+		r->req = FSLMC_VFIO_SOCKET_REQ_CONTAINER;
 		fd = fslmc_vfio_container_fd();
 		if (fd <= 0) {
 			r->result = SOCKET_ERR;
@@ -541,20 +635,73 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 			reply.num_fds = 1;
 			reply.fds[0] = fd;
 		}
+		reply.len_param = sizeof(*r);
+		break;
+	case FSLMC_VFIO_SOCKET_REQ_MEM:
+		map = (void *)reply.param;
+		r = &map->mp_param;
+		r->req = FSLMC_VFIO_SOCKET_REQ_MEM;
+		r->result = SOCKET_OK;
+		rte_memcpy(&map->memsegs, &fslmc_memsegs,
+			sizeof(struct fslmc_dmaseg_list));
+		rte_memcpy(&map->iosegs, &fslmc_iosegs,
+			sizeof(struct fslmc_dmaseg_list));
+		map->mem_va2iova = fslmc_mem_va2iova;
+		map->mem_map_num = fslmc_mem_map_num;
+		reply.len_param = sizeof(struct fslmc_mem_param);
 		break;
 	default:
-		DPAA2_BUS_ERR("fslmc vfio received invalid message(%08x)",
+		DPAA2_BUS_ERR("VFIO received invalid message(%08x)",
 			m->req);
 		return -ENOTSUP;
 	}
 
 	strcpy(reply.name, FSLMC_VFIO_MP);
-	reply.len_param = sizeof(*r);
 	ret = rte_mp_reply(&reply, peer);
 
 	return ret;
 }
 
+static int
+fslmc_vfio_mp_sync_mem_req(void)
+{
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	int ret = 0;
+	struct vfio_mp_param *mp_param;
+	struct fslmc_mem_param *mem_rsp;
+
+	mp_param = (void *)mp_req.param;
+	memset(&mp_req, 0, sizeof(struct rte_mp_msg));
+	mp_param->req = FSLMC_VFIO_SOCKET_REQ_MEM;
+	strcpy(mp_req.name, FSLMC_VFIO_MP);
+	mp_req.len_param = sizeof(struct vfio_mp_param);
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+		mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		mem_rsp = (struct fslmc_mem_param *)mp_rep->param;
+		if (mem_rsp->mp_param.result == SOCKET_OK) {
+			rte_memcpy(&fslmc_memsegs,
+				&mem_rsp->memsegs,
+				sizeof(struct fslmc_dmaseg_list));
+			rte_memcpy(&fslmc_iosegs,
+				&mem_rsp->iosegs,
+				sizeof(struct fslmc_dmaseg_list));
+			fslmc_mem_va2iova = mem_rsp->mem_va2iova;
+			fslmc_mem_map_num = mem_rsp->mem_map_num;
+		} else {
+			DPAA2_BUS_ERR("Bad MEM SEG");
+			ret = -EINVAL;
+		}
+	} else {
+		ret = -EINVAL;
+	}
+	free(mp_reply.msgs);
+
+	return ret;
+}
+
 static int
 fslmc_vfio_mp_sync_setup(void)
 {
@@ -565,6 +712,10 @@ fslmc_vfio_mp_sync_setup(void)
 			fslmc_vfio_mp_primary);
 		if (ret && rte_errno != ENOTSUP)
 			return ret;
+	} else {
+		ret = fslmc_vfio_mp_sync_mem_req();
+		if (ret)
+			return ret;
 	}
 
 	return 0;
@@ -585,30 +736,34 @@ vfio_connect_container(int vfio_container_fd,
 
 	iommu_type = fslmc_vfio_iommu_type(vfio_group_fd);
 	if (iommu_type < 0) {
-		DPAA2_BUS_ERR("Failed to get iommu type(%d)",
-			iommu_type);
+		DPAA2_BUS_ERR("Get iommu type(%d)", iommu_type);
 
 		return iommu_type;
 	}
 
 	/* Check whether support for SMMU type IOMMU present or not */
-	if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type)) {
-		/* Connect group to container */
-		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
+	ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type);
+	if (ret <= 0) {
+		DPAA2_BUS_ERR("Unsupport IOMMU type(%d) ret(%d), err(%d)",
+			iommu_type, ret, -errno);
+		return -EINVAL;
+	}
+
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
 			&vfio_container_fd);
-		if (ret) {
-			DPAA2_BUS_ERR("Failed to setup group container");
-			return -errno;
-		}
+	if (ret) {
+		DPAA2_BUS_ERR("Set group container ret(%d), err(%d)",
+			ret, -errno);
 
-		ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
-		if (ret) {
-			DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-			return -errno;
-		}
-	} else {
-		DPAA2_BUS_ERR("No supported IOMMU available");
-		return -EINVAL;
+		return ret;
+	}
+
+	ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
+	if (ret) {
+		DPAA2_BUS_ERR("Set iommu ret(%d), err(%d)",
+			ret, -errno);
+
+		return ret;
 	}
 
 	return fslmc_vfio_connect_container(vfio_group_fd);
@@ -629,11 +784,11 @@ static int vfio_map_irq_region(void)
 
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected");
@@ -643,8 +798,8 @@ static int vfio_map_irq_region(void)
 	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
 		PROT_READ, MAP_SHARED, fd, 0x6030000);
 	if (vaddr == MAP_FAILED) {
-		DPAA2_BUS_INFO("Unable to map region (errno = %d)", errno);
-		return -errno;
+		DPAA2_BUS_ERR("Unable to map region (errno = %d)", errno);
+		return -ENOMEM;
 	}
 
 	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
@@ -654,141 +809,200 @@ static int vfio_map_irq_region(void)
 		return 0;
 
 	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
-	return -errno;
-}
-
-static int fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
-static int fslmc_unmap_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
-
-static void
-fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
-	size_t len, void *arg __rte_unused)
-{
-	struct rte_memseg_list *msl;
-	struct rte_memseg *ms;
-	size_t cur_len = 0, map_len = 0;
-	uint64_t virt_addr;
-	rte_iova_t iova_addr;
-	int ret;
-
-	msl = rte_mem_virt2memseg_list(addr);
-
-	while (cur_len < len) {
-		const void *va = RTE_PTR_ADD(addr, cur_len);
-
-		ms = rte_mem_virt2memseg(va, msl);
-		iova_addr = ms->iova;
-		virt_addr = ms->addr_64;
-		map_len = ms->len;
-
-		DPAA2_BUS_DEBUG("Request for %s, va=%p, "
-				"virt_addr=0x%" PRIx64 ", "
-				"iova=0x%" PRIx64 ", map_len=%zu",
-				type == RTE_MEM_EVENT_ALLOC ?
-					"alloc" : "dealloc",
-				va, virt_addr, iova_addr, map_len);
-
-		/* iova_addr may be set to RTE_BAD_IOVA */
-		if (iova_addr == RTE_BAD_IOVA) {
-			DPAA2_BUS_DEBUG("Segment has invalid iova, skipping\n");
-			cur_len += map_len;
-			continue;
-		}
-
-		if (type == RTE_MEM_EVENT_ALLOC)
-			ret = fslmc_map_dma(virt_addr, iova_addr, map_len);
-		else
-			ret = fslmc_unmap_dma(virt_addr, iova_addr, map_len);
-
-		if (ret != 0) {
-			DPAA2_BUS_ERR("DMA Mapping/Unmapping failed. "
-					"Map=%d, addr=%p, len=%zu, err:(%d)",
-					type, va, map_len, ret);
-			return;
-		}
-
-		cur_len += map_len;
-	}
-
-	if (type == RTE_MEM_EVENT_ALLOC)
-		DPAA2_BUS_DEBUG("Total Mapped: addr=%p, len=%zu",
-				addr, len);
-	else
-		DPAA2_BUS_DEBUG("Total Unmapped: addr=%p, len=%zu",
-				addr, len);
+	return ret;
 }
 
 static int
-fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr,
-	size_t len)
+fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len)
 {
 	struct vfio_iommu_type1_dma_map dma_map = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
 		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
 	};
-	int ret, fd;
+	int ret, fd, is_io = 0;
 	const char *group_name = fslmc_vfio_get_group_name();
+	struct fslmc_dmaseg *dmaseg = NULL;
+	uint64_t phy = 0;
+
+	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
+		if (vaddr != iovaddr) {
+			DPAA2_BUS_ERR("IOVA:VA(%" PRIx64 " : %" PRIx64 ") %s",
+				iovaddr, vaddr,
+				"should be 1:1 for VA mode");
+
+			return -EINVAL;
+		}
+	}
 
+	phy = rte_mem_virt2phy((const void *)(uintptr_t)vaddr);
+	if (phy == RTE_BAD_IOVA) {
+		phy = fslmc_io_virt2phy((const void *)(uintptr_t)vaddr);
+		if (phy == RTE_BAD_IOVA)
+			return -ENOMEM;
+		is_io = 1;
+	} else if (fslmc_mem_va2iova != RTE_BAD_IOVA &&
+		fslmc_mem_va2iova != (iovaddr - vaddr)) {
+		DPAA2_BUS_WARN("Multiple MEM PA<->VA conversions.");
+	}
+	DPAA2_BUS_DEBUG("%s(%zu): VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+		is_io ? "DMA IO map size" : "DMA MEM map size",
+		len, vaddr, iovaddr, phy);
+
+	if (is_io)
+		goto io_mapping_check;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (!((vaddr + len) <= dmaseg->vaddr ||
+			(dmaseg->vaddr + dmaseg->size) <= vaddr)) {
+			DPAA2_BUS_ERR("MEM: New VA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				vaddr, vaddr + len);
+			DPAA2_BUS_ERR("MEM: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->vaddr,
+				dmaseg->vaddr + dmaseg->size);
+			return -EEXIST;
+		}
+		if (!((iovaddr + len) <= dmaseg->iova ||
+			(dmaseg->iova + dmaseg->size) <= iovaddr)) {
+			DPAA2_BUS_ERR("MEM: New IOVA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				iovaddr, iovaddr + len);
+			DPAA2_BUS_ERR("MEM: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->iova,
+				dmaseg->iova + dmaseg->size);
+			return -EEXIST;
+		}
+	}
+	goto start_mapping;
+
+io_mapping_check:
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if (!((vaddr + len) <= dmaseg->vaddr ||
+			(dmaseg->vaddr + dmaseg->size) <= vaddr)) {
+			DPAA2_BUS_ERR("IO: New VA Range (%" PRIx64 " ~ %" PRIx64 ")",
+				vaddr, vaddr + len);
+			DPAA2_BUS_ERR("IO: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->vaddr,
+				dmaseg->vaddr + dmaseg->size);
+			return -EEXIST;
+		}
+		if (!((iovaddr + len) <= dmaseg->iova ||
+			(dmaseg->iova + dmaseg->size) <= iovaddr)) {
+			DPAA2_BUS_ERR("IO: New IOVA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				iovaddr, iovaddr + len);
+			DPAA2_BUS_ERR("IO: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->iova,
+				dmaseg->iova + dmaseg->size);
+			return -EEXIST;
+		}
+	}
+
+start_mapping:
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
-		return 0;
+		if (phy != iovaddr) {
+			DPAA2_BUS_ERR("IOVA should support with IOMMU");
+			return -EIO;
+		}
+		goto end_mapping;
 	}
 
 	dma_map.size = len;
 	dma_map.vaddr = vaddr;
 	dma_map.iova = iovaddr;
 
-#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-	if (vaddr != iovaddr) {
-		DPAA2_BUS_WARN("vaddr(0x%lx) != iovaddr(0x%lx)",
-			vaddr, iovaddr);
-	}
-#endif
-
 	/* SET DMA MAP for IOMMU */
 	if (!fslmc_vfio_container_connected(fd)) {
-		DPAA2_BUS_ERR("Container is not connected ");
+		DPAA2_BUS_ERR("Container is not connected");
 		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("--> Map address: 0x%"PRIx64", size: %"PRIu64"",
-			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.size);
 	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA,
 		&dma_map);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO_IOMMU_MAP_DMA API(errno = %d)",
-				errno);
+		DPAA2_BUS_ERR("%s(%d) VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+			is_io ? "DMA IO map err" : "DMA MEM map err",
+			errno, vaddr, iovaddr, phy);
 		return ret;
 	}
 
+end_mapping:
+	dmaseg = malloc(sizeof(struct fslmc_dmaseg));
+	if (!dmaseg) {
+		DPAA2_BUS_ERR("DMA segment malloc failed!");
+		return -ENOMEM;
+	}
+	dmaseg->vaddr = vaddr;
+	dmaseg->iova = iovaddr;
+	dmaseg->size = len;
+	if (is_io) {
+		TAILQ_INSERT_TAIL(&fslmc_iosegs, dmaseg, next);
+	} else {
+		fslmc_mem_map_num++;
+		if (fslmc_mem_map_num == 1)
+			fslmc_mem_va2iova = iovaddr - vaddr;
+		else
+			fslmc_mem_va2iova = RTE_BAD_IOVA;
+		TAILQ_INSERT_TAIL(&fslmc_memsegs, dmaseg, next);
+	}
+	DPAA2_BUS_LOG(NOTICE,
+		"%s(%zx): VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+		is_io ? "DMA I/O map size" : "DMA MEM map size",
+		len, vaddr, iovaddr, phy);
+
 	return 0;
 }
 
 static int
-fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
+fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr, size_t len)
 {
 	struct vfio_iommu_type1_dma_unmap dma_unmap = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_unmap),
 		.flags = 0,
 	};
-	int ret, fd;
+	int ret, fd, is_io = 0;
 	const char *group_name = fslmc_vfio_get_group_name();
+	struct fslmc_dmaseg *dmaseg = NULL;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (((vaddr && dmaseg->vaddr == vaddr) || !vaddr) &&
+			dmaseg->iova == iovaddr &&
+			dmaseg->size == len) {
+			is_io = 0;
+			break;
+		}
+	}
+
+	if (!dmaseg) {
+		TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+			if (((vaddr && dmaseg->vaddr == vaddr) || !vaddr) &&
+				dmaseg->iova == iovaddr &&
+				dmaseg->size == len) {
+				is_io = 1;
+				break;
+			}
+		}
+	}
+
+	if (!dmaseg) {
+		DPAA2_BUS_ERR("IOVA(%" PRIx64 ") with length(%zx) not mapped",
+			iovaddr, len);
+		return 0;
+	}
 
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
@@ -796,7 +1010,7 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 	}
 
 	dma_unmap.size = len;
-	dma_unmap.iova = vaddr;
+	dma_unmap.iova = iovaddr;
 
 	/* SET DMA MAP for IOMMU */
 	if (!fslmc_vfio_container_connected(fd)) {
@@ -804,19 +1018,162 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("--> Unmap address: 0x%"PRIx64", size: %"PRIu64"",
-			(uint64_t)dma_unmap.iova, (uint64_t)dma_unmap.size);
 	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_UNMAP_DMA,
 		&dma_unmap);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO_IOMMU_UNMAP_DMA API(errno = %d)",
-				errno);
-		return -1;
+		DPAA2_BUS_ERR("DMA un-map IOVA(%" PRIx64 " ~ %" PRIx64 ") err(%d)",
+			iovaddr, iovaddr + len, errno);
+		return ret;
+	}
+
+	if (is_io) {
+		TAILQ_REMOVE(&fslmc_iosegs, dmaseg, next);
+	} else {
+		TAILQ_REMOVE(&fslmc_memsegs, dmaseg, next);
+		fslmc_mem_map_num--;
+		if (TAILQ_EMPTY(&fslmc_memsegs))
+			fslmc_mem_va2iova = RTE_BAD_IOVA;
 	}
 
+	free(dmaseg);
+
 	return 0;
 }
 
+uint64_t
+rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
+	uint64_t size)
+{
+	struct fslmc_dmaseg *dmaseg;
+	uint64_t va;
+
+	va = (uint64_t)vaddr;
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (va >= dmaseg->vaddr &&
+			(va + size) < (dmaseg->vaddr + dmaseg->size)) {
+			return dmaseg->iova + va - dmaseg->vaddr;
+		}
+	}
+
+	return RTE_BAD_IOVA;
+}
+
+void *
+rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
+	uint64_t size)
+{
+	struct fslmc_dmaseg *dmaseg;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (iova >= dmaseg->iova &&
+			(iova + size) < (dmaseg->iova + dmaseg->size))
+			return (void *)((uintptr_t)dmaseg->vaddr + (uintptr_t)(iova - dmaseg->iova));
+	}
+
+	return NULL;
+}
+
+__hot uint64_t
+rte_fslmc_mem_vaddr_to_iova(void *vaddr)
+{
+	if (likely(fslmc_mem_va2iova != RTE_BAD_IOVA))
+		return (uint64_t)vaddr + fslmc_mem_va2iova;
+
+	return rte_fslmc_cold_mem_vaddr_to_iova(vaddr, 0);
+}
+
+__hot void *
+rte_fslmc_mem_iova_to_vaddr(uint64_t iova)
+{
+	if (likely(fslmc_mem_va2iova != RTE_BAD_IOVA))
+		return (void *)((uintptr_t)iova - (uintptr_t)fslmc_mem_va2iova);
+
+	return rte_fslmc_cold_mem_iova_to_vaddr(iova, 0);
+}
+
+uint64_t
+rte_fslmc_io_vaddr_to_iova(void *vaddr)
+{
+	struct fslmc_dmaseg *dmaseg = NULL;
+	uint64_t va = (uint64_t)vaddr;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if ((va >= dmaseg->vaddr) &&
+			va < dmaseg->vaddr + dmaseg->size)
+			return dmaseg->iova + va - dmaseg->vaddr;
+	}
+
+	return RTE_BAD_IOVA;
+}
+
+void *
+rte_fslmc_io_iova_to_vaddr(uint64_t iova)
+{
+	struct fslmc_dmaseg *dmaseg = NULL;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if ((iova >= dmaseg->iova) &&
+			iova < dmaseg->iova + dmaseg->size)
+			return (void *)((uintptr_t)dmaseg->vaddr + (uintptr_t)(iova - dmaseg->iova));
+	}
+
+	return NULL;
+}
+
+static void
+fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
+	size_t len, void *arg __rte_unused)
+{
+	struct rte_memseg_list *msl;
+	struct rte_memseg *ms;
+	size_t cur_len = 0, map_len = 0;
+	uint64_t virt_addr;
+	rte_iova_t iova_addr;
+	int ret;
+
+	msl = rte_mem_virt2memseg_list(addr);
+
+	while (cur_len < len) {
+		const void *va = RTE_PTR_ADD(addr, cur_len);
+
+		ms = rte_mem_virt2memseg(va, msl);
+		iova_addr = ms->iova;
+		virt_addr = ms->addr_64;
+		map_len = ms->len;
+
+		DPAA2_BUS_DEBUG("%s, va=%p, virt=%" PRIx64 ", iova=%" PRIx64 ", len=%zu",
+			type == RTE_MEM_EVENT_ALLOC ? "alloc" : "dealloc",
+			va, virt_addr, iova_addr, map_len);
+
+		/* iova_addr may be set to RTE_BAD_IOVA */
+		if (iova_addr == RTE_BAD_IOVA) {
+			DPAA2_BUS_DEBUG("Segment has invalid iova, skipping\n");
+			cur_len += map_len;
+			continue;
+		}
+
+		if (type == RTE_MEM_EVENT_ALLOC)
+			ret = fslmc_map_dma(virt_addr, iova_addr, map_len);
+		else
+			ret = fslmc_unmap_dma(virt_addr, iova_addr, map_len);
+
+		if (ret != 0) {
+			DPAA2_BUS_ERR("%s: Map=%d, addr=%p, len=%zu, err:(%d)",
+				type == RTE_MEM_EVENT_ALLOC ?
+				"DMA Mapping failed. " :
+				"DMA Unmapping failed. ",
+				type, va, map_len, ret);
+			return;
+		}
+
+		cur_len += map_len;
+	}
+
+	DPAA2_BUS_DEBUG("Total %s: addr=%p, len=%zu",
+		type == RTE_MEM_EVENT_ALLOC ? "Mapped" : "Unmapped",
+		addr, len);
+}
+
 static int
 fslmc_dmamap_seg(const struct rte_memseg_list *msl __rte_unused,
 		const struct rte_memseg *ms, void *arg)
@@ -848,7 +1205,7 @@ __rte_internal
 int
 rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
 {
-	return fslmc_unmap_dma(iova, 0, size);
+	return fslmc_unmap_dma(0, iova, size);
 }
 
 int rte_fslmc_vfio_dmamap(void)
@@ -858,9 +1215,10 @@ int rte_fslmc_vfio_dmamap(void)
 	/* Lock before parsing and registering callback to memory subsystem */
 	rte_mcfg_mem_read_lock();
 
-	if (rte_memseg_walk(fslmc_dmamap_seg, &i) < 0) {
+	ret = rte_memseg_walk(fslmc_dmamap_seg, &i);
+	if (ret) {
 		rte_mcfg_mem_read_unlock();
-		return -1;
+		return ret;
 	}
 
 	ret = rte_mem_event_callback_register("fslmc_memevent_clb",
@@ -899,6 +1257,14 @@ fslmc_vfio_setup_device(const char *dev_addr,
 	const char *group_name = fslmc_vfio_get_group_name();
 
 	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd <= 0) {
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, vfio_group_fd);
+		if (vfio_group_fd < 0)
+			return vfio_group_fd;
+		return -EIO;
+	}
+
 	if (!fslmc_vfio_container_connected(vfio_group_fd)) {
 		DPAA2_BUS_ERR("Container is not connected");
 		return -EIO;
@@ -1007,8 +1373,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
 	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
 	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 	if (ret)
-		DPAA2_BUS_ERR(
-			"Error disabling dpaa2 interrupts for fd %d",
+		DPAA2_BUS_ERR("Error disabling dpaa2 interrupts for fd %d",
 			rte_intr_fd_get(intr_handle));
 
 	return ret;
@@ -1033,7 +1398,7 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 		if (ret < 0) {
 			DPAA2_BUS_ERR("Cannot get IRQ(%d) info, error %i (%s)",
 				      i, errno, strerror(errno));
-			return -1;
+			return ret;
 		}
 
 		/* if this vector cannot be used with eventfd,
@@ -1047,8 +1412,8 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 		fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
 		if (fd < 0) {
 			DPAA2_BUS_ERR("Cannot set up eventfd, error %i (%s)",
-				      errno, strerror(errno));
-			return -1;
+				errno, strerror(errno));
+			return fd;
 		}
 
 		if (rte_intr_fd_set(intr_handle, fd))
@@ -1064,7 +1429,7 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 	}
 
 	/* if we're here, we haven't found a suitable interrupt vector */
-	return -1;
+	return -EIO;
 }
 
 static void
@@ -1238,6 +1603,13 @@ fslmc_vfio_close_group(void)
 	const char *group_name = fslmc_vfio_get_group_name();
 
 	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd <= 0) {
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, vfio_group_fd);
+		if (vfio_group_fd < 0)
+			return vfio_group_fd;
+		return -EIO;
+	}
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
 		if (dev->device.devargs &&
@@ -1329,7 +1701,7 @@ fslmc_vfio_process_group(void)
 				ret = fslmc_process_mcp(dev);
 				if (ret) {
 					DPAA2_BUS_ERR("Unable to map MC Portal");
-					return -1;
+					return ret;
 				}
 				found_mportal = 1;
 			}
@@ -1346,7 +1718,7 @@ fslmc_vfio_process_group(void)
 	/* Cannot continue if there is not even a single mportal */
 	if (!found_mportal) {
 		DPAA2_BUS_ERR("No MC Portal device found. Not continuing");
-		return -1;
+		return -EIO;
 	}
 
 	/* Search for DPRC device next as it updates endpoint of
@@ -1358,7 +1730,7 @@ fslmc_vfio_process_group(void)
 			ret = fslmc_process_iodevices(dev);
 			if (ret) {
 				DPAA2_BUS_ERR("Unable to process dprc");
-				return -1;
+				return ret;
 			}
 			TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
 		}
@@ -1415,7 +1787,7 @@ fslmc_vfio_process_group(void)
 			if (ret) {
 				DPAA2_BUS_DEBUG("Dev (%s) init failed",
 						dev->device.name);
-				return -1;
+				return ret;
 			}
 
 			break;
@@ -1439,7 +1811,7 @@ fslmc_vfio_process_group(void)
 			if (ret) {
 				DPAA2_BUS_DEBUG("Dev (%s) init failed",
 						dev->device.name);
-				return -1;
+				return ret;
 			}
 
 			break;
@@ -1468,9 +1840,9 @@ fslmc_vfio_setup_group(void)
 	vfio_container_fd = fslmc_vfio_container_fd();
 	if (vfio_container_fd <= 0) {
 		vfio_container_fd = fslmc_vfio_open_container_fd();
-		if (vfio_container_fd <= 0) {
+		if (vfio_container_fd < 0) {
 			DPAA2_BUS_ERR("Failed to create MC VFIO container");
-			return -rte_errno;
+			return vfio_container_fd;
 		}
 	}
 
@@ -1483,6 +1855,8 @@ fslmc_vfio_setup_group(void)
 	if (vfio_group_fd <= 0) {
 		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
 		if (vfio_group_fd <= 0) {
+			DPAA2_BUS_ERR("%s: open group name(%s) failed(%d)",
+				__func__, group_name, vfio_group_fd);
 			if (!vfio_group_fd)
 				close(vfio_group_fd);
 			DPAA2_BUS_ERR("Failed to create MC VFIO group");
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 1695b6c078..408b35680d 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -11,6 +11,10 @@
 #include <rte_compat.h>
 #include <rte_vfio.h>
 
+#ifndef __hot
+#define __hot __attribute__((hot))
+#endif
+
 /* Pathname of FSL-MC devices directory. */
 #define SYSFS_FSL_MC_DEVICES	"/sys/bus/fsl-mc/devices"
 #define DPAA2_MC_DPNI_DEVID	7
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index bc36607e64..85e4c16c03 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2020 NXP
+ *   Copyright 2016,2020-2023 NXP
  *
  */
 
@@ -28,7 +28,6 @@
 #include "portal/dpaa2_hw_pvt.h"
 #include "portal/dpaa2_hw_dpio.h"
 
-
 TAILQ_HEAD(dpbp_dev_list, dpaa2_dpbp_dev);
 static struct dpbp_dev_list dpbp_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpbp_dev_list); /*!< DPBP device list */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 8265fee497..b52a8c8ba5 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -332,9 +332,8 @@ dpaa2_affine_qbman_swp(void)
 		}
 		RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
 
-		DPAA2_BUS_INFO(
-			"DPAA Portal=%p (%d) is affined to thread %" PRIu64,
-			dpio_dev, dpio_dev->index, tid);
+		DPAA2_BUS_DEBUG("Portal[%d] is affined to thread %" PRIu64,
+			dpio_dev->index, tid);
 	}
 	return 0;
 }
@@ -354,9 +353,8 @@ dpaa2_affine_qbman_ethrx_swp(void)
 		}
 		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
 
-		DPAA2_BUS_INFO(
-			"DPAA Portal=%p (%d) is affined for eth rx to thread %"
-			PRIu64, dpio_dev, dpio_dev->index, tid);
+		DPAA2_BUS_DEBUG("Portal_eth_rx[%d] is affined to thread %" PRIu64,
+			dpio_dev->index, tid);
 	}
 	return 0;
 }
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 7407f8d38d..328e1e788a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -12,6 +12,7 @@
 #include <mc/fsl_mc_sys.h>
 
 #include <rte_compat.h>
+#include <dpaa2_hw_pvt.h>
 
 struct dpaa2_io_portal_t {
 	struct dpaa2_dpio_dev *dpio_dev;
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 169c7917ea..c5900bd06a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -14,6 +14,7 @@
 
 #include <mc/fsl_mc_sys.h>
 #include <fsl_qbman_portal.h>
+#include <bus_fslmc_driver.h>
 
 #ifndef false
 #define false      0
@@ -80,6 +81,8 @@
 #define DPAA2_PACKET_LAYOUT_ALIGN	64 /*changing from 256 */
 
 #define DPAA2_DPCI_MAX_QUEUES 2
+#define DPAA2_INVALID_FLOW_ID 0xffff
+#define DPAA2_INVALID_CGID 0xff
 
 struct dpaa2_queue;
 
@@ -365,83 +368,63 @@ enum qbman_fd_format {
  */
 #define DPAA2_EQ_RESP_ALWAYS		1
 
-/* Various structures representing contiguous memory maps */
-struct dpaa2_memseg {
-	TAILQ_ENTRY(dpaa2_memseg) next;
-	char *vaddr;
-	rte_iova_t iova;
-	size_t len;
-};
-
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-extern uint8_t dpaa2_virt_mode;
-static void *dpaa2_mem_ptov(phys_addr_t paddr) __rte_unused;
-
-static void *dpaa2_mem_ptov(phys_addr_t paddr)
+static inline uint64_t
+dpaa2_mem_va_to_iova(void *va)
 {
-	void *va;
-
-	if (dpaa2_virt_mode)
-		return (void *)(size_t)paddr;
-
-	va = (void *)dpaax_iova_table_get_va(paddr);
-	if (likely(va != NULL))
-		return va;
-
-	/* If not, Fallback to full memseg list searching */
-	va = rte_mem_iova2virt(paddr);
+	if (likely(rte_eal_iova_mode() == RTE_IOVA_VA))
+		return (uint64_t)va;
 
-	return va;
+	return rte_fslmc_mem_vaddr_to_iova(va);
 }
 
-static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr) __rte_unused;
-
-static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
+static inline void *
+dpaa2_mem_iova_to_va(uint64_t iova)
 {
-	const struct rte_memseg *memseg;
-
-	if (dpaa2_virt_mode)
-		return vaddr;
+	if (likely(rte_eal_iova_mode() == RTE_IOVA_VA))
+		return (void *)(uintptr_t)iova;
 
-	memseg = rte_mem_virt2memseg((void *)(uintptr_t)vaddr, NULL);
-	if (memseg)
-		return memseg->iova + RTE_PTR_DIFF(vaddr, memseg->addr);
-	return (size_t)NULL;
+	return rte_fslmc_mem_iova_to_vaddr(iova);
 }
 
-/**
- * When we are using Physical addresses as IO Virtual Addresses,
- * Need to call conversion routines dpaa2_mem_vtop & dpaa2_mem_ptov
- * wherever required.
- * These routines are called with help of below MACRO's
- */
-
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_iova)
-
-/**
- * macro to convert Virtual address to IOVA
- */
-#define DPAA2_VADDR_TO_IOVA(_vaddr) dpaa2_mem_vtop((size_t)(_vaddr))
-
-/**
- * macro to convert IOVA to Virtual address
- */
-#define DPAA2_IOVA_TO_VADDR(_iova) dpaa2_mem_ptov((size_t)(_iova))
-
-/**
- * macro to convert modify the memory containing IOVA to Virtual address
- */
+#define DPAA2_VADDR_TO_IOVA(_vaddr) \
+	dpaa2_mem_va_to_iova((void *)(uintptr_t)_vaddr)
+#define DPAA2_IOVA_TO_VADDR(_iova) \
+	dpaa2_mem_iova_to_va((uint64_t)_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type) \
-	{_mem = (_type)(dpaa2_mem_ptov((size_t)(_mem))); }
+	{_mem = (_type)DPAA2_IOVA_TO_VADDR(_mem); }
+
+#define DPAA2_VAMODE_VADDR_TO_IOVA(_vaddr) ((uint64_t)_vaddr)
+#define DPAA2_VAMODE_IOVA_TO_VADDR(_iova) ((void *)_iova)
+#define DPAA2_VAMODE_MODIFY_IOVA_TO_VADDR(_mem, _type) \
+	{_mem = (_type)(_mem); }
+
+#define DPAA2_PAMODE_VADDR_TO_IOVA(_vaddr) \
+	rte_fslmc_mem_vaddr_to_iova((void *)_vaddr)
+#define DPAA2_PAMODE_IOVA_TO_VADDR(_iova) \
+	rte_fslmc_mem_iova_to_vaddr((uint64_t)_iova)
+#define DPAA2_PAMODE_MODIFY_IOVA_TO_VADDR(_mem, _type) \
+	{_mem = (_type)rte_fslmc_mem_iova_to_vaddr(_mem); }
+
+static inline uint64_t
+dpaa2_mem_va_to_iova_check(void *va, uint64_t size)
+{
+	uint64_t iova = rte_fslmc_cold_mem_vaddr_to_iova(va, size);
 
-#else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
+	if (iova == RTE_BAD_IOVA)
+		return RTE_BAD_IOVA;
 
-#define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
-#define DPAA2_VADDR_TO_IOVA(_vaddr) (phys_addr_t)(_vaddr)
-#define DPAA2_IOVA_TO_VADDR(_iova) (void *)(_iova)
-#define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
+	/** Double check the iova is valid.*/
+	if (iova != rte_mem_virt2iova(va))
+		return RTE_BAD_IOVA;
+
+	return iova;
+}
 
-#endif /* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
+#define DPAA2_VADDR_TO_IOVA_AND_CHECK(_vaddr, size) \
+	dpaa2_mem_va_to_iova_check(_vaddr, size)
+#define DPAA2_IOVA_TO_VADDR_AND_CHECK(_iova, size) \
+	rte_fslmc_cold_mem_iova_to_vaddr(_iova, size)
 
 static inline
 int check_swp_active_dqs(uint16_t dpio_index)
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index b49bc0a62c..2c36895285 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -24,7 +24,6 @@ INTERNAL {
 	dpaa2_seqn_dynfield_offset;
 	dpaa2_seqn;
 	dpaa2_svr_family;
-	dpaa2_virt_mode;
 	dpbp_disable;
 	dpbp_enable;
 	dpbp_get_attributes;
@@ -119,6 +118,12 @@ INTERNAL {
 	rte_fslmc_object_register;
 	rte_global_active_dqs_list;
 	rte_fslmc_vfio_mem_dmaunmap;
+	rte_fslmc_cold_mem_vaddr_to_iova;
+	rte_fslmc_cold_mem_iova_to_vaddr;
+	rte_fslmc_mem_vaddr_to_iova;
+	rte_fslmc_mem_iova_to_vaddr;
+	rte_fslmc_io_vaddr_to_iova;
+	rte_fslmc_io_iova_to_vaddr;
 
 	local: *;
 };
diff --git a/drivers/dma/dpaa2/dpaa2_qdma.c b/drivers/dma/dpaa2/dpaa2_qdma.c
index 2c91ceec13..99b8881c5d 100644
--- a/drivers/dma/dpaa2/dpaa2_qdma.c
+++ b/drivers/dma/dpaa2/dpaa2_qdma.c
@@ -10,6 +10,7 @@
 
 #include <mc/fsl_dpdmai.h>
 
+#include <dpaa2_hw_dpio.h>
 #include "rte_pmd_dpaa2_qdma.h"
 #include "dpaa2_qdma.h"
 #include "dpaa2_qdma_logs.h"
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 17/43] bus/fslmc: remove VFIO IRQ mapping
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (15 preceding siblings ...)
  2024-09-13  5:59 ` [v1 16/43] bus/fslmc: dynamic IOVA mode configuration vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 18/43] bus/fslmc: create dpaa2 device with its object vanshika.shukla
                   ` (26 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Remove unused GITS translator VFIO mapping.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 50 ----------------------------------
 1 file changed, 50 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 31011b8532..f5d398c8b0 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -51,7 +51,6 @@
 static struct fslmc_vfio_container s_vfio_container;
 /* Currently we only support single group/process. */
 static const char *fslmc_group; /* dprc.x*/
-static uint32_t *msi_intr_vaddr;
 static void *(*rte_mcp_ptr_list);
 
 struct fslmc_dmaseg {
@@ -769,49 +768,6 @@ vfio_connect_container(int vfio_container_fd,
 	return fslmc_vfio_connect_container(vfio_group_fd);
 }
 
-static int vfio_map_irq_region(void)
-{
-	int ret, fd;
-	unsigned long *vaddr = NULL;
-	struct vfio_iommu_type1_dma_map map = {
-		.argsz = sizeof(map),
-		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
-		.vaddr = 0x6030000,
-		.iova = 0x6030000,
-		.size = 0x1000,
-	};
-	const char *group_name = fslmc_vfio_get_group_name();
-
-	fd = fslmc_vfio_group_fd_by_name(group_name);
-	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
-			__func__, group_name, fd);
-		if (fd < 0)
-			return fd;
-		return -EIO;
-	}
-	if (!fslmc_vfio_container_connected(fd)) {
-		DPAA2_BUS_ERR("Container is not connected");
-		return -EIO;
-	}
-
-	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
-		PROT_READ, MAP_SHARED, fd, 0x6030000);
-	if (vaddr == MAP_FAILED) {
-		DPAA2_BUS_ERR("Unable to map region (errno = %d)", errno);
-		return -ENOMEM;
-	}
-
-	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
-	map.vaddr = (unsigned long)vaddr;
-	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA, &map);
-	if (!ret)
-		return 0;
-
-	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
-	return ret;
-}
-
 static int
 fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len)
 {
@@ -1232,12 +1188,6 @@ int rte_fslmc_vfio_dmamap(void)
 
 	DPAA2_BUS_DEBUG("Total %d segments found.", i);
 
-	/* TODO - This is a W.A. as VFIO currently does not add the mapping of
-	 * the interrupt region to SMMU. This should be removed once the
-	 * support is added in the Kernel.
-	 */
-	vfio_map_irq_region();
-
 	/* Existing segments have been mapped and memory callback for hotplug
 	 * has been installed.
 	 */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 18/43] bus/fslmc: create dpaa2 device with its object
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (16 preceding siblings ...)
  2024-09-13  5:59 ` [v1 17/43] bus/fslmc: remove VFIO IRQ mapping vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 19/43] bus/fslmc: fix coverity issue vanshika.shukla
                   ` (25 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Create the dpaa2 device with its object instead of only the object ID.
Assign each dpaa2 object its container.
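
For illustration, an object driver under the new create() signature looks
like the sketch below ("dpfoo" and the placeholder comments are
hypothetical; the real conversions in this patch, e.g. dpbp and dpci,
follow the same shape). The whole rte_dpaa2_device is passed in, so the
callback can read both the object ID and the parent container:

#include <bus_fslmc_driver.h>

static int
dpfoo_create(int vdev_fd __rte_unused,
	struct vfio_device_info *obj_info __rte_unused,
	struct rte_dpaa2_device *obj)
{
	uint16_t dpfoo_id = obj->object_id;	/* ID still available */
	/* obj->container points to the parent DPRC device; it is set
	 * when the DPRC is created, which always happens before its
	 * children.
	 */
	struct dpaa2_dprc_dev *parent = obj->container;

	if (!parent)
		return -1;

	/* A real driver would open the MC object by dpfoo_id here and
	 * store its handle, as the dpbp/dpci drivers do.
	 */
	(void)dpfoo_id;

	return 0;
}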

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     | 39 ++++++++++++------------
 drivers/bus/fslmc/fslmc_vfio.c           |  3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c |  6 ++--
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c |  8 ++---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c |  6 ++--
 drivers/bus/fslmc/portal/dpaa2_hw_dprc.c |  8 +++--
 drivers/event/dpaa2/dpaa2_hw_dpcon.c     |  8 ++---
 drivers/net/dpaa2/dpaa2_mux.c            |  6 ++--
 drivers/net/dpaa2/dpaa2_ptp.c            |  8 ++---
 9 files changed, 47 insertions(+), 45 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index 11eebd560c..462bf2113e 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -89,25 +89,6 @@ enum rte_dpaa2_dev_type {
 	DPAA2_DEVTYPE_MAX,
 };
 
-TAILQ_HEAD(rte_dpaa2_object_list, rte_dpaa2_object);
-
-typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
-				      struct vfio_device_info *obj_info,
-				      int object_id);
-
-typedef void (*rte_dpaa2_obj_close_t)(int object_id);
-
-/**
- * A structure describing a DPAA2 object.
- */
-struct rte_dpaa2_object {
-	TAILQ_ENTRY(rte_dpaa2_object) next; /**< Next in list. */
-	const char *name;                   /**< Name of Object. */
-	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
-	rte_dpaa2_obj_create_t create;
-	rte_dpaa2_obj_close_t close;
-};
-
 /**
  * A structure describing a DPAA2 device.
  */
@@ -123,6 +104,7 @@ struct rte_dpaa2_device {
 	enum rte_dpaa2_dev_type dev_type;   /**< Device Type */
 	uint16_t object_id;                 /**< DPAA2 Object ID */
 	enum rte_dpaa2_dev_type ep_dev_type;   /**< Endpoint Device Type */
+	struct dpaa2_dprc_dev *container;
 	uint16_t ep_object_id;                 /**< Endpoint DPAA2 Object ID */
 	char ep_name[RTE_DEV_NAME_MAX_LEN];
 	struct rte_intr_handle *intr_handle; /**< Interrupt handle */
@@ -130,10 +112,29 @@ struct rte_dpaa2_device {
 	char name[FSLMC_OBJECT_MAX_LEN];    /**< DPAA2 Object name*/
 };
 
+typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
+				      struct vfio_device_info *obj_info,
+				      struct rte_dpaa2_device *dev);
+
+typedef void (*rte_dpaa2_obj_close_t)(int object_id);
+
 typedef int (*rte_dpaa2_probe_t)(struct rte_dpaa2_driver *dpaa2_drv,
 				 struct rte_dpaa2_device *dpaa2_dev);
 typedef int (*rte_dpaa2_remove_t)(struct rte_dpaa2_device *dpaa2_dev);
 
+TAILQ_HEAD(rte_dpaa2_object_list, rte_dpaa2_object);
+
+/**
+ * A structure describing a DPAA2 object.
+ */
+struct rte_dpaa2_object {
+	TAILQ_ENTRY(rte_dpaa2_object) next; /**< Next in list. */
+	const char *name;                   /**< Name of Object. */
+	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
+	rte_dpaa2_obj_create_t create;
+	rte_dpaa2_obj_close_t close;
+};
+
 /**
  * A structure describing a DPAA2 driver.
  */
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index f5d398c8b0..3aeeca6880 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1468,8 +1468,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
 	case DPAA2_DPRC:
 		TAILQ_FOREACH(object, &dpaa2_obj_list, next) {
 			if (dev->dev_type == object->dev_type)
-				object->create(dev_fd, &device_info,
-					       dev->object_id);
+				object->create(dev_fd, &device_info, dev);
 			else
 				continue;
 		}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index 85e4c16c03..0ca3b2b2e4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -47,11 +47,11 @@ static struct dpaa2_dpbp_dev *get_dpbp_from_id(uint32_t dpbp_id)
 
 static int
 dpaa2_create_dpbp_device(int vdev_fd __rte_unused,
-			 struct vfio_device_info *obj_info __rte_unused,
-			 int dpbp_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpbp_dev *dpbp_node;
-	int ret;
+	int ret, dpbp_id = obj->object_id;
 	static int register_once;
 
 	/* Allocate DPAA2 dpbp handle */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index d7de2bca05..03c2c82f66 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017,2020 NXP
+ *   Copyright 2017, 2020, 2023 NXP
  *
  */
 
@@ -45,15 +45,15 @@ static struct dpaa2_dpci_dev *get_dpci_from_id(uint32_t dpci_id)
 
 static int
 rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
-			     struct vfio_device_info *obj_info __rte_unused,
-			     int dpci_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpci_dev *dpci_node;
 	struct dpci_attr attr;
 	struct dpci_rx_queue_cfg rx_queue_cfg;
 	struct dpci_rx_queue_attr rx_attr;
 	struct dpci_tx_queue_attr tx_attr;
-	int ret, i;
+	int ret, i, dpci_id = obj->object_id;
 
 	/* Allocate DPAA2 dpci handle */
 	dpci_node = rte_malloc(NULL, sizeof(struct dpaa2_dpci_dev), 0);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index b52a8c8ba5..346092a6b4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -391,14 +391,14 @@ dpaa2_close_dpio_device(int object_id)
 
 static int
 dpaa2_create_dpio_device(int vdev_fd,
-			 struct vfio_device_info *obj_info,
-			 int object_id)
+	struct vfio_device_info *obj_info,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info)};
 	struct qbman_swp_desc p_des;
 	struct dpio_attr attr;
-	int ret;
+	int ret, object_id = obj->object_id;
 
 	if (obj_info->num_regions < NUM_DPIO_REGIONS) {
 		DPAA2_BUS_ERR("Not sufficient number of DPIO regions");
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c b/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
index 65e2d799c3..a057cb1309 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
@@ -23,13 +23,13 @@ static struct dprc_dev_list dprc_dev_list
 
 static int
 rte_dpaa2_create_dprc_device(int vdev_fd __rte_unused,
-			     struct vfio_device_info *obj_info __rte_unused,
-			     int dprc_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dprc_dev *dprc_node;
 	struct dprc_endpoint endpoint1, endpoint2;
 	struct rte_dpaa2_device *dev, *dev_tmp;
-	int ret;
+	int ret, dprc_id = obj->object_id;
 
 	/* Allocate DPAA2 dprc handle */
 	dprc_node = rte_malloc(NULL, sizeof(struct dpaa2_dprc_dev), 0);
@@ -50,6 +50,8 @@ rte_dpaa2_create_dprc_device(int vdev_fd __rte_unused,
 	}
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_tmp) {
+		/** DPRC is always created before its children are created. */
+		dev->container = dprc_node;
 		if (dev->dev_type == DPAA2_ETH) {
 			int link_state;
 
diff --git a/drivers/event/dpaa2/dpaa2_hw_dpcon.c b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
index 64b0136e24..ea5b0d4b85 100644
--- a/drivers/event/dpaa2/dpaa2_hw_dpcon.c
+++ b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017,2020 NXP
+ *   Copyright 2017, 2020, 2023 NXP
  *
  */
 
@@ -45,12 +45,12 @@ static struct dpaa2_dpcon_dev *get_dpcon_from_id(uint32_t dpcon_id)
 
 static int
 rte_dpaa2_create_dpcon_device(int dev_fd __rte_unused,
-			      struct vfio_device_info *obj_info __rte_unused,
-			      int dpcon_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpcon_dev *dpcon_node;
 	struct dpcon_attr attr;
-	int ret;
+	int ret, dpcon_id = obj->object_id;
 
 	/* Allocate DPAA2 dpcon handle */
 	dpcon_node = rte_malloc(NULL, sizeof(struct dpaa2_dpcon_dev), 0);
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 53020e9302..4390be9789 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -374,12 +374,12 @@ rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if)
 
 static int
 dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
-			   struct vfio_device_info *obj_info __rte_unused,
-			   int dpdmux_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev;
 	struct dpdmux_attr attr;
-	int ret;
+	int ret, dpdmux_id = obj->object_id;
 	uint16_t maj_ver;
 	uint16_t min_ver;
 	uint8_t skip_reset_flags;
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index c08aa0f3bf..751e558c73 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2019 NXP
+ * Copyright 2019, 2023 NXP
  */
 
 #include <sys/queue.h>
@@ -134,11 +134,11 @@ int dpaa2_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
 #if defined(RTE_LIBRTE_IEEE1588)
 static int
 dpaa2_create_dprtc_device(int vdev_fd __rte_unused,
-			   struct vfio_device_info *obj_info __rte_unused,
-			   int dprtc_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dprtc_attr attr;
-	int ret;
+	int ret, dprtc_id = obj->object_id;
 
 	PMD_INIT_FUNC_TRACE();
 
-- 
2.25.1



* [v1 19/43] bus/fslmc: fix coverity issue
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (17 preceding siblings ...)
  2024-09-13  5:59 ` [v1 18/43] bus/fslmc: create dpaa2 device with it's object vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 20/43] bus/fslmc: fix invalid error FD code vanshika.shukla
                   ` (24 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Fix issues reported by Coverity (NXP internal Coverity): NULL-check
the management command response before dereferencing it, and widen an
operand to avoid a 32-bit multiply overflow.
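
For reference, a minimal sketch of the corrected shape (stand-in names
and types, not the driver code itself): the returned pointer is held
and NULL-checked before anything is copied through it, and the WRED
threshold multiply is widened before it can overflow.

#include <errno.h>
#include <stddef.h>
#include <stdint.h>

struct query_rslt { uint8_t verb; uint8_t rslt; }; /* stand-in layout */

/* Stub standing in for qbman_swp_mc_complete(); the real call returns
 * NULL when the management command gets no response.
 */
static void *mc_complete(void *swp, void *cmd, uint32_t verb_id)
{
	(void)swp; (void)cmd; (void)verb_id;
	return NULL; /* pretend the command timed out */
}

static int query_fixed(void *swp, void *cmd, uint32_t verb_id,
		       struct query_rslt *r)
{
	struct query_rslt *rslt;

	/* Keep the returned pointer and NULL-check it before use; the
	 * old code did '*r = *(cast)qbman_swp_mc_complete(...)' and
	 * then tested 'r', which is never NULL here.
	 */
	rslt = mc_complete(swp, cmd, verb_id);
	if (!rslt)
		return -EIO;

	*r = *rslt; /* copy out only after the check */
	return 0;
}

/* The other class fixed below: widen before multiplying so the product
 * cannot overflow 32-bit arithmetic; mn == 0 is guarded as the driver
 * does, since shifting by -1 would be undefined.
 */
static uint64_t wred_maxth(uint64_t ma, uint32_t mn)
{
	return mn == 0 ? ma : (uint64_t)(ma + 256) * (1u << (mn - 1));
}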

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 49 +++++++++++++++++----------
 1 file changed, 32 insertions(+), 17 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index eea06988ff..0e471ec3fd 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2020,2022 NXP
  */
 
 #include "compat.h"
@@ -37,6 +37,7 @@ int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
 		   struct qbman_bp_query_rslt *r)
 {
 	struct qbman_bp_query_desc *p;
+	struct qbman_bp_query_rslt *bp_query_rslt;
 
 	/* Start the management command */
 	p = (struct qbman_bp_query_desc *)qbman_swp_mc_start(s);
@@ -47,14 +48,16 @@ int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
 	p->bpid = bpid;
 
 	/* Complete the management command */
-	*r = *(struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s, p,
-						 QBMAN_BP_QUERY);
-	if (!r) {
+	bp_query_rslt = (struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s,
+						p, QBMAN_BP_QUERY);
+	if (!bp_query_rslt) {
 		pr_err("qbman: Query BPID %d failed, no response\n",
 			bpid);
 		return -EIO;
 	}
 
+	*r = *bp_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_BP_QUERY);
 
@@ -202,20 +205,23 @@ int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
 		   struct qbman_fq_query_rslt *r)
 {
 	struct qbman_fq_query_desc *p;
+	struct qbman_fq_query_rslt *fq_query_rslt;
 
 	p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->fqid = fqid;
-	*r = *(struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s, p,
-					  QBMAN_FQ_QUERY);
-	if (!r) {
+	fq_query_rslt = (struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s,
+					p, QBMAN_FQ_QUERY);
+	if (!fq_query_rslt) {
 		pr_err("qbman: Query FQID %d failed, no response\n",
 			fqid);
 		return -EIO;
 	}
 
+	*r = *fq_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY);
 
@@ -398,20 +404,23 @@ int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
 		    struct qbman_cgr_query_rslt *r)
 {
 	struct qbman_cgr_query_desc *p;
+	struct qbman_cgr_query_rslt *cgr_query_rslt;
 
 	p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->cgid = cgid;
-	*r = *(struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_CGR_QUERY);
-	if (!r) {
+	cgr_query_rslt = (struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s,
+					p, QBMAN_CGR_QUERY);
+	if (!cgr_query_rslt) {
 		pr_err("qbman: Query CGID %d failed, no response\n",
 			cgid);
 		return -EIO;
 	}
 
+	*r = *cgr_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_CGR_QUERY);
 
@@ -473,20 +482,23 @@ int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
 			struct qbman_wred_query_rslt *r)
 {
 	struct qbman_cgr_query_desc *p;
+	struct qbman_wred_query_rslt *wred_query_rslt;
 
 	p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->cgid = cgid;
-	*r = *(struct qbman_wred_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_WRED_QUERY);
-	if (!r) {
+	wred_query_rslt = (struct qbman_wred_query_rslt *)qbman_swp_mc_complete(
+					s, p, QBMAN_WRED_QUERY);
+	if (!wred_query_rslt) {
 		pr_err("qbman: Query CGID WRED %d failed, no response\n",
 			cgid);
 		return -EIO;
 	}
 
+	*r = *wred_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WRED_QUERY);
 
@@ -527,7 +539,7 @@ void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
 	if (mn == 0)
 		*maxth = ma;
 	else
-		*maxth = ((ma+256) * (1<<(mn-1)));
+		*maxth = ((uint64_t)(ma+256) * (1<<(mn-1)));
 
 	if (step_s == 0)
 		*minth = *maxth - step_i;
@@ -630,6 +642,7 @@ int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
 		       struct qbman_wqchan_query_rslt *r)
 {
 	struct qbman_wqchan_query_desc *p;
+	struct qbman_wqchan_query_rslt *wqchan_query_rslt;
 
 	/* Start the management command */
 	p = (struct qbman_wqchan_query_desc *)qbman_swp_mc_start(s);
@@ -640,14 +653,16 @@ int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
 	p->chid = chanid;
 
 	/* Complete the management command */
-	*r = *(struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_WQ_QUERY);
-	if (!r) {
+	wqchan_query_rslt = (struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(
+						s, p, QBMAN_WQ_QUERY);
+	if (!wqchan_query_rslt) {
 		pr_err("qbman: Query WQ Channel %d failed, no response\n",
 			chanid);
 		return -EIO;
 	}
 
+	*r = *wqchan_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WQ_QUERY);
 
-- 
2.25.1



* [v1 20/43] bus/fslmc: fix invalid error FD code
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (18 preceding siblings ...)
  2024-09-13  5:59 ` [v1 19/43] bus/fslmc: fix coverity issue vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 21/43] bus/fslmc: change qbman eq desc from d to desc vanshika.shukla
                   ` (23 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

The error code was being set to 0 in the failure path, but 0 is a
valid FD, which caused a memory leak.
This issue has been fixed by changing zero to a valid non-FD error
code.
CID: 26661848
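
In other words, 0 is a legitimate file descriptor, so using it as the
"no fd" sentinel makes a real fd 0 look like a failure and leak. A
minimal sketch of the convention the fix adopts (open_group_fd() and
setup_group() are hypothetical stand-ins, not the driver code):

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Stand-in for fslmc_vfio_open_group_fd(): negative on error,
 * otherwise a real fd -- which may legitimately be 0.
 */
static int open_group_fd(const char *path)
{
	int fd = open(path, O_RDWR);

	return fd < 0 ? -errno : fd;
}

static int setup_group(const char *path)
{
	int fd = open_group_fd(path);

	/* The fix: test 'fd < 0', not 'fd <= 0'. With '<= 0', a valid
	 * fd 0 would be treated as failure and never closed.
	 */
	if (fd < 0)
		return fd;

	/* ... use the group fd ... */
	close(fd);
	return 0;
}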

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 3aeeca6880..bcdca909ee 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2023 NXP
+ *   Copyright 2016-2024 NXP
  *
  */
 
@@ -41,8 +41,6 @@
 #include "portal/dpaa2_hw_pvt.h"
 #include "portal/dpaa2_hw_dpio.h"
 
-#define FSLMC_CONTAINER_MAX_LEN 8 /**< Of the format dprc.XX */
-
 #define FSLMC_VFIO_MP "fslmc_vfio_mp_sync"
 
 /* Container is composed by multiple groups, however,
@@ -415,18 +413,16 @@ fslmc_vfio_open_group_fd(const char *group_name)
 	    mp_reply.nb_received == 1) {
 		mp_rep = &mp_reply.msgs[0];
 		p = (struct vfio_mp_param *)mp_rep->param;
-		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+		if (p->result == SOCKET_OK && mp_rep->num_fds == 1)
 			vfio_group_fd = mp_rep->fds[0];
-		} else if (p->result == SOCKET_NO_FD) {
+		else if (p->result == SOCKET_NO_FD)
 			DPAA2_BUS_ERR("Bad VFIO group fd");
-			vfio_group_fd = 0;
-		}
 	}
 
 	free(mp_reply.msgs);
 
 add_vfio_group:
-	if (vfio_group_fd <= 0) {
+	if (vfio_group_fd < 0) {
 		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
 				filename, vfio_group_fd);
@@ -1801,14 +1797,11 @@ fslmc_vfio_setup_group(void)
 	}
 
 	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
-	if (vfio_group_fd <= 0) {
+	if (vfio_group_fd < 0) {
 		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
-		if (vfio_group_fd <= 0) {
+		if (vfio_group_fd < 0) {
 			DPAA2_BUS_ERR("%s: open group name(%s) failed(%d)",
 				__func__, group_name, vfio_group_fd);
-			if (!vfio_group_fd)
-				close(vfio_group_fd);
-			DPAA2_BUS_ERR("Failed to create MC VFIO group");
 			return -rte_errno;
 		}
 	}
-- 
2.25.1



* [v1 21/43] bus/fslmc: change qbman eq desc from d to desc
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (19 preceding siblings ...)
  2024-09-13  5:59 ` [v1 20/43] bus/fslmc: fix invalid error FD code vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 22/43] bus/fslmc: introduce VFIO DMA mapping API for fslmc vanshika.shukla
                   ` (22 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Rename the local qbman_eq_desc pointer from 'd' to 'desc' to avoid
redefining the same variable name.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 3fdca9761d..5d0cedc136 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1008,9 +1008,9 @@ static int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
 				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
 		p[0] = cl[0] | s->eqcr.pi_vb;
 		if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
-			struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+			struct qbman_eq_desc *desc = (struct qbman_eq_desc *)p;
 
-			d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+			desc->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
 				((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
 		}
 		eqcr_pi++;
-- 
2.25.1



* [v1 22/43] bus/fslmc: introduce VFIO DMA mapping API for fslmc
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (20 preceding siblings ...)
  2024-09-13  5:59 ` [v1 21/43] bus/fslmc: change qbman eq desc from d to desc vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 23/43] net/dpaa2: change miss flow ID macro name vanshika.shukla
                   ` (21 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Declare rte_fslmc_vfio_mem_dmamap and rte_fslmc_vfio_mem_dmaunmap
in bus_fslmc_driver.h for external usage.
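
A minimal usage sketch for an external caller, based on the signatures
declared below (the buffer address, IOVA and length are placeholders
chosen by the caller):

#include <stdint.h>

/* Declared in bus_fslmc_driver.h by this patch. */
int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
int rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size);

/* Map a caller-owned buffer into the fslmc VFIO container for DMA,
 * then unmap it once the DMA is done.
 */
static int dma_window(uint64_t vaddr, uint64_t iova, uint64_t len)
{
	int ret = rte_fslmc_vfio_mem_dmamap(vaddr, iova, len);

	if (ret)
		return ret;

	/* ... issue DMA referencing 'iova' ... */

	return rte_fslmc_vfio_mem_dmaunmap(iova, len);
}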

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     | 7 ++++++-
 drivers/bus/fslmc/fslmc_bus.c            | 2 +-
 drivers/bus/fslmc/fslmc_vfio.c           | 3 ++-
 drivers/bus/fslmc/fslmc_vfio.h           | 7 +------
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 2 +-
 5 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index 462bf2113e..7479fd35e0 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2016,2021 NXP
+ *   Copyright 2016,2021-2023 NXP
  *
  */
 
@@ -135,6 +135,11 @@ struct rte_dpaa2_object {
 	rte_dpaa2_obj_close_t close;
 };
 
+int
+rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size);
+
 /**
  * A structure describing a DPAA2 driver.
  */
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index ce87b4ddbd..6590b2305f 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -438,7 +438,7 @@ rte_fslmc_probe(void)
 	 * install callback handler.
 	 */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		ret = rte_fslmc_vfio_dmamap();
+		ret = fslmc_vfio_dmamap();
 		if (ret) {
 			DPAA2_BUS_ERR("Unable to DMA map existing VAs: (%d)",
 				      ret);
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index bcdca909ee..bd8455b70d 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1160,7 +1160,8 @@ rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
 	return fslmc_unmap_dma(0, iova, size);
 }
 
-int rte_fslmc_vfio_dmamap(void)
+int
+fslmc_vfio_dmamap(void)
 {
 	int i = 0, ret;
 
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 408b35680d..11efcc036e 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -64,10 +64,5 @@ int fslmc_vfio_process_group(void);
 int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
 int fslmc_get_container_group(const char *group_name, int *gropuid);
-int rte_fslmc_vfio_dmamap(void);
-int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova,
-		uint64_t size);
-int rte_fslmc_vfio_mem_dmaunmap(uint64_t iova,
-		uint64_t size);
-
+int fslmc_vfio_dmamap(void);
 #endif /* _FSLMC_VFIO_H_ */
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 42e17d984c..cfa71751d8 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -23,7 +23,7 @@
 #include <dev_driver.h>
 #include "rte_dpaa2_mempool.h"
 
-#include "fslmc_vfio.h"
+#include <bus_fslmc_driver.h>
 #include <fslmc_logs.h>
 #include <mc/fsl_dpbp.h>
 #include <portal/dpaa2_hw_pvt.h>
-- 
2.25.1



* [v1 23/43] net/dpaa2: change miss flow ID macro name
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (21 preceding siblings ...)
  2024-09-13  5:59 ` [v1 22/43] bus/fslmc: introduce VFIO DMA mapping API for fslmc vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 24/43] net/dpaa2: flow API refactor vanshika.shukla
                   ` (20 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Stop using the DPNI_FS_MISS_DROP macro as the default miss flow ID
since the name conflicts with an enum. Also, set the default miss
flow ID to 0.
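
For illustration, a sketch of the environment-variable parse with an
explicit fallback to the default when the value is out of range
('dist_queues' is a placeholder; the driver itself rejects
out-of-range values with an error instead of falling back):

#include <stdint.h>
#include <stdlib.h>

static uint16_t parse_miss_flow_id(uint16_t dist_queues)
{
	const char *s = getenv("DPAA2_FLOW_CONTROL_MISS_FLOW");
	uint16_t id = 0;	/* default miss flow ID is 0 */

	if (s)
		id = (uint16_t)atoi(s);

	/* Keep the ID within the valid distribution queue range. */
	return id < dist_queues ? id : 0;
}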

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 15f3343db4..c30c5225c7 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -30,8 +30,7 @@
 int mc_l4_port_identification;
 
 static char *dpaa2_flow_control_log;
-static uint16_t dpaa2_flow_miss_flow_id =
-	DPNI_FS_MISS_DROP;
+static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
 #define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
 
@@ -3994,7 +3993,7 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
 		struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 		dpaa2_flow_miss_flow_id =
-			atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
+			(uint16_t)atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
 		if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
 			DPAA2_PMD_ERR(
 				"The missed flow ID %d exceeds the max flow ID %d",
-- 
2.25.1



* [v1 24/43] net/dpaa2: flow API refactor
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (22 preceding siblings ...)
  2024-09-13  5:59 ` [v1 23/43] net/dpaa2: change miss flow ID macro name vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 25/43] net/dpaa2: dump Rx parser result vanshika.shukla
                   ` (19 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

1) Gather redundant code with the same logic from various protocol
   handlers into common functions.
2) struct dpaa2_key_profile is used to describe each extract's
   offset and size within the rule, making it easy to insert a new
   extract before the IP address extract (see the sketch after this
   list).
3) The IP address profile is used to describe the IPv4/IPv6 address
   extracts located at the end of the rule.
4) The L4 ports profile is used to describe the positions and offsets
   of the ports within the rule.
5) Once the extracts of a QoS/FS table are updated, go through all
   the existing flows of this table to update the rule data.
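
A sketch of the "insert hole" step behind points 2) and 5): when a new
extract lands before the trailing IP address extracts, every existing
rule shifts its key/mask bytes right to open a zeroed gap at the
insertion offset (a simplified stand-in for
dpaa2_flow_rule_insert_hole(); buffers are assumed large enough):

#include <stdint.h>
#include <string.h>

static void rule_insert_hole(uint8_t *key, uint8_t *mask,
			     uint16_t rule_len, int offset, int size)
{
	if (rule_len > offset) {
		/* Shift the tail right and zero the gap, in both the
		 * key and the mask, so offsets of later extracts stay
		 * consistent with the new key profile.
		 */
		memmove(key + offset + size, key + offset,
			rule_len - offset);
		memset(key + offset, 0, size);
		memmove(mask + offset + size, mask + offset,
			rule_len - offset);
		memset(mask + offset, 0, size);
	}
}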

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c |   27 +-
 drivers/net/dpaa2/dpaa2_ethdev.h |   90 +-
 drivers/net/dpaa2/dpaa2_flow.c   | 4839 ++++++++++++------------------
 3 files changed, 2030 insertions(+), 2926 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index f0b4843472..533effd72b 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2805,39 +2805,20 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	/* Init fields w.r.t. classification */
 	memset(&priv->extract.qos_key_extract, 0,
 		sizeof(struct dpaa2_key_extract));
-	priv->extract.qos_extract_param = (size_t)rte_malloc(NULL, 256, 64);
+	priv->extract.qos_extract_param = rte_malloc(NULL, 256, 64);
 	if (!priv->extract.qos_extract_param) {
-		DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow "
-			    " classification ", ret);
+		DPAA2_PMD_ERR("Memory alloc failed");
 		goto init_err;
 	}
-	priv->extract.qos_key_extract.key_info.ipv4_src_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv4_dst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv6_src_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv6_dst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
 
 	for (i = 0; i < MAX_TCS; i++) {
 		memset(&priv->extract.tc_key_extract[i], 0,
 			sizeof(struct dpaa2_key_extract));
-		priv->extract.tc_extract_param[i] =
-			(size_t)rte_malloc(NULL, 256, 64);
+		priv->extract.tc_extract_param[i] = rte_malloc(NULL, 256, 64);
 		if (!priv->extract.tc_extract_param[i]) {
-			DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow classification",
-				     ret);
+			DPAA2_PMD_ERR("Memory alloc failed");
 			goto init_err;
 		}
-		priv->extract.tc_key_extract[i].key_info.ipv4_src_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv4_dst_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv6_src_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv6_dst_offset =
-			IP_ADDRESS_OFFSET_INVALID;
 	}
 
 	ret = dpni_set_max_frame_length(dpni_dev, CMD_PRI_LOW, priv->token,
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 6625afaba3..ea1c1b5117 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -145,14 +145,6 @@ extern bool dpaa2_enable_ts[];
 extern uint64_t dpaa2_timestamp_rx_dynflag;
 extern int dpaa2_timestamp_dynfield_offset;
 
-#define DPAA2_QOS_TABLE_RECONFIGURE	1
-#define DPAA2_FS_TABLE_RECONFIGURE	2
-
-#define DPAA2_QOS_TABLE_IPADDR_EXTRACT 4
-#define DPAA2_FS_TABLE_IPADDR_EXTRACT 8
-
-#define DPAA2_FLOW_MAX_KEY_SIZE		16
-
 /* Externally defined */
 extern const struct rte_flow_ops dpaa2_flow_ops;
 
@@ -160,29 +152,85 @@ extern const struct rte_tm_ops dpaa2_tm_ops;
 
 extern bool dpaa2_enable_err_queue;
 
-#define IP_ADDRESS_OFFSET_INVALID (-1)
+struct ipv4_sd_addr_extract_rule {
+	uint32_t ipv4_src;
+	uint32_t ipv4_dst;
+};
+
+struct ipv6_sd_addr_extract_rule {
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+};
 
-struct dpaa2_key_info {
+struct ipv4_ds_addr_extract_rule {
+	uint32_t ipv4_dst;
+	uint32_t ipv4_src;
+};
+
+struct ipv6_ds_addr_extract_rule {
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+union ip_addr_extract_rule {
+	struct ipv4_sd_addr_extract_rule ipv4_sd_addr;
+	struct ipv6_sd_addr_extract_rule ipv6_sd_addr;
+	struct ipv4_ds_addr_extract_rule ipv4_ds_addr;
+	struct ipv6_ds_addr_extract_rule ipv6_ds_addr;
+};
+
+union ip_src_addr_extract_rule {
+	uint32_t ipv4_src;
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+union ip_dst_addr_extract_rule {
+	uint32_t ipv4_dst;
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+enum ip_addr_extract_type {
+	IP_NONE_ADDR_EXTRACT,
+	IP_SRC_EXTRACT,
+	IP_DST_EXTRACT,
+	IP_SRC_DST_EXTRACT,
+	IP_DST_SRC_EXTRACT
+};
+
+struct key_prot_field {
+	enum net_prot prot;
+	uint32_t key_field;
+};
+
+struct dpaa2_key_profile {
+	uint8_t num;
 	uint8_t key_offset[DPKG_MAX_NUM_OF_EXTRACTS];
 	uint8_t key_size[DPKG_MAX_NUM_OF_EXTRACTS];
-	/* Special for IP address. */
-	int ipv4_src_offset;
-	int ipv4_dst_offset;
-	int ipv6_src_offset;
-	int ipv6_dst_offset;
-	uint8_t key_total_size;
+
+	enum ip_addr_extract_type ip_addr_type;
+	uint8_t ip_addr_extract_pos;
+	uint8_t ip_addr_extract_off;
+
+	uint8_t l4_src_port_present;
+	uint8_t l4_src_port_pos;
+	uint8_t l4_src_port_offset;
+	uint8_t l4_dst_port_present;
+	uint8_t l4_dst_port_pos;
+	uint8_t l4_dst_port_offset;
+	struct key_prot_field prot_field[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint16_t key_max_size;
 };
 
 struct dpaa2_key_extract {
 	struct dpkg_profile_cfg dpkg;
-	struct dpaa2_key_info key_info;
+	struct dpaa2_key_profile key_profile;
 };
 
 struct extract_s {
 	struct dpaa2_key_extract qos_key_extract;
 	struct dpaa2_key_extract tc_key_extract[MAX_TCS];
-	uint64_t qos_extract_param;
-	uint64_t tc_extract_param[MAX_TCS];
+	uint8_t *qos_extract_param;
+	uint8_t *tc_extract_param[MAX_TCS];
 };
 
 struct dpaa2_dev_priv {
@@ -233,7 +281,8 @@ struct dpaa2_dev_priv {
 	/* Stores correction offset for one step timestamping */
 	uint16_t ptp_correction_offset;
 
-	LIST_HEAD(, rte_flow) flows; /**< Configured flow rule handles. */
+	struct dpaa2_dev_flow *curr;
+	LIST_HEAD(, dpaa2_dev_flow) flows;
 	LIST_HEAD(nodes, dpaa2_tm_node) nodes;
 	LIST_HEAD(shaper_profiles, dpaa2_tm_shaper_profile) shaper_profiles;
 };
@@ -292,7 +341,6 @@ uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
 void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci, struct dpaa2_queue *dpaa2_q);
 void dpaa2_flow_clean(struct rte_eth_dev *dev);
 uint16_t dpaa2_dev_tx_conf(void *queue)  __rte_unused;
-int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev);
 
 int dpaa2_timesync_enable(struct rte_eth_dev *dev);
 int dpaa2_timesync_disable(struct rte_eth_dev *dev);
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index c30c5225c7..0522fdb026 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2022 NXP
  */
 
 #include <sys/queue.h>
@@ -27,41 +27,40 @@
  * MC/WRIOP are not able to identify
  * the l4 protocol with l4 ports.
  */
-int mc_l4_port_identification;
+static int mc_l4_port_identification;
 
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
-#define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
-
-enum flow_rule_ipaddr_type {
-	FLOW_NONE_IPADDR,
-	FLOW_IPV4_ADDR,
-	FLOW_IPV6_ADDR
+enum dpaa2_flow_entry_size {
+	DPAA2_FLOW_ENTRY_MIN_SIZE = (DPNI_MAX_KEY_SIZE / 2),
+	DPAA2_FLOW_ENTRY_MAX_SIZE = DPNI_MAX_KEY_SIZE
 };
 
-struct flow_rule_ipaddr {
-	enum flow_rule_ipaddr_type ipaddr_type;
-	int qos_ipsrc_offset;
-	int qos_ipdst_offset;
-	int fs_ipsrc_offset;
-	int fs_ipdst_offset;
+enum dpaa2_flow_dist_type {
+	DPAA2_FLOW_QOS_TYPE = 1 << 0,
+	DPAA2_FLOW_FS_TYPE = 1 << 1
 };
 
-struct rte_flow {
-	LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
+#define DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT	16
+#define DPAA2_FLOW_MAX_KEY_SIZE			16
+
+struct dpaa2_dev_flow {
+	LIST_ENTRY(dpaa2_dev_flow) next;
 	struct dpni_rule_cfg qos_rule;
+	uint8_t *qos_key_addr;
+	uint8_t *qos_mask_addr;
+	uint16_t qos_rule_size;
 	struct dpni_rule_cfg fs_rule;
 	uint8_t qos_real_key_size;
 	uint8_t fs_real_key_size;
+	uint8_t *fs_key_addr;
+	uint8_t *fs_mask_addr;
+	uint16_t fs_rule_size;
 	uint8_t tc_id; /** Traffic Class ID. */
 	uint8_t tc_index; /** index within this Traffic Class. */
-	enum rte_flow_action_type action;
-	/* Special for IP address to specify the offset
-	 * in key/mask.
-	 */
-	struct flow_rule_ipaddr ipaddr_rule;
-	struct dpni_fs_action_cfg action_cfg;
+	enum rte_flow_action_type action_type;
+	struct dpni_fs_action_cfg fs_action_cfg;
 };
 
 static const
@@ -94,9 +93,6 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
 	RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
 };
 
-/* Max of enum rte_flow_item_type + 1, for both IPv4 and IPv6*/
-#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1)
-
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
 	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
@@ -155,11 +151,12 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
-
 #endif
 
-static inline void dpaa2_prot_field_string(
-	enum net_prot prot, uint32_t field,
+#define DPAA2_FLOW_DUMP printf
+
+static inline void
+dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 	char *string)
 {
 	if (!dpaa2_flow_control_log)
@@ -234,60 +231,84 @@ static inline void dpaa2_prot_field_string(
 	}
 }
 
-static inline void dpaa2_flow_qos_table_extracts_log(
-	const struct dpaa2_dev_priv *priv, FILE *f)
+static inline void
+dpaa2_flow_qos_extracts_log(const struct dpaa2_dev_priv *priv)
 {
 	int idx;
 	char string[32];
+	const struct dpkg_profile_cfg *dpkg =
+		&priv->extract.qos_key_extract.dpkg;
+	const struct dpkg_extract *extract;
+	enum dpkg_extract_type type;
+	enum net_prot prot;
+	uint32_t field;
 
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "Setup QoS table: number of extracts: %d\r\n",
-			priv->extract.qos_key_extract.dpkg.num_extracts);
-	for (idx = 0; idx < priv->extract.qos_key_extract.dpkg.num_extracts;
-		idx++) {
-		dpaa2_prot_field_string(priv->extract.qos_key_extract.dpkg
-			.extracts[idx].extract.from_hdr.prot,
-			priv->extract.qos_key_extract.dpkg.extracts[idx]
-			.extract.from_hdr.field,
-			string);
-		fprintf(f, "%s", string);
-		if ((idx + 1) < priv->extract.qos_key_extract.dpkg.num_extracts)
-			fprintf(f, " / ");
-	}
-	fprintf(f, "\r\n");
+	DPAA2_FLOW_DUMP("QoS table: %d extracts\r\n",
+		dpkg->num_extracts);
+	for (idx = 0; idx < dpkg->num_extracts; idx++) {
+		extract = &dpkg->extracts[idx];
+		type = extract->type;
+		if (type == DPKG_EXTRACT_FROM_HDR) {
+			prot = extract->extract.from_hdr.prot;
+			field = extract->extract.from_hdr.field;
+			dpaa2_prot_field_string(prot, field,
+				string);
+		} else if (type == DPKG_EXTRACT_FROM_DATA) {
+			sprintf(string, "raw offset/len: %d/%d",
+				extract->extract.from_data.offset,
+				extract->extract.from_data.size);
+		}
+		DPAA2_FLOW_DUMP("%s", string);
+		if ((idx + 1) < dpkg->num_extracts)
+			DPAA2_FLOW_DUMP(" / ");
+	}
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_fs_table_extracts_log(
-	const struct dpaa2_dev_priv *priv, int tc_id, FILE *f)
+static inline void
+dpaa2_flow_fs_extracts_log(const struct dpaa2_dev_priv *priv,
+	int tc_id)
 {
 	int idx;
 	char string[32];
+	const struct dpkg_profile_cfg *dpkg =
+		&priv->extract.tc_key_extract[tc_id].dpkg;
+	const struct dpkg_extract *extract;
+	enum dpkg_extract_type type;
+	enum net_prot prot;
+	uint32_t field;
 
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "Setup FS table: number of extracts of TC[%d]: %d\r\n",
-			tc_id, priv->extract.tc_key_extract[tc_id]
-			.dpkg.num_extracts);
-	for (idx = 0; idx < priv->extract.tc_key_extract[tc_id]
-		.dpkg.num_extracts; idx++) {
-		dpaa2_prot_field_string(priv->extract.tc_key_extract[tc_id]
-			.dpkg.extracts[idx].extract.from_hdr.prot,
-			priv->extract.tc_key_extract[tc_id].dpkg.extracts[idx]
-			.extract.from_hdr.field,
-			string);
-		fprintf(f, "%s", string);
-		if ((idx + 1) < priv->extract.tc_key_extract[tc_id]
-			.dpkg.num_extracts)
-			fprintf(f, " / ");
-	}
-	fprintf(f, "\r\n");
+	DPAA2_FLOW_DUMP("FS table: %d extracts in TC[%d]\r\n",
+		dpkg->num_extracts, tc_id);
+	for (idx = 0; idx < dpkg->num_extracts; idx++) {
+		extract = &dpkg->extracts[idx];
+		type = extract->type;
+		if (type == DPKG_EXTRACT_FROM_HDR) {
+			prot = extract->extract.from_hdr.prot;
+			field = extract->extract.from_hdr.field;
+			dpaa2_prot_field_string(prot, field,
+				string);
+		} else if (type == DPKG_EXTRACT_FROM_DATA) {
+			sprintf(string, "raw offset/len: %d/%d",
+				extract->extract.from_data.offset,
+				extract->extract.from_data.size);
+		}
+		DPAA2_FLOW_DUMP("%s", string);
+		if ((idx + 1) < dpkg->num_extracts)
+			DPAA2_FLOW_DUMP(" / ");
+	}
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_qos_entry_log(
-	const char *log_info, const struct rte_flow *flow, int qos_index, FILE *f)
+static inline void
+dpaa2_flow_qos_entry_log(const char *log_info,
+	const struct dpaa2_dev_flow *flow, int qos_index)
 {
 	int idx;
 	uint8_t *key, *mask;
@@ -295,27 +316,34 @@ static inline void dpaa2_flow_qos_entry_log(
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "\r\n%s QoS entry[%d] for TC[%d], extracts size is %d\r\n",
-		log_info, qos_index, flow->tc_id, flow->qos_real_key_size);
-
-	key = (uint8_t *)(size_t)flow->qos_rule.key_iova;
-	mask = (uint8_t *)(size_t)flow->qos_rule.mask_iova;
+	if (qos_index >= 0) {
+		DPAA2_FLOW_DUMP("%s QoS entry[%d](size %d/%d) for TC[%d]\r\n",
+			log_info, qos_index, flow->qos_rule_size,
+			flow->qos_rule.key_size,
+			flow->tc_id);
+	} else {
+		DPAA2_FLOW_DUMP("%s QoS entry(size %d/%d) for TC[%d]\r\n",
+			log_info, flow->qos_rule_size,
+			flow->qos_rule.key_size,
+			flow->tc_id);
+	}
 
-	fprintf(f, "key:\r\n");
-	for (idx = 0; idx < flow->qos_real_key_size; idx++)
-		fprintf(f, "%02x ", key[idx]);
+	key = flow->qos_key_addr;
+	mask = flow->qos_mask_addr;
 
-	fprintf(f, "\r\nmask:\r\n");
-	for (idx = 0; idx < flow->qos_real_key_size; idx++)
-		fprintf(f, "%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("key:\r\n");
+	for (idx = 0; idx < flow->qos_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", key[idx]);
 
-	fprintf(f, "\r\n%s QoS ipsrc: %d, ipdst: %d\r\n", log_info,
-		flow->ipaddr_rule.qos_ipsrc_offset,
-		flow->ipaddr_rule.qos_ipdst_offset);
+	DPAA2_FLOW_DUMP("\r\nmask:\r\n");
+	for (idx = 0; idx < flow->qos_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_fs_entry_log(
-	const char *log_info, const struct rte_flow *flow, FILE *f)
+static inline void
+dpaa2_flow_fs_entry_log(const char *log_info,
+	const struct dpaa2_dev_flow *flow)
 {
 	int idx;
 	uint8_t *key, *mask;
@@ -323,187 +351,432 @@ static inline void dpaa2_flow_fs_entry_log(
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "\r\n%s FS/TC entry[%d] of TC[%d], extracts size is %d\r\n",
-		log_info, flow->tc_index, flow->tc_id, flow->fs_real_key_size);
+	DPAA2_FLOW_DUMP("%s FS/TC entry[%d](size %d/%d) of TC[%d]\r\n",
+		log_info, flow->tc_index,
+		flow->fs_rule_size, flow->fs_rule.key_size,
+		flow->tc_id);
+
+	key = flow->fs_key_addr;
+	mask = flow->fs_mask_addr;
+
+	DPAA2_FLOW_DUMP("key:\r\n");
+	for (idx = 0; idx < flow->fs_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", key[idx]);
+
+	DPAA2_FLOW_DUMP("\r\nmask:\r\n");
+	for (idx = 0; idx < flow->fs_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("\r\n");
+}
 
-	key = (uint8_t *)(size_t)flow->fs_rule.key_iova;
-	mask = (uint8_t *)(size_t)flow->fs_rule.mask_iova;
+static int
+dpaa2_flow_ip_address_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_IPV4 &&
+		(field == NH_FLD_IPV4_SRC_IP ||
+		field == NH_FLD_IPV4_DST_IP))
+		return true;
+	else if (prot == NET_PROT_IPV6 &&
+		(field == NH_FLD_IPV6_SRC_IP ||
+		field == NH_FLD_IPV6_DST_IP))
+		return true;
+	else if (prot == NET_PROT_IP &&
+		(field == NH_FLD_IP_SRC ||
+		field == NH_FLD_IP_DST))
+		return true;
 
-	fprintf(f, "key:\r\n");
-	for (idx = 0; idx < flow->fs_real_key_size; idx++)
-		fprintf(f, "%02x ", key[idx]);
+	return false;
+}
 
-	fprintf(f, "\r\nmask:\r\n");
-	for (idx = 0; idx < flow->fs_real_key_size; idx++)
-		fprintf(f, "%02x ", mask[idx]);
+static int
+dpaa2_flow_l4_src_port_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_TCP &&
+		field == NH_FLD_TCP_PORT_SRC)
+		return true;
+	else if (prot == NET_PROT_UDP &&
+		field == NH_FLD_UDP_PORT_SRC)
+		return true;
+	else if (prot == NET_PROT_SCTP &&
+		field == NH_FLD_SCTP_PORT_SRC)
+		return true;
+
+	return false;
+}
 
-	fprintf(f, "\r\n%s FS ipsrc: %d, ipdst: %d\r\n", log_info,
-		flow->ipaddr_rule.fs_ipsrc_offset,
-		flow->ipaddr_rule.fs_ipdst_offset);
+static int
+dpaa2_flow_l4_dst_port_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_TCP &&
+		field == NH_FLD_TCP_PORT_DST)
+		return true;
+	else if (prot == NET_PROT_UDP &&
+		field == NH_FLD_UDP_PORT_DST)
+		return true;
+	else if (prot == NET_PROT_SCTP &&
+		field == NH_FLD_SCTP_PORT_DST)
+		return true;
+
+	return false;
 }
 
-static inline void dpaa2_flow_extract_key_set(
-	struct dpaa2_key_info *key_info, int index, uint8_t size)
+static int
+dpaa2_flow_add_qos_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow)
 {
-	key_info->key_size[index] = size;
-	if (index > 0) {
-		key_info->key_offset[index] =
-			key_info->key_offset[index - 1] +
-			key_info->key_size[index - 1];
-	} else {
-		key_info->key_offset[index] = 0;
+	uint16_t qos_index;
+	int ret;
+	struct fsl_mc_io *dpni = priv->hw;
+
+	if (priv->num_rx_tc <= 1 &&
+		flow->action_type != RTE_FLOW_ACTION_TYPE_RSS) {
+		DPAA2_PMD_WARN("No QoS Table for FS");
+		return -EINVAL;
 	}
-	key_info->key_total_size += size;
+
+	/* QoS entry added is only effective for multiple TCs.*/
+	qos_index = flow->tc_id * priv->fs_entries + flow->tc_index;
+	if (qos_index >= priv->qos_entries) {
+		DPAA2_PMD_ERR("QoS table full(%d >= %d)",
+			qos_index, priv->qos_entries);
+		return -EINVAL;
+	}
+
+	dpaa2_flow_qos_entry_log("Start add", flow, qos_index);
+
+	ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
+			priv->token, &flow->qos_rule,
+			flow->tc_id, qos_index,
+			0, 0);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("Add entry(%d) to table(%d) failed",
+			qos_index, flow->tc_id);
+		return ret;
+	}
+
+	return 0;
 }
 
-static int dpaa2_flow_extract_add(
-	struct dpaa2_key_extract *key_extract,
-	enum net_prot prot,
-	uint32_t field, uint8_t field_size)
+static int
+dpaa2_flow_add_fs_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow)
 {
-	int index, ip_src = -1, ip_dst = -1;
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
+	int ret;
+	struct fsl_mc_io *dpni = priv->hw;
 
-	if (dpkg->num_extracts >=
-		DPKG_MAX_NUM_OF_EXTRACTS) {
-		DPAA2_PMD_WARN("Number of extracts overflows");
-		return -1;
+	if (flow->tc_index >= priv->fs_entries) {
+		DPAA2_PMD_ERR("FS table full(%d >= %d)",
+			flow->tc_index, priv->fs_entries);
+		return -EINVAL;
 	}
-	/* Before reorder, the IP SRC and IP DST are already last
-	 * extract(s).
-	 */
-	for (index = 0; index < dpkg->num_extracts; index++) {
-		if (dpkg->extracts[index].extract.from_hdr.prot ==
-			NET_PROT_IP) {
-			if (dpkg->extracts[index].extract.from_hdr.field ==
-				NH_FLD_IP_SRC) {
-				ip_src = index;
-			}
-			if (dpkg->extracts[index].extract.from_hdr.field ==
-				NH_FLD_IP_DST) {
-				ip_dst = index;
+
+	dpaa2_flow_fs_entry_log("Start add", flow);
+
+	ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
+			priv->token, flow->tc_id,
+			flow->tc_index, &flow->fs_rule,
+			&flow->fs_action_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("Add rule(%d) to FS table(%d) failed",
+			flow->tc_index, flow->tc_id);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_flow_rule_insert_hole(struct dpaa2_dev_flow *flow,
+	int offset, int size,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int end;
+
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		end = flow->qos_rule_size;
+		if (end > offset) {
+			memmove(flow->qos_key_addr + offset + size,
+					flow->qos_key_addr + offset,
+					end - offset);
+			memset(flow->qos_key_addr + offset,
+					0, size);
+
+			memmove(flow->qos_mask_addr + offset + size,
+					flow->qos_mask_addr + offset,
+					end - offset);
+			memset(flow->qos_mask_addr + offset,
+					0, size);
+		}
+		flow->qos_rule_size += size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		end = flow->fs_rule_size;
+		if (end > offset) {
+			memmove(flow->fs_key_addr + offset + size,
+					flow->fs_key_addr + offset,
+					end - offset);
+			memset(flow->fs_key_addr + offset,
+					0, size);
+
+			memmove(flow->fs_mask_addr + offset + size,
+					flow->fs_mask_addr + offset,
+					end - offset);
+			memset(flow->fs_mask_addr + offset,
+					0, size);
+		}
+		flow->fs_rule_size += size;
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_flow_rule_add_all(struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type,
+	uint16_t entry_size, uint8_t tc_id)
+{
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	int ret;
+
+	while (curr) {
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			if (priv->num_rx_tc > 1 ||
+				curr->action_type ==
+				RTE_FLOW_ACTION_TYPE_RSS) {
+				curr->qos_rule.key_size = entry_size;
+				ret = dpaa2_flow_add_qos_rule(priv, curr);
+				if (ret)
+					return ret;
 			}
 		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE &&
+			curr->tc_id == tc_id) {
+			curr->fs_rule.key_size = entry_size;
+			ret = dpaa2_flow_add_fs_rule(priv, curr);
+			if (ret)
+				return ret;
+		}
+		curr = LIST_NEXT(curr, next);
 	}
 
-	if (ip_src >= 0)
-		RTE_ASSERT((ip_src + 2) >= dpkg->num_extracts);
+	return 0;
+}
 
-	if (ip_dst >= 0)
-		RTE_ASSERT((ip_dst + 2) >= dpkg->num_extracts);
+static int
+dpaa2_flow_qos_rule_insert_hole(struct dpaa2_dev_priv *priv,
+	int offset, int size)
+{
+	struct dpaa2_dev_flow *curr;
+	int ret;
 
-	if (prot == NET_PROT_IP &&
-		(field == NH_FLD_IP_SRC ||
-		field == NH_FLD_IP_DST)) {
-		index = dpkg->num_extracts;
+	curr = priv->curr;
+	if (!curr) {
+		DPAA2_PMD_ERR("Current qos flow insert hole failed.");
+		return -EINVAL;
 	} else {
-		if (ip_src >= 0 && ip_dst >= 0)
-			index = dpkg->num_extracts - 2;
-		else if (ip_src >= 0 || ip_dst >= 0)
-			index = dpkg->num_extracts - 1;
-		else
-			index = dpkg->num_extracts;
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	dpkg->extracts[index].type =	DPKG_EXTRACT_FROM_HDR;
-	dpkg->extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-	dpkg->extracts[index].extract.from_hdr.prot = prot;
-	dpkg->extracts[index].extract.from_hdr.field = field;
-	if (prot == NET_PROT_IP &&
-		(field == NH_FLD_IP_SRC ||
-		field == NH_FLD_IP_DST)) {
-		dpaa2_flow_extract_key_set(key_info, index, 0);
+	curr = LIST_FIRST(&priv->flows);
+	while (curr) {
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		curr = LIST_NEXT(curr, next);
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_flow_fs_rule_insert_hole(struct dpaa2_dev_priv *priv,
+	int offset, int size, int tc_id)
+{
+	struct dpaa2_dev_flow *curr;
+	int ret;
+
+	curr = priv->curr;
+	if (!curr || curr->tc_id != tc_id) {
+		DPAA2_PMD_ERR("Current flow insert hole failed.");
+		return -EINVAL;
 	} else {
-		dpaa2_flow_extract_key_set(key_info, index, field_size);
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	if (prot == NET_PROT_IP) {
-		if (field == NH_FLD_IP_SRC) {
-			if (key_info->ipv4_dst_offset >= 0) {
-				key_info->ipv4_src_offset =
-					key_info->ipv4_dst_offset +
-					NH_FLD_IPV4_ADDR_SIZE;
-			} else {
-				key_info->ipv4_src_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-			if (key_info->ipv6_dst_offset >= 0) {
-				key_info->ipv6_src_offset =
-					key_info->ipv6_dst_offset +
-					NH_FLD_IPV6_ADDR_SIZE;
-			} else {
-				key_info->ipv6_src_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-		} else if (field == NH_FLD_IP_DST) {
-			if (key_info->ipv4_src_offset >= 0) {
-				key_info->ipv4_dst_offset =
-					key_info->ipv4_src_offset +
-					NH_FLD_IPV4_ADDR_SIZE;
-			} else {
-				key_info->ipv4_dst_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-			if (key_info->ipv6_src_offset >= 0) {
-				key_info->ipv6_dst_offset =
-					key_info->ipv6_src_offset +
-					NH_FLD_IPV6_ADDR_SIZE;
-			} else {
-				key_info->ipv6_dst_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
+	curr = LIST_FIRST(&priv->flows);
+
+	while (curr) {
+		if (curr->tc_id != tc_id) {
+			curr = LIST_NEXT(curr, next);
+			continue;
 		}
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		curr = LIST_NEXT(curr, next);
 	}
 
-	if (index == dpkg->num_extracts) {
-		dpkg->num_extracts++;
-		return 0;
+	return 0;
+}
+
+/* Move IPv4/IPv6 address extracts to make room for a new extract
+ * inserted before them. Current MC/WRIOP only supports the generic
+ * IP extract, whose address length is not fixed, so IP addresses
+ * must be placed at the end of the extracts; otherwise the positions
+ * of the extracts following them can't be identified.
+ */
+static int
+dpaa2_flow_key_profile_advance(enum net_prot prot,
+	uint32_t field, uint8_t field_size,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s only for non-IP address extract",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off += field_size;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, field_size);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, field_size, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = field_size;
+	key_profile->prot_field[pos].prot = prot;
+	key_profile->prot_field[pos].key_field = field;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	if (dpaa2_flow_l4_src_port_extract(prot, field)) {
+		key_profile->l4_src_port_present = 1;
+		key_profile->l4_src_port_pos = pos;
+		key_profile->l4_src_port_offset =
+			key_profile->key_offset[pos];
+	} else if (dpaa2_flow_l4_dst_port_extract(prot, field)) {
+		key_profile->l4_dst_port_present = 1;
+		key_profile->l4_dst_port_pos = pos;
+		key_profile->l4_dst_port_offset =
+			key_profile->key_offset[pos];
+	}
+	key_profile->key_max_size += field_size;
+
+	return pos;
+}
+
+static int
+dpaa2_flow_extract_add_hdr(enum net_prot prot,
+	uint32_t field, uint8_t field_size,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s only for non-IP address extract",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
 	}
 
-	if (ip_src >= 0) {
-		ip_src++;
-		dpkg->extracts[ip_src].type =
-			DPKG_EXTRACT_FROM_HDR;
-		dpkg->extracts[ip_src].extract.from_hdr.type =
-			DPKG_FULL_FIELD;
-		dpkg->extracts[ip_src].extract.from_hdr.prot =
-			NET_PROT_IP;
-		dpkg->extracts[ip_src].extract.from_hdr.field =
-			NH_FLD_IP_SRC;
-		dpaa2_flow_extract_key_set(key_info, ip_src, 0);
-		key_info->ipv4_src_offset += field_size;
-		key_info->ipv6_src_offset += field_size;
-	}
-	if (ip_dst >= 0) {
-		ip_dst++;
-		dpkg->extracts[ip_dst].type =
-			DPKG_EXTRACT_FROM_HDR;
-		dpkg->extracts[ip_dst].extract.from_hdr.type =
-			DPKG_FULL_FIELD;
-		dpkg->extracts[ip_dst].extract.from_hdr.prot =
-			NET_PROT_IP;
-		dpkg->extracts[ip_dst].extract.from_hdr.field =
-			NH_FLD_IP_DST;
-		dpaa2_flow_extract_key_set(key_info, ip_dst, 0);
-		key_info->ipv4_dst_offset += field_size;
-		key_info->ipv6_dst_offset += field_size;
+	pos = dpaa2_flow_key_profile_advance(prot,
+			field, field_size, priv,
+			dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last pos, must have IP address extract.*/
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
 	}
 
+	extracts[pos].type = DPKG_EXTRACT_FROM_HDR;
+	extracts[pos].extract.from_hdr.prot = prot;
+	extracts[pos].extract.from_hdr.type = DPKG_FULL_FIELD;
+	extracts[pos].extract.from_hdr.field = field;
+
 	dpkg->num_extracts++;
 
 	return 0;
 }
 
-static int dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
-				      int size)
+static int
+dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
+	int size)
 {
 	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
+	struct dpaa2_key_profile *key_info = &key_extract->key_profile;
 	int last_extract_size, index;
 
 	if (dpkg->num_extracts != 0 && dpkg->extracts[0].type !=
@@ -531,83 +804,58 @@ static int dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
 			DPAA2_FLOW_MAX_KEY_SIZE * index;
 	}
 
-	key_info->key_total_size = size;
+	key_info->key_max_size = size;
 	return 0;
 }
 
-/* Protocol discrimination.
- * Discriminate IPv4/IPv6/vLan by Eth type.
- * Discriminate UDP/TCP/ICMP by next proto of IP.
- */
 static inline int
-dpaa2_flow_proto_discrimination_extract(
-	struct dpaa2_key_extract *key_extract,
-	enum rte_flow_item_type type)
+dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t key_field)
 {
-	if (type == RTE_FLOW_ITEM_TYPE_ETH) {
-		return dpaa2_flow_extract_add(
-				key_extract, NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				sizeof(rte_be16_t));
-	} else if (type == (enum rte_flow_item_type)
-		DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
-		return dpaa2_flow_extract_add(
-				key_extract, NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				NH_FLD_IP_PROTO_SIZE);
-	}
-
-	return -1;
-}
+	int pos;
+	struct key_prot_field *prot_field;
 
-static inline int dpaa2_flow_extract_search(
-	struct dpkg_profile_cfg *dpkg,
-	enum net_prot prot, uint32_t field)
-{
-	int i;
+	if (dpaa2_flow_ip_address_extract(prot, key_field)) {
+		DPAA2_PMD_ERR("%s only for non-IP address extract",
+			__func__);
+		return -EINVAL;
+	}
 
-	for (i = 0; i < dpkg->num_extracts; i++) {
-		if (dpkg->extracts[i].extract.from_hdr.prot == prot &&
-			dpkg->extracts[i].extract.from_hdr.field == field) {
-			return i;
+	prot_field = key_profile->prot_field;
+	for (pos = 0; pos < key_profile->num; pos++) {
+		if (prot_field[pos].prot == prot &&
+			prot_field[pos].key_field == key_field) {
+			return pos;
 		}
 	}
 
-	return -1;
+	if (dpaa2_flow_l4_src_port_extract(prot, key_field)) {
+		if (key_profile->l4_src_port_present)
+			return key_profile->l4_src_port_pos;
+	} else if (dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
+		if (key_profile->l4_dst_port_present)
+			return key_profile->l4_dst_port_pos;
+	}
+
+	return -ENXIO;
 }
 
-static inline int dpaa2_flow_extract_key_offset(
-	struct dpaa2_key_extract *key_extract,
-	enum net_prot prot, uint32_t field)
+static inline int
+dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t key_field)
 {
 	int i;
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
 
-	if (prot == NET_PROT_IPV4 ||
-		prot == NET_PROT_IPV6)
-		i = dpaa2_flow_extract_search(dpkg, NET_PROT_IP, field);
+	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
+
+	if (i >= 0)
+		return key_profile->key_offset[i];
 	else
-		i = dpaa2_flow_extract_search(dpkg, prot, field);
-
-	if (i >= 0) {
-		if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_SRC)
-			return key_info->ipv4_src_offset;
-		else if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_DST)
-			return key_info->ipv4_dst_offset;
-		else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_SRC)
-			return key_info->ipv6_src_offset;
-		else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_DST)
-			return key_info->ipv6_dst_offset;
-		else
-			return key_info->key_offset[i];
-	} else {
-		return -1;
-	}
+		return i;
 }
 
-struct proto_discrimination {
-	enum rte_flow_item_type type;
+struct prev_proto_field_id {
+	enum net_prot prot;
 	union {
 		rte_be16_t eth_type;
 		uint8_t ip_proto;
@@ -615,103 +863,134 @@ struct proto_discrimination {
 };
 
 static int
-dpaa2_flow_proto_discrimination_rule(
-	struct dpaa2_dev_priv *priv, struct rte_flow *flow,
-	struct proto_discrimination proto, int group)
+dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct prev_proto_field_id *prev_proto,
+	int group,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	enum net_prot prot;
-	uint32_t field;
 	int offset;
-	size_t key_iova;
-	size_t mask_iova;
+	uint8_t *key_addr;
+	uint8_t *mask_addr;
+	uint32_t field = 0;
 	rte_be16_t eth_type;
 	uint8_t ip_proto;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
 
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		prot = NET_PROT_ETH;
+	if (prev_proto->prot == NET_PROT_ETH) {
 		field = NH_FLD_ETH_TYPE;
-	} else if (proto.type == DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
-		prot = NET_PROT_IP;
+	} else if (prev_proto->prot == NET_PROT_IP) {
 		field = NH_FLD_IP_PROTO;
 	} else {
-		DPAA2_PMD_ERR(
-			"Only Eth and IP support to discriminate next proto.");
-		return -1;
-	}
-
-	offset = dpaa2_flow_extract_key_offset(&priv->extract.qos_key_extract,
-			prot, field);
-	if (offset < 0) {
-		DPAA2_PMD_ERR("QoS prot %d field %d extract failed",
-				prot, field);
-		return -1;
-	}
-	key_iova = flow->qos_rule.key_iova + offset;
-	mask_iova = flow->qos_rule.mask_iova + offset;
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		eth_type = proto.eth_type;
-		memcpy((void *)key_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-		eth_type = 0xffff;
-		memcpy((void *)mask_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-	} else {
-		ip_proto = proto.ip_proto;
-		memcpy((void *)key_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-		ip_proto = 0xff;
-		memcpy((void *)mask_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-	}
-
-	offset = dpaa2_flow_extract_key_offset(
-			&priv->extract.tc_key_extract[group],
-			prot, field);
-	if (offset < 0) {
-		DPAA2_PMD_ERR("FS prot %d field %d extract failed",
-				prot, field);
-		return -1;
+		DPAA2_PMD_ERR("Prev proto(%d) not support!",
+			prev_proto->prot);
+		return -EINVAL;
 	}
-	key_iova = flow->fs_rule.key_iova + offset;
-	mask_iova = flow->fs_rule.mask_iova + offset;
 
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		eth_type = proto.eth_type;
-		memcpy((void *)key_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-		eth_type = 0xffff;
-		memcpy((void *)mask_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-	} else {
-		ip_proto = proto.ip_proto;
-		memcpy((void *)key_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-		ip_proto = 0xff;
-		memcpy((void *)mask_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		key_extract = &priv->extract.qos_key_extract;
+		key_profile = &key_extract->key_profile;
+
+		offset = dpaa2_flow_extract_key_offset(key_profile,
+				prev_proto->prot, field);
+		if (offset < 0) {
+			DPAA2_PMD_ERR("%s QoS key extract failed", __func__);
+			return -EINVAL;
+		}
+		key_addr = flow->qos_key_addr + offset;
+		mask_addr = flow->qos_mask_addr + offset;
+		if (prev_proto->prot == NET_PROT_ETH) {
+			eth_type = prev_proto->eth_type;
+			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
+			eth_type = 0xffff;
+			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
+			flow->qos_rule_size += sizeof(rte_be16_t);
+		} else if (prev_proto->prot == NET_PROT_IP) {
+			ip_proto = prev_proto->ip_proto;
+			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
+			ip_proto = 0xff;
+			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
+			flow->qos_rule_size += sizeof(uint8_t);
+		} else {
+			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
+				prev_proto->prot);
+			return -EINVAL;
+		}
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		key_extract = &priv->extract.tc_key_extract[group];
+		key_profile = &key_extract->key_profile;
+
+		offset = dpaa2_flow_extract_key_offset(key_profile,
+				prev_proto->prot, field);
+		if (offset < 0) {
+			DPAA2_PMD_ERR("%s TC[%d] key extract failed",
+				__func__, group);
+			return -EINVAL;
+		}
+		key_addr = flow->fs_key_addr + offset;
+		mask_addr = flow->fs_mask_addr + offset;
+
+		if (prev_proto->prot == NET_PROT_ETH) {
+			eth_type = prev_proto->eth_type;
+			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
+			eth_type = 0xffff;
+			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
+			flow->fs_rule_size += sizeof(rte_be16_t);
+		} else if (prev_proto->prot == NET_PROT_IP) {
+			ip_proto = prev_proto->ip_proto;
+			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
+			ip_proto = 0xff;
+			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
+			flow->fs_rule_size += sizeof(uint8_t);
+		} else {
+			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
+				prev_proto->prot);
+			return -EINVAL;
+		}
 	}
 
 	return 0;
 }
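+
+/* Set key/mask data of a non-IP-address header field at its extract
+ * offset in the QoS and/or FS rule.
+ */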
 
 static inline int
-dpaa2_flow_rule_data_set(
-	struct dpaa2_key_extract *key_extract,
-	struct dpni_rule_cfg *rule,
-	enum net_prot prot, uint32_t field,
-	const void *key, const void *mask, int size)
+dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t field, int size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int offset = dpaa2_flow_extract_key_offset(key_extract,
-				prot, field);
+	int offset;
 
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s only for none IP address extract",
+			__func__);
+		return -EINVAL;
+	}
+
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			prot, field);
 	if (offset < 0) {
-		DPAA2_PMD_ERR("prot %d, field %d extract failed",
+		DPAA2_PMD_ERR("P(%d)/F(%d) does not exist!",
 			prot, field);
-		return -1;
+		return -EINVAL;
 	}
 
-	memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
-	memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, size);
+		memcpy((flow->qos_mask_addr + offset), mask, size);
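+		/* When an IP address extract is present, the rule size is
+		 * finalized by the IP address extract path instead.
+		 */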
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size = offset + size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, size);
+		memcpy((flow->fs_mask_addr + offset), mask, size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size = offset + size;
+	}
 
 	return 0;
 }
@@ -728,145 +1007,13 @@ dpaa2_flow_rule_data_set_raw(struct dpni_rule_cfg *rule,
 	return 0;
 }
 
-static inline int
-_dpaa2_flow_rule_move_ipaddr_tail(
-	struct dpaa2_key_extract *key_extract,
-	struct dpni_rule_cfg *rule, int src_offset,
-	uint32_t field, bool ipv4)
-{
-	size_t key_src;
-	size_t mask_src;
-	size_t key_dst;
-	size_t mask_dst;
-	int dst_offset, len;
-	enum net_prot prot;
-	char tmp[NH_FLD_IPV6_ADDR_SIZE];
-
-	if (field != NH_FLD_IP_SRC &&
-		field != NH_FLD_IP_DST) {
-		DPAA2_PMD_ERR("Field of IP addr reorder must be IP SRC/DST");
-		return -1;
-	}
-	if (ipv4)
-		prot = NET_PROT_IPV4;
-	else
-		prot = NET_PROT_IPV6;
-	dst_offset = dpaa2_flow_extract_key_offset(key_extract,
-				prot, field);
-	if (dst_offset < 0) {
-		DPAA2_PMD_ERR("Field %d reorder extract failed", field);
-		return -1;
-	}
-	key_src = rule->key_iova + src_offset;
-	mask_src = rule->mask_iova + src_offset;
-	key_dst = rule->key_iova + dst_offset;
-	mask_dst = rule->mask_iova + dst_offset;
-	if (ipv4)
-		len = sizeof(rte_be32_t);
-	else
-		len = NH_FLD_IPV6_ADDR_SIZE;
-
-	memcpy(tmp, (char *)key_src, len);
-	memset((char *)key_src, 0, len);
-	memcpy((char *)key_dst, tmp, len);
-
-	memcpy(tmp, (char *)mask_src, len);
-	memset((char *)mask_src, 0, len);
-	memcpy((char *)mask_dst, tmp, len);
-
-	return 0;
-}
-
-static inline int
-dpaa2_flow_rule_move_ipaddr_tail(
-	struct rte_flow *flow, struct dpaa2_dev_priv *priv,
-	int fs_group)
+static int
+dpaa2_flow_extract_support(const uint8_t *mask_src,
+	enum rte_flow_item_type type)
 {
-	int ret;
-	enum net_prot prot;
-
-	if (flow->ipaddr_rule.ipaddr_type == FLOW_NONE_IPADDR)
-		return 0;
-
-	if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR)
-		prot = NET_PROT_IPV4;
-	else
-		prot = NET_PROT_IPV6;
-
-	if (flow->ipaddr_rule.qos_ipsrc_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				flow->ipaddr_rule.qos_ipsrc_offset,
-				NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS src address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_SRC);
-	}
-
-	if (flow->ipaddr_rule.qos_ipdst_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				flow->ipaddr_rule.qos_ipdst_offset,
-				NH_FLD_IP_DST, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS dst address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_DST);
-	}
-
-	if (flow->ipaddr_rule.fs_ipsrc_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.tc_key_extract[fs_group],
-				&flow->fs_rule,
-				flow->ipaddr_rule.fs_ipsrc_offset,
-				NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("FS src address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.fs_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[fs_group],
-				prot, NH_FLD_IP_SRC);
-	}
-	if (flow->ipaddr_rule.fs_ipdst_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.tc_key_extract[fs_group],
-				&flow->fs_rule,
-				flow->ipaddr_rule.fs_ipdst_offset,
-				NH_FLD_IP_DST, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("FS dst address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.fs_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[fs_group],
-				prot, NH_FLD_IP_DST);
-	}
-
-	return 0;
-}
-
-static int
-dpaa2_flow_extract_support(
-	const uint8_t *mask_src,
-	enum rte_flow_item_type type)
-{
-	char mask[64];
-	int i, size = 0;
-	const char *mask_support = 0;
+	char mask[64];
+	int i, size = 0;
+	const char *mask_support = 0;
 
 	switch (type) {
 	case RTE_FLOW_ITEM_TYPE_ETH:
@@ -906,7 +1053,7 @@ dpaa2_flow_extract_support(
 		size = sizeof(struct rte_flow_item_gre);
 		break;
 	default:
-		return -1;
+		return -EINVAL;
 	}
 
 	memcpy(mask, mask_support, size);
@@ -921,491 +1068,444 @@ dpaa2_flow_extract_support(
 }
 
 static int
-dpaa2_configure_flow_eth(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct prev_proto_field_id *prev_prot,
+	enum dpaa2_flow_dist_type dist_type,
+	int group, int *recfg)
 {
-	int index, ret;
-	int local_cfg = 0;
-	uint32_t group;
-	const struct rte_flow_item_eth *spec, *mask;
-
-	/* TODO: Currently upper bound of range parameter is not implemented */
-	const struct rte_flow_item_eth *last __rte_unused;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
-
-	group = attr->group;
-
-	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_eth *)pattern->spec;
-	last    = (const struct rte_flow_item_eth *)pattern->last;
-	mask    = (const struct rte_flow_item_eth *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_eth_mask);
-	if (!spec) {
-		/* Don't care any field of eth header,
-		 * only care eth protocol.
-		 */
-		DPAA2_PMD_WARN("No pattern spec for Eth flow, just skip");
-		return 0;
-	}
-
-	/* Get traffic class index and flow id to be configured */
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
-
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
-		DPAA2_PMD_WARN("Extract field(s) of ethernet not support.");
-
-		return -1;
-	}
-
-	if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_SA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_SA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_SA failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_SA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_SA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_SA failed.");
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	int ret, index, local_cfg = 0, size = 0;
+	struct dpaa2_key_extract *extract;
+	struct dpaa2_key_profile *key_profile;
+	enum net_prot prot = prev_prot->prot;
+	uint32_t key_field = 0;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH_SA rule set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_SA,
-				&spec->hdr.src_addr.addr_bytes,
-				&mask->hdr.src_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_SA,
-				&spec->hdr.src_addr.addr_bytes,
-				&mask->hdr.src_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
-			return -1;
-		}
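+	/* Map the previous protocol to its discriminator field; IPv4 and
+	 * IPv6 both narrow to the generic IP protocol field.
+	 */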
+	if (prot == NET_PROT_ETH) {
+		key_field = NH_FLD_ETH_TYPE;
+		size = sizeof(rte_be16_t);
+	} else if (prot == NET_PROT_IP) {
+		key_field = NH_FLD_IP_PROTO;
+		size = sizeof(uint8_t);
+	} else if (prot == NET_PROT_IPV4) {
+		prot = NET_PROT_IP;
+		key_field = NH_FLD_IP_PROTO;
+		size = sizeof(uint8_t);
+	} else if (prot == NET_PROT_IPV6) {
+		prot = NET_PROT_IP;
+		key_field = NH_FLD_IP_PROTO;
+		size = sizeof(uint8_t);
+	} else {
+		DPAA2_PMD_ERR("Invalid Prev prot(%d)", prot);
+		return -EINVAL;
 	}
 
-	if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_DA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_DA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_DA failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		extract = &priv->extract.qos_key_extract;
+		key_profile = &extract->key_profile;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_DA);
+		index = dpaa2_flow_extract_search(key_profile,
+				prot, key_field);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_DA,
-					RTE_ETHER_ADDR_LEN);
+			ret = dpaa2_flow_extract_add_hdr(prot,
+					key_field, size, priv,
+					DPAA2_FLOW_QOS_TYPE, group,
+					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_DA failed.");
+				DPAA2_PMD_ERR("QOS prev extract add failed");
 
-				return -1;
+				return -EINVAL;
 			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH DA rule set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_DA,
-				&spec->hdr.dst_addr.addr_bytes,
-				&mask->hdr.dst_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
-			return -1;
+			local_cfg |= DPAA2_FLOW_QOS_TYPE;
 		}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_DA,
-				&spec->hdr.dst_addr.addr_bytes,
-				&mask->hdr.dst_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
+		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
-			return -1;
+			DPAA2_PMD_ERR("QoS prev rule set failed");
+			return -EINVAL;
 		}
 	}
 
-	if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_TYPE,
-					RTE_ETHER_TYPE_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_TYPE failed.");
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		extract = &priv->extract.tc_key_extract[group];
+		key_profile = &extract->key_profile;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
+		index = dpaa2_flow_extract_search(key_profile,
+				prot, key_field);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_TYPE,
-					RTE_ETHER_TYPE_LEN);
+			ret = dpaa2_flow_extract_add_hdr(prot,
+					key_field, size, priv,
+					DPAA2_FLOW_FS_TYPE, group,
+					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_TYPE failed.");
+				DPAA2_PMD_ERR("FS[%d] prev extract add failed",
+					group);
 
-				return -1;
+				return -EINVAL;
 			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH TYPE rule set failed");
-				return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				&spec->hdr.ether_type,
-				&mask->hdr.ether_type,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
-			return -1;
+			local_cfg |= DPAA2_FLOW_FS_TYPE;
 		}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				&spec->hdr.ether_type,
-				&mask->hdr.ether_type,
-				sizeof(rte_be16_t));
+		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
-			return -1;
+			DPAA2_PMD_ERR("FS[%d] prev rule set failed",
+				group);
+			return -EINVAL;
 		}
 	}
 
-	(*device_configured) |= local_cfg;
+	if (recfg)
+		*recfg = local_cfg;
 
 	return 0;
 }
 
 static int
-dpaa2_configure_flow_vlan(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
+	enum net_prot prot, uint32_t field,
+	const void *key, const void *mask, int size,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int index, ret;
-	int local_cfg = 0;
-	uint32_t group;
-	const struct rte_flow_item_vlan *spec, *mask;
-
-	const struct rte_flow_item_vlan *last __rte_unused;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-
-	group = attr->group;
+	int index, ret, local_cfg = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
 
-	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_vlan *)pattern->spec;
-	last    = (const struct rte_flow_item_vlan *)pattern->last;
-	mask    = (const struct rte_flow_item_vlan *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_vlan_mask);
+	if (dpaa2_flow_ip_address_extract(prot, field))
+		return -EINVAL;
 
-	/* Get traffic class index and flow id to be configured */
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
 
-	if (!spec) {
-		/* Don't care any field of vlan header,
-		 * only care vlan protocol.
-		 */
-		/* Eth type is actually used for vLan classification.
-		 */
-		struct proto_discrimination proto;
+	key_profile = &key_extract->key_profile;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-						&priv->extract.qos_key_extract,
-						RTE_FLOW_ITEM_TYPE_ETH);
-			if (ret) {
-				DPAA2_PMD_ERR(
-				"QoS Ext ETH_TYPE to discriminate vLan failed");
+	index = dpaa2_flow_extract_search(key_profile,
+			prot, field);
+	if (index < 0) {
+		ret = dpaa2_flow_extract_add_hdr(prot,
+				field, size, priv,
+				dist_type, tc_id, NULL);
+		if (ret) {
+			DPAA2_PMD_ERR("QoS Extract P(%d)/F(%d) failed",
+				prot, field);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+			return ret;
 		}
+		local_cfg |= dist_type;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					RTE_FLOW_ITEM_TYPE_ETH);
-			if (ret) {
-				DPAA2_PMD_ERR(
-				"FS Ext ETH_TYPE to discriminate vLan failed.");
+	ret = dpaa2_flow_hdr_rule_data_set(flow, key_profile,
+			prot, field, size, key, mask, dist_type);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS P(%d)/F(%d) rule data set failed",
+			prot, field);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		return ret;
+	}
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"Move ipaddr before vLan discrimination set failed");
-			return -1;
-		}
+	if (recfg)
+		*recfg |= local_cfg;
 
-		proto.type = RTE_FLOW_ITEM_TYPE_ETH;
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("vLan discrimination rule set failed");
-			return -1;
-		}
+	return 0;
+}
 
-		(*device_configured) |= local_cfg;
+static int
+dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
+	enum net_prot prot, uint32_t field,
+	const void *key, const void *mask, int size,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int local_cfg = 0, num, ipaddr_extract_len = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
+	struct dpkg_profile_cfg *dpkg;
+	uint8_t *key_addr, *mask_addr;
+	union ip_addr_extract_rule *ip_addr_data;
+	union ip_addr_extract_rule *ip_addr_mask;
+	enum net_prot orig_prot;
+	uint32_t orig_field;
+
+	if (prot != NET_PROT_IPV4 && prot != NET_PROT_IPV6)
+		return -EINVAL;
 
-		return 0;
+	if (prot == NET_PROT_IPV4 && field != NH_FLD_IPV4_SRC_IP &&
+		field != NH_FLD_IPV4_DST_IP) {
+		return -EINVAL;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VLAN)) {
-		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
-
-		return -1;
+	if (prot == NET_PROT_IPV6 && field != NH_FLD_IPV6_SRC_IP &&
+		field != NH_FLD_IPV6_DST_IP) {
+		return -EINVAL;
 	}
 
-	if (!mask->hdr.vlan_tci)
-		return 0;
-
-	index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_VLAN, NH_FLD_VLAN_TCI);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-						&priv->extract.qos_key_extract,
-						NET_PROT_VLAN,
-						NH_FLD_VLAN_TCI,
-						sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract add VLAN_TCI failed.");
+	orig_prot = prot;
+	orig_field = field;
 
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+	if (prot == NET_PROT_IPV4 &&
+		field == NH_FLD_IPV4_SRC_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_SRC;
+	} else if (prot == NET_PROT_IPV4 &&
+		field == NH_FLD_IPV4_DST_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_DST;
+	} else if (prot == NET_PROT_IPV6 &&
+		field == NH_FLD_IPV6_SRC_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_SRC;
+	} else if (prot == NET_PROT_IPV6 &&
+		field == NH_FLD_IPV6_DST_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_DST;
+	} else {
+		DPAA2_PMD_ERR("Inval P(%d)/F(%d) to extract ip address",
+			prot, field);
+		return -EINVAL;
 	}
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_VLAN, NH_FLD_VLAN_TCI);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.tc_key_extract[group],
-				NET_PROT_VLAN,
-				NH_FLD_VLAN_TCI,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract add VLAN_TCI failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		key_extract = &priv->extract.qos_key_extract;
+		key_profile = &key_extract->key_profile;
+		dpkg = &key_extract->dpkg;
+		num = key_profile->num;
+		key_addr = flow->qos_key_addr;
+		mask_addr = flow->qos_mask_addr;
+	} else {
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+		key_profile = &key_extract->key_profile;
+		dpkg = &key_extract->dpkg;
+		num = key_profile->num;
+		key_addr = flow->fs_key_addr;
+		mask_addr = flow->fs_mask_addr;
 	}
 
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before VLAN TCI rule set failed");
-		return -1;
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_VLAN,
-				NH_FLD_VLAN_TCI,
-				&spec->hdr.vlan_tci,
-				&mask->hdr.vlan_tci,
-				sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
-		return -1;
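+	/* On the first IP address extract, record its type and position
+	 * and reserve an IPv6-sized slot at the tail of the key (worst
+	 * case, shared by IPv4 and IPv6 rules).
+	 */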
+	if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT) {
+		if (field == NH_FLD_IP_SRC)
+			key_profile->ip_addr_type = IP_SRC_EXTRACT;
+		else
+			key_profile->ip_addr_type = IP_DST_EXTRACT;
+		ipaddr_extract_len = size;
+
+		key_profile->ip_addr_extract_pos = num;
+		if (num > 0) {
+			key_profile->ip_addr_extract_off =
+				key_profile->key_offset[num - 1] +
+				key_profile->key_size[num - 1];
+		} else {
+			key_profile->ip_addr_extract_off = 0;
+		}
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	} else if (key_profile->ip_addr_type == IP_SRC_EXTRACT) {
+		if (field == NH_FLD_IP_SRC) {
+			ipaddr_extract_len = size;
+			goto rule_configure;
+		}
+		key_profile->ip_addr_type = IP_SRC_DST_EXTRACT;
+		ipaddr_extract_len = size * 2;
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	} else if (key_profile->ip_addr_type == IP_DST_EXTRACT) {
+		if (field == NH_FLD_IP_DST) {
+			ipaddr_extract_len = size;
+			goto rule_configure;
+		}
+		key_profile->ip_addr_type = IP_DST_SRC_EXTRACT;
+		ipaddr_extract_len = size * 2;
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	}
+	key_profile->num++;
+
+	dpkg->extracts[num].extract.from_hdr.prot = prot;
+	dpkg->extracts[num].extract.from_hdr.field = field;
+	dpkg->extracts[num].extract.from_hdr.type = DPKG_FULL_FIELD;
+	dpkg->num_extracts++;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		local_cfg = DPAA2_FLOW_QOS_TYPE;
+	else
+		local_cfg = DPAA2_FLOW_FS_TYPE;
+
+rule_configure:
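+	/* Copy key/mask into the union layout matching the recorded
+	 * src/dst extract order.
+	 */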
+	key_addr += key_profile->ip_addr_extract_off;
+	ip_addr_data = (union ip_addr_extract_rule *)key_addr;
+	mask_addr += key_profile->ip_addr_extract_off;
+	ip_addr_mask = (union ip_addr_extract_rule *)mask_addr;
+
+	if (orig_prot == NET_PROT_IPV4 &&
+		orig_field == NH_FLD_IPV4_SRC_IP) {
+		if (key_profile->ip_addr_type == IP_SRC_EXTRACT ||
+			key_profile->ip_addr_type == IP_SRC_DST_EXTRACT) {
+			memcpy(&ip_addr_data->ipv4_sd_addr.ipv4_src,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_sd_addr.ipv4_src,
+				mask, size);
+		} else {
+			memcpy(&ip_addr_data->ipv4_ds_addr.ipv4_src,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_ds_addr.ipv4_src,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV4 &&
+		orig_field == NH_FLD_IPV4_DST_IP) {
+		if (key_profile->ip_addr_type == IP_DST_EXTRACT ||
+			key_profile->ip_addr_type == IP_DST_SRC_EXTRACT) {
+			memcpy(&ip_addr_data->ipv4_ds_addr.ipv4_dst,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_ds_addr.ipv4_dst,
+				mask, size);
+		} else {
+			memcpy(&ip_addr_data->ipv4_sd_addr.ipv4_dst,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_sd_addr.ipv4_dst,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV6 &&
+		orig_field == NH_FLD_IPV6_SRC_IP) {
+		if (key_profile->ip_addr_type == IP_SRC_EXTRACT ||
+			key_profile->ip_addr_type == IP_SRC_DST_EXTRACT) {
+			memcpy(ip_addr_data->ipv6_sd_addr.ipv6_src,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_sd_addr.ipv6_src,
+				mask, size);
+		} else {
+			memcpy(ip_addr_data->ipv6_ds_addr.ipv6_src,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_ds_addr.ipv6_src,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV6 &&
+		orig_field == NH_FLD_IPV6_DST_IP) {
+		if (key_profile->ip_addr_type == IP_DST_EXTRACT ||
+			key_profile->ip_addr_type == IP_DST_SRC_EXTRACT) {
+			memcpy(ip_addr_data->ipv6_ds_addr.ipv6_dst,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_ds_addr.ipv6_dst,
+				mask, size);
+		} else {
+			memcpy(ip_addr_data->ipv6_sd_addr.ipv6_dst,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_sd_addr.ipv6_dst,
+				mask, size);
+		}
 	}
 
-	ret = dpaa2_flow_rule_data_set(
-			&priv->extract.tc_key_extract[group],
-			&flow->fs_rule,
-			NET_PROT_VLAN,
-			NH_FLD_VLAN_TCI,
-			&spec->hdr.vlan_tci,
-			&mask->hdr.vlan_tci,
-			sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
-		return -1;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		flow->qos_rule_size =
+			key_profile->ip_addr_extract_off + ipaddr_extract_len;
+	} else {
+		flow->fs_rule_size =
+			key_profile->ip_addr_extract_off + ipaddr_extract_len;
 	}
 
-	(*device_configured) |= local_cfg;
+	if (recfg)
+		*recfg |= local_cfg;
 
 	return 0;
 }
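+
+/* Example (testpmd):
+ *   flow create 0 ingress pattern eth src is 00:11:22:33:44:55 / end
+ *   actions queue index 1 / end
+ */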
 
 static int
-dpaa2_configure_flow_ip_discrimation(
-	struct dpaa2_dev_priv *priv, struct rte_flow *flow,
+dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
-	int *local_cfg,	int *device_configured,
-	uint32_t group)
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	struct proto_discrimination proto;
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_eth *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.qos_key_extract.dpkg,
-			NET_PROT_ETH, NH_FLD_ETH_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.qos_key_extract,
-				RTE_FLOW_ITEM_TYPE_ETH);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"QoS Extract ETH_TYPE to discriminate IP failed.");
-			return -1;
-		}
-		(*local_cfg) |= DPAA2_QOS_TABLE_RECONFIGURE;
-	}
+	group = attr->group;
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_ETH, NH_FLD_ETH_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				RTE_FLOW_ITEM_TYPE_ETH);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"FS Extract ETH_TYPE to discriminate IP failed.");
-			return -1;
-		}
-		(*local_cfg) |= DPAA2_FS_TABLE_RECONFIGURE;
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+			pattern->mask : &dpaa2_flow_item_eth_mask;
+	if (!spec) {
+		DPAA2_PMD_WARN("No pattern spec for Eth flow");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before IP discrimination set failed");
-		return -1;
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH)) {
+		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
+
+		return -EINVAL;
 	}
 
-	proto.type = RTE_FLOW_ITEM_TYPE_ETH;
-	if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4)
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
-	else
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	ret = dpaa2_flow_proto_discrimination_rule(priv, flow, proto, group);
-	if (ret) {
-		DPAA2_PMD_ERR("IP discrimination rule set failed");
-		return -1;
+	if (memcmp((const char *)&mask->src,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_SA, &spec->src.addr_bytes,
+			&mask->src.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_SA, &spec->src.addr_bytes,
+			&mask->src.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->dst,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_DA, &spec->dst.addr_bytes,
+			&mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_DA, &spec->dst.addr_bytes,
+			&mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->type,
+		zero_cmp, sizeof(rte_be16_t))) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_TYPE, &spec->type,
+			&mask->type, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_TYPE, &spec->type,
+			&mask->type, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	(*device_configured) |= (*local_cfg);
+	(*device_configured) |= local_cfg;
 
 	return 0;
 }
 
-
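+/* Example (testpmd):
+ *   flow create 0 ingress pattern eth / vlan tci is 0x123 / end
+ *   actions queue index 1 / end
+ */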
 static int
-dpaa2_configure_flow_generic_ip(
-	struct rte_flow *flow,
+dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
@@ -1413,419 +1513,338 @@ dpaa2_configure_flow_generic_ip(
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
-	const struct rte_flow_item_ipv4 *spec_ipv4 = 0,
-		*mask_ipv4 = 0;
-	const struct rte_flow_item_ipv6 *spec_ipv6 = 0,
-		*mask_ipv6 = 0;
-	const void *key, *mask;
-	enum net_prot prot;
-
+	const struct rte_flow_item_vlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
-	int size;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
-		spec_ipv4 = (const struct rte_flow_item_ipv4 *)pattern->spec;
-		mask_ipv4 = (const struct rte_flow_item_ipv4 *)
-			(pattern->mask ? pattern->mask :
-					&dpaa2_flow_item_ipv4_mask);
-	} else {
-		spec_ipv6 = (const struct rte_flow_item_ipv6 *)pattern->spec;
-		mask_ipv6 = (const struct rte_flow_item_ipv6 *)
-			(pattern->mask ? pattern->mask :
-					&dpaa2_flow_item_ipv6_mask);
-	}
+	spec = pattern->spec;
+	mask = pattern->mask ? pattern->mask : &dpaa2_flow_item_vlan_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	ret = dpaa2_configure_flow_ip_discrimation(priv,
-			flow, pattern, &local_cfg,
-			device_configured, group);
-	if (ret) {
-		DPAA2_PMD_ERR("IP discrimination failed!");
-		return -1;
+	if (!spec) {
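+		/* No spec: identify VLAN traffic by the EtherType of the
+		 * preceding Ethernet header.
+		 */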
+		struct prev_proto_field_id prev_proto;
+
+		prev_proto.prot = NET_PROT_ETH;
+		prev_proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_proto,
+				DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+				       RTE_FLOW_ITEM_TYPE_VLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
+		return -EINVAL;
 	}
 
-	if (!spec_ipv4 && !spec_ipv6)
+	if (!mask->tci)
 		return 0;
 
-	if (mask_ipv4) {
-		if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
-			RTE_FLOW_ITEM_TYPE_IPV4)) {
-			DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
+					      NH_FLD_VLAN_TCI, &spec->tci,
+					      &mask->tci, sizeof(rte_be16_t),
+					      priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
 
-			return -1;
-		}
-	}
-
-	if (mask_ipv6) {
-		if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
-			RTE_FLOW_ITEM_TYPE_IPV6)) {
-			DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
-
-			return -1;
-		}
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
+					      NH_FLD_VLAN_TCI, &spec->tci,
+					      &mask->tci, sizeof(rte_be16_t),
+					      priv, group, &local_cfg,
+					      DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
 
-	if (mask_ipv4 && (mask_ipv4->hdr.src_addr ||
-		mask_ipv4->hdr.dst_addr)) {
-		flow->ipaddr_rule.ipaddr_type = FLOW_IPV4_ADDR;
-	} else if (mask_ipv6 &&
-		(memcmp((const char *)mask_ipv6->hdr.src_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE) ||
-		memcmp((const char *)mask_ipv6->hdr.dst_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		flow->ipaddr_rule.ipaddr_type = FLOW_IPV6_ADDR;
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.src_addr) ||
-		(mask_ipv6 &&
-			memcmp((const char *)mask_ipv6->hdr.src_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_IP,
-					NH_FLD_IP_SRC,
-					0);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_SRC failed.");
+	(*device_configured) |= local_cfg;
+	return 0;
+}
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
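+/* Example (testpmd):
+ *   flow create 0 ingress pattern ipv4 src is 192.168.1.1 / end
+ *   actions queue index 1 / end
+ */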
+static int
+dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
+			  const struct rte_flow_attr *attr,
+			  const struct rte_flow_item *pattern,
+			  const struct rte_flow_action actions[] __rte_unused,
+			  struct rte_flow_error *error __rte_unused,
+			  int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ipv4 *spec_ipv4 = 0, *mask_ipv4 = 0;
+	const void *key, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	int size;
+	struct prev_proto_field_id prev_prot;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_SRC,
-					0);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_SRC failed.");
+	group = attr->group;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	/* Parse pattern list to get the matching parameters */
+	spec_ipv4 = pattern->spec;
+	mask_ipv4 = pattern->mask ?
+		    pattern->mask : &dpaa2_flow_item_ipv4_mask;
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.src_addr;
-		else
-			key = &spec_ipv6->hdr.src_addr[0];
-		if (mask_ipv4) {
-			mask = &mask_ipv4->hdr.src_addr;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-			prot = NET_PROT_IPV4;
-		} else {
-			mask = &mask_ipv6->hdr.src_addr[0];
-			size = NH_FLD_IPV6_ADDR_SIZE;
-			prot = NET_PROT_IPV6;
-		}
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				prot, NH_FLD_IP_SRC,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_SRC rule data set failed");
-			return -1;
-		}
+	prev_prot.prot = NET_PROT_ETH;
+	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				prot, NH_FLD_IP_SRC,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_SRC rule data set failed");
-			return -1;
-		}
+	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE, group,
+			&local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("IPv4 identification failed!");
+		return ret;
+	}
 
-		flow->ipaddr_rule.qos_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_SRC);
-		flow->ipaddr_rule.fs_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[group],
-				prot, NH_FLD_IP_SRC);
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.dst_addr) ||
-		(mask_ipv6 &&
-			memcmp((const char *)mask_ipv6->hdr.dst_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_DST);
-		if (index < 0) {
-			if (mask_ipv4)
-				size = NH_FLD_IPV4_ADDR_SIZE;
-			else
-				size = NH_FLD_IPV6_ADDR_SIZE;
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_IP,
-					NH_FLD_IP_DST,
-					size);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
+	if (!spec_ipv4)
+		return 0;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
+				       RTE_FLOW_ITEM_TYPE_IPV4)) {
+		DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
+		return -EINVAL;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_DST);
-		if (index < 0) {
-			if (mask_ipv4)
-				size = NH_FLD_IPV4_ADDR_SIZE;
-			else
-				size = NH_FLD_IPV6_ADDR_SIZE;
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_DST,
-					size);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_DST failed.");
+	if (mask_ipv4->hdr.src_addr) {
+		key = &spec_ipv4->hdr.src_addr;
+		mask = &mask_ipv4->hdr.src_addr;
+		size = sizeof(rte_be32_t);
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv4->hdr.dst_addr) {
+		key = &spec_ipv4->hdr.dst_addr;
+		mask = &mask_ipv4->hdr.dst_addr;
+		size = sizeof(rte_be32_t);
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv4->hdr.next_proto_id) {
+		key = &spec_ipv4->hdr.next_proto_id;
+		mask = &mask_ipv4->hdr.next_proto_id;
+		size = sizeof(uint8_t);
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	(*device_configured) |= local_cfg;
+	return 0;
+}
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.dst_addr;
-		else
-			key = spec_ipv6->hdr.dst_addr;
-		if (mask_ipv4) {
-			mask = &mask_ipv4->hdr.dst_addr;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-			prot = NET_PROT_IPV4;
-		} else {
-			mask = &mask_ipv6->hdr.dst_addr[0];
-			size = NH_FLD_IPV6_ADDR_SIZE;
-			prot = NET_PROT_IPV6;
-		}
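+/* Example (testpmd):
+ *   flow create 0 ingress pattern ipv6 src is 2001:db8::1 / end
+ *   actions queue index 1 / end
+ */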
+static int
+dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
+			  const struct rte_flow_attr *attr,
+			  const struct rte_flow_item *pattern,
+			  const struct rte_flow_action actions[] __rte_unused,
+			  struct rte_flow_error *error __rte_unused,
+			  int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ipv6 *spec_ipv6 = 0, *mask_ipv6 = 0;
+	const void *key, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
+	int size;
+	struct prev_proto_field_id prev_prot;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				prot, NH_FLD_IP_DST,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_DST rule data set failed");
-			return -1;
-		}
+	group = attr->group;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				prot, NH_FLD_IP_DST,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_DST rule data set failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_DST);
-		flow->ipaddr_rule.fs_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[group],
-				prot, NH_FLD_IP_DST);
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.next_proto_id) ||
-		(mask_ipv6 && mask_ipv6->hdr.proto)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-				&priv->extract.qos_key_extract,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				NH_FLD_IP_PROTO_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
+	/* Parse pattern list to get the matching parameters */
+	spec_ipv6 = pattern->spec;
+	mask_ipv6 = pattern->mask ? pattern->mask : &dpaa2_flow_item_ipv6_mask;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_PROTO,
-					NH_FLD_IP_PROTO_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_DST failed.");
+	prev_prot.prot = NET_PROT_ETH;
+	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("IPv6 identification failed!");
+		return ret;
+	}
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr after NH_FLD_IP_PROTO rule set failed");
-			return -1;
-		}
+	if (!spec_ipv6)
+		return 0;
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.next_proto_id;
-		else
-			key = &spec_ipv6->hdr.proto;
-		if (mask_ipv4)
-			mask = &mask_ipv4->hdr.next_proto_id;
-		else
-			mask = &mask_ipv6->hdr.proto;
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				key,	mask, NH_FLD_IP_PROTO_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_PROTO rule data set failed");
-			return -1;
-		}
+	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
+				       RTE_FLOW_ITEM_TYPE_IPV6)) {
+		DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
+		return -EINVAL;
+	}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				key,	mask, NH_FLD_IP_PROTO_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_PROTO rule data set failed");
-			return -1;
-		}
+	if (memcmp(mask_ipv6->hdr.src_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
+		key = &spec_ipv6->hdr.src_addr[0];
+		mask = &mask_ipv6->hdr.src_addr[0];
+		size = NH_FLD_IPV6_ADDR_SIZE;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp(mask_ipv6->hdr.dst_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
+		key = &spec_ipv6->hdr.dst_addr[0];
+		mask = &mask_ipv6->hdr.dst_addr[0];
+		size = NH_FLD_IPV6_ADDR_SIZE;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv6->hdr.proto) {
+		key = &spec_ipv6->hdr.proto;
+		mask = &mask_ipv6->hdr.proto;
+		size = sizeof(uint8_t);
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
-
 	return 0;
 }
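+
+/* Example (testpmd):
+ *   flow create 0 ingress pattern icmp icmp_type is 8 / end
+ *   actions queue index 1 / end
+ */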
 
 static int
-dpaa2_configure_flow_icmp(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_icmp *spec, *mask;
-
-	const struct rte_flow_item_icmp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_icmp *)pattern->spec;
-	last    = (const struct rte_flow_item_icmp *)pattern->last;
-	mask    = (const struct rte_flow_item_icmp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_icmp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_icmp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		/* Don't care any field of ICMP header,
-		 * only care ICMP protocol.
-		 * Example: flow create 0 ingress pattern icmp /
-		 */
 		/* Next proto of Generical IP is actually used
 		 * for ICMP identification.
+	 * Example: flow create 0 ingress pattern icmp
 		 */
-		struct proto_discrimination proto;
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate ICMP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate ICMP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before ICMP discrimination set failed");
-			return -1;
-		}
+		struct prev_proto_field_id prev_proto;
 
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_ICMP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("ICMP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_ICMP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
-
 		return 0;
 	}
 
@@ -1833,145 +1852,39 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_ICMP)) {
 		DPAA2_PMD_WARN("Extract field(s) of ICMP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.icmp_type) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_TYPE,
-					NH_FLD_ICMP_TYPE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ICMP_TYPE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_TYPE,
-					NH_FLD_ICMP_TYPE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ICMP_TYPE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ICMP TYPE set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_TYPE, &spec->hdr.icmp_type,
+			&mask->hdr.icmp_type, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_TYPE,
-				&spec->hdr.icmp_type,
-				&mask->hdr.icmp_type,
-				NH_FLD_ICMP_TYPE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ICMP_TYPE rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_TYPE,
-				&spec->hdr.icmp_type,
-				&mask->hdr.icmp_type,
-				NH_FLD_ICMP_TYPE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ICMP_TYPE rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_TYPE, &spec->hdr.icmp_type,
+			&mask->hdr.icmp_type, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.icmp_code) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_CODE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_CODE,
-					NH_FLD_ICMP_CODE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ICMP_CODE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_CODE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_CODE,
-					NH_FLD_ICMP_CODE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ICMP_CODE failed.");
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_CODE, &spec->hdr.icmp_code,
+			&mask->hdr.icmp_code, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr after ICMP CODE set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_CODE,
-				&spec->hdr.icmp_code,
-				&mask->hdr.icmp_code,
-				NH_FLD_ICMP_CODE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ICMP_CODE rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_CODE,
-				&spec->hdr.icmp_code,
-				&mask->hdr.icmp_code,
-				NH_FLD_ICMP_CODE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ICMP_CODE rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_CODE, &spec->hdr.icmp_code,
+			&mask->hdr.icmp_code, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -1980,84 +1893,41 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
 }
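+
+/* Example (testpmd):
+ *   flow create 0 ingress pattern udp dst is 4789 / end
+ *   actions queue index 1 / end
+ */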
 
 static int
-dpaa2_configure_flow_udp(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_udp *spec, *mask;
-
-	const struct rte_flow_item_udp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_udp *)pattern->spec;
-	last    = (const struct rte_flow_item_udp *)pattern->last;
-	mask    = (const struct rte_flow_item_udp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_udp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_udp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate UDP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate UDP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before UDP discrimination set failed");
-			return -1;
-		}
+		struct prev_proto_field_id prev_proto;
 
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_UDP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("UDP discrimination rule set failed");
-			return -1;
-		}
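+		/* No UDP spec, or no MC L4 parsing: match IP proto field. */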
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_UDP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2069,149 +1939,40 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_UDP)) {
 		DPAA2_PMD_WARN("Extract field(s) of UDP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add UDP_SRC failed.");
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_SRC,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add UDP_SRC failed.");
+	if (mask->hdr.dst_port) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before UDP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_UDP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_UDP_PORT_SRC rule data set failed");
-			return -1;
-		}
-	}
-
-	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_DST,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add UDP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_DST,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add UDP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before UDP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_UDP_PORT_DST rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_UDP_PORT_DST rule data set failed");
-			return -1;
-		}
-	}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
 	(*device_configured) |= local_cfg;
 
@@ -2219,84 +1980,41 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_tcp(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_tcp *spec, *mask;
-
-	const struct rte_flow_item_tcp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_tcp *)pattern->spec;
-	last    = (const struct rte_flow_item_tcp *)pattern->last;
-	mask    = (const struct rte_flow_item_tcp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_tcp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_tcp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate TCP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate TCP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before TCP discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_TCP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("TCP discrimination rule set failed");
-			return -1;
-		}
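+		/* No TCP spec, or no MC L4 parsing: match IP proto field. */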
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_TCP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2308,149 +2026,39 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_TCP)) {
 		DPAA2_PMD_WARN("Extract field(s) of TCP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_SRC,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add TCP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_SRC,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add TCP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before TCP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_TCP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_TCP_PORT_SRC rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_DST,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add TCP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_DST,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add TCP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before TCP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_TCP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_TCP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -2459,85 +2067,41 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_sctp(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_sctp *spec, *mask;
-
-	const struct rte_flow_item_sctp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_sctp *)pattern->spec;
-	last    = (const struct rte_flow_item_sctp *)pattern->last;
-	mask    = (const struct rte_flow_item_sctp *)
-			(pattern->mask ? pattern->mask :
-				&dpaa2_flow_item_sctp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_sctp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate SCTP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate SCTP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_SCTP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("SCTP discrimination rule set failed");
-			return -1;
-		}
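+		/* No SCTP spec, or no MC L4 parsing: match IP proto field. */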
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_SCTP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2553,145 +2117,35 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_SRC,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add SCTP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_SRC,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add SCTP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_SCTP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_SCTP_PORT_SRC rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_DST,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add SCTP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_DST,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add SCTP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_SCTP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_SCTP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -2700,88 +2154,46 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_gre(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_gre *spec, *mask;
-
-	const struct rte_flow_item_gre *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_gre *)pattern->spec;
-	last    = (const struct rte_flow_item_gre *)pattern->last;
-	mask    = (const struct rte_flow_item_gre *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_gre_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_gre_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate GRE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate GRE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before GRE discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_GRE;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("GRE discrimination rule set failed");
-			return -1;
-		}
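+		/* No GRE spec: identify GRE by the IP protocol field only. */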
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_GRE;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
 		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2794,74 +2206,19 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 	if (!mask->protocol)
 		return 0;
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.qos_key_extract.dpkg,
-			NET_PROT_GRE, NH_FLD_GRE_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.qos_key_extract,
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract add GRE_TYPE failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-	}
-
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_GRE, NH_FLD_GRE_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.tc_key_extract[group],
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract add GRE_TYPE failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-	}
-
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before GRE_TYPE set failed");
-		return -1;
-	}
-
-	ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				&spec->protocol,
-				&mask->protocol,
-				sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"QoS NH_FLD_GRE_TYPE rule data set failed");
-		return -1;
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GRE,
+			NH_FLD_GRE_TYPE, &spec->protocol,
+			&mask->protocol, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_rule_data_set(
-			&priv->extract.tc_key_extract[group],
-			&flow->fs_rule,
-			NET_PROT_GRE,
-			NH_FLD_GRE_TYPE,
-			&spec->protocol,
-			&mask->protocol,
-			sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"FS NH_FLD_GRE_TYPE rule data set failed");
-		return -1;
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GRE,
+			NH_FLD_GRE_TYPE, &spec->protocol,
+			&mask->protocol, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
 
 	(*device_configured) |= local_cfg;
 
@@ -2869,404 +2226,109 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_raw(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_raw *spec = pattern->spec;
 	const struct rte_flow_item_raw *mask = pattern->mask;
 	int prev_key_size =
-		priv->extract.qos_key_extract.key_info.key_total_size;
+		priv->extract.qos_key_extract.key_profile.key_max_size;
 	int local_cfg = 0, ret;
 	uint32_t group;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
-		DPAA2_PMD_ERR("spec or mask not present.");
-		return -EINVAL;
-	}
-	/* Only supports non-relative with offset 0 */
-	if (spec->relative || spec->offset != 0 ||
-	    spec->search || spec->limit) {
-		DPAA2_PMD_ERR("relative and non zero offset not supported.");
-		return -EINVAL;
-	}
-	/* Spec len and mask len should be same */
-	if (spec->length != mask->length) {
-		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
-		return -EINVAL;
-	}
-
-	/* Get traffic class index and flow id to be configured */
-	group = attr->group;
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
-
-	if (prev_key_size <= spec->length) {
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
-						 spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-
-		ret = dpaa2_flow_extract_add_raw(
-					&priv->extract.tc_key_extract[group],
-					spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-	}
-
-	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
-					   mask->pattern, spec->length);
-	if (ret) {
-		DPAA2_PMD_ERR("QoS RAW rule data set failed");
-		return -1;
-	}
-
-	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
-					   mask->pattern, spec->length);
-	if (ret) {
-		DPAA2_PMD_ERR("FS RAW rule data set failed");
-		return -1;
-	}
-
-	(*device_configured) |= local_cfg;
-
-	return 0;
-}
-
-static inline int
-dpaa2_fs_action_supported(enum rte_flow_action_type action)
-{
-	int i;
-
-	for (i = 0; i < (int)(sizeof(dpaa2_supported_fs_action_type) /
-					sizeof(enum rte_flow_action_type)); i++) {
-		if (action == dpaa2_supported_fs_action_type[i])
-			return 1;
-	}
-
-	return 0;
-}
-/* The existing QoS/FS entry with IP address(es)
- * needs update after
- * new extract(s) are inserted before IP
- * address(es) extract(s).
- */
-static int
-dpaa2_flow_entry_update(
-	struct dpaa2_dev_priv *priv, uint8_t tc_id)
-{
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
-	int ret;
-	int qos_ipsrc_offset = -1, qos_ipdst_offset = -1;
-	int fs_ipsrc_offset = -1, fs_ipdst_offset = -1;
-	struct dpaa2_key_extract *qos_key_extract =
-		&priv->extract.qos_key_extract;
-	struct dpaa2_key_extract *tc_key_extract =
-		&priv->extract.tc_key_extract[tc_id];
-	char ipsrc_key[NH_FLD_IPV6_ADDR_SIZE];
-	char ipdst_key[NH_FLD_IPV6_ADDR_SIZE];
-	char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE];
-	char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE];
-	int extend = -1, extend1, size = -1;
-	uint16_t qos_index;
-
-	while (curr) {
-		if (curr->ipaddr_rule.ipaddr_type ==
-			FLOW_NONE_IPADDR) {
-			curr = LIST_NEXT(curr, next);
-			continue;
-		}
-
-		if (curr->ipaddr_rule.ipaddr_type ==
-			FLOW_IPV4_ADDR) {
-			qos_ipsrc_offset =
-				qos_key_extract->key_info.ipv4_src_offset;
-			qos_ipdst_offset =
-				qos_key_extract->key_info.ipv4_dst_offset;
-			fs_ipsrc_offset =
-				tc_key_extract->key_info.ipv4_src_offset;
-			fs_ipdst_offset =
-				tc_key_extract->key_info.ipv4_dst_offset;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-		} else {
-			qos_ipsrc_offset =
-				qos_key_extract->key_info.ipv6_src_offset;
-			qos_ipdst_offset =
-				qos_key_extract->key_info.ipv6_dst_offset;
-			fs_ipsrc_offset =
-				tc_key_extract->key_info.ipv6_src_offset;
-			fs_ipdst_offset =
-				tc_key_extract->key_info.ipv6_dst_offset;
-			size = NH_FLD_IPV6_ADDR_SIZE;
-		}
-
-		qos_index = curr->tc_id * priv->fs_entries +
-			curr->tc_index;
-
-		dpaa2_flow_qos_entry_log("Before update", curr, qos_index, stdout);
-
-		if (priv->num_rx_tc > 1) {
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
-					priv->token, &curr->qos_rule);
-			if (ret) {
-				DPAA2_PMD_ERR("Qos entry remove failed.");
-				return -1;
-			}
-		}
-
-		extend = -1;
-
-		if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
-			RTE_ASSERT(qos_ipsrc_offset >=
-				curr->ipaddr_rule.qos_ipsrc_offset);
-			extend1 = qos_ipsrc_offset -
-				curr->ipaddr_rule.qos_ipsrc_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-
-			memcpy(ipsrc_key,
-				(char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				0, size);
-
-			memcpy(ipsrc_mask,
-				(char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				0, size);
-
-			curr->ipaddr_rule.qos_ipsrc_offset = qos_ipsrc_offset;
-		}
-
-		if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
-			RTE_ASSERT(qos_ipdst_offset >=
-				curr->ipaddr_rule.qos_ipdst_offset);
-			extend1 = qos_ipdst_offset -
-				curr->ipaddr_rule.qos_ipdst_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-
-			memcpy(ipdst_key,
-				(char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				0, size);
-
-			memcpy(ipdst_mask,
-				(char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				0, size);
-
-			curr->ipaddr_rule.qos_ipdst_offset = qos_ipdst_offset;
-		}
-
-		if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-			memcpy((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				ipsrc_key,
-				size);
-			memcpy((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				ipsrc_mask,
-				size);
-		}
-		if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-			memcpy((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				ipdst_key,
-				size);
-			memcpy((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				ipdst_mask,
-				size);
-		}
-
-		if (extend >= 0)
-			curr->qos_real_key_size += extend;
-
-		curr->qos_rule.key_size = FIXED_ENTRY_SIZE;
-
-		dpaa2_flow_qos_entry_log("Start update", curr, qos_index, stdout);
-
-		if (priv->num_rx_tc > 1) {
-			ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
-					priv->token, &curr->qos_rule,
-					curr->tc_id, qos_index,
-					0, 0);
-			if (ret) {
-				DPAA2_PMD_ERR("Qos entry update failed.");
-				return -1;
-			}
-		}
-
-		if (!dpaa2_fs_action_supported(curr->action)) {
-			curr = LIST_NEXT(curr, next);
-			continue;
-		}
+		DPAA2_PMD_ERR("spec or mask not present.");
+		return -EINVAL;
+	}
+	/* Only supports non-relative with offset 0 */
+	if (spec->relative || spec->offset != 0 ||
+	    spec->search || spec->limit) {
+		DPAA2_PMD_ERR("relative and non-zero offset not supported.");
+		return -EINVAL;
+	}
+	/* Spec len and mask len should be same */
+	if (spec->length != mask->length) {
+		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
+		return -EINVAL;
+	}
 
-		dpaa2_flow_fs_entry_log("Before update", curr, stdout);
-		extend = -1;
+	/* Get traffic class index and flow id to be configured */
+	group = attr->group;
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW,
-				priv->token, curr->tc_id, &curr->fs_rule);
+	if (prev_key_size <= spec->length) {
+		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
+						 spec->length);
 		if (ret) {
-			DPAA2_PMD_ERR("FS entry remove failed.");
+			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
 			return -1;
 		}
+		local_cfg |= DPAA2_FLOW_QOS_TYPE;
 
-		if (curr->ipaddr_rule.fs_ipsrc_offset >= 0 &&
-			tc_id == curr->tc_id) {
-			RTE_ASSERT(fs_ipsrc_offset >=
-				curr->ipaddr_rule.fs_ipsrc_offset);
-			extend1 = fs_ipsrc_offset -
-				curr->ipaddr_rule.fs_ipsrc_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			memcpy(ipsrc_key,
-				(char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				0, size);
-
-			memcpy(ipsrc_mask,
-				(char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				0, size);
-
-			curr->ipaddr_rule.fs_ipsrc_offset = fs_ipsrc_offset;
+		ret = dpaa2_flow_extract_add_raw(&priv->extract.tc_key_extract[group],
+					spec->length);
+		if (ret) {
+			DPAA2_PMD_ERR("FS Extract RAW add failed.");
+			return -1;
 		}
+		local_cfg |= DPAA2_FLOW_FS_TYPE;
+	}
 
-		if (curr->ipaddr_rule.fs_ipdst_offset >= 0 &&
-			tc_id == curr->tc_id) {
-			RTE_ASSERT(fs_ipdst_offset >=
-				curr->ipaddr_rule.fs_ipdst_offset);
-			extend1 = fs_ipdst_offset -
-				curr->ipaddr_rule.fs_ipdst_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			memcpy(ipdst_key,
-				(char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				0, size);
-
-			memcpy(ipdst_mask,
-				(char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				0, size);
-
-			curr->ipaddr_rule.fs_ipdst_offset = fs_ipdst_offset;
-		}
+	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
+					   mask->pattern, spec->length);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS RAW rule data set failed");
+		return -1;
+	}
 
-		if (curr->ipaddr_rule.fs_ipsrc_offset >= 0) {
-			memcpy((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				ipsrc_key,
-				size);
-			memcpy((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				ipsrc_mask,
-				size);
-		}
-		if (curr->ipaddr_rule.fs_ipdst_offset >= 0) {
-			memcpy((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				ipdst_key,
-				size);
-			memcpy((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				ipdst_mask,
-				size);
-		}
+	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
+					   mask->pattern, spec->length);
+	if (ret) {
+		DPAA2_PMD_ERR("FS RAW rule data set failed");
+		return -1;
+	}
 
-		if (extend >= 0)
-			curr->fs_real_key_size += extend;
-		curr->fs_rule.key_size = FIXED_ENTRY_SIZE;
+	(*device_configured) |= local_cfg;
 
-		dpaa2_flow_fs_entry_log("Start update", curr, stdout);
+	return 0;
+}
 
-		ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
-				priv->token, curr->tc_id, curr->tc_index,
-				&curr->fs_rule, &curr->action_cfg);
-		if (ret) {
-			DPAA2_PMD_ERR("FS entry update failed.");
-			return -1;
-		}
+static inline int
+dpaa2_fs_action_supported(enum rte_flow_action_type action)
+{
+	int i;
+	int action_num = sizeof(dpaa2_supported_fs_action_type) /
+		sizeof(enum rte_flow_action_type);
 
-		curr = LIST_NEXT(curr, next);
+	for (i = 0; i < action_num; i++) {
+		if (action == dpaa2_supported_fs_action_type[i])
+			return true;
 	}
 
-	return 0;
+	return false;
 }
 
 static inline int
-dpaa2_flow_verify_attr(
-	struct dpaa2_dev_priv *priv,
+dpaa2_flow_verify_attr(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr)
 {
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
 
 	while (curr) {
 		if (curr->tc_id == attr->group &&
 			curr->tc_index == attr->priority) {
-			DPAA2_PMD_ERR(
-				"Flow with group %d and priority %d already exists.",
+			DPAA2_PMD_ERR("Flow (TC[%d].entry[%d]) exists",
 				attr->group, attr->priority);
 
-			return -1;
+			return -EINVAL;
 		}
 		curr = LIST_NEXT(curr, next);
 	}
@@ -3279,18 +2341,16 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_action *action)
 {
 	const struct rte_flow_action_port_id *port_id;
+	const struct rte_flow_action_ethdev *ethdev;
 	int idx = -1;
 	struct rte_eth_dev *dest_dev;
 
 	if (action->type == RTE_FLOW_ACTION_TYPE_PORT_ID) {
-		port_id = (const struct rte_flow_action_port_id *)
-					action->conf;
+		port_id = action->conf;
 		if (!port_id->original)
 			idx = port_id->id;
 	} else if (action->type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT) {
-		const struct rte_flow_action_ethdev *ethdev;
-
-		ethdev = (const struct rte_flow_action_ethdev *)action->conf;
+		ethdev = action->conf;
 		idx = ethdev->port_id;
 	} else {
 		return NULL;
@@ -3310,8 +2370,7 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 }
 
 static inline int
-dpaa2_flow_verify_action(
-	struct dpaa2_dev_priv *priv,
+dpaa2_flow_verify_action(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_action actions[])
 {
@@ -3323,15 +2382,14 @@ dpaa2_flow_verify_action(
 	while (!end_of_list) {
 		switch (actions[j].type) {
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
-			dest_queue = (const struct rte_flow_action_queue *)
-					(actions[j].conf);
+			dest_queue = actions[j].conf;
 			rxq = priv->rx_vq[dest_queue->index];
 			if (attr->group != rxq->tc_index) {
-				DPAA2_PMD_ERR(
-					"RXQ[%d] does not belong to the group %d",
-					dest_queue->index, attr->group);
+				DPAA2_PMD_ERR("FSQ(%d.%d) not in TC[%d]",
+					rxq->tc_index, rxq->flow_id,
+					attr->group);
 
-				return -1;
+				return -ENOTSUP;
 			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
@@ -3345,20 +2403,17 @@ dpaa2_flow_verify_action(
 			rss_conf = (const struct rte_flow_action_rss *)
 					(actions[j].conf);
 			if (rss_conf->queue_num > priv->dist_queues) {
-				DPAA2_PMD_ERR(
-					"RSS number exceeds the distribution size");
+				DPAA2_PMD_ERR("RSS number exceeds dist size");
 				return -ENOTSUP;
 			}
 			for (i = 0; i < (int)rss_conf->queue_num; i++) {
 				if (rss_conf->queue[i] >= priv->nb_rx_queues) {
-					DPAA2_PMD_ERR(
-						"RSS queue index exceeds the number of RXQs");
+					DPAA2_PMD_ERR("RSS queue not in range");
 					return -ENOTSUP;
 				}
 				rxq = priv->rx_vq[rss_conf->queue[i]];
 				if (rxq->tc_index != attr->group) {
-					DPAA2_PMD_ERR(
-						"Queue/Group combination are not supported\n");
+					DPAA2_PMD_ERR("RSS queue not in group");
 					return -ENOTSUP;
 				}
 			}
@@ -3378,28 +2433,248 @@ dpaa2_flow_verify_action(
 }
 
 static int
-dpaa2_generic_flow_set(struct rte_flow *flow,
-		       struct rte_eth_dev *dev,
-		       const struct rte_flow_attr *attr,
-		       const struct rte_flow_item pattern[],
-		       const struct rte_flow_action actions[],
-		       struct rte_flow_error *error)
+dpaa2_configure_flow_fs_action(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct rte_flow_action *rte_action)
 {
+	struct rte_eth_dev *dest_dev;
+	struct dpaa2_dev_priv *dest_priv;
 	const struct rte_flow_action_queue *dest_queue;
+	struct dpaa2_queue *dest_q;
+
+	memset(&flow->fs_action_cfg, 0,
+		sizeof(struct dpni_fs_action_cfg));
+	flow->action_type = rte_action->type;
+
+	if (flow->action_type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		dest_queue = rte_action->conf;
+		dest_q = priv->rx_vq[dest_queue->index];
+		flow->fs_action_cfg.flow_id = dest_q->flow_id;
+	} else if (flow->action_type == RTE_FLOW_ACTION_TYPE_PORT_ID ||
+		   flow->action_type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT) {
+		dest_dev = dpaa2_flow_redirect_dev(priv, rte_action);
+		if (!dest_dev) {
+			DPAA2_PMD_ERR("Invalid device to redirect");
+			return -EINVAL;
+		}
+
+		dest_priv = dest_dev->data->dev_private;
+		dest_q = dest_priv->tx_vq[0];
+		flow->fs_action_cfg.options =
+			DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
+		flow->fs_action_cfg.redirect_obj_token =
+			dest_priv->token;
+		flow->fs_action_cfg.flow_id = dest_q->flow_id;
+	}
+
+	return 0;
+}
+
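+/* Derive the HW table entry size from the largest extract key.
+ * Current MC supports only a fixed 56-byte entry, so any valid
+ * key is padded up to DPAA2_FLOW_ENTRY_MAX_SIZE.
+ */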
+static inline uint16_t
+dpaa2_flow_entry_size(uint16_t key_max_size)
+{
+	if (key_max_size > DPAA2_FLOW_ENTRY_MAX_SIZE) {
+		DPAA2_PMD_ERR("Key size(%d) > max(%d)",
+			key_max_size,
+			DPAA2_FLOW_ENTRY_MAX_SIZE);
+
+		return 0;
+	}
+
+	if (key_max_size > DPAA2_FLOW_ENTRY_MIN_SIZE)
+		return DPAA2_FLOW_ENTRY_MAX_SIZE;
+
+	/* Current MC only supports a fixed entry size (56). */
+	return DPAA2_FLOW_ENTRY_MAX_SIZE;
+}
+
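+/* Flush all FS entries of a TC before its key layout changes,
+ * so the table can be rebuilt against the new extracts.
+ */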
+static inline int
+dpaa2_flow_clear_fs_table(struct dpaa2_dev_priv *priv,
+	uint8_t tc_id)
+{
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	int need_clear = 0, ret;
+	struct fsl_mc_io *dpni = priv->hw;
+
+	while (curr) {
+		if (curr->tc_id == tc_id) {
+			need_clear = 1;
+			break;
+		}
+		curr = LIST_NEXT(curr, next);
+	}
+
+	if (need_clear) {
+		ret = dpni_clear_fs_entries(dpni, CMD_PRI_LOW,
+				priv->token, tc_id);
+		if (ret) {
+			DPAA2_PMD_ERR("TC[%d] clear failed", tc_id);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
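+/* Program per-TC distribution: hash (RSS) when rss_dist is set,
+ * otherwise exact-match FS with misses steered to the miss flow,
+ * then replay the existing FS rules with the new entry size.
+ */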
+static int
+dpaa2_configure_fs_rss_table(struct dpaa2_dev_priv *priv,
+	uint8_t tc_id, uint16_t dist_size, int rss_dist)
+{
+	struct dpaa2_key_extract *tc_extract;
+	uint8_t *key_cfg_buf;
+	uint64_t key_cfg_iova;
+	int ret;
+	struct dpni_rx_dist_cfg tc_cfg;
+	struct fsl_mc_io *dpni = priv->hw;
+	uint16_t entry_size;
+	uint16_t key_max_size;
+
+	ret = dpaa2_flow_clear_fs_table(priv, tc_id);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] clear failed", tc_id);
+		return ret;
+	}
+
+	tc_extract = &priv->extract.tc_key_extract[tc_id];
+	key_cfg_buf = priv->extract.tc_extract_param[tc_id];
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+
+	key_max_size = tc_extract->key_profile.key_max_size;
+	entry_size = dpaa2_flow_entry_size(key_max_size);
+
+	dpaa2_flow_fs_extracts_log(priv, tc_id);
+	ret = dpkg_prepare_key_cfg(&tc_extract->dpkg,
+			key_cfg_buf);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] prepare key failed", tc_id);
+		return ret;
+	}
+
+	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
+	tc_cfg.dist_size = dist_size;
+	tc_cfg.key_cfg_iova = key_cfg_iova;
+	if (rss_dist)
+		tc_cfg.enable = true;
+	else
+		tc_cfg.enable = false;
+	tc_cfg.tc = tc_id;
+	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
+			priv->token, &tc_cfg);
+	if (ret < 0) {
+		if (rss_dist) {
+			DPAA2_PMD_ERR("RSS TC[%d] set failed",
+				tc_id);
+		} else {
+			DPAA2_PMD_ERR("FS TC[%d] hash disable failed",
+				tc_id);
+		}
+
+		return ret;
+	}
+
+	if (rss_dist)
+		return 0;
+
+	tc_cfg.enable = true;
+	tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
+	ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
+			priv->token, &tc_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] FS configuration failed", tc_id);
+		return ret;
+	}
+
+	ret = dpaa2_flow_rule_add_all(priv, DPAA2_FLOW_FS_TYPE,
+			entry_size, tc_id);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
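+/* The QoS table selects the TC and is needed only for multiple
+ * TCs or RSS; with RSS, frames missing the table are discarded.
+ */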
+static int
+dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
+	int rss_dist)
+{
+	struct dpaa2_key_extract *qos_extract;
+	uint8_t *key_cfg_buf;
+	uint64_t key_cfg_iova;
+	int ret;
+	struct dpni_qos_tbl_cfg qos_cfg;
+	struct fsl_mc_io *dpni = priv->hw;
+	uint16_t entry_size;
+	uint16_t key_max_size;
+
+	if (!rss_dist && priv->num_rx_tc <= 1) {
+		/* QoS table is effective only with multiple TCs (FS) or RSS. */
+		return 0;
+	}
+
+	if (LIST_FIRST(&priv->flows)) {
+		ret = dpni_clear_qos_table(dpni, CMD_PRI_LOW,
+				priv->token);
+		if (ret < 0) {
+			DPAA2_PMD_ERR("QoS table clear failed");
+			return ret;
+		}
+	}
+
+	qos_extract = &priv->extract.qos_key_extract;
+	key_cfg_buf = priv->extract.qos_extract_param;
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+
+	key_max_size = qos_extract->key_profile.key_max_size;
+	entry_size = dpaa2_flow_entry_size(key_max_size);
+
+	dpaa2_flow_qos_extracts_log(priv);
+
+	ret = dpkg_prepare_key_cfg(&qos_extract->dpkg,
+			key_cfg_buf);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("QoS prepare extract failed");
+		return ret;
+	}
+	memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
+	qos_cfg.keep_entries = true;
+	qos_cfg.key_cfg_iova = key_cfg_iova;
+	if (rss_dist) {
+		qos_cfg.discard_on_miss = true;
+	} else {
+		qos_cfg.discard_on_miss = false;
+		qos_cfg.default_tc = 0;
+	}
+
+	ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
+			priv->token, &qos_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("QoS table set failed");
+		return ret;
+	}
+
+	ret = dpaa2_flow_rule_add_all(priv, DPAA2_FLOW_QOS_TYPE,
+			entry_size, 0);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
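+/* Translate one rte_flow: parse each pattern item into key
+ * extracts and rule data, then program the QoS/FS tables per
+ * the action.
+ */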
+static int
+dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item pattern[],
+	const struct rte_flow_action actions[],
+	struct rte_flow_error *error)
+{
 	const struct rte_flow_action_rss *rss_conf;
 	int is_keycfg_configured = 0, end_of_list = 0;
 	int ret = 0, i = 0, j = 0;
-	struct dpni_rx_dist_cfg tc_cfg;
-	struct dpni_qos_tbl_cfg qos_cfg;
-	struct dpni_fs_action_cfg action;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct dpaa2_queue *dest_q;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
-	size_t param;
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
-	uint16_t qos_index;
-	struct rte_eth_dev *dest_dev;
-	struct dpaa2_dev_priv *dest_priv;
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	uint16_t dist_size, key_size;
+	struct dpaa2_key_extract *qos_key_extract;
+	struct dpaa2_key_extract *tc_key_extract;
 
 	ret = dpaa2_flow_verify_attr(priv, attr);
 	if (ret)
@@ -3417,7 +2692,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("ETH flow configuration failed!");
+				DPAA2_PMD_ERR("ETH flow config failed!");
 				return ret;
 			}
 			break;
@@ -3426,17 +2701,25 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("vLan flow configuration failed!");
+				DPAA2_PMD_ERR("VLAN flow config failed!");
 				return ret;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
+			ret = dpaa2_configure_flow_ipv4(flow,
+					dev, attr, &pattern[i], actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("IPV4 flow config failed!");
+				return ret;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
-			ret = dpaa2_configure_flow_generic_ip(flow,
+			ret = dpaa2_configure_flow_ipv6(flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("IP flow configuration failed!");
+				DPAA2_PMD_ERR("IPV6 flow config failed!");
 				return ret;
 			}
 			break;
@@ -3445,7 +2728,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("ICMP flow configuration failed!");
+				DPAA2_PMD_ERR("ICMP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3454,7 +2737,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("UDP flow configuration failed!");
+				DPAA2_PMD_ERR("UDP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3463,7 +2746,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("TCP flow configuration failed!");
+				DPAA2_PMD_ERR("TCP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3472,7 +2755,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("SCTP flow configuration failed!");
+				DPAA2_PMD_ERR("SCTP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3481,17 +2764,17 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("GRE flow configuration failed!");
+				DPAA2_PMD_ERR("GRE flow config failed!");
 				return ret;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow,
-						       dev, attr, &pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					dev, attr, &pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("RAW flow configuration failed!");
+				DPAA2_PMD_ERR("RAW flow config failed!");
 				return ret;
 			}
 			break;
@@ -3506,6 +2789,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 		i++;
 	}
 
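+	/* Rules use the fixed entry size derived from the max key size. */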
+	qos_key_extract = &priv->extract.qos_key_extract;
+	key_size = qos_key_extract->key_profile.key_max_size;
+	flow->qos_rule.key_size = dpaa2_flow_entry_size(key_size);
+
+	tc_key_extract = &priv->extract.tc_key_extract[flow->tc_id];
+	key_size = tc_key_extract->key_profile.key_max_size;
+	flow->fs_rule.key_size = dpaa2_flow_entry_size(key_size);
+
 	/* Let's parse action on matching traffic */
 	end_of_list = 0;
 	while (!end_of_list) {
@@ -3513,150 +2804,33 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
 		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
 		case RTE_FLOW_ACTION_TYPE_PORT_ID:
-			memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
-			flow->action = actions[j].type;
-
-			if (actions[j].type == RTE_FLOW_ACTION_TYPE_QUEUE) {
-				dest_queue = (const struct rte_flow_action_queue *)
-								(actions[j].conf);
-				dest_q = priv->rx_vq[dest_queue->index];
-				action.flow_id = dest_q->flow_id;
-			} else {
-				dest_dev = dpaa2_flow_redirect_dev(priv,
-								   &actions[j]);
-				if (!dest_dev) {
-					DPAA2_PMD_ERR("Invalid destination device to redirect!");
-					return -1;
-				}
-
-				dest_priv = dest_dev->data->dev_private;
-				dest_q = dest_priv->tx_vq[0];
-				action.options =
-						DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
-				action.redirect_obj_token = dest_priv->token;
-				action.flow_id = dest_q->flow_id;
-			}
+			ret = dpaa2_configure_flow_fs_action(priv, flow,
+							     &actions[j]);
+			if (ret)
+				return ret;
 
 			/* Configure FS table first*/
-			if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
-				dpaa2_flow_fs_table_extracts_log(priv,
-							flow->tc_id, stdout);
-				if (dpkg_prepare_key_cfg(
-				&priv->extract.tc_key_extract[flow->tc_id].dpkg,
-				(uint8_t *)(size_t)priv->extract
-				.tc_extract_param[flow->tc_id]) < 0) {
-					DPAA2_PMD_ERR(
-					"Unable to prepare extract parameters");
-					return -1;
-				}
-
-				memset(&tc_cfg, 0,
-					sizeof(struct dpni_rx_dist_cfg));
-				tc_cfg.dist_size = priv->nb_rx_queues / priv->num_rx_tc;
-				tc_cfg.key_cfg_iova =
-					(uint64_t)priv->extract.tc_extract_param[flow->tc_id];
-				tc_cfg.tc = flow->tc_id;
-				tc_cfg.enable = false;
-				ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
-						priv->token, &tc_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"TC hash cannot be disabled.(%d)",
-						ret);
-					return -1;
-				}
-				tc_cfg.enable = true;
-				tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
-				ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
-							 priv->token, &tc_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"TC distribution cannot be configured.(%d)",
-						ret);
-					return -1;
-				}
+			dist_size = priv->nb_rx_queues / priv->num_rx_tc;
+			if (is_keycfg_configured & DPAA2_FLOW_FS_TYPE) {
+				ret = dpaa2_configure_fs_rss_table(priv,
+								   flow->tc_id,
+								   dist_size,
+								   false);
+				if (ret)
+					return ret;
 			}
 
 			/* Configure QoS table then.*/
-			if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
-				dpaa2_flow_qos_table_extracts_log(priv, stdout);
-				if (dpkg_prepare_key_cfg(
-					&priv->extract.qos_key_extract.dpkg,
-					(uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
-					DPAA2_PMD_ERR(
-						"Unable to prepare extract parameters");
-					return -1;
-				}
-
-				memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
-				qos_cfg.discard_on_miss = false;
-				qos_cfg.default_tc = 0;
-				qos_cfg.keep_entries = true;
-				qos_cfg.key_cfg_iova =
-					(size_t)priv->extract.qos_extract_param;
-				/* QoS table is effective for multiple TCs. */
-				if (priv->num_rx_tc > 1) {
-					ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
-						priv->token, &qos_cfg);
-					if (ret < 0) {
-						DPAA2_PMD_ERR(
-						"RSS QoS table can not be configured(%d)\n",
-							ret);
-						return -1;
-					}
-				}
-			}
-
-			flow->qos_real_key_size = priv->extract
-				.qos_key_extract.key_info.key_total_size;
-			if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) {
-				if (flow->ipaddr_rule.qos_ipdst_offset >=
-					flow->ipaddr_rule.qos_ipsrc_offset) {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipdst_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				} else {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipsrc_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				}
-			} else if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV6_ADDR) {
-				if (flow->ipaddr_rule.qos_ipdst_offset >=
-					flow->ipaddr_rule.qos_ipsrc_offset) {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipdst_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				} else {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipsrc_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				}
+			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
+				ret = dpaa2_configure_qos_table(priv, false);
+				if (ret)
+					return ret;
 			}
 
-			/* QoS entry added is only effective for multiple TCs.*/
 			if (priv->num_rx_tc > 1) {
-				qos_index = flow->tc_id * priv->fs_entries +
-					flow->tc_index;
-				if (qos_index >= priv->qos_entries) {
-					DPAA2_PMD_ERR("QoS table with %d entries full",
-						priv->qos_entries);
-					return -1;
-				}
-				flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
-
-				dpaa2_flow_qos_entry_log("Start add", flow,
-							qos_index, stdout);
-
-				ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
-						priv->token, &flow->qos_rule,
-						flow->tc_id, qos_index,
-						0, 0);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"Error in adding entry to QoS table(%d)", ret);
+				ret = dpaa2_flow_add_qos_rule(priv, flow);
+				if (ret)
 					return ret;
-				}
 			}
 
 			if (flow->tc_index >= priv->fs_entries) {
@@ -3665,140 +2839,47 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 				return -1;
 			}
 
-			flow->fs_real_key_size =
-				priv->extract.tc_key_extract[flow->tc_id]
-				.key_info.key_total_size;
-
-			if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV4_ADDR) {
-				if (flow->ipaddr_rule.fs_ipdst_offset >=
-					flow->ipaddr_rule.fs_ipsrc_offset) {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipdst_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				} else {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipsrc_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				}
-			} else if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV6_ADDR) {
-				if (flow->ipaddr_rule.fs_ipdst_offset >=
-					flow->ipaddr_rule.fs_ipsrc_offset) {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipdst_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				} else {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipsrc_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				}
-			}
-
-			flow->fs_rule.key_size = FIXED_ENTRY_SIZE;
-
-			dpaa2_flow_fs_entry_log("Start add", flow, stdout);
-
-			ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token,
-						flow->tc_id, flow->tc_index,
-						&flow->fs_rule, &action);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"Error in adding entry to FS table(%d)", ret);
+			ret = dpaa2_flow_add_fs_rule(priv, flow);
+			if (ret)
 				return ret;
-			}
-			memcpy(&flow->action_cfg, &action,
-				sizeof(struct dpni_fs_action_cfg));
+
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
-			rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf);
+			rss_conf = actions[j].conf;
+			flow->action_type = RTE_FLOW_ACTION_TYPE_RSS;
 
-			flow->action = RTE_FLOW_ACTION_TYPE_RSS;
 			ret = dpaa2_distset_to_dpkg_profile_cfg(rss_conf->types,
-					&priv->extract.tc_key_extract[flow->tc_id].dpkg);
+					&tc_key_extract->dpkg);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"unable to set flow distribution.please check queue config\n");
+				DPAA2_PMD_ERR("TC[%d] distset RSS failed",
+					      flow->tc_id);
 				return ret;
 			}
 
-			/* Allocate DMA'ble memory to write the rules */
-			param = (size_t)rte_malloc(NULL, 256, 64);
-			if (!param) {
-				DPAA2_PMD_ERR("Memory allocation failure\n");
-				return -1;
-			}
-
-			if (dpkg_prepare_key_cfg(
-				&priv->extract.tc_key_extract[flow->tc_id].dpkg,
-				(uint8_t *)param) < 0) {
-				DPAA2_PMD_ERR(
-				"Unable to prepare extract parameters");
-				rte_free((void *)param);
-				return -1;
-			}
-
-			memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
-			tc_cfg.dist_size = rss_conf->queue_num;
-			tc_cfg.key_cfg_iova = (size_t)param;
-			tc_cfg.enable = true;
-			tc_cfg.tc = flow->tc_id;
-			ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
-						 priv->token, &tc_cfg);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"RSS TC table cannot be configured: %d\n",
-					ret);
-				rte_free((void *)param);
-				return -1;
+			dist_size = rss_conf->queue_num;
+			if (is_keycfg_configured & DPAA2_FLOW_FS_TYPE) {
+				ret = dpaa2_configure_fs_rss_table(priv,
+								   flow->tc_id,
+								   dist_size,
+								   true);
+				if (ret)
+					return ret;
 			}
 
-			rte_free((void *)param);
-			if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
-				if (dpkg_prepare_key_cfg(
-					&priv->extract.qos_key_extract.dpkg,
-					(uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
-					DPAA2_PMD_ERR(
-					"Unable to prepare extract parameters");
-					return -1;
-				}
-				memset(&qos_cfg, 0,
-					sizeof(struct dpni_qos_tbl_cfg));
-				qos_cfg.discard_on_miss = true;
-				qos_cfg.keep_entries = true;
-				qos_cfg.key_cfg_iova =
-					(size_t)priv->extract.qos_extract_param;
-				ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
-							 priv->token, &qos_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-					"RSS QoS dist can't be configured-%d\n",
-					ret);
-					return -1;
-				}
+			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
+				ret = dpaa2_configure_qos_table(priv, true);
+				if (ret)
+					return ret;
 			}
 
-			/* Add Rule into QoS table */
-			qos_index = flow->tc_id * priv->fs_entries +
-				flow->tc_index;
-			if (qos_index >= priv->qos_entries) {
-				DPAA2_PMD_ERR("QoS table with %d entries full",
-					priv->qos_entries);
-				return -1;
-			}
+			ret = dpaa2_flow_add_qos_rule(priv, flow);
+			if (ret)
+				return ret;
 
-			flow->qos_real_key_size =
-			  priv->extract.qos_key_extract.key_info.key_total_size;
-			flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
-			ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-						&flow->qos_rule, flow->tc_id,
-						qos_index, 0, 0);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"Error in entry addition in QoS table(%d)",
-				ret);
+			ret = dpaa2_flow_add_fs_rule(priv, flow);
+			if (ret)
 				return ret;
-			}
+
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
@@ -3812,16 +2893,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 	}
 
 	if (!ret) {
-		if (is_keycfg_configured &
-			(DPAA2_QOS_TABLE_RECONFIGURE |
-			DPAA2_FS_TABLE_RECONFIGURE)) {
-			ret = dpaa2_flow_entry_update(priv, flow->tc_id);
-			if (ret) {
-				DPAA2_PMD_ERR("Flow entry update failed.");
-
-				return -1;
-			}
-		}
 		/* New rules are inserted. */
 		if (!curr) {
 			LIST_INSERT_HEAD(&priv->flows, flow, next);
@@ -3836,7 +2907,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 
 static inline int
 dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
-		      const struct rte_flow_attr *attr)
+	const struct rte_flow_attr *attr)
 {
 	int ret = 0;
 
@@ -3910,18 +2981,18 @@ dpaa2_dev_verify_actions(const struct rte_flow_action actions[])
 	}
 	for (j = 0; actions[j].type != RTE_FLOW_ACTION_TYPE_END; j++) {
 		if (actions[j].type != RTE_FLOW_ACTION_TYPE_DROP &&
-				!actions[j].conf)
+		    !actions[j].conf)
 			ret = -EINVAL;
 	}
 	return ret;
 }
 
-static
-int dpaa2_flow_validate(struct rte_eth_dev *dev,
-			const struct rte_flow_attr *flow_attr,
-			const struct rte_flow_item pattern[],
-			const struct rte_flow_action actions[],
-			struct rte_flow_error *error)
+static int
+dpaa2_flow_validate(struct rte_eth_dev *dev,
+	const struct rte_flow_attr *flow_attr,
+	const struct rte_flow_item pattern[],
+	const struct rte_flow_action actions[],
+	struct rte_flow_error *error)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpni_attr dpni_attr;
@@ -3975,127 +3046,128 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static
-struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
-				   const struct rte_flow_attr *attr,
-				   const struct rte_flow_item pattern[],
-				   const struct rte_flow_action actions[],
-				   struct rte_flow_error *error)
+static struct rte_flow *
+dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+		  const struct rte_flow_item pattern[],
+		  const struct rte_flow_action actions[],
+		  struct rte_flow_error *error)
 {
-	struct rte_flow *flow = NULL;
-	size_t key_iova = 0, mask_iova = 0;
+	struct dpaa2_dev_flow *flow = NULL;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int ret;
 
 	dpaa2_flow_control_log =
 		getenv("DPAA2_FLOW_CONTROL_LOG");
 
 	if (getenv("DPAA2_FLOW_CONTROL_MISS_FLOW")) {
-		struct dpaa2_dev_priv *priv = dev->data->dev_private;
-
 		dpaa2_flow_miss_flow_id =
 			(uint16_t)atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
 		if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
-			DPAA2_PMD_ERR(
-				"The missed flow ID %d exceeds the max flow ID %d",
-				dpaa2_flow_miss_flow_id,
-				priv->dist_queues - 1);
+			DPAA2_PMD_ERR("Missed flow ID %d >= dist size(%d)",
+				      dpaa2_flow_miss_flow_id,
+				      priv->dist_queues);
 			return NULL;
 		}
 	}
 
-	flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
+	flow = rte_zmalloc(NULL, sizeof(struct dpaa2_dev_flow),
+			   RTE_CACHE_LINE_SIZE);
 	if (!flow) {
 		DPAA2_PMD_ERR("Failure to allocate memory for flow");
 		goto mem_failure;
 	}
-	/* Allocate DMA'ble memory to write the rules */
-	key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!key_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+
+	/* Allocate DMA'ble memory to write the qos rules */
+	flow->qos_key_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->qos_key_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!mask_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+	flow->qos_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->qos_key_addr);
+
+	flow->qos_mask_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->qos_mask_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
+	flow->qos_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->qos_mask_addr);
 
-	flow->qos_rule.key_iova = key_iova;
-	flow->qos_rule.mask_iova = mask_iova;
-
-	/* Allocate DMA'ble memory to write the rules */
-	key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!key_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+	/* Allocate DMA'ble memory to write the FS rules */
+	flow->fs_key_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->fs_key_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!mask_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+	flow->fs_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->fs_key_addr);
+
+	flow->fs_mask_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->fs_mask_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
+	flow->fs_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->fs_mask_addr);
 
-	flow->fs_rule.key_iova = key_iova;
-	flow->fs_rule.mask_iova = mask_iova;
-
-	flow->ipaddr_rule.ipaddr_type = FLOW_NONE_IPADDR;
-	flow->ipaddr_rule.qos_ipsrc_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.qos_ipdst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.fs_ipsrc_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.fs_ipdst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
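+	/* Track the flow under construction; cleared again below. */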
+	priv->curr = flow;
 
-	ret = dpaa2_generic_flow_set(flow, dev, attr, pattern,
-			actions, error);
+	ret = dpaa2_generic_flow_set(flow, dev, attr, pattern, actions, error);
 	if (ret < 0) {
 		if (error && error->type > RTE_FLOW_ERROR_TYPE_ACTION)
 			rte_flow_error_set(error, EPERM,
-					RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					attr, "unknown");
-		DPAA2_PMD_ERR("Failure to create flow, return code (%d)", ret);
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   attr, "unknown");
+		DPAA2_PMD_ERR("Create flow failed (%d)", ret);
 		goto creation_error;
 	}
 
-	return flow;
+	priv->curr = NULL;
+	return (struct rte_flow *)flow;
+
 mem_failure:
-	rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-			   NULL, "memory alloc");
+	rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "memory alloc");
+
 creation_error:
-	rte_free((void *)flow);
-	rte_free((void *)key_iova);
-	rte_free((void *)mask_iova);
+	if (flow) {
+		if (flow->qos_key_addr)
+			rte_free(flow->qos_key_addr);
+		if (flow->qos_mask_addr)
+			rte_free(flow->qos_mask_addr);
+		if (flow->fs_key_addr)
+			rte_free(flow->fs_key_addr);
+		if (flow->fs_mask_addr)
+			rte_free(flow->fs_mask_addr);
+		rte_free(flow);
+	}
+	priv->curr = NULL;
 
 	return NULL;
 }
 
-static
-int dpaa2_flow_destroy(struct rte_eth_dev *dev,
-		       struct rte_flow *flow,
-		       struct rte_flow_error *error)
+static int
+dpaa2_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *_flow,
+		   struct rte_flow_error *error)
 {
 	int ret = 0;
+	struct dpaa2_dev_flow *flow;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
+	struct fsl_mc_io *dpni = priv->hw;
 
-	switch (flow->action) {
+	flow = (struct dpaa2_dev_flow *)_flow;
+
+	switch (flow->action_type) {
 	case RTE_FLOW_ACTION_TYPE_QUEUE:
 	case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
 	case RTE_FLOW_ACTION_TYPE_PORT_ID:
 		if (priv->num_rx_tc > 1) {
 			/* Remove entry from QoS table first */
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-					&flow->qos_rule);
+			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+						    priv->token,
+						    &flow->qos_rule);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"Error in removing entry from QoS table(%d)", ret);
+				DPAA2_PMD_ERR("Remove FS QoS entry failed");
+				dpaa2_flow_qos_entry_log("Delete failed", flow,
+							 -1);
 				goto error;
 			}
 		}
@@ -4104,34 +3176,37 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
 		ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW, priv->token,
 					   flow->tc_id, &flow->fs_rule);
 		if (ret < 0) {
-			DPAA2_PMD_ERR(
-				"Error in removing entry from FS table(%d)", ret);
+			DPAA2_PMD_ERR("Remove entry from FS[%d] failed",
+				      flow->tc_id);
 			goto error;
 		}
 		break;
 	case RTE_FLOW_ACTION_TYPE_RSS:
 		if (priv->num_rx_tc > 1) {
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-					&flow->qos_rule);
+			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+						    priv->token,
+						    &flow->qos_rule);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"Error in entry addition in QoS table(%d)", ret);
+				DPAA2_PMD_ERR("Remove RSS QoS entry failed");
 				goto error;
 			}
 		}
 		break;
 	default:
-		DPAA2_PMD_ERR(
-		"Action type (%d) is not supported", flow->action);
+		DPAA2_PMD_ERR("Action(%d) not supported", flow->action_type);
 		ret = -ENOTSUP;
 		break;
 	}
 
 	LIST_REMOVE(flow, next);
-	rte_free((void *)(size_t)flow->qos_rule.key_iova);
-	rte_free((void *)(size_t)flow->qos_rule.mask_iova);
-	rte_free((void *)(size_t)flow->fs_rule.key_iova);
-	rte_free((void *)(size_t)flow->fs_rule.mask_iova);
+	if (flow->qos_key_addr)
+		rte_free(flow->qos_key_addr);
+	if (flow->qos_mask_addr)
+		rte_free(flow->qos_mask_addr);
+	if (flow->fs_key_addr)
+		rte_free(flow->fs_key_addr);
+	if (flow->fs_mask_addr)
+		rte_free(flow->fs_mask_addr);
 	/* Now free the flow */
 	rte_free(flow);
 
@@ -4156,12 +3231,12 @@ dpaa2_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct rte_flow *flow = LIST_FIRST(&priv->flows);
+	struct dpaa2_dev_flow *flow = LIST_FIRST(&priv->flows);
 
 	while (flow) {
-		struct rte_flow *next = LIST_NEXT(flow, next);
+		struct dpaa2_dev_flow *next = LIST_NEXT(flow, next);
 
-		dpaa2_flow_destroy(dev, flow, error);
+		dpaa2_flow_destroy(dev, (struct rte_flow *)flow, error);
 		flow = next;
 	}
 	return 0;
@@ -4169,10 +3244,10 @@ dpaa2_flow_flush(struct rte_eth_dev *dev,
 
 static int
 dpaa2_flow_query(struct rte_eth_dev *dev __rte_unused,
-		struct rte_flow *flow __rte_unused,
-		const struct rte_flow_action *actions __rte_unused,
-		void *data __rte_unused,
-		struct rte_flow_error *error __rte_unused)
+	struct rte_flow *_flow __rte_unused,
+	const struct rte_flow_action *actions __rte_unused,
+	void *data __rte_unused,
+	struct rte_flow_error *error __rte_unused)
 {
 	return 0;
 }
@@ -4189,11 +3264,11 @@ dpaa2_flow_query(struct rte_eth_dev *dev __rte_unused,
 void
 dpaa2_flow_clean(struct rte_eth_dev *dev)
 {
-	struct rte_flow *flow;
+	struct dpaa2_dev_flow *flow;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	while ((flow = LIST_FIRST(&priv->flows)))
-		dpaa2_flow_destroy(dev, flow, NULL);
+		dpaa2_flow_destroy(dev, (struct rte_flow *)flow, NULL);
 }
 
 const struct rte_flow_ops dpaa2_flow_ops = {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 25/43] net/dpaa2: dump Rx parser result
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (23 preceding siblings ...)
  2024-09-13  5:59 ` [v1 24/43] net/dpaa2: flow API refactor vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 26/43] net/dpaa2: enhancement of raw flow extract vanshika.shukla
                   ` (18 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

export DPAA2_PRINT_RX_PARSER_RESULT=1 is used to dump the
Rx parser result and the frame attribute flags generated by
the hardware parser and the soft parser.
The parser results are converted to the big-endian layout
described in the RM.
The areas set by the soft parser are dumped as well.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
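A minimal usage sketch (not part of the patch): the dump can also be
enabled from the application, assuming the variable is set before
rte_eal_init() probes the DPAA2 devices, since the driver only reads
it once in dpaa2_dev_init():

	/* Equivalent to `export DPAA2_PRINT_RX_PARSER_RESULT=1`. */
	#include <stdlib.h>
	#include <rte_eal.h>

	int main(int argc, char **argv)
	{
		setenv("DPAA2_PRINT_RX_PARSER_RESULT", "1", 1);
		if (rte_eal_init(argc, argv) < 0)
			return -1;
		/* ...usual port setup; each received frame now prints
		 * its parse result and FAF bits to stdout...
		 */
		return 0;
	}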
 drivers/net/dpaa2/dpaa2_ethdev.c     |   5 +
 drivers/net/dpaa2/dpaa2_ethdev.h     |  90 ++++++++++
 drivers/net/dpaa2/dpaa2_parse_dump.h | 248 +++++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_rxtx.c       |   7 +
 4 files changed, 350 insertions(+)
 create mode 100644 drivers/net/dpaa2/dpaa2_parse_dump.h

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 533effd72b..000d7da85c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -75,6 +75,8 @@ int dpaa2_timestamp_dynfield_offset = -1;
 /* Enable error queue */
 bool dpaa2_enable_err_queue;
 
+bool dpaa2_print_parser_result;
+
 #define MAX_NB_RX_DESC		11264
 int total_nb_rx_desc;
 
@@ -2727,6 +2729,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		DPAA2_PMD_INFO("Enable error queue");
 	}
 
+	if (getenv("DPAA2_PRINT_RX_PARSER_RESULT"))
+		dpaa2_print_parser_result = 1;
+
 	/* Allocate memory for hardware structure for queues */
 	ret = dpaa2_alloc_rx_tx_queues(eth_dev);
 	if (ret) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index ea1c1b5117..c864859b3f 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -19,6 +19,8 @@
 #include <mc/fsl_dpni.h>
 #include <mc/fsl_mc_sys.h>
 
+#include "base/dpaa2_hw_dpni_annot.h"
+
 #define DPAA2_MIN_RX_BUF_SIZE 512
 #define DPAA2_MAX_RX_PKT_LEN  10240 /*WRIOP support*/
 #define NET_DPAA2_PMD_DRIVER_NAME net_dpaa2
@@ -152,6 +154,88 @@ extern const struct rte_tm_ops dpaa2_tm_ops;
 
 extern bool dpaa2_enable_err_queue;
 
+extern bool dpaa2_print_parser_result;
+
+#define DPAA2_FAPR_SIZE \
+	(sizeof(struct dpaa2_annot_hdr) - \
+	offsetof(struct dpaa2_annot_hdr, word3))
+
+#define DPAA2_PR_NXTHDR_OFFSET 0
+
+#define DPAA2_FAFE_PSR_OFFSET 2
+#define DPAA2_FAFE_PSR_SIZE 2
+
+#define DPAA2_FAF_PSR_OFFSET 4
+#define DPAA2_FAF_PSR_SIZE 12
+
+#define DPAA2_FAF_TOTAL_SIZE \
+	(DPAA2_FAFE_PSR_SIZE + DPAA2_FAF_PSR_SIZE)
+
+/* Only the most common frame attribute flags (FAF) are listed here. */
+enum dpaa2_rx_faf_offset {
+	/* Set by SP start*/
+	FAFE_VXLAN_IN_VLAN_FRAM = 0,
+	FAFE_VXLAN_IN_IPV4_FRAM = 1,
+	FAFE_VXLAN_IN_IPV6_FRAM = 2,
+	FAFE_VXLAN_IN_UDP_FRAM = 3,
+	FAFE_VXLAN_IN_TCP_FRAM = 4,
+	/* Set by SP end*/
+
+	FAF_GTP_PRIMED_FRAM = 1 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_PTP_FRAM = 3 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_VXLAN_FRAM = 4 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ETH_FRAM = 10 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_LLC_SNAP_FRAM = 18 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_VLAN_FRAM = 21 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_PPPOE_PPP_FRAM = 25 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_MPLS_FRAM = 27 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ARP_FRAM = 30 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPV4_FRAM = 34 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPV6_FRAM = 42 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IP_FRAM = 48 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ICMP_FRAM = 57 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IGMP_FRAM = 58 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_GRE_FRAM = 65 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_UDP_FRAM = 70 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_TCP_FRAM = 72 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_FRAM = 77 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_ESP_FRAM = 78 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_AH_FRAM = 79 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_SCTP_FRAM = 81 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_DCCP_FRAM = 83 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_GTP_FRAM = 87 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ESP_FRAM = 89 + DPAA2_FAFE_PSR_SIZE * 8,
+};
+
+#define DPAA2_PR_ETH_OFF_OFFSET 19
+#define DPAA2_PR_TCI_OFF_OFFSET 21
+#define DPAA2_PR_LAST_ETYPE_OFFSET 23
+#define DPAA2_PR_L3_OFF_OFFSET 27
+#define DPAA2_PR_L4_OFF_OFFSET 30
+#define DPAA2_PR_L5_OFF_OFFSET 31
+#define DPAA2_PR_NXTHDR_OFF_OFFSET 34
+
+/* Set by SP for vxlan distribution start*/
+#define DPAA2_VXLAN_IN_TCI_OFFSET 16
+
+#define DPAA2_VXLAN_IN_DADDR0_OFFSET 20
+#define DPAA2_VXLAN_IN_DADDR1_OFFSET 22
+#define DPAA2_VXLAN_IN_DADDR2_OFFSET 24
+#define DPAA2_VXLAN_IN_DADDR3_OFFSET 25
+#define DPAA2_VXLAN_IN_DADDR4_OFFSET 26
+#define DPAA2_VXLAN_IN_DADDR5_OFFSET 28
+
+#define DPAA2_VXLAN_IN_SADDR0_OFFSET 29
+#define DPAA2_VXLAN_IN_SADDR1_OFFSET 32
+#define DPAA2_VXLAN_IN_SADDR2_OFFSET 33
+#define DPAA2_VXLAN_IN_SADDR3_OFFSET 35
+#define DPAA2_VXLAN_IN_SADDR4_OFFSET 41
+#define DPAA2_VXLAN_IN_SADDR5_OFFSET 42
+
+#define DPAA2_VXLAN_VNI_OFFSET 43
+#define DPAA2_VXLAN_IN_TYPE_OFFSET 46
+/* Set by SP for vxlan distribution end*/
+
 struct ipv4_sd_addr_extract_rule {
 	uint32_t ipv4_src;
 	uint32_t ipv4_dst;
@@ -197,7 +281,13 @@ enum ip_addr_extract_type {
 	IP_DST_SRC_EXTRACT
 };
 
+enum key_prot_type {
+	DPAA2_NET_PROT_KEY,
+	DPAA2_FAF_KEY
+};
+
 struct key_prot_field {
+	enum key_prot_type type;
 	enum net_prot prot;
 	uint32_t key_field;
 };
diff --git a/drivers/net/dpaa2/dpaa2_parse_dump.h b/drivers/net/dpaa2/dpaa2_parse_dump.h
new file mode 100644
index 0000000000..f1cdc003de
--- /dev/null
+++ b/drivers/net/dpaa2/dpaa2_parse_dump.h
@@ -0,0 +1,248 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ *   Copyright 2022 NXP
+ *
+ */
+
+#ifndef _DPAA2_PARSE_DUMP_H
+#define _DPAA2_PARSE_DUMP_H
+
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_pmd_dpaa2.h>
+
+#include <dpaa2_hw_pvt.h>
+#include "dpaa2_tm.h"
+
+#include <mc/fsl_dpni.h>
+#include <mc/fsl_mc_sys.h>
+
+#include "base/dpaa2_hw_dpni_annot.h"
+
+#define DPAA2_PR_PRINT printf
+
+struct dpaa2_faf_bit_info {
+	const char *name;
+	int position;
+};
+
+struct dpaa2_fapr_field_info {
+	const char *name;
+	uint16_t value;
+};
+
+struct dpaa2_fapr_array {
+	union {
+		uint64_t pr_64[DPAA2_FAPR_SIZE / 8];
+		uint8_t pr[DPAA2_FAPR_SIZE];
+	};
+};
+
+#define NEXT_HEADER_NAME "Next Header"
+#define ETH_OFF_NAME "ETH OFFSET"
+#define VLAN_TCI_OFF_NAME "VLAN TCI OFFSET"
+#define LAST_ENTRY_OFF_NAME "LAST ETYPE Offset"
+#define L3_OFF_NAME "L3 Offset"
+#define L4_OFF_NAME "L4 Offset"
+#define L5_OFF_NAME "L5 Offset"
+#define NEXT_HEADER_OFF_NAME "Next Header Offset"
+
+static const
+struct dpaa2_fapr_field_info support_dump_fields[] = {
+	{
+		.name = NEXT_HEADER_NAME,
+	},
+	{
+		.name = ETH_OFF_NAME,
+	},
+	{
+		.name = VLAN_TCI_OFF_NAME,
+	},
+	{
+		.name = LAST_ENTRY_OFF_NAME,
+	},
+	{
+		.name = L3_OFF_NAME,
+	},
+	{
+		.name = L4_OFF_NAME,
+	},
+	{
+		.name = L5_OFF_NAME,
+	},
+	{
+		.name = NEXT_HEADER_OFF_NAME,
+	}
+};
+
+static inline void
+dpaa2_print_faf(struct dpaa2_fapr_array *fapr)
+{
+	const int faf_bit_len = DPAA2_FAF_TOTAL_SIZE * 8;
+	struct dpaa2_faf_bit_info faf_bits[faf_bit_len];
+	int i, byte_pos, bit_pos, vxlan = 0, vxlan_vlan = 0;
+	struct rte_ether_hdr vxlan_in_eth;
+	uint16_t vxlan_vlan_tci;
+
+	for (i = 0; i < faf_bit_len; i++) {
+		faf_bits[i].position = i;
+		if (i == FAFE_VXLAN_IN_VLAN_FRAM)
+			faf_bits[i].name = "VXLAN VLAN Present";
+		else if (i == FAFE_VXLAN_IN_IPV4_FRAM)
+			faf_bits[i].name = "VXLAN IPV4 Present";
+		else if (i == FAFE_VXLAN_IN_IPV6_FRAM)
+			faf_bits[i].name = "VXLAN IPV6 Present";
+		else if (i == FAFE_VXLAN_IN_UDP_FRAM)
+			faf_bits[i].name = "VXLAN UDP Present";
+		else if (i == FAFE_VXLAN_IN_TCP_FRAM)
+			faf_bits[i].name = "VXLAN TCP Present";
+		else if (i == FAF_VXLAN_FRAM)
+			faf_bits[i].name = "VXLAN Present";
+		else if (i == FAF_ETH_FRAM)
+			faf_bits[i].name = "Ethernet MAC Present";
+		else if (i == FAF_VLAN_FRAM)
+			faf_bits[i].name = "VLAN 1 Present";
+		else if (i == FAF_IPV4_FRAM)
+			faf_bits[i].name = "IPv4 1 Present";
+		else if (i == FAF_IPV6_FRAM)
+			faf_bits[i].name = "IPv6 1 Present";
+		else if (i == FAF_UDP_FRAM)
+			faf_bits[i].name = "UDP Present";
+		else if (i == FAF_TCP_FRAM)
+			faf_bits[i].name = "TCP Present";
+		else
+			faf_bits[i].name = "Check RM for this unusual frame";
+	}
+
+	DPAA2_PR_PRINT("Frame Annotation Flags:\r\n");
+	for (i = 0; i < faf_bit_len; i++) {
+		byte_pos = i / 8 + DPAA2_FAFE_PSR_OFFSET;
+		bit_pos = i % 8;
+		if (fapr->pr[byte_pos] & (1 << (7 - bit_pos))) {
+			DPAA2_PR_PRINT("FAF bit %d : %s\r\n",
+				faf_bits[i].position, faf_bits[i].name);
+			if (i == FAF_VXLAN_FRAM)
+				vxlan = 1;
+		}
+	}
+
+	if (vxlan) {
+		vxlan_in_eth.dst_addr.addr_bytes[0] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR0_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[1] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR1_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[2] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR2_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[3] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR3_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[4] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR4_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[5] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR5_OFFSET];
+
+		vxlan_in_eth.src_addr.addr_bytes[0] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR0_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[1] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR1_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[2] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR2_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[3] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR3_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[4] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR4_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[5] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR5_OFFSET];
+
+		vxlan_in_eth.ether_type =
+			fapr->pr[DPAA2_VXLAN_IN_TYPE_OFFSET];
+		vxlan_in_eth.ether_type =
+			vxlan_in_eth.ether_type << 8;
+		vxlan_in_eth.ether_type |=
+			fapr->pr[DPAA2_VXLAN_IN_TYPE_OFFSET + 1];
+
+		if (vxlan_in_eth.ether_type == RTE_ETHER_TYPE_VLAN)
+			vxlan_vlan = 1;
+		DPAA2_PR_PRINT("VXLAN inner eth:\r\n");
+		DPAA2_PR_PRINT("dst addr: ");
+		for (i = 0; i < RTE_ETHER_ADDR_LEN; i++) {
+			if (i != 0)
+				DPAA2_PR_PRINT(":");
+			DPAA2_PR_PRINT("%02x",
+				vxlan_in_eth.dst_addr.addr_bytes[i]);
+		}
+		DPAA2_PR_PRINT("\r\n");
+		DPAA2_PR_PRINT("src addr: ");
+		for (i = 0; i < RTE_ETHER_ADDR_LEN; i++) {
+			if (i != 0)
+				DPAA2_PR_PRINT(":");
+			DPAA2_PR_PRINT("%02x",
+				vxlan_in_eth.src_addr.addr_bytes[i]);
+		}
+		DPAA2_PR_PRINT("\r\n");
+		DPAA2_PR_PRINT("type: 0x%04x\r\n",
+			vxlan_in_eth.ether_type);
+		if (vxlan_vlan) {
+			vxlan_vlan_tci = fapr->pr[DPAA2_VXLAN_IN_TCI_OFFSET];
+			vxlan_vlan_tci = vxlan_vlan_tci << 8;
+			vxlan_vlan_tci |=
+				fapr->pr[DPAA2_VXLAN_IN_TCI_OFFSET + 1];
+
+			DPAA2_PR_PRINT("vlan tci: 0x%04x\r\n",
+				vxlan_vlan_tci);
+		}
+	}
+}
+
+static inline void
+dpaa2_print_parse_result(struct dpaa2_annot_hdr *annotation)
+{
+	struct dpaa2_fapr_array fapr;
+	struct dpaa2_fapr_field_info
+		fapr_fields[sizeof(support_dump_fields) /
+		sizeof(struct dpaa2_fapr_field_info)];
+	uint64_t len, i;
+
+	memcpy(&fapr, &annotation->word3, DPAA2_FAPR_SIZE);
+	for (i = 0; i < (DPAA2_FAPR_SIZE / 8); i++)
+		fapr.pr_64[i] = rte_cpu_to_be_64(fapr.pr_64[i]);
+
+	memcpy(fapr_fields, support_dump_fields,
+		sizeof(support_dump_fields));
+
+	for (i = 0;
+		i < sizeof(fapr_fields) /
+		sizeof(struct dpaa2_fapr_field_info);
+		i++) {
+		if (!strcmp(fapr_fields[i].name, NEXT_HEADER_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_NXTHDR_OFFSET];
+			fapr_fields[i].value = fapr_fields[i].value << 8;
+			fapr_fields[i].value |=
+				fapr.pr[DPAA2_PR_NXTHDR_OFFSET + 1];
+		} else if (!strcmp(fapr_fields[i].name, ETH_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_ETH_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, VLAN_TCI_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_TCI_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, LAST_ENTRY_OFF_NAME)) {
+			fapr_fields[i].value =
+				fapr.pr[DPAA2_PR_LAST_ETYPE_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L3_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L3_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L4_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L4_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L5_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L5_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, NEXT_HEADER_OFF_NAME)) {
+			fapr_fields[i].value =
+				fapr.pr[DPAA2_PR_NXTHDR_OFF_OFFSET];
+		}
+	}
+
+	len = sizeof(fapr_fields) / sizeof(struct dpaa2_fapr_field_info);
+	DPAA2_PR_PRINT("Parse Result:\r\n");
+	for (i = 0; i < len; i++) {
+		DPAA2_PR_PRINT("%21s : 0x%02x\r\n",
+			fapr_fields[i].name, fapr_fields[i].value);
+	}
+	dpaa2_print_faf(&fapr);
+}
+
+#endif
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 23f7c4132d..4bb785aa49 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -25,6 +25,7 @@
 #include "dpaa2_pmd_logs.h"
 #include "dpaa2_ethdev.h"
 #include "base/dpaa2_hw_dpni_annot.h"
+#include "dpaa2_parse_dump.h"
 
 static inline uint32_t __rte_hot
 dpaa2_dev_rx_parse_slow(struct rte_mbuf *mbuf,
@@ -57,6 +58,9 @@ dpaa2_dev_rx_parse_new(struct rte_mbuf *m, const struct qbman_fd *fd,
 	struct dpaa2_annot_hdr *annotation =
 			(struct dpaa2_annot_hdr *)hw_annot_addr;
 
+	if (unlikely(dpaa2_print_parser_result))
+		dpaa2_print_parse_result(annotation);
+
 	m->packet_type = RTE_PTYPE_UNKNOWN;
 	switch (frc) {
 	case DPAA2_PKT_TYPE_ETHER:
@@ -252,6 +256,9 @@ dpaa2_dev_rx_parse(struct rte_mbuf *mbuf, void *hw_annot_addr)
 	else
 		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
+	if (unlikely(dpaa2_print_parser_result))
+		dpaa2_print_parse_result(annotation);
+
 	if (dpaa2_enable_ts[mbuf->port]) {
 		*dpaa2_timestamp_dynfield(mbuf) = annotation->word2;
 		mbuf->ol_flags |= dpaa2_timestamp_rx_dynflag;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 26/43] net/dpaa2: enhancement of raw flow extract
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (24 preceding siblings ...)
  2024-09-13  5:59 ` [v1 25/43] net/dpaa2: dump Rx parser result vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 27/43] net/dpaa2: frame attribute flags parser vanshika.shukla
                   ` (17 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support combining RAW extracts with header extracts.
A RAW extract can start from any absolute offset.

TBD: relative offset support.
To support an offset relative to a previous L3 protocol item,
the extracts should be expanded to identify whether the frame
is VLAN or non-VLAN.

To support an offset relative to a previous L4 protocol item,
the extracts should be expanded to identify whether the frame
is VLAN/IPv4, VLAN/IPv6, non-VLAN/IPv4 or non-VLAN/IPv6.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
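A minimal sketch (not part of the patch) of a RAW pattern this change
enables; the offset, pattern bytes and queue index are illustrative
only. Spec and mask are both required, their lengths must match, and
relative/search are still rejected:

	#include <rte_flow.h>

	static const uint8_t raw_bytes[2] = { 0x81, 0x00 };
	static const uint8_t raw_masks[2] = { 0xff, 0xff };

	struct rte_flow_item_raw raw_spec = {
		.relative = 0,	/* absolute offset */
		.search = 0,
		.offset = 14,	/* any absolute offset now works */
		.length = sizeof(raw_bytes),
		.pattern = raw_bytes,
	};
	struct rte_flow_item_raw raw_mask = {
		.length = sizeof(raw_masks),	/* must equal spec length */
		.pattern = raw_masks,
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_RAW,
		  .spec = &raw_spec, .mask = &raw_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	/* rte_flow_create(port_id, &attr, pattern, actions, &error); */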
 drivers/net/dpaa2/dpaa2_ethdev.h |  10 +
 drivers/net/dpaa2/dpaa2_flow.c   | 385 ++++++++++++++++++++++++++-----
 2 files changed, 340 insertions(+), 55 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index c864859b3f..8f548467a4 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -292,6 +292,11 @@ struct key_prot_field {
 	uint32_t key_field;
 };
 
+struct dpaa2_raw_region {
+	uint8_t raw_start;
+	uint8_t raw_size;
+};
+
 struct dpaa2_key_profile {
 	uint8_t num;
 	uint8_t key_offset[DPKG_MAX_NUM_OF_EXTRACTS];
@@ -301,6 +306,10 @@ struct dpaa2_key_profile {
 	uint8_t ip_addr_extract_pos;
 	uint8_t ip_addr_extract_off;
 
+	uint8_t raw_extract_pos;
+	uint8_t raw_extract_off;
+	uint8_t raw_extract_num;
+
 	uint8_t l4_src_port_present;
 	uint8_t l4_src_port_pos;
 	uint8_t l4_src_port_offset;
@@ -309,6 +318,7 @@ struct dpaa2_key_profile {
 	uint8_t l4_dst_port_offset;
 	struct key_prot_field prot_field[DPKG_MAX_NUM_OF_EXTRACTS];
 	uint16_t key_max_size;
+	struct dpaa2_raw_region raw_region;
 };
 
 struct dpaa2_key_extract {
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 0522fdb026..fe3c9f6d7d 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -772,42 +772,272 @@ dpaa2_flow_extract_add_hdr(enum net_prot prot,
 }
 
 static int
-dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
-	int size)
+dpaa2_flow_extract_new_raw(struct dpaa2_dev_priv *priv,
+	int offset, int size,
+	enum dpaa2_flow_dist_type dist_type, int tc_id)
 {
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_profile *key_info = &key_extract->key_profile;
-	int last_extract_size, index;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpaa2_key_profile *key_profile;
+	int last_extract_size, index, pos, item_size;
+	uint8_t num_extracts;
+	uint32_t field;
 
-	if (dpkg->num_extracts != 0 && dpkg->extracts[0].type !=
-	    DPKG_EXTRACT_FROM_DATA) {
-		DPAA2_PMD_WARN("RAW extract cannot be combined with others");
-		return -1;
-	}
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	key_profile = &key_extract->key_profile;
+
+	key_profile->raw_region.raw_start = 0;
+	key_profile->raw_region.raw_size = 0;
 
 	last_extract_size = (size % DPAA2_FLOW_MAX_KEY_SIZE);
-	dpkg->num_extracts = (size / DPAA2_FLOW_MAX_KEY_SIZE);
+	num_extracts = (size / DPAA2_FLOW_MAX_KEY_SIZE);
 	if (last_extract_size)
-		dpkg->num_extracts++;
+		num_extracts++;
 	else
 		last_extract_size = DPAA2_FLOW_MAX_KEY_SIZE;
 
-	for (index = 0; index < dpkg->num_extracts; index++) {
-		dpkg->extracts[index].type = DPKG_EXTRACT_FROM_DATA;
-		if (index == dpkg->num_extracts - 1)
-			dpkg->extracts[index].extract.from_data.size =
-				last_extract_size;
+	for (index = 0; index < num_extracts; index++) {
+		if (index == num_extracts - 1)
+			item_size = last_extract_size;
 		else
-			dpkg->extracts[index].extract.from_data.size =
-				DPAA2_FLOW_MAX_KEY_SIZE;
-		dpkg->extracts[index].extract.from_data.offset =
-			DPAA2_FLOW_MAX_KEY_SIZE * index;
+			item_size = DPAA2_FLOW_MAX_KEY_SIZE;
+		field = offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+		field |= item_size;
+
+		pos = dpaa2_flow_key_profile_advance(NET_PROT_PAYLOAD,
+				field, item_size, priv, dist_type,
+				tc_id, NULL);
+		if (pos < 0)
+			return pos;
+
+		dpkg->extracts[pos].type = DPKG_EXTRACT_FROM_DATA;
+		dpkg->extracts[pos].extract.from_data.size = item_size;
+		dpkg->extracts[pos].extract.from_data.offset = offset;
+
+		if (index == 0) {
+			key_profile->raw_extract_pos = pos;
+			key_profile->raw_extract_off =
+				key_profile->key_offset[pos];
+			key_profile->raw_region.raw_start = offset;
+		}
+		key_profile->raw_extract_num++;
+		key_profile->raw_region.raw_size +=
+			key_profile->key_size[pos];
+
+		offset += item_size;
+		dpkg->num_extracts++;
 	}
 
-	key_info->key_max_size = size;
 	return 0;
 }
 
+static int
+dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
+	int offset, int size, enum dpaa2_flow_dist_type dist_type,
+	int tc_id, int *recfg)
+{
+	struct dpaa2_key_profile *key_profile;
+	struct dpaa2_raw_region *raw_region;
+	int end = offset + size, ret = 0, extract_extended, sz_extend;
+	int start_cmp, end_cmp, new_size, index, pos, end_pos;
+	int last_extract_size, item_size, num_extracts, bk_num = 0;
+	struct dpkg_extract extract_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint8_t key_offset_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint8_t key_size_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	struct key_prot_field prot_field_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	struct dpaa2_raw_region raw_hole;
+	struct dpkg_profile_cfg *dpkg;
+	enum net_prot prot;
+	uint32_t field;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+		dpkg = &priv->extract.qos_key_extract.dpkg;
+	} else {
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+		dpkg = &priv->extract.tc_key_extract[tc_id].dpkg;
+	}
+
+	raw_region = &key_profile->raw_region;
+	if (!raw_region->raw_size) {
+		/* New RAW region*/
+		ret = dpaa2_flow_extract_new_raw(priv, offset, size,
+			dist_type, tc_id);
+		if (!ret && recfg)
+			(*recfg) |= dist_type;
+
+		return ret;
+	}
+	start_cmp = raw_region->raw_start;
+	end_cmp = raw_region->raw_start + raw_region->raw_size;
+
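+	/* Requested window already lies within the existing raw region. */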
+	if (offset >= start_cmp && end <= end_cmp)
+		return 0;
+
+	sz_extend = 0;
+	new_size = raw_region->raw_size;
+	if (offset < start_cmp) {
+		sz_extend += start_cmp - offset;
+		new_size += (start_cmp - offset);
+	}
+	if (end > end_cmp) {
+		sz_extend += end - end_cmp;
+		new_size += (end - end_cmp);
+	}
+
+	last_extract_size = (new_size % DPAA2_FLOW_MAX_KEY_SIZE);
+	num_extracts = (new_size / DPAA2_FLOW_MAX_KEY_SIZE);
+	if (last_extract_size)
+		num_extracts++;
+	else
+		last_extract_size = DPAA2_FLOW_MAX_KEY_SIZE;
+
+	if ((key_profile->num + num_extracts -
+		key_profile->raw_extract_num) >=
+		DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("%s Failed to expand raw extracts",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (offset < start_cmp) {
+		raw_hole.raw_start = key_profile->raw_extract_off;
+		raw_hole.raw_size = start_cmp - offset;
+		raw_region->raw_start = offset;
+		raw_region->raw_size += start_cmp - offset;
+
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size);
+			if (ret)
+				return ret;
+		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE) {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size, tc_id);
+			if (ret)
+				return ret;
+		}
+	}
+
+	if (end > end_cmp) {
+		raw_hole.raw_start =
+			key_profile->raw_extract_off +
+			raw_region->raw_size;
+		raw_hole.raw_size = end - end_cmp;
+		raw_region->raw_size += end - end_cmp;
+
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size);
+			if (ret)
+				return ret;
+		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE) {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size, tc_id);
+			if (ret)
+				return ret;
+		}
+	}
+
+	end_pos = key_profile->raw_extract_pos +
+		key_profile->raw_extract_num;
+	if (key_profile->num > end_pos) {
+		bk_num = key_profile->num - end_pos;
+		memcpy(extract_bk, &dpkg->extracts[end_pos],
+			bk_num * sizeof(struct dpkg_extract));
+		memcpy(key_offset_bk, &key_profile->key_offset[end_pos],
+			bk_num * sizeof(uint8_t));
+		memcpy(key_size_bk, &key_profile->key_size[end_pos],
+			bk_num * sizeof(uint8_t));
+		memcpy(prot_field_bk, &key_profile->prot_field[end_pos],
+			bk_num * sizeof(struct key_prot_field));
+
+		for (index = 0; index < bk_num; index++) {
+			key_offset_bk[index] += sz_extend;
+			prot = prot_field_bk[index].prot;
+			field = prot_field_bk[index].key_field;
+			if (dpaa2_flow_l4_src_port_extract(prot,
+				field)) {
+				key_profile->l4_src_port_present = 1;
+				key_profile->l4_src_port_pos = end_pos + index;
+				key_profile->l4_src_port_offset =
+					key_offset_bk[index];
+			} else if (dpaa2_flow_l4_dst_port_extract(prot,
+				field)) {
+				key_profile->l4_dst_port_present = 1;
+				key_profile->l4_dst_port_pos = end_pos + index;
+				key_profile->l4_dst_port_offset =
+					key_offset_bk[index];
+			}
+		}
+	}
+
+	pos = key_profile->raw_extract_pos;
+
+	for (index = 0; index < num_extracts; index++) {
+		if (index == num_extracts - 1)
+			item_size = last_extract_size;
+		else
+			item_size = DPAA2_FLOW_MAX_KEY_SIZE;
+		field = offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+		field |= item_size;
+
+		if (pos > 0) {
+			key_profile->key_offset[pos] =
+				key_profile->key_offset[pos - 1] +
+				key_profile->key_size[pos - 1];
+		} else {
+			key_profile->key_offset[pos] = 0;
+		}
+		key_profile->key_size[pos] = item_size;
+		key_profile->prot_field[pos].prot = NET_PROT_PAYLOAD;
+		key_profile->prot_field[pos].key_field = field;
+
+		dpkg->extracts[pos].type = DPKG_EXTRACT_FROM_DATA;
+		dpkg->extracts[pos].extract.from_data.size = item_size;
+		dpkg->extracts[pos].extract.from_data.offset = offset;
+		offset += item_size;
+		pos++;
+	}
+
+	if (bk_num) {
+		memcpy(&dpkg->extracts[pos], extract_bk,
+			bk_num * sizeof(struct dpkg_extract));
+		memcpy(&key_profile->key_offset[end_pos],
+			key_offset_bk, bk_num * sizeof(uint8_t));
+		memcpy(&key_profile->key_size[end_pos],
+			key_size_bk, bk_num * sizeof(uint8_t));
+		memcpy(&key_profile->prot_field[end_pos],
+			prot_field_bk, bk_num * sizeof(struct key_prot_field));
+	}
+
+	extract_extended = num_extracts - key_profile->raw_extract_num;
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		key_profile->ip_addr_extract_pos += extract_extended;
+		key_profile->ip_addr_extract_off += sz_extend;
+	}
+	key_profile->raw_extract_num = num_extracts;
+	key_profile->num += extract_extended;
+	key_profile->key_max_size += sz_extend;
+
+	dpkg->num_extracts += extract_extended;
+	if (!ret && recfg)
+		(*recfg) |= dist_type;
+
+	return ret;
+}
+
 static inline int
 dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 	enum net_prot prot, uint32_t key_field)
@@ -847,7 +1077,6 @@ dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
 	int i;
 
 	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
-
 	if (i >= 0)
 		return key_profile->key_offset[i];
 	else
@@ -996,13 +1225,37 @@ dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 }
 
 static inline int
-dpaa2_flow_rule_data_set_raw(struct dpni_rule_cfg *rule,
-			     const void *key, const void *mask, int size)
+dpaa2_flow_raw_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	uint32_t extract_offset, int size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int offset = 0;
+	int extract_size = size > DPAA2_FLOW_MAX_KEY_SIZE ?
+		DPAA2_FLOW_MAX_KEY_SIZE : size;
+	int offset, field;
+
+	field = extract_offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+	field |= extract_size;
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			NET_PROT_PAYLOAD, field);
+	if (offset < 0) {
+		DPAA2_PMD_ERR("offset(%d)/size(%d) raw extract failed",
+			extract_offset, size);
+		return -EINVAL;
+	}
 
-	memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
-	memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, size);
+		memcpy((flow->qos_mask_addr + offset), mask, size);
+		flow->qos_rule_size = offset + size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, size);
+		memcpy((flow->fs_mask_addr + offset), mask, size);
+		flow->fs_rule_size = offset + size;
+	}
 
 	return 0;
 }
@@ -2237,22 +2490,36 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_raw *spec = pattern->spec;
 	const struct rte_flow_item_raw *mask = pattern->mask;
-	int prev_key_size =
-		priv->extract.qos_key_extract.key_profile.key_max_size;
 	int local_cfg = 0, ret;
 	uint32_t group;
+	struct dpaa2_key_extract *qos_key_extract;
+	struct dpaa2_key_extract *tc_key_extract;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
 		DPAA2_PMD_ERR("spec or mask not present.");
 		return -EINVAL;
 	}
-	/* Only supports non-relative with offset 0 */
-	if (spec->relative || spec->offset != 0 ||
-	    spec->search || spec->limit) {
-		DPAA2_PMD_ERR("relative and non zero offset not supported.");
+
+	if (spec->relative) {
+		/* TBD: relative offset support.
+		 * To support an offset relative to a previous L3 protocol
+		 * item, extracts should be expanded to identify whether
+		 * the frame is VLAN or non-VLAN.
+		 *
+		 * To support an offset relative to a previous L4 protocol
+		 * item, extracts should be expanded to identify whether
+		 * the frame is VLAN/IPv4, VLAN/IPv6, non-VLAN/IPv4 or
+		 * non-VLAN/IPv6.
+		 */
+		DPAA2_PMD_ERR("relative not supported.");
+		return -EINVAL;
+	}
+
+	if (spec->search) {
+		DPAA2_PMD_ERR("search not supported.");
 		return -EINVAL;
 	}
+
 	/* Spec len and mask len should be same */
 	if (spec->length != mask->length) {
 		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
@@ -2264,36 +2531,44 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (prev_key_size <= spec->length) {
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
-						 spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FLOW_QOS_TYPE;
+	qos_key_extract = &priv->extract.qos_key_extract;
+	tc_key_extract = &priv->extract.tc_key_extract[group];
 
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.tc_key_extract[group],
-					spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FLOW_FS_TYPE;
+	ret = dpaa2_flow_extract_add_raw(priv,
+			spec->offset, spec->length,
+			DPAA2_FLOW_QOS_TYPE, 0, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS Extract RAW add failed.");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
-					   mask->pattern, spec->length);
+	ret = dpaa2_flow_extract_add_raw(priv,
+			spec->offset, spec->length,
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("FS[%d] Extract RAW add failed.",
+			group);
+		return -EINVAL;
+	}
+
+	ret = dpaa2_flow_raw_rule_data_set(flow,
+			&qos_key_extract->key_profile,
+			spec->offset, spec->length,
+			spec->pattern, mask->pattern,
+			DPAA2_FLOW_QOS_TYPE);
 	if (ret) {
 		DPAA2_PMD_ERR("QoS RAW rule data set failed");
-		return -1;
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
-					   mask->pattern, spec->length);
+	ret = dpaa2_flow_raw_rule_data_set(flow,
+			&tc_key_extract->key_profile,
+			spec->offset, spec->length,
+			spec->pattern, mask->pattern,
+			DPAA2_FLOW_FS_TYPE);
 	if (ret) {
 		DPAA2_PMD_ERR("FS RAW rule data set failed");
-		return -1;
+		return -EINVAL;
 	}
 
 	(*device_configured) |= local_cfg;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 27/43] net/dpaa2: frame attribute flags parser
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (25 preceding siblings ...)
  2024-09-13  5:59 ` [v1 26/43] net/dpaa2: enhancement of raw flow extract vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 28/43] net/dpaa2: add VXLAN distribution support vanshika.shukla
                   ` (16 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

FAF parser extracts are used to identify the protocol type,
instead of extracts of the previous protocol's type field.
FAF extraction starts from offset 2 to include the user-defined
flags (FAFE), which are used for soft protocol distribution.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
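A small sketch (not part of the patch) making the FAF bit arithmetic
concrete; it mirrors the faf_byte/faf_bit_in_byte logic below, with
FAF bits numbered MSB-first starting at byte DPAA2_FAFE_PSR_OFFSET (2)
of the parse result:

	#include <stdint.h>

	#define DPAA2_FAFE_PSR_OFFSET 2

	static int dpaa2_faf_bit_set(const uint8_t *parse_result,
				     int faf_bit_off)
	{
		int byte = faf_bit_off / 8;
		int bit = 7 - (faf_bit_off % 8); /* MSB-first per byte */

		return !!(parse_result[DPAA2_FAFE_PSR_OFFSET + byte] &
			(1 << bit));
	}

For example, FAF_IPV4_FRAM (= 34 + 16 = 50) lands in parse-result
byte 2 + 50/8 = 8 under mask 1 << (7 - 50 % 8) = 0x20.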
 drivers/net/dpaa2/dpaa2_flow.c | 475 +++++++++++++++++++--------------
 1 file changed, 273 insertions(+), 202 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index fe3c9f6d7d..d7b53a1916 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -22,13 +22,6 @@
 #include <dpaa2_ethdev.h>
 #include <dpaa2_pmd_logs.h>
 
-/* Workaround to discriminate the UDP/TCP/SCTP
- * with next protocol of l3.
- * MC/WRIOP are not able to identify
- * the l4 protocol with l4 ports.
- */
-static int mc_l4_port_identification;
-
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
@@ -260,6 +253,10 @@ dpaa2_flow_qos_extracts_log(const struct dpaa2_dev_priv *priv)
 			sprintf(string, "raw offset/len: %d/%d",
 				extract->extract.from_data.offset,
 				extract->extract.from_data.size);
+		} else if (type == DPKG_EXTRACT_FROM_PARSE) {
+			sprintf(string, "parse offset/len: %d/%d",
+				extract->extract.from_parse.offset,
+				extract->extract.from_parse.size);
 		}
 		DPAA2_FLOW_DUMP("%s", string);
 		if ((idx + 1) < dpkg->num_extracts)
@@ -298,6 +295,10 @@ dpaa2_flow_fs_extracts_log(const struct dpaa2_dev_priv *priv,
 			sprintf(string, "raw offset/len: %d/%d",
 				extract->extract.from_data.offset,
 				extract->extract.from_data.size);
+		} else if (type == DPKG_EXTRACT_FROM_PARSE) {
+			sprintf(string, "parse offset/len: %d/%d",
+				extract->extract.from_parse.offset,
+				extract->extract.from_parse.size);
 		}
 		DPAA2_FLOW_DUMP("%s", string);
 		if ((idx + 1) < dpkg->num_extracts)
@@ -631,6 +632,66 @@ dpaa2_flow_fs_rule_insert_hole(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_faf_advance(struct dpaa2_dev_priv *priv,
+	int faf_byte, enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off++;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, 1);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, 1, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = 1;
+	key_profile->prot_field[pos].type = DPAA2_FAF_KEY;
+	key_profile->prot_field[pos].key_field = faf_byte;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	key_profile->key_max_size++;
+
+	return pos;
+}
+
 /* Move IPv4/IPv6 addresses to fill new extract previous IP address.
  * Current MC/WRIOP only support generic IP extract but IP address
  * is not fixed, so we have to put them at end of extracts, otherwise,
@@ -692,6 +753,7 @@ dpaa2_flow_key_profile_advance(enum net_prot prot,
 	}
 
 	key_profile->key_size[pos] = field_size;
+	key_profile->prot_field[pos].type = DPAA2_NET_PROT_KEY;
 	key_profile->prot_field[pos].prot = prot;
 	key_profile->prot_field[pos].key_field = field;
 	key_profile->num++;
@@ -715,6 +777,55 @@ dpaa2_flow_key_profile_advance(enum net_prot prot,
 	return pos;
 }
 
+static int
+dpaa2_flow_faf_add_hdr(int faf_byte,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i, offset;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	pos = dpaa2_flow_faf_advance(priv,
+			faf_byte, dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last pos, must have IP address extract.*/
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
+	}
+
+	offset = DPAA2_FAFE_PSR_OFFSET + faf_byte;
+
+	extracts[pos].type = DPKG_EXTRACT_FROM_PARSE;
+	extracts[pos].extract.from_parse.offset = offset;
+	extracts[pos].extract.from_parse.size = 1;
+
+	dpkg->num_extracts++;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_extract_add_hdr(enum net_prot prot,
 	uint32_t field, uint8_t field_size,
@@ -1001,6 +1112,7 @@ dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
 			key_profile->key_offset[pos] = 0;
 		}
 		key_profile->key_size[pos] = item_size;
+		key_profile->prot_field[pos].type = DPAA2_NET_PROT_KEY;
 		key_profile->prot_field[pos].prot = NET_PROT_PAYLOAD;
 		key_profile->prot_field[pos].key_field = field;
 
@@ -1040,7 +1152,7 @@ dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
 
 static inline int
 dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
-	enum net_prot prot, uint32_t key_field)
+	enum key_prot_type type, enum net_prot prot, uint32_t key_field)
 {
 	int pos;
 	struct key_prot_field *prot_field;
@@ -1053,16 +1165,23 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 
 	prot_field = key_profile->prot_field;
 	for (pos = 0; pos < key_profile->num; pos++) {
-		if (prot_field[pos].prot == prot &&
-			prot_field[pos].key_field == key_field) {
+		if (type == DPAA2_NET_PROT_KEY &&
+			prot_field[pos].prot == prot &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
+			return pos;
+		else if (type == DPAA2_FAF_KEY &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
 			return pos;
-		}
 	}
 
-	if (dpaa2_flow_l4_src_port_extract(prot, key_field)) {
+	if (type == DPAA2_NET_PROT_KEY &&
+		dpaa2_flow_l4_src_port_extract(prot, key_field)) {
 		if (key_profile->l4_src_port_present)
 			return key_profile->l4_src_port_pos;
-	} else if (dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
+	} else if (type == DPAA2_NET_PROT_KEY &&
+		dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
 		if (key_profile->l4_dst_port_present)
 			return key_profile->l4_dst_port_pos;
 	}
@@ -1072,80 +1191,53 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 
 static inline int
 dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
-	enum net_prot prot, uint32_t key_field)
+	enum key_prot_type type, enum net_prot prot, uint32_t key_field)
 {
 	int i;
 
-	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
+	i = dpaa2_flow_extract_search(key_profile, type, prot, key_field);
 	if (i >= 0)
 		return key_profile->key_offset[i];
 	else
 		return i;
 }
 
-struct prev_proto_field_id {
-	enum net_prot prot;
-	union {
-		rte_be16_t eth_type;
-		uint8_t ip_proto;
-	};
-};
-
 static int
-dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
+dpaa2_flow_faf_add_rule(struct dpaa2_dev_priv *priv,
 	struct dpaa2_dev_flow *flow,
-	const struct prev_proto_field_id *prev_proto,
+	enum dpaa2_rx_faf_offset faf_bit_off,
 	int group,
 	enum dpaa2_flow_dist_type dist_type)
 {
 	int offset;
 	uint8_t *key_addr;
 	uint8_t *mask_addr;
-	uint32_t field = 0;
-	rte_be16_t eth_type;
-	uint8_t ip_proto;
 	struct dpaa2_key_extract *key_extract;
 	struct dpaa2_key_profile *key_profile;
+	uint8_t faf_byte = faf_bit_off / 8;
+	uint8_t faf_bit_in_byte = faf_bit_off % 8;
 
-	if (prev_proto->prot == NET_PROT_ETH) {
-		field = NH_FLD_ETH_TYPE;
-	} else if (prev_proto->prot == NET_PROT_IP) {
-		field = NH_FLD_IP_PROTO;
-	} else {
-		DPAA2_PMD_ERR("Prev proto(%d) not support!",
-			prev_proto->prot);
-		return -EINVAL;
-	}
+	faf_bit_in_byte = 7 - faf_bit_in_byte;
 
 	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
 		key_extract = &priv->extract.qos_key_extract;
 		key_profile = &key_extract->key_profile;
 
 		offset = dpaa2_flow_extract_key_offset(key_profile,
-				prev_proto->prot, field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (offset < 0) {
 			DPAA2_PMD_ERR("%s QoS key extract failed", __func__);
 			return -EINVAL;
 		}
 		key_addr = flow->qos_key_addr + offset;
 		mask_addr = flow->qos_mask_addr + offset;
-		if (prev_proto->prot == NET_PROT_ETH) {
-			eth_type = prev_proto->eth_type;
-			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
-			eth_type = 0xffff;
-			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
-			flow->qos_rule_size += sizeof(rte_be16_t);
-		} else if (prev_proto->prot == NET_PROT_IP) {
-			ip_proto = prev_proto->ip_proto;
-			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
-			ip_proto = 0xff;
-			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
-			flow->qos_rule_size += sizeof(uint8_t);
-		} else {
-			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
-				prev_proto->prot);
-			return -EINVAL;
-		}
+
+		if (!(*key_addr) &&
+			key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size++;
+
+		*key_addr |= (1 << faf_bit_in_byte);
+		*mask_addr |= (1 << faf_bit_in_byte);
 	}
 
 	if (dist_type & DPAA2_FLOW_FS_TYPE) {
@@ -1153,7 +1245,7 @@ dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
 		key_profile = &key_extract->key_profile;
 
 		offset = dpaa2_flow_extract_key_offset(key_profile,
-				prev_proto->prot, field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (offset < 0) {
 			DPAA2_PMD_ERR("%s TC[%d] key extract failed",
 				__func__, group);
@@ -1162,23 +1254,12 @@ dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
 		key_addr = flow->fs_key_addr + offset;
 		mask_addr = flow->fs_mask_addr + offset;
 
-		if (prev_proto->prot == NET_PROT_ETH) {
-			eth_type = prev_proto->eth_type;
-			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
-			eth_type = 0xffff;
-			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
-			flow->fs_rule_size += sizeof(rte_be16_t);
-		} else if (prev_proto->prot == NET_PROT_IP) {
-			ip_proto = prev_proto->ip_proto;
-			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
-			ip_proto = 0xff;
-			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
-			flow->fs_rule_size += sizeof(uint8_t);
-		} else {
-			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
-				prev_proto->prot);
-			return -EINVAL;
-		}
+		if (!(*key_addr) &&
+			key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size++;
+
+		*key_addr |= (1 << faf_bit_in_byte);
+		*mask_addr |= (1 << faf_bit_in_byte);
 	}
 
 	return 0;
@@ -1200,7 +1281,7 @@ dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 	}
 
 	offset = dpaa2_flow_extract_key_offset(key_profile,
-			prot, field);
+			DPAA2_NET_PROT_KEY, prot, field);
 	if (offset < 0) {
 		DPAA2_PMD_ERR("P(%d)/F(%d) does not exist!",
 			prot, field);
@@ -1238,7 +1319,7 @@ dpaa2_flow_raw_rule_data_set(struct dpaa2_dev_flow *flow,
 	field = extract_offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
 	field |= extract_size;
 	offset = dpaa2_flow_extract_key_offset(key_profile,
-			NET_PROT_PAYLOAD, field);
+			DPAA2_NET_PROT_KEY, NET_PROT_PAYLOAD, field);
 	if (offset < 0) {
 		DPAA2_PMD_ERR("offset(%d)/size(%d) raw extract failed",
 			extract_offset, size);
@@ -1321,60 +1402,39 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 }
 
 static int
-dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
+dpaa2_flow_identify_by_faf(struct dpaa2_dev_priv *priv,
 	struct dpaa2_dev_flow *flow,
-	const struct prev_proto_field_id *prev_prot,
+	enum dpaa2_rx_faf_offset faf_off,
 	enum dpaa2_flow_dist_type dist_type,
 	int group, int *recfg)
 {
-	int ret, index, local_cfg = 0, size = 0;
+	int ret, index, local_cfg = 0;
 	struct dpaa2_key_extract *extract;
 	struct dpaa2_key_profile *key_profile;
-	enum net_prot prot = prev_prot->prot;
-	uint32_t key_field = 0;
-
-	if (prot == NET_PROT_ETH) {
-		key_field = NH_FLD_ETH_TYPE;
-		size = sizeof(rte_be16_t);
-	} else if (prot == NET_PROT_IP) {
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else if (prot == NET_PROT_IPV4) {
-		prot = NET_PROT_IP;
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else if (prot == NET_PROT_IPV6) {
-		prot = NET_PROT_IP;
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else {
-		DPAA2_PMD_ERR("Invalid Prev prot(%d)", prot);
-		return -EINVAL;
-	}
+	uint8_t faf_byte = faf_off / 8;
 
 	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
 		extract = &priv->extract.qos_key_extract;
 		key_profile = &extract->key_profile;
 
 		index = dpaa2_flow_extract_search(key_profile,
-				prot, key_field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add_hdr(prot,
-					key_field, size, priv,
-					DPAA2_FLOW_QOS_TYPE, group,
+			ret = dpaa2_flow_faf_add_hdr(faf_byte,
+					priv, DPAA2_FLOW_QOS_TYPE, group,
 					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("QOS prev extract add failed");
+				DPAA2_PMD_ERR("QOS faf extract add failed");
 
 				return -EINVAL;
 			}
 			local_cfg |= DPAA2_FLOW_QOS_TYPE;
 		}
 
-		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+		ret = dpaa2_flow_faf_add_rule(priv, flow, faf_off, group,
 				DPAA2_FLOW_QOS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("QoS prev rule set failed");
+			DPAA2_PMD_ERR("QoS faf rule set failed");
 			return -EINVAL;
 		}
 	}
@@ -1384,14 +1444,13 @@ dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
 		key_profile = &extract->key_profile;
 
 		index = dpaa2_flow_extract_search(key_profile,
-				prot, key_field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add_hdr(prot,
-					key_field, size, priv,
-					DPAA2_FLOW_FS_TYPE, group,
+			ret = dpaa2_flow_faf_add_hdr(faf_byte,
+					priv, DPAA2_FLOW_FS_TYPE, group,
 					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS[%d] prev extract add failed",
+				DPAA2_PMD_ERR("FS[%d] faf extract add failed",
 					group);
 
 				return -EINVAL;
@@ -1399,17 +1458,17 @@ dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
 			local_cfg |= DPAA2_FLOW_FS_TYPE;
 		}
 
-		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+		ret = dpaa2_flow_faf_add_rule(priv, flow, faf_off, group,
 				DPAA2_FLOW_FS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS[%d] prev rule set failed",
+			DPAA2_PMD_ERR("FS[%d] faf rule set failed",
 				group);
 			return -EINVAL;
 		}
 	}
 
 	if (recfg)
-		*recfg = local_cfg;
+		*recfg |= local_cfg;
 
 	return 0;
 }
@@ -1436,7 +1495,7 @@ dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
 	key_profile = &key_extract->key_profile;
 
 	index = dpaa2_flow_extract_search(key_profile,
-			prot, field);
+			DPAA2_NET_PROT_KEY, prot, field);
 	if (index < 0) {
 		ret = dpaa2_flow_extract_add_hdr(prot,
 				field, size, priv,
@@ -1575,6 +1634,7 @@ dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
 		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
 	}
 	key_profile->num++;
+	key_profile->prot_field[num].type = DPAA2_NET_PROT_KEY;
 
 	dpkg->extracts[num].extract.from_hdr.prot = prot;
 	dpkg->extracts[num].extract.from_hdr.field = field;
@@ -1685,15 +1745,28 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 	spec = pattern->spec;
 	mask = pattern->mask ?
 			pattern->mask : &dpaa2_flow_item_eth_mask;
-	if (!spec) {
-		DPAA2_PMD_WARN("No pattern spec for Eth flow");
-		return -EINVAL;
-	}
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ETH_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ETH_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
 		RTE_FLOW_ITEM_TYPE_ETH)) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
@@ -1782,15 +1855,18 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_ETH;
-		prev_proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
-		ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_proto,
-				DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-				group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
 		if (ret)
 			return ret;
+
 		(*device_configured) |= local_cfg;
 		return 0;
 	}
@@ -1837,7 +1913,6 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	const void *key, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int size;
-	struct prev_proto_field_id prev_prot;
 
 	group = attr->group;
 
@@ -1850,19 +1925,21 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	prev_prot.prot = NET_PROT_ETH;
-	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
+					 DPAA2_FLOW_QOS_TYPE, group,
+					 &local_cfg);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE, group,
-			&local_cfg);
-	if (ret) {
-		DPAA2_PMD_ERR("IPv4 identification failed!");
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
+					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret)
 		return ret;
-	}
 
-	if (!spec_ipv4)
+	if (!spec_ipv4) {
+		(*device_configured) |= local_cfg;
 		return 0;
+	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
 				       RTE_FLOW_ITEM_TYPE_IPV4)) {
@@ -1954,7 +2031,6 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
 	int size;
-	struct prev_proto_field_id prev_prot;
 
 	group = attr->group;
 
@@ -1966,19 +2042,21 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	prev_prot.prot = NET_PROT_ETH;
-	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
+					 DPAA2_FLOW_QOS_TYPE, group,
+					 &local_cfg);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
-	if (ret) {
-		DPAA2_PMD_ERR("IPv6 identification failed!");
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
+					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret)
 		return ret;
-	}
 
-	if (!spec_ipv6)
+	if (!spec_ipv6) {
+		(*device_configured) |= local_cfg;
 		return 0;
+	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
 				       RTE_FLOW_ITEM_TYPE_IPV6)) {
@@ -2082,18 +2160,15 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		/* Next proto of Generical IP is actually used
-		 * for ICMP identification.
-		 * Example: flow create 0 ingress pattern icmp
-		 */
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ICMP_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_ICMP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ICMP_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
 		if (ret)
 			return ret;
 
@@ -2170,22 +2245,21 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_UDP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_UDP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_UDP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2257,22 +2331,21 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_TCP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_TCP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_TCP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2344,22 +2417,21 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_SCTP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_SCTP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_SCTP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2432,21 +2504,20 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GRE_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_GRE;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GRE_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
 		if (ret)
 			return ret;
 
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 28/43] net/dpaa2: add VXLAN distribution support
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (26 preceding siblings ...)
  2024-09-13  5:59 ` [v1 27/43] net/dpaa2: frame attribute flags parser vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 29/43] net/dpaa2: protocol inside tunnel distribution vanshika.shukla
                   ` (15 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Extract fields from the VXLAN header for distribution.
The VXLAN header is saved by the soft parser code into the
soft parser context, located at offset 43 of the parser results:

<assign-variable name="$softparsectx[0:3]" value="vxlan.vnid"/>

The VXLAN protocol is identified by the VXLAN bit of the frame
attribute flags. Parser-result extracts are added to support this
functionality.

Example:
flow create 0 ingress pattern vxlan / end actions pf / queue index 4 / end
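
A minimal sketch of the key-field encoding this patch uses for
parser-result (PR) extracts: dpaa2_flow_pr_advance() identifies each
PR extract by packing its byte offset and size into one 32-bit value.
The helper name below is hypothetical and for illustration only; the
3-byte VNI size and DPAA2_VXLAN_VNI_OFFSET come from this patch.

	#include <stdint.h>

	/* Hypothetical helper: PR byte offset in the high 16 bits, size
	 * in the low 16 bits, matching (pr_offset << 16) | pr_size in
	 * dpaa2_flow_pr_advance() and dpaa2_flow_pr_rule_data_set().
	 */
	static inline uint32_t
	dpaa2_pr_key_field(uint32_t pr_offset, uint32_t pr_size)
	{
		return (pr_offset << 16) | pr_size;
	}

	/* The VNI extract is then identified as
	 * dpaa2_pr_key_field(DPAA2_VXLAN_VNI_OFFSET, 3).
	 */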

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h |   6 +-
 drivers/net/dpaa2/dpaa2_flow.c   | 313 +++++++++++++++++++++++++++++++
 2 files changed, 318 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 8f548467a4..aeddcfdfa9 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -282,8 +282,12 @@ enum ip_addr_extract_type {
 };
 
 enum key_prot_type {
+	/* HW extracts from standard protocol fields */
 	DPAA2_NET_PROT_KEY,
-	DPAA2_FAF_KEY
+	/* HW extracts from the FAF area of the parser results */
+	DPAA2_FAF_KEY,
+	/* HW extracts from parser results other than FAF */
+	DPAA2_PR_KEY
 };
 
 struct key_prot_field {
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index d7b53a1916..7bec13d4eb 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -38,6 +38,8 @@ enum dpaa2_flow_dist_type {
 #define DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT	16
 #define DPAA2_FLOW_MAX_KEY_SIZE			16
 
+#define VXLAN_HF_VNI 0x08
+
 struct dpaa2_dev_flow {
 	LIST_ENTRY(dpaa2_dev_flow) next;
 	struct dpni_rule_cfg qos_rule;
@@ -144,6 +146,11 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
+
+static const struct rte_flow_item_vxlan dpaa2_flow_item_vxlan_mask = {
+	.flags = 0xff,
+	.vni = "\xff\xff\xff",
+};
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -692,6 +699,68 @@ dpaa2_flow_faf_advance(struct dpaa2_dev_priv *priv,
 	return pos;
 }
 
+static int
+dpaa2_flow_pr_advance(struct dpaa2_dev_priv *priv,
+	uint32_t pr_offset, uint32_t pr_size,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off += pr_size;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, pr_size);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, pr_size, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = pr_size;
+	key_profile->prot_field[pos].type = DPAA2_PR_KEY;
+	key_profile->prot_field[pos].key_field =
+		(pr_offset << 16) | pr_size;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	key_profile->key_max_size += pr_size;
+
+	return pos;
+}
+
 /* Move IPv4/IPv6 addresses to fill new extract previous IP address.
  * Current MC/WRIOP only support generic IP extract but IP address
  * is not fixed, so we have to put them at end of extracts, otherwise,
@@ -826,6 +895,59 @@ dpaa2_flow_faf_add_hdr(int faf_byte,
 	return 0;
 }
 
+static int
+dpaa2_flow_pr_add_hdr(uint32_t pr_offset,
+	uint32_t pr_size, struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if ((pr_offset + pr_size) > DPAA2_FAPR_SIZE) {
+		DPAA2_PMD_ERR("PR extracts(%d:%d) overflow",
+			pr_offset, pr_size);
+		return -EINVAL;
+	}
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	pos = dpaa2_flow_pr_advance(priv,
+			pr_offset, pr_size, dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last pos, must have IP address extract. */
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
+	}
+
+	extracts[pos].type = DPKG_EXTRACT_FROM_PARSE;
+	extracts[pos].extract.from_parse.offset = pr_offset;
+	extracts[pos].extract.from_parse.size = pr_size;
+
+	dpkg->num_extracts++;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_extract_add_hdr(enum net_prot prot,
 	uint32_t field, uint8_t field_size,
@@ -1174,6 +1296,10 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 			prot_field[pos].key_field == key_field &&
 			prot_field[pos].type == type)
 			return pos;
+		else if (type == DPAA2_PR_KEY &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
+			return pos;
 	}
 
 	if (type == DPAA2_NET_PROT_KEY &&
@@ -1265,6 +1391,41 @@ dpaa2_flow_faf_add_rule(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static inline int
+dpaa2_flow_pr_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	uint32_t pr_offset, uint32_t pr_size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int offset;
+	uint32_t pr_field = (pr_offset << 16) | pr_size;
+
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			DPAA2_PR_KEY, NET_PROT_NONE, pr_field);
+	if (offset < 0) {
+		DPAA2_PMD_ERR("PR off(%d)/size(%d) does not exist!",
+			pr_offset, pr_size);
+		return -EINVAL;
+	}
+
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, pr_size);
+		memcpy((flow->qos_mask_addr + offset), mask, pr_size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size = offset + pr_size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, pr_size);
+		memcpy((flow->fs_mask_addr + offset), mask, pr_size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size = offset + pr_size;
+	}
+
+	return 0;
+}
+
 static inline int
 dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 	struct dpaa2_key_profile *key_profile,
@@ -1386,6 +1547,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_gre_mask;
 		size = sizeof(struct rte_flow_item_gre);
 		break;
+	case RTE_FLOW_ITEM_TYPE_VXLAN:
+		mask_support = (const char *)&dpaa2_flow_item_vxlan_mask;
+		size = sizeof(struct rte_flow_item_vxlan);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -1473,6 +1638,55 @@ dpaa2_flow_identify_by_faf(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_add_pr_extract_rule(struct dpaa2_dev_flow *flow,
+	uint32_t pr_offset, uint32_t pr_size,
+	const void *key, const void *mask,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int index, ret, local_cfg = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
+	uint32_t pr_field = (pr_offset << 16) | pr_size;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	key_profile = &key_extract->key_profile;
+
+	index = dpaa2_flow_extract_search(key_profile,
+			DPAA2_PR_KEY, NET_PROT_NONE, pr_field);
+	if (index < 0) {
+		ret = dpaa2_flow_pr_add_hdr(pr_offset,
+				pr_size, priv,
+				dist_type, tc_id, NULL);
+		if (ret) {
+			DPAA2_PMD_ERR("PR add off(%d)/size(%d) failed",
+				pr_offset, pr_size);
+
+			return ret;
+		}
+		local_cfg |= dist_type;
+	}
+
+	ret = dpaa2_flow_pr_rule_data_set(flow, key_profile,
+			pr_offset, pr_size, key, mask, dist_type);
+	if (ret) {
+		DPAA2_PMD_ERR("PR off(%d)/size(%d) rule data set failed",
+			pr_offset, pr_size);
+
+		return ret;
+	}
+
+	if (recfg)
+		*recfg |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
 	enum net_prot prot, uint32_t field,
@@ -2549,6 +2763,90 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_vxlan *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_vxlan_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_VXLAN_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_VXLAN_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VXLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of VXLAN not support.");
+
+		return -1;
+	}
+
+	if (mask->flags) {
+		if (spec->flags != VXLAN_HF_VNI) {
+			DPAA2_PMD_ERR("vxlan flag(0x%02x) must be 0x%02x.",
+				spec->flags, VXLAN_HF_VNI);
+			return -EINVAL;
+		}
+		if (mask->flags != 0xff) {
+			DPAA2_PMD_ERR("Not support to extract vxlan flag.");
+			return -EINVAL;
+		}
+	}
+
+	if (mask->vni[0] || mask->vni[1] || mask->vni[2]) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_VNI_OFFSET,
+			sizeof(mask->vni), spec->vni,
+			mask->vni,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_VNI_OFFSET,
+			sizeof(mask->vni), spec->vni,
+			mask->vni,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -2764,6 +3062,9 @@ dpaa2_flow_verify_action(struct dpaa2_dev_priv *priv,
 				}
 			}
 
+			break;
+		case RTE_FLOW_ACTION_TYPE_PF:
+			/* Skip this action; only needed for VXLAN flows. */
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
@@ -3114,6 +3415,15 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				return ret;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+			ret = dpaa2_configure_flow_vxlan(flow,
+					dev, attr, &pattern[i], actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("VXLAN flow config failed!");
+				return ret;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow,
 					dev, attr, &pattern[i],
@@ -3226,6 +3536,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			if (ret)
 				return ret;
 
+			break;
+		case RTE_FLOW_ACTION_TYPE_PF:
+			/* Skip this action; only needed for VXLAN flows. */
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 29/43] net/dpaa2: protocol inside tunnel distribution
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (27 preceding siblings ...)
  2024-09-13  5:59 ` [v1 28/43] net/dpaa2: add VXLAN distribution support vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 30/43] net/dpaa2: eCPRI support by parser result vanshika.shukla
                   ` (14 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Control flow distribution by protocols inside a tunnel.
The tunnel flow items applied by the application are ordered from
outer to inner. The inner items start after the tunnel item, such
as VXLAN, GRE, etc.

For example:
flow create 0 ingress pattern ipv4 / vxlan / ipv6 / end
	actions pf / queue index 2 / end

The items following the tunnel item are therefore tagged as "inner".
The inner items are extracted from the parser results, which are set
by the soft parser.
So far only the VXLAN tunnel is supported. Limited by the soft parser
area, only the Ethernet and VLAN headers inside the tunnel can be used
for flow distribution. IPv4, IPv6, UDP and TCP inside the tunnel can
be detected for flow distribution via user-defined FAF bits set by the
soft parser.
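
As a condensed sketch of the conversion this patch adds (mirroring
dpaa2_flow_item_convert(); the function name tag_tunnel_items and the
items[] destination array are hypothetical stand-ins for the allocated
rte_dpaa2_flow_item array):

	/* pattern: eth / ipv4 / vxlan / ipv6  ->  in_tunnel: 0 0 0 1 */
	static void
	tag_tunnel_items(const struct rte_flow_item pattern[],
			 struct rte_dpaa2_flow_item items[])
	{
		int i, tunnel_start = 0;

		for (i = 0; pattern[i].type != RTE_FLOW_ITEM_TYPE_END; i++) {
			items[i].generic_item = pattern[i];
			/* Items after the VXLAN item count as inner. */
			items[i].in_tunnel = tunnel_start;
			if (pattern[i].type == RTE_FLOW_ITEM_TYPE_VXLAN)
				tunnel_start = 1;
		}
		items[i].generic_item.type = RTE_FLOW_ITEM_TYPE_END;
	}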

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 587 +++++++++++++++++++++++++++++----
 1 file changed, 519 insertions(+), 68 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 7bec13d4eb..e4d7117192 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -58,6 +58,11 @@ struct dpaa2_dev_flow {
 	struct dpni_fs_action_cfg fs_action_cfg;
 };
 
+struct rte_dpaa2_flow_item {
+	struct rte_flow_item generic_item;
+	int in_tunnel;
+};
+
 static const
 enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_END,
@@ -1939,10 +1944,203 @@ dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+dpaa2_configure_flow_tunnel_eth(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_eth *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+			pattern->mask : &dpaa2_flow_item_eth_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec)
+		return 0;
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH)) {
+		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
+
+		return -EINVAL;
+	}
+
+	if (memcmp((const char *)&mask->src,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		/*SRC[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR0_OFFSET,
+			1, &spec->src.addr_bytes[0],
+			&mask->src.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[1:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR1_OFFSET,
+			2, &spec->src.addr_bytes[1],
+			&mask->src.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[3:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR3_OFFSET,
+			1, &spec->src.addr_bytes[3],
+			&mask->src.addr_bytes[3],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[4:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR4_OFFSET,
+			2, &spec->src.addr_bytes[4],
+			&mask->src.addr_bytes[4],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		/*SRC[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR0_OFFSET,
+			1, &spec->src.addr_bytes[0],
+			&mask->src.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[1:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR1_OFFSET,
+			2, &spec->src.addr_bytes[1],
+			&mask->src.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[3:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR3_OFFSET,
+			1, &spec->src.addr_bytes[3],
+			&mask->src.addr_bytes[3],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[4:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR4_OFFSET,
+			2, &spec->src.addr_bytes[4],
+			&mask->src.addr_bytes[4],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->dst,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		/*DST[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR0_OFFSET,
+			1, &spec->dst.addr_bytes[0],
+			&mask->dst.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[1:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR1_OFFSET,
+			1, &spec->dst.addr_bytes[1],
+			&mask->dst.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[2:3]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR2_OFFSET,
+			3, &spec->dst.addr_bytes[2],
+			&mask->dst.addr_bytes[2],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[5:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR5_OFFSET,
+			1, &spec->dst.addr_bytes[5],
+			&mask->dst.addr_bytes[5],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		/*DST[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR0_OFFSET,
+			1, &spec->dst.addr_bytes[0],
+			&mask->dst.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[1:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR1_OFFSET,
+			1, &spec->dst.addr_bytes[1],
+			&mask->dst.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[2:3]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR2_OFFSET,
+			3, &spec->dst.addr_bytes[2],
+			&mask->dst.addr_bytes[2],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[5:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR5_OFFSET,
+			1, &spec->dst.addr_bytes[5],
+			&mask->dst.addr_bytes[5],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->type,
+		zero_cmp, sizeof(rte_be16_t))) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TYPE_OFFSET,
+			sizeof(rte_be16_t), &spec->type, &mask->type,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TYPE_OFFSET,
+			sizeof(rte_be16_t), &spec->type, &mask->type,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -1952,6 +2150,13 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 	const struct rte_flow_item_eth *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	if (dpaa2_pattern->in_tunnel) {
+		return dpaa2_configure_flow_tunnel_eth(flow,
+				dev, attr, pattern, device_configured);
+	}
 
 	group = attr->group;
 
@@ -2045,10 +2250,81 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
+dpaa2_configure_flow_tunnel_vlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_vlan *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_vlan_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAFE_VXLAN_IN_VLAN_FRAM,
+				DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAFE_VXLAN_IN_VLAN_FRAM,
+				DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
+
+		return -EINVAL;
+	}
+
+	if (!mask->tci)
+		return 0;
+
+	ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TCI_OFFSET,
+			sizeof(rte_be16_t), &spec->tci, &mask->tci,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TCI_OFFSET,
+			sizeof(rte_be16_t), &spec->tci, &mask->tci,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2057,6 +2333,13 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_vlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	if (dpaa2_pattern->in_tunnel) {
+		return dpaa2_configure_flow_tunnel_vlan(flow,
+				dev, attr, pattern, device_configured);
+	}
 
 	group = attr->group;
 
@@ -2116,7 +2399,7 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 static int
 dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
+			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
 			  const struct rte_flow_action actions[] __rte_unused,
 			  struct rte_flow_error *error __rte_unused,
 			  int *device_configured)
@@ -2127,6 +2410,7 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	const void *key, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int size;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2135,6 +2419,26 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	mask_ipv4 = pattern->mask ?
 		    pattern->mask : &dpaa2_flow_item_ipv4_mask;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec_ipv4) {
+			DPAA2_PMD_ERR("Tunnel-IPv4 distribution not support");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV4_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV4_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
@@ -2233,7 +2537,7 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 static int
 dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
+			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
 			  const struct rte_flow_action actions[] __rte_unused,
 			  struct rte_flow_error *error __rte_unused,
 			  int *device_configured)
@@ -2245,6 +2549,7 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
 	int size;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2256,6 +2561,26 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec_ipv6) {
+			DPAA2_PMD_ERR("Tunnel-IPv6 distribution not support");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV6_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV6_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
 					 DPAA2_FLOW_QOS_TYPE, group,
 					 &local_cfg);
@@ -2352,7 +2677,7 @@ static int
 dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2361,6 +2686,7 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_icmp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2373,6 +2699,11 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ICMP distribution not support");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_ICMP_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2438,7 +2769,7 @@ static int
 dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2447,6 +2778,7 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_udp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2459,6 +2791,26 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec) {
+			DPAA2_PMD_ERR("Tunnel-UDP distribution not support");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_UDP_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_UDP_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_UDP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2524,7 +2876,7 @@ static int
 dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2533,6 +2885,7 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_tcp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2545,6 +2898,26 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec) {
+			DPAA2_PMD_ERR("Tunnel-TCP distribution not support");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_TCP_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_TCP_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_TCP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2610,7 +2983,7 @@ static int
 dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2619,6 +2992,7 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_sctp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2631,6 +3005,11 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-SCTP distribution not support");
+		return -ENOTSUP;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_SCTP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2696,7 +3075,7 @@ static int
 dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2705,6 +3084,7 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_gre *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2717,6 +3097,11 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-GRE distribution not support");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_GRE_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2767,7 +3152,7 @@ static int
 dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2776,6 +3161,7 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_vxlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2788,6 +3174,11 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-VXLAN distribution not support");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_VXLAN_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2851,18 +3242,19 @@ static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const struct rte_flow_item_raw *spec = pattern->spec;
-	const struct rte_flow_item_raw *mask = pattern->mask;
 	int local_cfg = 0, ret;
 	uint32_t group;
 	struct dpaa2_key_extract *qos_key_extract;
 	struct dpaa2_key_extract *tc_key_extract;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
+	const struct rte_flow_item_raw *spec = pattern->spec;
+	const struct rte_flow_item_raw *mask = pattern->mask;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
@@ -3306,6 +3698,45 @@ dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_item_convert(const struct rte_flow_item pattern[],
+			struct rte_dpaa2_flow_item **dpaa2_pattern)
+{
+	struct rte_dpaa2_flow_item *new_pattern;
+	int num = 0, tunnel_start = 0;
+
+	while (1) {
+		num++;
+		if (pattern[num].type == RTE_FLOW_ITEM_TYPE_END)
+			break;
+	}
+
+	new_pattern = rte_malloc(NULL, sizeof(struct rte_dpaa2_flow_item) * num,
+				 RTE_CACHE_LINE_SIZE);
+	if (!new_pattern) {
+		DPAA2_PMD_ERR("Failed to alloc %d flow items", num);
+		return -ENOMEM;
+	}
+
+	num = 0;
+	while (pattern[num].type != RTE_FLOW_ITEM_TYPE_END) {
+		memcpy(&new_pattern[num].generic_item, &pattern[num],
+		       sizeof(struct rte_flow_item));
+		new_pattern[num].in_tunnel = 0;
+
+		if (pattern[num].type == RTE_FLOW_ITEM_TYPE_VXLAN)
+			tunnel_start = 1;
+		else if (tunnel_start)
+			new_pattern[num].in_tunnel = 1;
+		num++;
+	}
+
+	new_pattern[num].generic_item.type = RTE_FLOW_ITEM_TYPE_END;
+	*dpaa2_pattern = new_pattern;
+
+	return 0;
+}
+
 static int
 dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3322,6 +3753,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	uint16_t dist_size, key_size;
 	struct dpaa2_key_extract *qos_key_extract;
 	struct dpaa2_key_extract *tc_key_extract;
+	struct rte_dpaa2_flow_item *dpaa2_pattern = NULL;
 
 	ret = dpaa2_flow_verify_attr(priv, attr);
 	if (ret)
@@ -3331,107 +3763,121 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	if (ret)
 		return ret;
 
+	ret = dpaa2_flow_item_convert(pattern, &dpaa2_pattern);
+	if (ret)
+		return ret;
+
 	/* Parse pattern list to get the matching parameters */
 	while (!end_of_list) {
 		switch (pattern[i].type) {
 		case RTE_FLOW_ITEM_TYPE_ETH:
-			ret = dpaa2_configure_flow_eth(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_eth(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ETH flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_VLAN:
-			ret = dpaa2_configure_flow_vlan(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_vlan(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("vLan flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
-			ret = dpaa2_configure_flow_ipv4(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_ipv4(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV4 flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
-			ret = dpaa2_configure_flow_ipv6(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_ipv6(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV6 flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_ICMP:
-			ret = dpaa2_configure_flow_icmp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_icmp(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ICMP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_UDP:
-			ret = dpaa2_configure_flow_udp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_udp(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("UDP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
-			ret = dpaa2_configure_flow_tcp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_tcp(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("TCP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_SCTP:
-			ret = dpaa2_configure_flow_sctp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_sctp(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("SCTP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
-			ret = dpaa2_configure_flow_gre(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_gre(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("GRE flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = dpaa2_configure_flow_vxlan(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_vxlan(flow, dev, attr,
+							 &dpaa2_pattern[i],
+							 actions, error,
+							 &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("VXLAN flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
-			ret = dpaa2_configure_flow_raw(flow,
-					dev, attr, &pattern[i],
-					actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_raw(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("RAW flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_END:
@@ -3463,7 +3909,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			ret = dpaa2_configure_flow_fs_action(priv, flow,
 							     &actions[j]);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			/* Configure FS table first*/
 			dist_size = priv->nb_rx_queues / priv->num_rx_tc;
@@ -3473,20 +3919,20 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 								   dist_size,
 								   false);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			/* Configure QoS table then.*/
 			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
 				ret = dpaa2_configure_qos_table(priv, false);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (priv->num_rx_tc > 1) {
 				ret = dpaa2_flow_add_qos_rule(priv, flow);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (flow->tc_index >= priv->fs_entries) {
@@ -3497,7 +3943,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 
 			ret = dpaa2_flow_add_fs_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
@@ -3509,7 +3955,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			if (ret < 0) {
 				DPAA2_PMD_ERR("TC[%d] distset RSS failed",
 					      flow->tc_id);
-				return ret;
+				goto end_flow_set;
 			}
 
 			dist_size = rss_conf->queue_num;
@@ -3519,22 +3965,22 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 								   dist_size,
 								   true);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
 				ret = dpaa2_configure_qos_table(priv, true);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			ret = dpaa2_flow_add_qos_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			ret = dpaa2_flow_add_fs_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			break;
 		case RTE_FLOW_ACTION_TYPE_PF:
@@ -3551,6 +3997,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 		j++;
 	}
 
+end_flow_set:
 	if (!ret) {
 		/* New rules are inserted. */
 		if (!curr) {
@@ -3561,6 +4008,10 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			LIST_INSERT_AFTER(curr, flow, next);
 		}
 	}
+
+	if (dpaa2_pattern)
+		rte_free(dpaa2_pattern);
+
 	return ret;
 }
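
The hunks above replace every early return in dpaa2_generic_flow_set()
with a jump to the single end_flow_set label, so the dpaa2_pattern
array allocated at entry is freed on every exit path. A minimal,
driver-independent sketch of the same idiom (configure_one() is a
hypothetical per-item step, not a driver function):

#include <errno.h>
#include <stdlib.h>

static int configure_one(int idx, int *scratch)
{
	scratch[idx] = idx;	/* stand-in for per-item configuration */
	return 0;
}

static int configure_all(int n)
{
	int ret = 0, i;
	int *scratch = calloc(n, sizeof(*scratch));

	if (!scratch)
		return -ENOMEM;

	for (i = 0; i < n; i++) {
		ret = configure_one(i, scratch);
		if (ret)
			goto end;	/* no early return: one exit frees scratch */
	}

end:
	free(scratch);
	return ret;
}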
 
-- 
2.25.1


* [v1 30/43] net/dpaa2: eCPRI support by parser result
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (28 preceding siblings ...)
  2024-09-13  5:59 ` [v1 29/43] net/dpaa2: protocol inside tunnel distribution vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 31/43] net/dpaa2: add GTP flow support vanshika.shukla
                   ` (13 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

The soft parser extracts the eCPRI header and message into specified
areas of the parser result. Flows are then classified according to the
eCPRI extracts from the parser result. This implementation supports
eCPRI over Ethernet/VLAN/UDP and various type/message combinations.
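
As a usage illustration (not part of the patch), a minimal sketch of an
application matching eCPRI IQ-data messages on PC ID and steering them
to a queue. Port, queue and PC ID values are placeholders, and the
big-endian conversion of the common header mirrors the mask conversion
this patch performs; treat that as an assumption, not a documented
convention:

#include <rte_byteorder.h>
#include <rte_ecpri.h>
#include <rte_flow.h>

static struct rte_flow *
ecpri_iq_flow(uint16_t port_id, uint16_t queue_id, uint16_t pc_id,
	struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ecpri spec = { 0 };
	struct rte_flow_item_ecpri mask = { 0 };
	struct rte_flow_action_queue queue = { .index = queue_id };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_ECPRI,
		  .spec = &spec, .mask = &mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Message type 0 (IQ data), matched on PC ID only */
	spec.hdr.common.type = RTE_ECPRI_MSG_TYPE_IQ_DATA;
	spec.hdr.type0.pc_id = rte_cpu_to_be_16(pc_id);

	mask.hdr.common.type = 0xff;
	/* The PMD converts the mask's common header from big endian */
	mask.hdr.common.u32 = rte_cpu_to_be_32(mask.hdr.common.u32);
	mask.hdr.type0.pc_id = RTE_BE16(0xffff);

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}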

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h |  18 ++
 drivers/net/dpaa2/dpaa2_flow.c   | 348 ++++++++++++++++++++++++++++++-
 2 files changed, 365 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index aeddcfdfa9..eaa653d266 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -179,6 +179,8 @@ enum dpaa2_rx_faf_offset {
 	FAFE_VXLAN_IN_IPV6_FRAM = 2,
 	FAFE_VXLAN_IN_UDP_FRAM = 3,
 	FAFE_VXLAN_IN_TCP_FRAM = 4,
+
+	FAFE_ECPRI_FRAM = 7,
 	/* Set by SP end*/
 
 	FAF_GTP_PRIMED_FRAM = 1 + DPAA2_FAFE_PSR_SIZE * 8,
@@ -207,6 +209,17 @@ enum dpaa2_rx_faf_offset {
 	FAF_ESP_FRAM = 89 + DPAA2_FAFE_PSR_SIZE * 8,
 };
 
+enum dpaa2_ecpri_fafe_type {
+	ECPRI_FAFE_TYPE_0 = (8 - FAFE_ECPRI_FRAM),
+	ECPRI_FAFE_TYPE_1 = (8 - FAFE_ECPRI_FRAM) | (1 << 1),
+	ECPRI_FAFE_TYPE_2 = (8 - FAFE_ECPRI_FRAM) | (2 << 1),
+	ECPRI_FAFE_TYPE_3 = (8 - FAFE_ECPRI_FRAM) | (3 << 1),
+	ECPRI_FAFE_TYPE_4 = (8 - FAFE_ECPRI_FRAM) | (4 << 1),
+	ECPRI_FAFE_TYPE_5 = (8 - FAFE_ECPRI_FRAM) | (5 << 1),
+	ECPRI_FAFE_TYPE_6 = (8 - FAFE_ECPRI_FRAM) | (6 << 1),
+	ECPRI_FAFE_TYPE_7 = (8 - FAFE_ECPRI_FRAM) | (7 << 1)
+};
+
 #define DPAA2_PR_ETH_OFF_OFFSET 19
 #define DPAA2_PR_TCI_OFF_OFFSET 21
 #define DPAA2_PR_LAST_ETYPE_OFFSET 23
@@ -236,6 +249,11 @@ enum dpaa2_rx_faf_offset {
 #define DPAA2_VXLAN_IN_TYPE_OFFSET 46
 /* Set by SP for vxlan distribution end*/
 
+/* ECPRI shares SP context with VXLAN*/
+#define DPAA2_ECPRI_MSG_OFFSET DPAA2_VXLAN_VNI_OFFSET
+
+#define DPAA2_ECPRI_MAX_EXTRACT_NB 8
+
 struct ipv4_sd_addr_extract_rule {
 	uint32_t ipv4_src;
 	uint32_t ipv4_dst;
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index e4d7117192..e4fffdbf33 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -156,6 +156,13 @@ static const struct rte_flow_item_vxlan dpaa2_flow_item_vxlan_mask = {
 	.flags = 0xff,
 	.vni = "\xff\xff\xff",
 };
+
+static const struct rte_flow_item_ecpri dpaa2_flow_item_ecpri_mask = {
+	.hdr.common.type = 0xff,
+	.hdr.dummy[0] = RTE_BE32(0xffffffff),
+	.hdr.dummy[1] = RTE_BE32(0xffffffff),
+	.hdr.dummy[2] = RTE_BE32(0xffffffff),
+};
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -1556,6 +1563,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_vxlan_mask;
 		size = sizeof(struct rte_flow_item_vxlan);
 		break;
+	case RTE_FLOW_ITEM_TYPE_ECPRI:
+		mask_support = (const char *)&dpaa2_flow_item_ecpri_mask;
+		size = sizeof(struct rte_flow_item_ecpri);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -3238,6 +3249,330 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ecpri *spec, *mask;
+	struct rte_flow_item_ecpri local_mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+	uint8_t extract_nb = 0, i;
+	uint64_t rule_data[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint64_t mask_data[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint8_t extract_size[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint8_t extract_off[DPAA2_ECPRI_MAX_EXTRACT_NB];
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	if (pattern->mask) {
+		memcpy(&local_mask, pattern->mask,
+			sizeof(struct rte_flow_item_ecpri));
+		local_mask.hdr.common.u32 =
+			rte_be_to_cpu_32(local_mask.hdr.common.u32);
+		mask = &local_mask;
+	} else {
+		mask = &dpaa2_flow_item_ecpri_mask;
+	}
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ECPRI distribution not support");
+		return -ENOTSUP;
+	}
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAFE_ECPRI_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAFE_ECPRI_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ECPRI)) {
+		DPAA2_PMD_WARN("Extract field(s) of ECPRI not support.");
+
+		return -1;
+	}
+
+	if (mask->hdr.common.type != 0xff) {
+		DPAA2_PMD_WARN("ECPRI header type not specified.");
+
+		return -1;
+	}
+
+	if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_IQ_DATA) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_0;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type0.pc_id) {
+			rule_data[extract_nb] = spec->hdr.type0.pc_id;
+			mask_data[extract_nb] = mask->hdr.type0.pc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_iq_data, pc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type0.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type0.seq_id;
+			mask_data[extract_nb] = mask->hdr.type0.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_iq_data, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_BIT_SEQ) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_1;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type1.pc_id) {
+			rule_data[extract_nb] = spec->hdr.type1.pc_id;
+			mask_data[extract_nb] = mask->hdr.type1.pc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_bit_seq, pc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type1.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type1.seq_id;
+			mask_data[extract_nb] = mask->hdr.type1.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_bit_seq, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RTC_CTRL) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_2;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type2.rtc_id) {
+			rule_data[extract_nb] = spec->hdr.type2.rtc_id;
+			mask_data[extract_nb] = mask->hdr.type2.rtc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_rtc_ctrl, rtc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type2.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type2.seq_id;
+			mask_data[extract_nb] = mask->hdr.type2.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_rtc_ctrl, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_GEN_DATA) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_3;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type3.pc_id || mask->hdr.type3.seq_id)
+			DPAA2_PMD_WARN("Extract type3 msg not support.");
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RM_ACC) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_4;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type4.rma_id) {
+			rule_data[extract_nb] = spec->hdr.type4.rma_id;
+			mask_data[extract_nb] = mask->hdr.type4.rma_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET + 0;
+				/** Compiler not support to take address
+				 * of bit-field
+				 * offsetof(struct rte_ecpri_msg_rm_access,
+				 * rma_id);
+				 */
+			extract_nb++;
+		}
+		if (mask->hdr.type4.ele_id) {
+			rule_data[extract_nb] = spec->hdr.type4.ele_id;
+			mask_data[extract_nb] = mask->hdr.type4.ele_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET + 2;
+				/** Compiler not support to take address
+				 * of bit-field
+				 * offsetof(struct rte_ecpri_msg_rm_access,
+				 * ele_id);
+				 */
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_DLY_MSR) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_5;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type5.msr_id) {
+			rule_data[extract_nb] = spec->hdr.type5.msr_id;
+			mask_data[extract_nb] = mask->hdr.type5.msr_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_delay_measure,
+					msr_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type5.act_type) {
+			rule_data[extract_nb] = spec->hdr.type5.act_type;
+			mask_data[extract_nb] = mask->hdr.type5.act_type;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_delay_measure,
+					act_type);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RMT_RST) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_6;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type6.rst_id) {
+			rule_data[extract_nb] = spec->hdr.type6.rst_id;
+			mask_data[extract_nb] = mask->hdr.type6.rst_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_remote_reset,
+					rst_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type6.rst_op) {
+			rule_data[extract_nb] = spec->hdr.type6.rst_op;
+			mask_data[extract_nb] = mask->hdr.type6.rst_op;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_remote_reset,
+					rst_op);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_EVT_IND) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_7;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type7.evt_id) {
+			rule_data[extract_nb] = spec->hdr.type7.evt_id;
+			mask_data[extract_nb] = mask->hdr.type7.evt_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					evt_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.evt_type) {
+			rule_data[extract_nb] = spec->hdr.type7.evt_type;
+			mask_data[extract_nb] = mask->hdr.type7.evt_type;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					evt_type);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.seq) {
+			rule_data[extract_nb] = spec->hdr.type7.seq;
+			mask_data[extract_nb] = mask->hdr.type7.seq;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					seq);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.number) {
+			rule_data[extract_nb] = spec->hdr.type7.number;
+			mask_data[extract_nb] = mask->hdr.type7.number;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					number);
+			extract_nb++;
+		}
+	} else {
+		DPAA2_PMD_ERR("Invalid ecpri header type(%d)",
+				spec->hdr.common.type);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < extract_nb; i++) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			extract_off[i],
+			extract_size[i], &rule_data[i], &mask_data[i],
+			priv, group,
+			device_configured,
+			DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			extract_off[i],
+			extract_size[i], &rule_data[i], &mask_data[i],
+			priv, group,
+			device_configured,
+			DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3870,6 +4205,16 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_ECPRI:
+			ret = dpaa2_configure_flow_ecpri(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("ECPRI flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow, dev, attr,
 						       &dpaa2_pattern[i],
@@ -3884,7 +4229,8 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			end_of_list = 1;
 			break; /*End of List*/
 		default:
-			DPAA2_PMD_ERR("Invalid action type");
+			DPAA2_PMD_ERR("Invalid flow item[%d] type(%d)",
+				i, pattern[i].type);
 			ret = -ENOTSUP;
 			break;
 		}
-- 
2.25.1


* [v1 31/43] net/dpaa2: add GTP flow support
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (29 preceding siblings ...)
  2024-09-13  5:59 ` [v1 30/43] net/dpaa2: eCPRI support by parser result vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 32/43] net/dpaa2: check if Soft parser is loaded vanshika.shukla
                   ` (12 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Configure GTP flows to support RSS and FS. GTP frames are identified
by checking the FAF (frame attribute flags) in the parser result.
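
For reference (not part of the patch), a minimal sketch matching a GTP
tunnel by TEID and steering it to a queue; port, queue and TEID values
are placeholders:

#include <rte_byteorder.h>
#include <rte_flow.h>

static struct rte_flow *
gtp_teid_flow(uint16_t port_id, uint16_t queue_id, uint32_t teid,
	struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	/* Match on TEID only; other GTP header fields stay unmasked */
	struct rte_flow_item_gtp spec = { .teid = rte_cpu_to_be_32(teid) };
	struct rte_flow_item_gtp mask = { .teid = RTE_BE32(0xffffffff) };
	struct rte_flow_action_queue queue = { .index = queue_id };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_GTP,
		  .spec = &spec, .mask = &mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}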

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 170 ++++++++++++++++++++++++++-------
 1 file changed, 137 insertions(+), 33 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index e4fffdbf33..02938ad27b 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -75,6 +75,7 @@ enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_TCP,
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
+	RTE_FLOW_ITEM_TYPE_GTP
 };
 
 static const
@@ -163,6 +164,11 @@ static const struct rte_flow_item_ecpri dpaa2_flow_item_ecpri_mask = {
 	.hdr.dummy[1] = RTE_BE32(0xffffffff),
 	.hdr.dummy[2] = RTE_BE32(0xffffffff),
 };
+
+static const struct rte_flow_item_gtp dpaa2_flow_item_gtp_mask = {
+	.teid = RTE_BE32(0xffffffff),
+};
+
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -238,6 +244,12 @@ dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 			strcat(string, ".type");
 		else
 			strcat(string, ".unknown field");
+	} else if (prot == NET_PROT_GTP) {
+		strcpy(string, "gtp");
+		if (field == NH_FLD_GTP_TEID)
+			strcat(string, ".teid");
+		else
+			strcat(string, ".unknown field");
 	} else {
 		strcpy(string, "unknown protocol");
 	}
@@ -1567,6 +1579,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_ecpri_mask;
 		size = sizeof(struct rte_flow_item_ecpri);
 		break;
+	case RTE_FLOW_ITEM_TYPE_GTP:
+		mask_support = (const char *)&dpaa2_flow_item_gtp_mask;
+		size = sizeof(struct rte_flow_item_gtp);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -3573,6 +3589,84 @@ dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_gtp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_gtp *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_gtp_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-GTP distribution not support");
+		return -ENOTSUP;
+	}
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GTP_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GTP_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GTP)) {
+		DPAA2_PMD_WARN("Extract field(s) of GTP not support.");
+
+		return -1;
+	}
+
+	if (!mask->teid)
+		return 0;
+
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GTP,
+			NH_FLD_GTP_TEID, &spec->teid,
+			&mask->teid, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GTP,
+			NH_FLD_GTP_TEID, &spec->teid,
+			&mask->teid, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -4107,9 +4201,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 		switch (pattern[i].type) {
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			ret = dpaa2_configure_flow_eth(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ETH flow config failed!");
 				goto end_flow_set;
@@ -4117,9 +4211,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_VLAN:
 			ret = dpaa2_configure_flow_vlan(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("vLan flow config failed!");
 				goto end_flow_set;
@@ -4127,9 +4221,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			ret = dpaa2_configure_flow_ipv4(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV4 flow config failed!");
 				goto end_flow_set;
@@ -4137,9 +4231,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
 			ret = dpaa2_configure_flow_ipv6(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV6 flow config failed!");
 				goto end_flow_set;
@@ -4147,9 +4241,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_ICMP:
 			ret = dpaa2_configure_flow_icmp(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ICMP flow config failed!");
 				goto end_flow_set;
@@ -4157,9 +4251,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			ret = dpaa2_configure_flow_udp(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("UDP flow config failed!");
 				goto end_flow_set;
@@ -4167,9 +4261,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			ret = dpaa2_configure_flow_tcp(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("TCP flow config failed!");
 				goto end_flow_set;
@@ -4177,9 +4271,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_SCTP:
 			ret = dpaa2_configure_flow_sctp(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("SCTP flow config failed!");
 				goto end_flow_set;
@@ -4187,9 +4281,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
 			ret = dpaa2_configure_flow_gre(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("GRE flow config failed!");
 				goto end_flow_set;
@@ -4197,9 +4291,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			ret = dpaa2_configure_flow_vxlan(flow, dev, attr,
-							 &dpaa2_pattern[i],
-							 actions, error,
-							 &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("VXLAN flow config failed!");
 				goto end_flow_set;
@@ -4215,11 +4309,21 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_GTP:
+			ret = dpaa2_configure_flow_gtp(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("GTP flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("RAW flow config failed!");
 				goto end_flow_set;
-- 
2.25.1


* [v1 32/43] net/dpaa2: check if Soft parser is loaded
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (30 preceding siblings ...)
  2024-09-13  5:59 ` [v1 31/43] net/dpaa2: add GTP flow support vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 33/43] net/dpaa2: soft parser flow verification vanshika.shukla
                   ` (11 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang, Vanshika Shukla

From: Jun Yang <jun.yang@nxp.com>

Access the soft parser instruction area to check whether a soft parser
has been loaded.
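
A short usage sketch (illustrative only) of the probe added here; the
return convention follows the patch (1 when loaded, 0 when not,
negative errno on failure), and root access to /dev/mem is required:

#include <stdio.h>
#include "dpaa2_ethdev.h"	/* declares dpaa2_soft_parser_loaded() */

static void
report_soft_parser(void)
{
	int sp = dpaa2_soft_parser_loaded();

	if (sp > 0)
		printf("soft parser loaded: VXLAN/eCPRI flows available\n");
	else if (sp == 0)
		printf("soft parser not loaded: hard-parser flows only\n");
	else
		printf("probe failed (%d): check /dev/mem access\n", sp);
}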

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c |  4 ++
 drivers/net/dpaa2/dpaa2_ethdev.h |  2 +
 drivers/net/dpaa2/dpaa2_flow.c   | 88 ++++++++++++++++++++++++++++++++
 3 files changed, 94 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 000d7da85c..21955ad903 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2858,6 +2858,10 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 			return ret;
 		}
 	}
+
+	ret = dpaa2_soft_parser_loaded();
+	if (ret > 0)
+		DPAA2_PMD_INFO("soft parser is loaded");
 	DPAA2_PMD_INFO("%s: netdev created, connected to %s",
 		eth_dev->data->name, dpaa2_dev->ep_name);
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index eaa653d266..db918725a7 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -479,6 +479,8 @@ int dpaa2_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
 
 int dpaa2_dev_recycle_config(struct rte_eth_dev *eth_dev);
 int dpaa2_dev_recycle_deconfig(struct rte_eth_dev *eth_dev);
+int dpaa2_soft_parser_loaded(void);
+
 int dpaa2_dev_recycle_qp_setup(struct rte_dpaa2_device *dpaa2_dev,
 	uint16_t qidx, uint64_t cntx,
 	eth_rx_burst_t tx_lpbk, eth_tx_burst_t rx_lpbk,
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 02938ad27b..a376acffcf 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -9,6 +9,7 @@
 #include <string.h>
 #include <unistd.h>
 #include <stdarg.h>
+#include <sys/mman.h>
 
 #include <rte_ethdev.h>
 #include <rte_log.h>
@@ -24,6 +25,7 @@
 
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
+static int dpaa2_sp_loaded = -1;
 
 enum dpaa2_flow_entry_size {
 	DPAA2_FLOW_ENTRY_MIN_SIZE = (DPNI_MAX_KEY_SIZE / 2),
@@ -401,6 +403,92 @@ dpaa2_flow_fs_entry_log(const char *log_info,
 	DPAA2_FLOW_DUMP("\r\n");
 }
 
+/** For LX2160A, LS2088A and LS1088A*/
+#define WRIOP_CCSR_BASE 0x8b80000
+#define WRIOP_CCSR_CTLU_OFFSET 0
+#define WRIOP_CCSR_CTLU_PARSER_OFFSET 0
+#define WRIOP_CCSR_CTLU_PARSER_INGRESS_OFFSET 0
+
+#define WRIOP_INGRESS_PARSER_PHY \
+	(WRIOP_CCSR_BASE + WRIOP_CCSR_CTLU_OFFSET + \
+	WRIOP_CCSR_CTLU_PARSER_OFFSET + \
+	WRIOP_CCSR_CTLU_PARSER_INGRESS_OFFSET)
+
+struct dpaa2_parser_ccsr {
+	uint32_t psr_cfg;
+	uint32_t psr_idle;
+	uint32_t psr_pclm;
+	uint8_t psr_ver_min;
+	uint8_t psr_ver_maj;
+	uint8_t psr_id1_l;
+	uint8_t psr_id1_h;
+	uint32_t psr_rev2;
+	uint8_t rsv[0x2c];
+	uint8_t sp_ins[4032];
+};
+
+int
+dpaa2_soft_parser_loaded(void)
+{
+	int fd, i, ret = 0;
+	struct dpaa2_parser_ccsr *parser_ccsr = NULL;
+
+	dpaa2_flow_control_log = getenv("DPAA2_FLOW_CONTROL_LOG");
+
+	if (dpaa2_sp_loaded >= 0)
+		return dpaa2_sp_loaded;
+
+	fd = open("/dev/mem", O_RDWR | O_SYNC);
+	if (fd < 0) {
+		DPAA2_PMD_ERR("open \"/dev/mem\" ERROR(%d)", fd);
+		ret = fd;
+		goto exit;
+	}
+
+	parser_ccsr = mmap(NULL, sizeof(struct dpaa2_parser_ccsr),
+		PROT_READ | PROT_WRITE, MAP_SHARED, fd,
+		WRIOP_INGRESS_PARSER_PHY);
+	if (!parser_ccsr) {
+		DPAA2_PMD_ERR("Map 0x%" PRIx64 "(size=0x%x) failed",
+			(uint64_t)WRIOP_INGRESS_PARSER_PHY,
+			(uint32_t)sizeof(struct dpaa2_parser_ccsr));
+		ret = -ENOBUFS;
+		goto exit;
+	}
+
+	DPAA2_PMD_INFO("Parser ID:0x%02x%02x, Rev:major(%02x), minor(%02x)",
+		parser_ccsr->psr_id1_h, parser_ccsr->psr_id1_l,
+		parser_ccsr->psr_ver_maj, parser_ccsr->psr_ver_min);
+
+	if (dpaa2_flow_control_log) {
+		for (i = 0; i < 64; i++) {
+			DPAA2_FLOW_DUMP("%02x ",
+				parser_ccsr->sp_ins[i]);
+			if (!((i + 1) % 16))
+				DPAA2_FLOW_DUMP("\r\n");
+		}
+	}
+
+	for (i = 0; i < 16; i++) {
+		if (parser_ccsr->sp_ins[i]) {
+			dpaa2_sp_loaded = 1;
+			break;
+		}
+	}
+	if (dpaa2_sp_loaded < 0)
+		dpaa2_sp_loaded = 0;
+
+	ret = dpaa2_sp_loaded;
+
+exit:
+	if (parser_ccsr)
+		munmap(parser_ccsr, sizeof(struct dpaa2_parser_ccsr));
+	if (fd >= 0)
+		close(fd);
+
+	return ret;
+}
+
 static int
 dpaa2_flow_ip_address_extract(enum net_prot prot,
 	uint32_t field)
-- 
2.25.1


* [v1 33/43] net/dpaa2: soft parser flow verification
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (31 preceding siblings ...)
  2024-09-13  5:59 ` [v1 32/43] net/dpaa2: check if Soft parser is loaded vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 34/43] net/dpaa2: add flow support for IPsec AH and ESP vanshika.shukla
                   ` (10 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Add the flow item types supported by the soft parser to the
verification list.
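
With this change, unsupported items are rejected up front. A minimal
sketch (assuming attr, pattern and actions are built elsewhere, e.g. as
in the eCPRI example earlier) of validating a rule before creating it:

#include <stdio.h>
#include <rte_flow.h>

static struct rte_flow *
validate_then_create(uint16_t port_id, const struct rte_flow_attr *attr,
	const struct rte_flow_item pattern[],
	const struct rte_flow_action actions[])
{
	struct rte_flow_error err = { .message = NULL };
	int rc = rte_flow_validate(port_id, attr, pattern, actions, &err);

	if (rc) {
		/* e.g. -ENOTSUP for a soft-parser-only item when no
		 * soft parser is loaded
		 */
		printf("flow rejected (%d): %s\n", rc,
			err.message ? err.message : "no detail");
		return NULL;
	}

	return rte_flow_create(port_id, attr, pattern, actions, &err);
}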

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 86 ++++++++++++++++++++--------------
 1 file changed, 52 insertions(+), 34 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index a376acffcf..72075473fc 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -66,7 +66,7 @@ struct rte_dpaa2_flow_item {
 };
 
 static const
-enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
+enum rte_flow_item_type dpaa2_hp_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_END,
 	RTE_FLOW_ITEM_TYPE_ETH,
 	RTE_FLOW_ITEM_TYPE_VLAN,
@@ -77,7 +77,14 @@ enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_TCP,
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
-	RTE_FLOW_ITEM_TYPE_GTP
+	RTE_FLOW_ITEM_TYPE_GTP,
+	RTE_FLOW_ITEM_TYPE_RAW
+};
+
+static const
+enum rte_flow_item_type dpaa2_sp_supported_pattern_type[] = {
+	RTE_FLOW_ITEM_TYPE_VXLAN,
+	RTE_FLOW_ITEM_TYPE_ECPRI
 };
 
 static const
@@ -4560,20 +4567,21 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
 	int ret = 0;
 
 	if (unlikely(attr->group >= dpni_attr->num_rx_tcs)) {
-		DPAA2_PMD_ERR("Priority group is out of range\n");
+		DPAA2_PMD_ERR("Group/TC(%d) is out of range(%d)",
+			attr->group, dpni_attr->num_rx_tcs);
 		ret = -ENOTSUP;
 	}
 	if (unlikely(attr->priority >= dpni_attr->fs_entries)) {
-		DPAA2_PMD_ERR("Priority within the group is out of range\n");
+		DPAA2_PMD_ERR("Priority(%d) within group is out of range(%d)",
+			attr->priority, dpni_attr->fs_entries);
 		ret = -ENOTSUP;
 	}
 	if (unlikely(attr->egress)) {
-		DPAA2_PMD_ERR(
-			"Flow configuration is not supported on egress side\n");
+		DPAA2_PMD_ERR("Egress flow configuration is not supported");
 		ret = -ENOTSUP;
 	}
 	if (unlikely(!attr->ingress)) {
-		DPAA2_PMD_ERR("Ingress flag must be configured\n");
+		DPAA2_PMD_ERR("Ingress flag must be configured");
 		ret = -EINVAL;
 	}
 	return ret;
@@ -4584,27 +4592,41 @@ dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[])
 {
 	unsigned int i, j, is_found = 0;
 	int ret = 0;
+	const enum rte_flow_item_type *hp_supported;
+	const enum rte_flow_item_type *sp_supported;
+	uint64_t hp_supported_num, sp_supported_num;
+
+	hp_supported = dpaa2_hp_supported_pattern_type;
+	hp_supported_num = RTE_DIM(dpaa2_hp_supported_pattern_type);
+
+	sp_supported = dpaa2_sp_supported_pattern_type;
+	sp_supported_num = RTE_DIM(dpaa2_sp_supported_pattern_type);
 
 	for (j = 0; pattern[j].type != RTE_FLOW_ITEM_TYPE_END; j++) {
-		for (i = 0; i < RTE_DIM(dpaa2_supported_pattern_type); i++) {
-			if (dpaa2_supported_pattern_type[i]
-					== pattern[j].type) {
+		is_found = 0;
+		for (i = 0; i < hp_supported_num; i++) {
+			if (hp_supported[i] == pattern[j].type) {
 				is_found = 1;
 				break;
 			}
 		}
+		if (is_found)
+			continue;
+		if (dpaa2_sp_loaded > 0) {
+			for (i = 0; i < sp_supported_num; i++) {
+				if (sp_supported[i] == pattern[j].type) {
+					is_found = 1;
+					break;
+				}
+			}
+		}
 		if (!is_found) {
+			DPAA2_PMD_WARN("Flow type(%d) not supported",
+				pattern[j].type);
 			ret = -ENOTSUP;
 			break;
 		}
 	}
-	/* Lets verify other combinations of given pattern rules */
-	for (j = 0; pattern[j].type != RTE_FLOW_ITEM_TYPE_END; j++) {
-		if (!pattern[j].spec) {
-			ret = -EINVAL;
-			break;
-		}
-	}
 
 	return ret;
 }
@@ -4651,43 +4673,39 @@ dpaa2_flow_validate(struct rte_eth_dev *dev,
 	memset(&dpni_attr, 0, sizeof(struct dpni_attr));
 	ret = dpni_get_attributes(dpni, CMD_PRI_LOW, token, &dpni_attr);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Failure to get dpni@%p attribute, err code  %d\n",
-			dpni, ret);
+		DPAA2_PMD_ERR("Get dpni@%d attribute failed(%d)",
+			priv->hw_id, ret);
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ATTR,
-			   flow_attr, "invalid");
+			RTE_FLOW_ERROR_TYPE_ATTR,
+			flow_attr, "invalid");
 		return ret;
 	}
 
 	/* Verify input attributes */
 	ret = dpaa2_dev_verify_attr(&dpni_attr, flow_attr);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid attributes are given\n");
+		DPAA2_PMD_ERR("Invalid attributes are given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ATTR,
-			   flow_attr, "invalid");
+			RTE_FLOW_ERROR_TYPE_ATTR,
+			flow_attr, "invalid");
 		goto not_valid_params;
 	}
 	/* Verify input pattern list */
 	ret = dpaa2_dev_verify_patterns(pattern);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid pattern list is given\n");
+		DPAA2_PMD_ERR("Invalid pattern list is given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ITEM,
-			   pattern, "invalid");
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			pattern, "invalid");
 		goto not_valid_params;
 	}
 	/* Verify input action list */
 	ret = dpaa2_dev_verify_actions(actions);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid action list is given\n");
+		DPAA2_PMD_ERR("Invalid action list is given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ACTION,
-			   actions, "invalid");
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			actions, "invalid");
 		goto not_valid_params;
 	}
 not_valid_params:
-- 
2.25.1


* [v1 34/43] net/dpaa2: add flow support for IPsec AH and ESP
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (32 preceding siblings ...)
  2024-09-13  5:59 ` [v1 33/43] net/dpaa2: soft parser flow verification vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 35/43] net/dpaa2: fix memory corruption in TM vanshika.shukla
                   ` (9 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support AH/ESP flows matching on the SPI field.
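
For illustration (not part of the patch), a minimal sketch steering one
ESP tunnel, identified by its SPI, to a dedicated queue; port, queue
and SPI values are placeholders:

#include <rte_byteorder.h>
#include <rte_flow.h>

static struct rte_flow *
esp_spi_flow(uint16_t port_id, uint16_t queue_id, uint32_t spi,
	struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	/* Match on SPI only; the sequence number stays unmasked */
	struct rte_flow_item_esp spec = {
		.hdr.spi = rte_cpu_to_be_32(spi),
	};
	struct rte_flow_item_esp mask = {
		.hdr.spi = RTE_BE32(0xffffffff),
	};
	struct rte_flow_action_queue queue = { .index = queue_id };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_ESP,
		  .spec = &spec, .mask = &mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}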

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 528 ++++++++++++++++++++++++---------
 1 file changed, 385 insertions(+), 143 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 72075473fc..3afe331023 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -78,6 +78,8 @@ enum rte_flow_item_type dpaa2_hp_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
 	RTE_FLOW_ITEM_TYPE_GTP,
+	RTE_FLOW_ITEM_TYPE_ESP,
+	RTE_FLOW_ITEM_TYPE_AH,
 	RTE_FLOW_ITEM_TYPE_RAW
 };
 
@@ -158,6 +160,17 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 	},
 };
 
+static const struct rte_flow_item_esp dpaa2_flow_item_esp_mask = {
+	.hdr = {
+		.spi = RTE_BE32(0xffffffff),
+		.seq = RTE_BE32(0xffffffff),
+	},
+};
+
+static const struct rte_flow_item_ah dpaa2_flow_item_ah_mask = {
+	.spi = RTE_BE32(0xffffffff),
+};
+
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
@@ -259,8 +272,16 @@ dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 			strcat(string, ".teid");
 		else
 			strcat(string, ".unknown field");
+	} else if (prot == NET_PROT_IPSEC_ESP) {
+		strcpy(string, "esp");
+		if (field == NH_FLD_IPSEC_ESP_SPI)
+			strcat(string, ".spi");
+		else if (field == NH_FLD_IPSEC_ESP_SEQUENCE_NUM)
+			strcat(string, ".seq");
+		else
+			strcat(string, ".unknown field");
 	} else {
-		strcpy(string, "unknown protocol");
+		sprintf(string, "unknown protocol(%d)", prot);
 	}
 }
 
@@ -1658,6 +1679,14 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_tcp_mask;
 		size = sizeof(struct rte_flow_item_tcp);
 		break;
+	case RTE_FLOW_ITEM_TYPE_ESP:
+		mask_support = (const char *)&dpaa2_flow_item_esp_mask;
+		size = sizeof(struct rte_flow_item_esp);
+		break;
+	case RTE_FLOW_ITEM_TYPE_AH:
+		mask_support = (const char *)&dpaa2_flow_item_ah_mask;
+		size = sizeof(struct rte_flow_item_ah);
+		break;
 	case RTE_FLOW_ITEM_TYPE_SCTP:
 		mask_support = (const char *)&dpaa2_flow_item_sctp_mask;
 		size = sizeof(struct rte_flow_item_sctp);
@@ -1688,7 +1717,7 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask[i] = (mask[i] | mask_src[i]);
 
 	if (memcmp(mask, mask_support, size))
-		return -1;
+		return -ENOTSUP;
 
 	return 0;
 }
@@ -2092,11 +2121,12 @@ dpaa2_configure_flow_tunnel_eth(struct dpaa2_dev_flow *flow,
 	if (!spec)
 		return 0;
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp((const char *)&mask->src,
@@ -2308,11 +2338,12 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp((const char *)&mask->src,
@@ -2413,11 +2444,12 @@ dpaa2_configure_flow_tunnel_vlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (!mask->tci)
@@ -2475,14 +2507,14 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
@@ -2490,27 +2522,28 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-				       RTE_FLOW_ITEM_TYPE_VLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+			RTE_FLOW_ITEM_TYPE_VLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (!mask->tci)
 		return 0;
 
 	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
-					      NH_FLD_VLAN_TCI, &spec->tci,
-					      &mask->tci, sizeof(rte_be16_t),
-					      priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+			NH_FLD_VLAN_TCI, &spec->tci,
+			&mask->tci, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
-					      NH_FLD_VLAN_TCI, &spec->tci,
-					      &mask->tci, sizeof(rte_be16_t),
-					      priv, group, &local_cfg,
-					      DPAA2_FLOW_FS_TYPE);
+			NH_FLD_VLAN_TCI, &spec->tci,
+			&mask->tci, sizeof(rte_be16_t),
+			priv, group, &local_cfg,
+			DPAA2_FLOW_FS_TYPE);
 	if (ret)
 		return ret;
 
@@ -2519,12 +2552,13 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	int ret, local_cfg = 0;
 	uint32_t group;
@@ -2548,16 +2582,16 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV4_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV4_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV4_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV4_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
@@ -2566,13 +2600,13 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_index = attr->priority;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
-					 DPAA2_FLOW_QOS_TYPE, group,
-					 &local_cfg);
+			DPAA2_FLOW_QOS_TYPE, group,
+			&local_cfg);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
-					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
 	if (ret)
 		return ret;
 
@@ -2581,10 +2615,11 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
-				       RTE_FLOW_ITEM_TYPE_IPV4)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
+			RTE_FLOW_ITEM_TYPE_IPV4);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask_ipv4->hdr.src_addr) {
@@ -2593,18 +2628,18 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(rte_be32_t);
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV4_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV4_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2615,17 +2650,17 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(rte_be32_t);
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV4_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV4_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2636,18 +2671,18 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(uint8_t);
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2657,12 +2692,13 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 }
 
 static int
-dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	int ret, local_cfg = 0;
 	uint32_t group;
@@ -2690,27 +2726,27 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV6_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV6_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV6_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV6_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
-					 DPAA2_FLOW_QOS_TYPE, group,
-					 &local_cfg);
+			DPAA2_FLOW_QOS_TYPE, group,
+			&local_cfg);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
-					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
 	if (ret)
 		return ret;
 
@@ -2719,10 +2755,11 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
-				       RTE_FLOW_ITEM_TYPE_IPV6)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
+			RTE_FLOW_ITEM_TYPE_IPV6);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp(mask_ipv6->hdr.src_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
@@ -2731,18 +2768,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = NH_FLD_IPV6_ADDR_SIZE;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV6_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV6_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2753,18 +2790,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = NH_FLD_IPV6_ADDR_SIZE;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV6_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV6_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2775,18 +2812,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(uint8_t);
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2843,11 +2880,12 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ICMP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ICMP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ICMP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.icmp_type) {
@@ -2920,16 +2958,16 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_UDP_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_UDP_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_UDP_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_UDP_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
@@ -2950,11 +2988,12 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_UDP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_UDP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of UDP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3027,9 +3066,9 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_TCP_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_TCP_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
@@ -3057,11 +3096,12 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_TCP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_TCP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of TCP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3101,6 +3141,183 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_esp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_esp *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_esp_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ESP distribution not support");
+		return -ENOTSUP;
+	}
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_ESP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_ESP_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	if (!spec) {
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ESP);
+	if (ret) {
+		DPAA2_PMD_WARN("Extract field(s) of ESP not support.");
+
+		return ret;
+	}
+
+	if (mask->hdr.spi) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SPI, &spec->hdr.spi,
+			&mask->hdr.spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SPI, &spec->hdr.spi,
+			&mask->hdr.spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask->hdr.seq) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SEQUENCE_NUM, &spec->hdr.seq,
+			&mask->hdr.seq, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SEQUENCE_NUM, &spec->hdr.seq,
+			&mask->hdr.seq, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_ah(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ah *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_ah_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-AH distribution not support");
+		return -ENOTSUP;
+	}
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_AH_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_AH_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	if (!spec) {
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_AH);
+	if (ret) {
+		DPAA2_PMD_WARN("Extract field(s) of AH not support.");
+
+		return ret;
+	}
+
+	if (mask->spi) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_AH,
+			NH_FLD_IPSEC_AH_SPI, &spec->spi,
+			&mask->spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_AH,
+			NH_FLD_IPSEC_AH_SPI, &spec->spi,
+			&mask->spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask->seq_num) {
+		DPAA2_PMD_ERR("AH seq distribution not support");
+		return -ENOTSUP;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3149,11 +3366,12 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_SCTP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_SCTP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of SCTP not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3241,11 +3459,12 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_GRE)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GRE);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of GRE not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (!mask->protocol)
@@ -3318,11 +3537,12 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VXLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VXLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of VXLAN not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->flags) {
@@ -3422,17 +3642,18 @@ dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ECPRI)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ECPRI);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ECPRI not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->hdr.common.type != 0xff) {
 		DPAA2_PMD_WARN("ECPRI header type not specified.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_IQ_DATA) {
@@ -3733,11 +3954,12 @@ dpaa2_configure_flow_gtp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_GTP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GTP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of GTP not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (!mask->teid)
@@ -4374,6 +4596,26 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_ESP:
+			ret = dpaa2_configure_flow_esp(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("ESP flow config failed!");
+				goto end_flow_set;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_AH:
+			ret = dpaa2_configure_flow_ah(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("AH flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
 			ret = dpaa2_configure_flow_gre(flow, dev, attr,
 					&dpaa2_pattern[i],
-- 
2.25.1



* [v1 35/43] net/dpaa2: fix memory corruption in TM
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (33 preceding siblings ...)
  2024-09-13  5:59 ` [v1 34/43] net/dpaa2: add flow support for IPsec AH and ESP vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 36/43] net/dpaa2: support software taildrop vanshika.shukla
                   ` (8 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh; +Cc: stable

From: Gagandeep Singh <g.singh@nxp.com>

The driver was reserving memory in an array for 8 queues only,
but it can support configurations with many more queues.

This patch fixes the memory corruption by defining the
queue array with the correct size.
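
For illustration, a minimal sketch of the out-of-bounds pattern being
removed (identifiers follow the diff below; DPNI_MAX_TC is assumed to
be 8):

    /* Before: conf[] sized by traffic classes, indexed by queue id. */
    int conf[DPNI_MAX_TC];       /* 8 entries */
    ...
    conf[leaf_node->id] = 1;     /* id ranges over Tx queues and can
                                  * exceed 8 -> stack corruption */

    /* After: size the array by the actual number of Tx queues. */
    int conf[priv->nb_tx_queues];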

Fixes: 72100f0dee21 ("net/dpaa2: support level 2 in traffic management")
Cc: g.singh@nxp.com
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/net/dpaa2/dpaa2_tm.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index cb854964b4..83d0d669ce 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -684,6 +684,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 	struct dpaa2_tm_node *leaf_node, *temp_leaf_node, *channel_node;
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
 	int ret, t;
+	bool conf_schedule = false;
 
 	/* Populate TCs */
 	LIST_FOREACH(channel_node, &priv->nodes, next) {
@@ -757,7 +758,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 	}
 
 	LIST_FOREACH(channel_node, &priv->nodes, next) {
-		int wfq_grp = 0, is_wfq_grp = 0, conf[DPNI_MAX_TC];
+		int wfq_grp = 0, is_wfq_grp = 0, conf[priv->nb_tx_queues];
 		struct dpni_tx_priorities_cfg prio_cfg;
 
 		memset(&prio_cfg, 0, sizeof(prio_cfg));
@@ -767,6 +768,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 		if (channel_node->level_id != CHANNEL_LEVEL)
 			continue;
 
+		conf_schedule = false;
 		LIST_FOREACH(leaf_node, &priv->nodes, next) {
 			struct dpaa2_queue *leaf_dpaa2_q;
 			uint8_t leaf_tc_id;
@@ -789,6 +791,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 			if (leaf_node->parent != channel_node)
 				continue;
 
+			conf_schedule = true;
 			leaf_dpaa2_q =  (struct dpaa2_queue *)dev->data->tx_queues[leaf_node->id];
 			leaf_tc_id = leaf_dpaa2_q->tc_index;
 			/* Process sibling leaf nodes */
@@ -829,8 +832,8 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 						goto out;
 					}
 					is_wfq_grp = 1;
-					conf[temp_leaf_node->id] = 1;
 				}
+				conf[temp_leaf_node->id] = 1;
 			}
 			if (is_wfq_grp) {
 				if (wfq_grp == 0) {
@@ -851,6 +854,9 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 			}
 			conf[leaf_node->id] = 1;
 		}
+		if (!conf_schedule)
+			continue;
+
 		if (wfq_grp > 1) {
 			prio_cfg.separate_groups = 1;
 			if (prio_cfg.prio_group_B < prio_cfg.prio_group_A) {
@@ -864,6 +870,16 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 
 		prio_cfg.prio_group_A = 1;
 		prio_cfg.channel_idx = channel_node->channel_id;
+		DPAA2_PMD_DEBUG("########################################\n");
+		DPAA2_PMD_DEBUG("Channel idx = %d\n", prio_cfg.channel_idx);
+		for (t = 0; t < DPNI_MAX_TC; t++)
+			DPAA2_PMD_DEBUG("tc = %d mode = %d, delta = %d\n", t,
+					prio_cfg.tc_sched[t].mode,
+					prio_cfg.tc_sched[t].delta_bandwidth);
+
+		DPAA2_PMD_DEBUG("prioritya = %d, priorityb = %d, separate grps"
+				" = %d\n\n", prio_cfg.prio_group_A,
+				prio_cfg.prio_group_B, prio_cfg.separate_groups);
 		ret = dpni_set_tx_priorities(dpni, 0, priv->token, &prio_cfg);
 		if (ret) {
 			ret = -rte_tm_error_set(error, EINVAL,
@@ -871,15 +887,6 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 					"Scheduling Failed\n");
 			goto out;
 		}
-		DPAA2_PMD_DEBUG("########################################\n");
-		DPAA2_PMD_DEBUG("Channel idx = %d\n", prio_cfg.channel_idx);
-		for (t = 0; t < DPNI_MAX_TC; t++) {
-			DPAA2_PMD_DEBUG("tc = %d mode = %d ", t, prio_cfg.tc_sched[t].mode);
-			DPAA2_PMD_DEBUG("delta = %d\n", prio_cfg.tc_sched[t].delta_bandwidth);
-		}
-		DPAA2_PMD_DEBUG("prioritya = %d\n", prio_cfg.prio_group_A);
-		DPAA2_PMD_DEBUG("priorityb = %d\n", prio_cfg.prio_group_B);
-		DPAA2_PMD_DEBUG("separate grps = %d\n\n", prio_cfg.separate_groups);
 	}
 	return 0;
 
-- 
2.25.1



* [v1 36/43] net/dpaa2: support software taildrop
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (34 preceding siblings ...)
  2024-09-13  5:59 ` [v1 35/43] net/dpaa2: fix memory corruption in TM vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 37/43] net/dpaa2: check IOVA before sending MC command vanshika.shukla
                   ` (7 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

Add software-based taildrop support: when Tx congestion persists on a
queue configured through the traffic manager, pending packets are
dropped in software instead of being returned to the caller.
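
A condensed sketch of the Tx-path change (taken from the diff below):
once congestion persists beyond CONG_RETRY_COUNT on a queue flagged
tm_sw_td, the remaining mbufs are freed in software:

    while (qbman_result_SCN_state(dpaa2_q->cscn)) {
        retry_count++;
        if (retry_count > CONG_RETRY_COUNT) {
            if (dpaa2_q->tm_sw_td)
                goto sw_td;   /* free pending packets, count as Tx */
            goto skip_tx;     /* report partial transmission instead */
        }
    }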

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  2 +-
 drivers/net/dpaa2/dpaa2_rxtx.c          | 24 +++++++++++++++++++++++-
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index c5900bd06a..03b9088cc6 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -179,7 +179,7 @@ struct __rte_cache_aligned dpaa2_queue {
 	struct dpaa2_queue *tx_conf_queue;
 	int32_t eventfd;	/*!< Event Fd of this queue */
 	uint16_t nb_desc;
-	uint16_t resv;
+	uint16_t tm_sw_td;	/*!< TM software taildrop */
 	uint64_t offloads;
 	uint64_t lpbk_cntx;
 };
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 4bb785aa49..065b219ffd 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1297,8 +1297,11 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		while (qbman_result_SCN_state(dpaa2_q->cscn)) {
 			retry_count++;
 			/* Retry for some time before giving up */
-			if (retry_count > CONG_RETRY_COUNT)
+			if (retry_count > CONG_RETRY_COUNT) {
+				if (dpaa2_q->tm_sw_td)
+					goto sw_td;
 				goto skip_tx;
+			}
 		}
 
 		frames_to_send = (nb_pkts > dpaa2_eqcr_size) ?
@@ -1490,6 +1493,25 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
 	}
 
+	return num_tx;
+sw_td:
+	loop = 0;
+	while (loop < num_tx) {
+		if (unlikely(RTE_MBUF_HAS_EXTBUF(*bufs)))
+			rte_pktmbuf_free(*bufs);
+		bufs++;
+		loop++;
+	}
+
+	/* free the pending buffers */
+	while (nb_pkts) {
+		rte_pktmbuf_free(*bufs);
+		bufs++;
+		nb_pkts--;
+		num_tx++;
+	}
+	dpaa2_q->tx_pkts += num_tx;
+
 	return num_tx;
 }
 
-- 
2.25.1



* [v1 37/43] net/dpaa2: check IOVA before sending MC command
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (35 preceding siblings ...)
  2024-09-13  5:59 ` [v1 36/43] net/dpaa2: support software taildrop vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 38/43] net/dpaa2: improve DPDMUX error behavior settings vanshika.shukla
                   ` (6 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Convert VA to IOVA and validate the IOVA before sending a parameter
to the MC. An invalid parameter IOVA sent to the MC hangs the system,
which cannot recover without a power reset.
The IOVA is not checked in the data path because:
1) The MC is not involved there and errors can be recovered.
2) The IOVA check slightly impacts performance.
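
A minimal sketch of the check pattern applied throughout this patch
(illustrative only; the mapping-length validation is assumed to live
inside the driver's DPAA2_VADDR_TO_IOVA_AND_CHECK macro):

    iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(vaddr, size);
    if (iova == RTE_BAD_IOVA) {
        DPAA2_PMD_ERR("No IOMMU map for %p", vaddr);
        return -ENOBUFS;       /* fail before the MC ever sees it */
    }
    cfg.key_cfg_iova = iova;   /* now safe to hand to the MC */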

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c |  63 +++--
 drivers/net/dpaa2/dpaa2_ethdev.c       | 338 +++++++++++++------------
 drivers/net/dpaa2/dpaa2_ethdev.h       |   3 +
 drivers/net/dpaa2/dpaa2_flow.c         |  67 ++++-
 drivers/net/dpaa2/dpaa2_sparser.c      |  27 +-
 drivers/net/dpaa2/dpaa2_tm.c           |  43 ++--
 6 files changed, 321 insertions(+), 220 deletions(-)

diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 4d33b51fea..20b37a97bb 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -30,8 +30,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 
 int
 rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
-			      uint16_t offset,
-			      uint8_t size)
+	uint16_t offset, uint8_t size)
 {
 	struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
@@ -52,8 +51,8 @@ rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	p_params = rte_zmalloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_zmalloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -73,17 +72,23 @@ rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
 	}
 
 	memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
-	tc_cfg.key_cfg_iova = (size_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.dist_size = eth_dev->data->nb_rx_queues;
 	tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
 
 	ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, priv->token, tc_index,
-				  &tc_cfg);
+			&tc_cfg);
 	rte_free(p_params);
 	if (ret) {
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("Set RX TC dist failed(err=%d)", ret);
 		return ret;
 	}
 
@@ -115,8 +120,8 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 	if (tc_dist_queues > priv->dist_queues)
 		tc_dist_queues = priv->dist_queues;
 
-	p_params = rte_malloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_malloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -133,7 +138,15 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 		return ret;
 	}
 
-	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.dist_size = tc_dist_queues;
 	tc_cfg.enable = true;
 	tc_cfg.tc = tc_index;
@@ -148,17 +161,15 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW, priv->token, &tc_cfg);
 	rte_free(p_params);
 	if (ret) {
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("RX Hash dist for failed(err=%d)", ret);
 		return ret;
 	}
 
 	return 0;
 }
 
-int dpaa2_remove_flow_dist(
-	struct rte_eth_dev *eth_dev,
+int
+dpaa2_remove_flow_dist(struct rte_eth_dev *eth_dev,
 	uint8_t tc_index)
 {
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
@@ -168,8 +179,8 @@ int dpaa2_remove_flow_dist(
 	void *p_params;
 	int ret;
 
-	p_params = rte_malloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_malloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -177,7 +188,15 @@ int dpaa2_remove_flow_dist(
 
 	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
 	tc_cfg.dist_size = 0;
-	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.enable = true;
 	tc_cfg.tc = tc_index;
 
@@ -194,9 +213,7 @@ int dpaa2_remove_flow_dist(
 			&tc_cfg);
 	rte_free(p_params);
 	if (ret)
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("RX hash dist failed(err=%d)", ret);
 	return ret;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 21955ad903..9f859aef66 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -123,9 +123,9 @@ dpaa2_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
-		return -1;
+		return -EINVAL;
 	}
 
 	if (on)
@@ -174,8 +174,8 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 static int
 dpaa2_vlan_tpid_set(struct rte_eth_dev *dev,
-		      enum rte_vlan_type vlan_type __rte_unused,
-		      uint16_t tpid)
+	enum rte_vlan_type vlan_type __rte_unused,
+	uint16_t tpid)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct fsl_mc_io *dpni = dev->process_private;
@@ -212,8 +212,7 @@ dpaa2_vlan_tpid_set(struct rte_eth_dev *dev,
 
 static int
 dpaa2_fw_version_get(struct rte_eth_dev *dev,
-		     char *fw_version,
-		     size_t fw_size)
+	char *fw_version, size_t fw_size)
 {
 	int ret;
 	struct fsl_mc_io *dpni = dev->process_private;
@@ -245,7 +244,8 @@ dpaa2_fw_version_get(struct rte_eth_dev *dev,
 }
 
 static int
-dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+dpaa2_dev_info_get(struct rte_eth_dev *dev,
+	struct rte_eth_dev_info *dev_info)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
@@ -291,8 +291,8 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 static int
 dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
-			__rte_unused uint16_t queue_id,
-			struct rte_eth_burst_mode *mode)
+	__rte_unused uint16_t queue_id,
+	struct rte_eth_burst_mode *mode)
 {
 	struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
 	int ret = -EINVAL;
@@ -368,7 +368,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	uint8_t num_rxqueue_per_tc;
 	struct dpaa2_queue *mc_q, *mcq;
 	uint32_t tot_queues;
-	int i;
+	int i, ret;
 	struct dpaa2_queue *dpaa2_q;
 
 	PMD_INIT_FUNC_TRACE();
@@ -382,7 +382,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 			  RTE_CACHE_LINE_SIZE);
 	if (!mc_q) {
 		DPAA2_PMD_ERR("Memory allocation failed for rx/tx queues");
-		return -1;
+		return -ENOBUFS;
 	}
 
 	for (i = 0; i < priv->nb_rx_queues; i++) {
@@ -404,8 +404,10 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	if (dpaa2_enable_err_queue) {
 		priv->rx_err_vq = rte_zmalloc("dpni_rx_err",
 			sizeof(struct dpaa2_queue), 0);
-		if (!priv->rx_err_vq)
+		if (!priv->rx_err_vq) {
+			ret = -ENOBUFS;
 			goto fail;
+		}
 
 		dpaa2_q = (struct dpaa2_queue *)priv->rx_err_vq;
 		dpaa2_q->q_storage = rte_malloc("err_dq_storage",
@@ -424,13 +426,15 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 
 	for (i = 0; i < priv->nb_tx_queues; i++) {
 		mc_q->eth_data = dev->data;
-		mc_q->flow_id = 0xffff;
+		mc_q->flow_id = DPAA2_INVALID_FLOW_ID;
 		priv->tx_vq[i] = mc_q++;
 		dpaa2_q = (struct dpaa2_queue *)priv->tx_vq[i];
 		dpaa2_q->cscn = rte_malloc(NULL,
 					   sizeof(struct qbman_result), 16);
-		if (!dpaa2_q->cscn)
+		if (!dpaa2_q->cscn) {
+			ret = -ENOBUFS;
 			goto fail_tx;
+		}
 	}
 
 	if (priv->flags & DPAA2_TX_CONF_ENABLE) {
@@ -498,7 +502,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	}
 
 	rte_free(mc_q);
-	return -1;
+	return ret;
 }
 
 static void
@@ -718,14 +722,14 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
  */
 static int
 dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
-			 uint16_t rx_queue_id,
-			 uint16_t nb_rx_desc,
-			 unsigned int socket_id __rte_unused,
-			 const struct rte_eth_rxconf *rx_conf,
-			 struct rte_mempool *mb_pool)
+	uint16_t rx_queue_id,
+	uint16_t nb_rx_desc,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_rxconf *rx_conf,
+	struct rte_mempool *mb_pool)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct dpaa2_queue *dpaa2_q;
 	struct dpni_queue cfg;
 	uint8_t options = 0;
@@ -747,8 +751,8 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* Rx deferred start is not supported */
 	if (rx_conf->rx_deferred_start) {
-		DPAA2_PMD_ERR("%p:Rx deferred start not supported",
-				(void *)dev);
+		DPAA2_PMD_ERR("%s:Rx deferred start not supported",
+			dev->data->name);
 		return -EINVAL;
 	}
 
@@ -764,7 +768,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		if (ret)
 			return ret;
 	}
-	dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+	dpaa2_q = priv->rx_vq[rx_queue_id];
 	dpaa2_q->mb_pool = mb_pool; /**< mbuf pool to populate RX ring. */
 	dpaa2_q->bp_array = rte_dpaa2_bpid_info;
 	dpaa2_q->nb_desc = UINT16_MAX;
@@ -790,7 +794,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		cfg.cgid = i;
 		dpaa2_q->cgid = cfg.cgid;
 	} else {
-		dpaa2_q->cgid = 0xff;
+		dpaa2_q->cgid = DPAA2_INVALID_CGID;
 	}
 
 	/*if ls2088 or rev2 device, enable the stashing */
@@ -811,10 +815,10 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			cfg.flc.value |= 0x14;
 	}
 	ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_RX,
-			     dpaa2_q->tc_index, flow_id, options, &cfg);
+			dpaa2_q->tc_index, flow_id, options, &cfg);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in setting the rx flow: = %d", ret);
-		return -1;
+		return ret;
 	}
 
 	if (!(priv->flags & DPAA2_RX_TAILDROP_OFF)) {
@@ -827,7 +831,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		 * There is no HW restriction, but number of CGRs are limited,
 		 * hence this restriction is placed.
 		 */
-		if (dpaa2_q->cgid != 0xff) {
+		if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 			/*enabling per rx queue congestion control */
 			taildrop.threshold = nb_rx_desc;
 			taildrop.units = DPNI_CONGESTION_UNIT_FRAMES;
@@ -853,15 +857,15 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		}
 		if (ret) {
 			DPAA2_PMD_ERR("Error in setting taildrop. err=(%d)",
-				      ret);
-			return -1;
+				ret);
+			return ret;
 		}
 	} else { /* Disable tail Drop */
 		struct dpni_taildrop taildrop = {0};
 		DPAA2_PMD_INFO("Tail drop is disabled on queue");
 
 		taildrop.enable = 0;
-		if (dpaa2_q->cgid != 0xff) {
+		if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
 					DPNI_CP_CONGESTION_GROUP, DPNI_QUEUE_RX,
 					dpaa2_q->tc_index,
@@ -873,8 +877,8 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		}
 		if (ret) {
 			DPAA2_PMD_ERR("Error in setting taildrop. err=(%d)",
-				      ret);
-			return -1;
+				ret);
+			return ret;
 		}
 	}
 
@@ -884,16 +888,14 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 static int
 dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
-			 uint16_t tx_queue_id,
-			 uint16_t nb_tx_desc,
-			 unsigned int socket_id __rte_unused,
-			 const struct rte_eth_txconf *tx_conf)
+	uint16_t tx_queue_id,
+	uint16_t nb_tx_desc,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_txconf *tx_conf)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct dpaa2_queue *dpaa2_q = (struct dpaa2_queue *)
-		priv->tx_vq[tx_queue_id];
-	struct dpaa2_queue *dpaa2_tx_conf_q = (struct dpaa2_queue *)
-		priv->tx_conf_vq[tx_queue_id];
+	struct dpaa2_queue *dpaa2_q = priv->tx_vq[tx_queue_id];
+	struct dpaa2_queue *dpaa2_tx_conf_q = priv->tx_conf_vq[tx_queue_id];
 	struct fsl_mc_io *dpni = dev->process_private;
 	struct dpni_queue tx_conf_cfg;
 	struct dpni_queue tx_flow_cfg;
@@ -903,13 +905,14 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	struct dpni_queue_id qid;
 	uint32_t tc_id;
 	int ret;
+	uint64_t iova;
 
 	PMD_INIT_FUNC_TRACE();
 
 	/* Tx deferred start is not supported */
 	if (tx_conf->tx_deferred_start) {
-		DPAA2_PMD_ERR("%p:Tx deferred start not supported",
-				(void *)dev);
+		DPAA2_PMD_ERR("%s:Tx deferred start not supported",
+			dev->data->name);
 		return -EINVAL;
 	}
 
@@ -917,7 +920,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	dpaa2_q->offloads = tx_conf->offloads;
 
 	/* Return if queue already configured */
-	if (dpaa2_q->flow_id != 0xffff) {
+	if (dpaa2_q->flow_id != DPAA2_INVALID_FLOW_ID) {
 		dev->data->tx_queues[tx_queue_id] = dpaa2_q;
 		return 0;
 	}
@@ -959,7 +962,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		DPAA2_PMD_ERR("Error in setting the tx flow: "
 			"tc_id=%d, flow=%d err=%d",
 			tc_id, flow_id, ret);
-			return -1;
+			return ret;
 	}
 
 	dpaa2_q->flow_id = flow_id;
@@ -967,11 +970,11 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	dpaa2_q->tc_index = tc_id;
 
 	ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX, ((channel_id << 8) | dpaa2_q->tc_index),
-			     dpaa2_q->flow_id, &tx_flow_cfg, &qid);
+			DPNI_QUEUE_TX, ((channel_id << 8) | dpaa2_q->tc_index),
+			dpaa2_q->flow_id, &tx_flow_cfg, &qid);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-		return -1;
+		return ret;
 	}
 	dpaa2_q->fqid = qid.fqid;
 
@@ -987,8 +990,17 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		 */
 		cong_notif_cfg.threshold_exit = (nb_tx_desc * 9) / 10;
 		cong_notif_cfg.message_ctx = 0;
-		cong_notif_cfg.message_iova =
-				(size_t)DPAA2_VADDR_TO_IOVA(dpaa2_q->cscn);
+
+		iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(dpaa2_q->cscn,
+			sizeof(struct qbman_result));
+		if (iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("No IOMMU map for cscn(%p)(size=%x)",
+				dpaa2_q->cscn, (uint32_t)sizeof(struct qbman_result));
+
+			return -ENOBUFS;
+		}
+
+		cong_notif_cfg.message_iova = iova;
 		cong_notif_cfg.dest_cfg.dest_type = DPNI_DEST_NONE;
 		cong_notif_cfg.notification_mode =
 					 DPNI_CONG_OPT_WRITE_MEM_ON_ENTER |
@@ -996,16 +1008,13 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 					 DPNI_CONG_OPT_COHERENT_WRITE;
 		cong_notif_cfg.cg_point = DPNI_CP_QUEUE;
 
-		ret = dpni_set_congestion_notification(dpni, CMD_PRI_LOW,
-						       priv->token,
-						       DPNI_QUEUE_TX,
-						       ((channel_id << 8) | tc_id),
-						       &cong_notif_cfg);
+		ret = dpni_set_congestion_notification(dpni,
+				CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
+				((channel_id << 8) | tc_id), &cong_notif_cfg);
 		if (ret) {
-			DPAA2_PMD_ERR(
-			   "Error in setting tx congestion notification: "
-			   "err=%d", ret);
-			return -ret;
+			DPAA2_PMD_ERR("Set TX congestion notification err=%d",
+			   ret);
+			return ret;
 		}
 	}
 	dpaa2_q->cb_eqresp_free = dpaa2_dev_free_eqresp_buf;
@@ -1016,22 +1025,24 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		options = options | DPNI_QUEUE_OPT_USER_CTX;
 		tx_conf_cfg.user_context = (size_t)(dpaa2_q);
 		ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX_CONFIRM, ((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
-			     dpaa2_tx_conf_q->flow_id, options, &tx_conf_cfg);
+				DPNI_QUEUE_TX_CONFIRM,
+				((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
+				dpaa2_tx_conf_q->flow_id,
+				options, &tx_conf_cfg);
 		if (ret) {
-			DPAA2_PMD_ERR("Error in setting the tx conf flow: "
-			      "tc_index=%d, flow=%d err=%d",
-			      dpaa2_tx_conf_q->tc_index,
-			      dpaa2_tx_conf_q->flow_id, ret);
-			return -1;
+			DPAA2_PMD_ERR("Set TC[%d].TX[%d] conf flow err=%d",
+				dpaa2_tx_conf_q->tc_index,
+				dpaa2_tx_conf_q->flow_id, ret);
+			return ret;
 		}
 
 		ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX_CONFIRM, ((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
-			     dpaa2_tx_conf_q->flow_id, &tx_conf_cfg, &qid);
+				DPNI_QUEUE_TX_CONFIRM,
+				((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
+				dpaa2_tx_conf_q->flow_id, &tx_conf_cfg, &qid);
 		if (ret) {
 			DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-			return -1;
+			return ret;
 		}
 		dpaa2_tx_conf_q->fqid = qid.fqid;
 	}
@@ -1043,8 +1054,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct dpaa2_queue *dpaa2_q = dev->data->rx_queues[rx_queue_id];
 	struct dpaa2_dev_priv *priv = dpaa2_q->eth_data->dev_private;
-	struct fsl_mc_io *dpni =
-		(struct fsl_mc_io *)priv->eth_dev->process_private;
+	struct fsl_mc_io *dpni = priv->eth_dev->process_private;
 	uint8_t options = 0;
 	int ret;
 	struct dpni_queue cfg;
@@ -1054,7 +1064,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	total_nb_rx_desc -= dpaa2_q->nb_desc;
 
-	if (dpaa2_q->cgid != 0xff) {
+	if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 		options = DPNI_QUEUE_OPT_CLEAR_CGID;
 		cfg.cgid = dpaa2_q->cgid;
 
@@ -1066,7 +1076,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 			DPAA2_PMD_ERR("Unable to clear CGR from q=%u err=%d",
 					dpaa2_q->fqid, ret);
 		priv->cgid_in_use[dpaa2_q->cgid] = 0;
-		dpaa2_q->cgid = 0xff;
+		dpaa2_q->cgid = DPAA2_INVALID_CGID;
 	}
 }
 
@@ -1230,10 +1240,10 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
 	dpaa2_dev_set_link_up(dev);
 
 	for (i = 0; i < data->nb_rx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)data->rx_queues[i];
+		dpaa2_q = data->rx_queues[i];
 		ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-				     DPNI_QUEUE_RX, dpaa2_q->tc_index,
-				       dpaa2_q->flow_id, &cfg, &qid);
+				DPNI_QUEUE_RX, dpaa2_q->tc_index,
+				dpaa2_q->flow_id, &cfg, &qid);
 		if (ret) {
 			DPAA2_PMD_ERR("Error in getting flow information: "
 				      "err=%d", ret);
@@ -1250,7 +1260,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
 						ret);
 			return ret;
 		}
-		dpaa2_q = (struct dpaa2_queue *)priv->rx_err_vq;
+		dpaa2_q = priv->rx_err_vq;
 		dpaa2_q->fqid = qid.fqid;
 		dpaa2_q->eth_data = dev->data;
 
@@ -1315,7 +1325,7 @@ static int
 dpaa2_dev_stop(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int ret;
 	struct rte_eth_link link;
 	struct rte_device *rdev = dev->device;
@@ -1368,7 +1378,7 @@ static int
 dpaa2_dev_close(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int i, ret;
 	struct rte_eth_link link;
 
@@ -1379,7 +1389,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 
 	if (!dpni) {
 		DPAA2_PMD_WARN("Already closed or not started");
-		return -1;
+		return -EINVAL;
 	}
 
 	dpaa2_tm_deinit(dev);
@@ -1388,7 +1398,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	ret = dpni_reset(dpni, CMD_PRI_LOW, priv->token);
 	if (ret) {
 		DPAA2_PMD_ERR("Failure cleaning dpni device: err=%d", ret);
-		return -1;
+		return ret;
 	}
 
 	memset(&link, 0, sizeof(link));
@@ -1400,7 +1410,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	ret = dpni_close(dpni, CMD_PRI_LOW, priv->token);
 	if (ret) {
 		DPAA2_PMD_ERR("Failure closing dpni device with err code %d",
-			      ret);
+			ret);
 	}
 
 	/* Free the allocated memory for ethernet private data and dpni*/
@@ -1409,18 +1419,17 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	rte_free(dpni);
 
 	for (i = 0; i < MAX_TCS; i++)
-		rte_free((void *)(size_t)priv->extract.tc_extract_param[i]);
+		rte_free(priv->extract.tc_extract_param[i]);
 
 	if (priv->extract.qos_extract_param)
-		rte_free((void *)(size_t)priv->extract.qos_extract_param);
+		rte_free(priv->extract.qos_extract_param);
 
 	DPAA2_PMD_INFO("%s: netdev deleted", dev->data->name);
 	return 0;
 }
 
 static int
-dpaa2_dev_promiscuous_enable(
-		struct rte_eth_dev *dev)
+dpaa2_dev_promiscuous_enable(struct rte_eth_dev *dev)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
@@ -1480,7 +1489,7 @@ dpaa2_dev_allmulticast_enable(
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1501,7 +1510,7 @@ dpaa2_dev_allmulticast_disable(struct rte_eth_dev *dev)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1526,13 +1535,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
 				+ VLAN_TAG_SIZE;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1544,7 +1553,7 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 					frame_size - RTE_ETHER_CRC_LEN);
 	if (ret) {
 		DPAA2_PMD_ERR("Setting the max frame length failed");
-		return -1;
+		return ret;
 	}
 	dev->data->mtu = mtu;
 	DPAA2_PMD_INFO("MTU configured for the device: %d", mtu);
@@ -1553,36 +1562,35 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 static int
 dpaa2_dev_add_mac_addr(struct rte_eth_dev *dev,
-		       struct rte_ether_addr *addr,
-		       __rte_unused uint32_t index,
-		       __rte_unused uint32_t pool)
+	struct rte_ether_addr *addr,
+	__rte_unused uint32_t index,
+	__rte_unused uint32_t pool)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
-		return -1;
+		return -EINVAL;
 	}
 
 	ret = dpni_add_mac_addr(dpni, CMD_PRI_LOW, priv->token,
 				addr->addr_bytes, 0, 0, 0);
 	if (ret)
-		DPAA2_PMD_ERR(
-			"error: Adding the MAC ADDR failed: err = %d", ret);
-	return 0;
+		DPAA2_PMD_ERR("ERR(%d) Adding the MAC ADDR failed", ret);
+	return ret;
 }
 
 static void
 dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
-			  uint32_t index)
+	uint32_t index)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct rte_eth_dev_data *data = dev->data;
 	struct rte_ether_addr *macaddr;
 
@@ -1590,7 +1598,7 @@ dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
 
 	macaddr = &data->mac_addrs[index];
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return;
 	}
@@ -1604,15 +1612,15 @@ dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
 
 static int
 dpaa2_dev_set_mac_addr(struct rte_eth_dev *dev,
-		       struct rte_ether_addr *addr)
+	struct rte_ether_addr *addr)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1621,19 +1629,18 @@ dpaa2_dev_set_mac_addr(struct rte_eth_dev *dev,
 					priv->token, addr->addr_bytes);
 
 	if (ret)
-		DPAA2_PMD_ERR(
-			"error: Setting the MAC ADDR failed %d", ret);
+		DPAA2_PMD_ERR("ERR(%d) Setting the MAC ADDR failed", ret);
 
 	return ret;
 }
 
-static
-int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
-			 struct rte_eth_stats *stats)
+static int
+dpaa2_dev_stats_get(struct rte_eth_dev *dev,
+	struct rte_eth_stats *stats)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
-	int32_t  retcode;
+	struct fsl_mc_io *dpni = dev->process_private;
+	int32_t retcode;
 	uint8_t page0 = 0, page1 = 1, page2 = 2;
 	union dpni_statistics value;
 	int i;
@@ -1688,8 +1695,8 @@ int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
 	/* Fill in per queue stats */
 	for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) &&
 		(i < priv->nb_rx_queues || i < priv->nb_tx_queues); ++i) {
-		dpaa2_rxq = (struct dpaa2_queue *)priv->rx_vq[i];
-		dpaa2_txq = (struct dpaa2_queue *)priv->tx_vq[i];
+		dpaa2_rxq = priv->rx_vq[i];
+		dpaa2_txq = priv->tx_vq[i];
 		if (dpaa2_rxq)
 			stats->q_ipackets[i] = dpaa2_rxq->rx_pkts;
 		if (dpaa2_txq)
@@ -1708,19 +1715,20 @@ int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
 };
 
 static int
-dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
-		     unsigned int n)
+dpaa2_dev_xstats_get(struct rte_eth_dev *dev,
+	struct rte_eth_xstat *xstats, unsigned int n)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
-	int32_t  retcode;
+	int32_t retcode;
 	union dpni_statistics value[5] = {};
 	unsigned int i = 0, num = RTE_DIM(dpaa2_xstats_strings);
+	uint8_t page_id, stats_id;
 
 	if (n < num)
 		return num;
 
-	if (xstats == NULL)
+	if (!xstats)
 		return 0;
 
 	/* Get Counters from page_0*/
@@ -1755,8 +1763,9 @@ dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 
 	for (i = 0; i < num; i++) {
 		xstats[i].id = i;
-		xstats[i].value = value[dpaa2_xstats_strings[i].page_id].
-			raw.counter[dpaa2_xstats_strings[i].stats_id];
+		page_id = dpaa2_xstats_strings[i].page_id;
+		stats_id = dpaa2_xstats_strings[i].stats_id;
+		xstats[i].value = value[page_id].raw.counter[stats_id];
 	}
 	return i;
 err:
@@ -1766,8 +1775,8 @@ dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 
 static int
 dpaa2_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
-		       struct rte_eth_xstat_name *xstats_names,
-		       unsigned int limit)
+	struct rte_eth_xstat_name *xstats_names,
+	unsigned int limit)
 {
 	unsigned int i, stat_cnt = RTE_DIM(dpaa2_xstats_strings);
 
@@ -1785,16 +1794,16 @@ dpaa2_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 
 static int
 dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
-		       uint64_t *values, unsigned int n)
+	uint64_t *values, unsigned int n)
 {
 	unsigned int i, stat_cnt = RTE_DIM(dpaa2_xstats_strings);
 	uint64_t values_copy[stat_cnt];
+	uint8_t page_id, stats_id;
 
 	if (!ids) {
 		struct dpaa2_dev_priv *priv = dev->data->dev_private;
-		struct fsl_mc_io *dpni =
-			(struct fsl_mc_io *)dev->process_private;
-		int32_t  retcode;
+		struct fsl_mc_io *dpni = dev->process_private;
+		int32_t retcode;
 		union dpni_statistics value[5] = {};
 
 		if (n < stat_cnt)
@@ -1828,8 +1837,9 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 			return 0;
 
 		for (i = 0; i < stat_cnt; i++) {
-			values[i] = value[dpaa2_xstats_strings[i].page_id].
-				raw.counter[dpaa2_xstats_strings[i].stats_id];
+			page_id = dpaa2_xstats_strings[i].page_id;
+			stats_id = dpaa2_xstats_strings[i].stats_id;
+			values[i] = value[page_id].raw.counter[stats_id];
 		}
 		return stat_cnt;
 	}
@@ -1839,7 +1849,7 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 	for (i = 0; i < n; i++) {
 		if (ids[i] >= stat_cnt) {
 			DPAA2_PMD_ERR("xstats id value isn't valid");
-			return -1;
+			return -EINVAL;
 		}
 		values[i] = values_copy[ids[i]];
 	}
@@ -1847,8 +1857,7 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 }
 
 static int
-dpaa2_xstats_get_names_by_id(
-	struct rte_eth_dev *dev,
+dpaa2_xstats_get_names_by_id(struct rte_eth_dev *dev,
 	const uint64_t *ids,
 	struct rte_eth_xstat_name *xstats_names,
 	unsigned int limit)
@@ -1875,14 +1884,14 @@ static int
 dpaa2_dev_stats_reset(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int retcode;
 	int i;
 	struct dpaa2_queue *dpaa2_q;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1893,13 +1902,13 @@ dpaa2_dev_stats_reset(struct rte_eth_dev *dev)
 
 	/* Reset the per queue stats in dpaa2_queue structure */
 	for (i = 0; i < priv->nb_rx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[i];
+		dpaa2_q = priv->rx_vq[i];
 		if (dpaa2_q)
 			dpaa2_q->rx_pkts = 0;
 	}
 
 	for (i = 0; i < priv->nb_tx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)priv->tx_vq[i];
+		dpaa2_q = priv->tx_vq[i];
 		if (dpaa2_q)
 			dpaa2_q->tx_pkts = 0;
 	}
@@ -1918,12 +1927,12 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct rte_eth_link link;
 	struct dpni_link_state state = {0};
 	uint8_t count;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return 0;
 	}
@@ -1933,7 +1942,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 					  &state);
 		if (ret < 0) {
 			DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
-			return -1;
+			return ret;
 		}
 		if (state.up == RTE_ETH_LINK_DOWN &&
 		    wait_to_complete)
@@ -1952,7 +1961,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	ret = rte_eth_linkstatus_set(dev, &link);
-	if (ret == -1)
+	if (ret < 0)
 		DPAA2_PMD_DEBUG("No change in status");
 	else
 		DPAA2_PMD_INFO("Port %d Link is %s\n", dev->data->port_id,
@@ -1975,9 +1984,9 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	struct dpni_link_state state = {0};
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return ret;
 	}
@@ -2037,9 +2046,9 @@ dpaa2_dev_set_link_down(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("Device has not yet been configured");
 		return ret;
 	}
@@ -2091,9 +2100,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL || fc_conf == NULL) {
+	if (!dpni || !fc_conf) {
 		DPAA2_PMD_ERR("device not configured");
 		return ret;
 	}
@@ -2146,9 +2155,9 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return ret;
 	}
@@ -2391,10 +2400,10 @@ dpaa2_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 {
 	struct dpaa2_queue *rxq;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	uint16_t max_frame_length;
 
-	rxq = (struct dpaa2_queue *)dev->data->rx_queues[queue_id];
+	rxq = dev->data->rx_queues[queue_id];
 
 	qinfo->mp = rxq->mb_pool;
 	qinfo->scattered_rx = dev->data->scattered_rx;
@@ -2510,10 +2519,10 @@ static struct eth_dev_ops dpaa2_ethdev_ops = {
  * Returns the table of MAC entries (multiple entries)
  */
 static int
-populate_mac_addr(struct fsl_mc_io *dpni_dev, struct dpaa2_dev_priv *priv,
-		  struct rte_ether_addr *mac_entry)
+populate_mac_addr(struct fsl_mc_io *dpni_dev,
+	struct dpaa2_dev_priv *priv, struct rte_ether_addr *mac_entry)
 {
-	int ret;
+	int ret = 0;
 	struct rte_ether_addr phy_mac, prime_mac;
 
 	memset(&phy_mac, 0, sizeof(struct rte_ether_addr));
@@ -2571,7 +2580,7 @@ populate_mac_addr(struct fsl_mc_io *dpni_dev, struct dpaa2_dev_priv *priv,
 	return 0;
 
 cleanup:
-	return -1;
+	return ret;
 }
 
 static int
@@ -2630,7 +2639,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		return -1;
 	}
 	dpni_dev->regs = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
-	eth_dev->process_private = (void *)dpni_dev;
+	eth_dev->process_private = dpni_dev;
 
 	/* For secondary processes, the primary has done all the work */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
@@ -2659,7 +2668,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 			     "Failure in opening dpni@%d with err code %d",
 			     hw_id, ret);
 		rte_free(dpni_dev);
-		return -1;
+		return ret;
 	}
 
 	if (eth_dev->data->dev_conf.lpbk_mode)
@@ -2810,7 +2819,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	/* Init fields w.r.t. classification */
 	memset(&priv->extract.qos_key_extract, 0,
 		sizeof(struct dpaa2_key_extract));
-	priv->extract.qos_extract_param = rte_malloc(NULL, 256, 64);
+	priv->extract.qos_extract_param = rte_malloc(NULL,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE,
+		RTE_CACHE_LINE_SIZE);
 	if (!priv->extract.qos_extract_param) {
 		DPAA2_PMD_ERR("Memory alloc failed");
 		goto init_err;
@@ -2819,7 +2830,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	for (i = 0; i < MAX_TCS; i++) {
 		memset(&priv->extract.tc_key_extract[i], 0,
 			sizeof(struct dpaa2_key_extract));
-		priv->extract.tc_extract_param[i] = rte_malloc(NULL, 256, 64);
+		priv->extract.tc_extract_param[i] = rte_malloc(NULL,
+			DPAA2_EXTRACT_PARAM_MAX_SIZE,
+			RTE_CACHE_LINE_SIZE);
 		if (!priv->extract.tc_extract_param[i]) {
 			DPAA2_PMD_ERR("Memory alloc failed");
 			goto init_err;
@@ -2979,12 +2992,11 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 
 	if ((DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE) >
 		RTE_PKTMBUF_HEADROOM) {
-		DPAA2_PMD_ERR(
-		"RTE_PKTMBUF_HEADROOM(%d) shall be > DPAA2 Annotation req(%d)",
-		RTE_PKTMBUF_HEADROOM,
-		DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE);
+		DPAA2_PMD_ERR("RTE_PKTMBUF_HEADROOM(%d) < DPAA2 Annotation(%d)",
+			RTE_PKTMBUF_HEADROOM,
+			DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE);
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index db918725a7..a2b9fc5678 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -31,6 +31,9 @@
 #define MAX_DPNI		8
 #define DPAA2_MAX_CHANNELS	16
 
+#define DPAA2_EXTRACT_PARAM_MAX_SIZE 256
+#define DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE 256
+
 #define DPAA2_RX_DEFAULT_NBDESC 512
 
 #define DPAA2_ETH_MAX_LEN (RTE_ETHER_MTU + \
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 3afe331023..54f38e2e25 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -4322,7 +4322,14 @@ dpaa2_configure_fs_rss_table(struct dpaa2_dev_priv *priv,
 
 	tc_extract = &priv->extract.tc_key_extract[tc_id];
 	key_cfg_buf = priv->extract.tc_extract_param[tc_id];
-	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_buf,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_buf);
+
+		return -ENOBUFS;
+	}
 
 	key_max_size = tc_extract->key_profile.key_max_size;
 	entry_size = dpaa2_flow_entry_size(key_max_size);
@@ -4406,7 +4413,14 @@ dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
 
 	qos_extract = &priv->extract.qos_key_extract;
 	key_cfg_buf = priv->extract.qos_extract_param;
-	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_buf,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_buf);
+
+		return -ENOBUFS;
+	}
 
 	key_max_size = qos_extract->key_profile.key_max_size;
 	entry_size = dpaa2_flow_entry_size(key_max_size);
@@ -4963,6 +4977,7 @@ dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	struct dpaa2_dev_flow *flow = NULL;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int ret;
+	uint64_t iova;
 
 	dpaa2_flow_control_log =
 		getenv("DPAA2_FLOW_CONTROL_LOG");
@@ -4986,34 +5001,66 @@ dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	}
 
 	/* Allocate DMA'ble memory to write the qos rules */
-	flow->qos_key_addr = rte_zmalloc(NULL, 256, 64);
+	flow->qos_key_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->qos_key_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->qos_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->qos_key_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->qos_key_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for qos key(%p)",
+			__func__, flow->qos_key_addr);
+		goto mem_failure;
+	}
+	flow->qos_rule.key_iova = iova;
 
-	flow->qos_mask_addr = rte_zmalloc(NULL, 256, 64);
+	flow->qos_mask_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->qos_mask_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->qos_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->qos_mask_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->qos_mask_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for qos mask(%p)",
+			__func__, flow->qos_mask_addr);
+		goto mem_failure;
+	}
+	flow->qos_rule.mask_iova = iova;
 
 	/* Allocate DMA'ble memory to write the FS rules */
-	flow->fs_key_addr = rte_zmalloc(NULL, 256, 64);
+	flow->fs_key_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->fs_key_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->fs_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->fs_key_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->fs_key_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for fs key(%p)",
+			__func__, flow->fs_key_addr);
+		goto mem_failure;
+	}
+	flow->fs_rule.key_iova = iova;
 
-	flow->fs_mask_addr = rte_zmalloc(NULL, 256, 64);
+	flow->fs_mask_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->fs_mask_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->fs_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->fs_mask_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->fs_mask_addr,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for fs mask(%p)",
+			__func__, flow->fs_mask_addr);
+		goto mem_failure;
+	}
+	flow->fs_rule.mask_iova = iova;
 
 	priv->curr = flow;
 
diff --git a/drivers/net/dpaa2/dpaa2_sparser.c b/drivers/net/dpaa2/dpaa2_sparser.c
index 36a14526a5..aa12e49e46 100644
--- a/drivers/net/dpaa2/dpaa2_sparser.c
+++ b/drivers/net/dpaa2/dpaa2_sparser.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2023 NXP
  */
 
 #include <rte_mbuf.h>
@@ -170,16 +170,23 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 	}
 
 	memcpy(addr, sp_param.byte_code, sp_param.size);
-	cfg.ss_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(addr));
+	cfg.ss_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(addr, sp_param.size);
+	if (cfg.ss_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("No IOMMU map for soft sequence(%p), size=%d",
+			addr, sp_param.size);
+		rte_free(addr);
+
+		return -ENOBUFS;
+	}
 
 	ret = dpni_load_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("dpni_load_sw_sequence failed\n");
+		DPAA2_PMD_ERR("dpni_load_sw_sequence failed");
 		rte_free(addr);
 		return ret;
 	}
 
-	priv->ss_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(addr));
+	priv->ss_iova = cfg.ss_iova;
 	priv->ss_offset += sp_param.size;
 	DPAA2_PMD_INFO("Soft parser loaded for dpni@%d", priv->hw_id);
 
@@ -219,7 +226,15 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 		}
 
 		memcpy(param_addr, sp_param.param_array, cfg.param_size);
-		cfg.param_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(param_addr));
+		cfg.param_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(param_addr,
+			cfg.param_size);
+		if (cfg.param_iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("%s: No IOMMU map for %p, size=%d",
+				__func__, param_addr, cfg.param_size);
+			rte_free(param_addr);
+
+			return -ENOBUFS;
+		}
 		priv->ss_param_iova = cfg.param_iova;
 	} else {
 		cfg.param_iova = 0;
@@ -227,7 +242,7 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 
 	ret = dpni_enable_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("dpni_enable_sw_sequence failed for dpni%d\n",
+		DPAA2_PMD_ERR("Soft parser enabled for dpni@%d failed",
 			priv->hw_id);
 		rte_free(param_addr);
 		return ret;
diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index 83d0d669ce..a5b7d39ed4 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2020-2021 NXP
+ * Copyright 2020-2023 NXP
  */
 
 #include <rte_ethdev.h>
@@ -572,41 +572,42 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q;
+	uint64_t iova;
 
 	memset(&tx_flow_cfg, 0, sizeof(struct dpni_queue));
-	dpaa2_q =  (struct dpaa2_queue *)dev->data->tx_queues[node->id];
+	dpaa2_q = (struct dpaa2_queue *)dev->data->tx_queues[node->id];
 	tc_id = node->parent->tc_id;
 	node->parent->tc_id++;
 	flow_id = 0;
 
-	if (dpaa2_q == NULL) {
-		DPAA2_PMD_ERR("Queue is not configured for node = %d", node->id);
-		return -1;
+	if (!dpaa2_q) {
+		DPAA2_PMD_ERR("Queue is not configured for node = %d",
+			node->id);
+		return -ENOMEM;
 	}
 
 	DPAA2_PMD_DEBUG("tc_id = %d, channel = %d\n\n", tc_id,
 			node->parent->channel_id);
 	ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
-			     ((node->parent->channel_id << 8) | tc_id),
-			     flow_id, options, &tx_flow_cfg);
+			((node->parent->channel_id << 8) | tc_id),
+			flow_id, options, &tx_flow_cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("Error in setting the tx flow: "
-		       "channel id  = %d tc_id= %d, param = 0x%x "
-		       "flow=%d err=%d", node->parent->channel_id, tc_id,
-		       ((node->parent->channel_id << 8) | tc_id), flow_id,
-		       ret);
-		return -1;
+		DPAA2_PMD_ERR("Set the TC[%d].ch[%d].TX flow[%d] (err=%d)",
+			tc_id, node->parent->channel_id, flow_id,
+			ret);
+		return ret;
 	}
 
 	dpaa2_q->flow_id = flow_id;
 	dpaa2_q->tc_index = tc_id;
 
 	ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-		DPNI_QUEUE_TX, ((node->parent->channel_id << 8) | dpaa2_q->tc_index),
-		dpaa2_q->flow_id, &tx_flow_cfg, &qid);
+			DPNI_QUEUE_TX,
+			((node->parent->channel_id << 8) | dpaa2_q->tc_index),
+			dpaa2_q->flow_id, &tx_flow_cfg, &qid);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-		return -1;
+		return ret;
 	}
 	dpaa2_q->fqid = qid.fqid;
 
@@ -621,8 +622,13 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 		 */
 		cong_notif_cfg.threshold_exit = (dpaa2_q->nb_desc * 9) / 10;
 		cong_notif_cfg.message_ctx = 0;
-		cong_notif_cfg.message_iova =
-			(size_t)DPAA2_VADDR_TO_IOVA(dpaa2_q->cscn);
+		iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(dpaa2_q->cscn,
+				sizeof(struct qbman_result));
+		if (iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("No IOMMU map for cscn(%p)", dpaa2_q->cscn);
+			return -ENOBUFS;
+		}
+		cong_notif_cfg.message_iova = iova;
 		cong_notif_cfg.dest_cfg.dest_type = DPNI_DEST_NONE;
 		cong_notif_cfg.notification_mode =
 					DPNI_CONG_OPT_WRITE_MEM_ON_ENTER |
@@ -641,6 +647,7 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 			return -ret;
 		}
 	}
+	dpaa2_q->tm_sw_td = true;
 
 	return 0;
 }
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
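
For readers following the IOVA-check pattern applied throughout the patch
above: a minimal sketch of the allocate/translate/verify sequence used at
each call site. The helper name is hypothetical; DPAA2_VADDR_TO_IOVA_AND_CHECK
(driver-internal) and the RTE_BAD_IOVA check are taken from the patch itself.

#include <errno.h>
#include <rte_malloc.h>
#include <rte_memory.h>

static int
alloc_dma_buf(void **va, uint64_t *iova, size_t len)
{
	/* Allocate cache-line aligned, DMA'ble memory. */
	*va = rte_zmalloc(NULL, len, RTE_CACHE_LINE_SIZE);
	if (!*va)
		return -ENOMEM;

	/* Translate VA to IOVA and verify the IOMMU mapping covers
	 * the whole buffer before handing it to hardware.
	 */
	*iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(*va, len);
	if (*iova == RTE_BAD_IOVA) {
		rte_free(*va);
		*va = NULL;
		return -ENOBUFS;
	}

	return 0;
}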

* [v1 38/43] net/dpaa2: improve DPDMUX error behavior settings
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (36 preceding siblings ...)
  2024-09-13  5:59 ` [v1 37/43] net/dpaa2: check IOVA before sending MC command vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 39/43] net/dpaa2: store drop priority in mbuf vanshika.shukla
                   ` (5 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena

From: Sachin Saxena <sachin.saxena@nxp.com>

This change is compatible with MC v10.36 or later.

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 4390be9789..3c9e155b23 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2021,2023 NXP
  */
 
 #include <sys/queue.h>
@@ -448,13 +448,12 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 		struct dpdmux_error_cfg mux_err_cfg;
 
 		memset(&mux_err_cfg, 0, sizeof(mux_err_cfg));
+		/* Note: The discard flag (DPDMUX_ERROR_DISC) has effect only when
+		 * ERROR_ACTION is set to DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE.
+		 */
+		mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
 		mux_err_cfg.error_action = DPDMUX_ERROR_ACTION_CONTINUE;
 
-		if (attr.method != DPDMUX_METHOD_C_VLAN_MAC)
-			mux_err_cfg.errors = DPDMUX_ERROR_DISC;
-		else
-			mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
-
 		ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
 				CMD_PRI_LOW,
 				dpdmux_dev->token, DPAA2_DPDMUX_DPMAC_IDX,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 39/43] net/dpaa2: store drop priority in mbuf
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (37 preceding siblings ...)
  2024-09-13  5:59 ` [v1 38/43] net/dpaa2: improve DPDMUX error behavior settings vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 40/43] net/dpaa2: add API to get endpoint name vanshika.shukla
                   ` (4 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Apeksha Gupta

From: Apeksha Gupta <apeksha.gupta@nxp.com>

Store the drop priority from the frame descriptor (FD) into the mbuf.

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 1 +
 drivers/net/dpaa2/dpaa2_rxtx.c          | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 03b9088cc6..de31dc6be7 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -328,6 +328,7 @@ enum qbman_fd_format {
 #define DPAA2_GET_FD_BPID(fd)	(((fd)->simple.bpid_offset & 0x00003FFF))
 #define DPAA2_GET_FD_IVP(fd)   (((fd)->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	(((fd)->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_GET_FD_DROPP(fd)  (((fd)->simple.ctrl & 0x07000000) >> 24)
 #define DPAA2_GET_FD_FRC(fd)   ((fd)->simple.frc)
 #define DPAA2_GET_FD_FLC(fd) \
 	(((uint64_t)((fd)->simple.flc_hi) << 32) + (fd)->simple.flc_lo)
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 065b219ffd..b9f1f0d05e 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -388,6 +388,7 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 	mbuf->pkt_len = mbuf->data_len;
 	mbuf->port = port_id;
 	mbuf->next = NULL;
+	mbuf->hash.sched.color = DPAA2_GET_FD_DROPP(fd);
 	rte_mbuf_refcnt_set(mbuf, 1);
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
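
A minimal Rx-side sketch of consuming the drop priority the PMD now stores
(port/queue setup omitted; the function name is illustrative; the field is
the generic rte_mbuf sched color the patch writes into):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
dump_drop_priority(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t i, nb;

	nb = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
	for (i = 0; i < nb; i++) {
		/* Drop priority copied from the FD by eth_fd_to_mbuf(). */
		printf("pkt %u: drop priority %u\n", i,
			pkts[i]->hash.sched.color);
		rte_pktmbuf_free(pkts[i]);
	}
}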

* [v1 40/43] net/dpaa2: add API to get endpoint name
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (38 preceding siblings ...)
  2024-09-13  5:59 ` [v1 39/43] net/dpaa2: store drop priority in mbuf vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 41/43] net/dpaa2: support VLAN traffic splitting vanshika.shukla
                   ` (3 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Export an API in rte_pmd_dpaa2.h to get the endpoint name of a DPAA2 port.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 24 ++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h  |  4 ++++
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  3 +++
 drivers/net/dpaa2/version.map     |  1 +
 4 files changed, 32 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 9f859aef66..4119949c77 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2900,6 +2900,30 @@ rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+const char *
+rte_pmd_dpaa2_ep_name(uint32_t eth_id)
+{
+	struct rte_eth_dev *dev;
+	struct dpaa2_dev_priv *priv;
+
+	if (eth_id >= RTE_MAX_ETHPORTS)
+		return NULL;
+
+	if (!rte_pmd_dpaa2_dev_is_dpaa2(eth_id))
+		return NULL;
+
+	dev = &rte_eth_devices[eth_id];
+	if (!dev->data)
+		return NULL;
+
+	if (!dev->data->dev_private)
+		return NULL;
+
+	priv = dev->data->dev_private;
+
+	return priv->ep_name;
+}
+
 #if defined(RTE_LIBRTE_IEEE1588)
 int
 rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index a2b9fc5678..fd6bad7f74 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -385,6 +385,10 @@ struct dpaa2_dev_priv {
 	uint8_t max_cgs;
 	uint8_t cgid_in_use[MAX_RX_QUEUES];
 
+	enum rte_dpaa2_dev_type ep_dev_type;   /**< Endpoint Device Type */
+	uint16_t ep_object_id;                 /**< Endpoint DPAA2 Object ID */
+	char ep_name[RTE_DEV_NAME_MAX_LEN];
+
 	struct extract_s extract;
 
 	uint16_t ss_offset;
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index fc52a9218e..f93af1c65f 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -130,6 +130,9 @@ rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
 __rte_experimental
 int
 rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id);
+__rte_experimental
+const char *
+rte_pmd_dpaa2_ep_name(uint32_t eth_id);
 
 #if defined(RTE_LIBRTE_IEEE1588)
 __rte_experimental
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 233c6e6b2c..35815f7777 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -18,6 +18,7 @@ EXPERIMENTAL {
 	rte_pmd_dpaa2_get_tlu_hash;
 	# added in 24.11
 	rte_pmd_dpaa2_dev_is_dpaa2;
+	rte_pmd_dpaa2_ep_name;
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
 	rte_pmd_dpaa2_mux_dump_counter;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
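
A short usage sketch of the new API (the port must be a probed DPAA2 port;
the wrapper function name is illustrative):

#include <stdio.h>
#include <rte_pmd_dpaa2.h>

static void
print_ep_name(uint32_t eth_id)
{
	const char *ep = rte_pmd_dpaa2_ep_name(eth_id);

	/* NULL is returned for non-DPAA2 or uninitialized ports. */
	if (ep)
		printf("port %u endpoint: %s\n", eth_id, ep);
}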

* [v1 41/43] net/dpaa2: support VLAN traffic splitting
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (39 preceding siblings ...)
  2024-09-13  5:59 ` [v1 40/43] net/dpaa2: add API to get endpoint name vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 42/43] net/dpaa2: add support for C-VLAN and MAC vanshika.shukla
                   ` (2 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for adding rules in DPDMUX
to split VLAN traffic based on VLAN IDs.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 3c9e155b23..c35baf4cde 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -118,6 +118,26 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	}
 	break;
 
+	case RTE_FLOW_ITEM_TYPE_VLAN:
+	{
+		const struct rte_flow_item_vlan *spec;
+
+		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
+		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_VLAN;
+		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_VLAN_TCI;
+		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FROM_FIELD;
+		kg_cfg.extracts[0].extract.from_hdr.offset = 1;
+		kg_cfg.extracts[0].extract.from_hdr.size = 1;
+		kg_cfg.num_extracts = 1;
+
+		spec = (const struct rte_flow_item_vlan *)pattern[0]->spec;
+		memcpy((void *)key_iova, (const void *)(&spec->hdr.vlan_tci),
+			sizeof(uint16_t));
+		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
+		key_size = sizeof(uint16_t);
+	}
+	break;
+
 	case RTE_FLOW_ITEM_TYPE_UDP:
 	{
 		const struct rte_flow_item_udp *spec;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
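
A minimal sketch of creating such a rule with the v1 API (VLAN ID,
destination interface and function name are illustrative; note the v1
signature still takes arrays of pointers and returns a flow handle):

#include <rte_byteorder.h>
#include <rte_flow.h>
#include <rte_pmd_dpaa2.h>

static int
mux_split_vlan(uint32_t dpdmux_id)
{
	/* Match VLAN ID 42 and steer it to dpdmux interface 1. */
	struct rte_flow_item_vlan spec = {
		.hdr.vlan_tci = rte_cpu_to_be_16(42),
	};
	struct rte_flow_item_vlan mask = {
		.hdr.vlan_tci = rte_cpu_to_be_16(0x0fff),
	};
	struct rte_flow_item item = {
		.type = RTE_FLOW_ITEM_TYPE_VLAN,
		.spec = &spec,
		.mask = &mask,
	};
	struct rte_flow_action_vf vf = { .id = 1 };
	struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_VF,
		.conf = &vf,
	};
	struct rte_flow_item *pattern[] = { &item };
	struct rte_flow_action *actions[] = { &action };

	return rte_pmd_dpaa2_mux_flow_create(dpdmux_id, pattern,
			actions) ? 0 : -1;
}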

* [v1 42/43] net/dpaa2: add support for C-VLAN and MAC
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (40 preceding siblings ...)
  2024-09-13  5:59 ` [v1 41/43] net/dpaa2: support VLAN traffic splitting vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-13  5:59 ` [v1 43/43] net/dpaa2: dpdmux single flow/multiple rules support vanshika.shukla
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for the DPDMUX_METHOD_C_VLAN_MAC method,
which implements DPDMUX classification based on C-VLAN and MAC address.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     |  2 +-
 drivers/net/dpaa2/mc/fsl_dpdmux.h | 16 ++++++++++++++++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index c35baf4cde..5c37701939 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021,2023 NXP
+ * Copyright 2018-2024 NXP
  */
 
 #include <sys/queue.h>
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 97b09e59f9..70b81f3b3b 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -593,6 +593,22 @@ int dpdmux_dump_table(struct fsl_mc_io *mc_io,
  */
 #define DPDMUX__ERROR_L4CE			0x00000001
 
+#define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
+				 DPDMUX__ERROR_L4CV | \
+				 DPDMUX__ERROR_L3CE | \
+				 DPDMUX__ERROR_L3CV | \
+				 DPDMUX_ERROR_BLE | \
+				 DPDMUX_ERROR_PHE | \
+				 DPDMUX_ERROR_ISP | \
+				 DPDMUX_ERROR_PTE | \
+				 DPDMUX_ERROR_FPE | \
+				 DPDMUX_ERROR_FLE | \
+				 DPDMUX_ERROR_PIEE | \
+				 DPDMUX_ERROR_TIDE | \
+				 DPDMUX_ERROR_MNLE | \
+				 DPDMUX_ERROR_EOFHE | \
+				 DPDMUX_ERROR_KSE)
+
 #define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
 				 DPDMUX__ERROR_L4CV | \
 				 DPDMUX__ERROR_L3CE | \
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v1 43/43] net/dpaa2: dpdmux single flow/multiple rules support
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (41 preceding siblings ...)
  2024-09-13  5:59 ` [v1 42/43] net/dpaa2: add support for C-VLAN and MAC vanshika.shukla
@ 2024-09-13  5:59 ` vanshika.shukla
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-13  5:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support multiple extractions as well as hardware descriptions
instead of hard-coded values.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h     |   1 +
 drivers/net/dpaa2/dpaa2_mux.c        | 395 ++++++++++++++++-----------
 drivers/net/dpaa2/dpaa2_parse_dump.h |   2 +
 drivers/net/dpaa2/rte_pmd_dpaa2.h    |   8 +-
 4 files changed, 247 insertions(+), 159 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index fd6bad7f74..fd3119247a 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -198,6 +198,7 @@ enum dpaa2_rx_faf_offset {
 	FAF_IPV4_FRAM = 34 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IPV6_FRAM = 42 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IP_FRAM = 48 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IP_FRAG_FRAM = 50 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_ICMP_FRAM = 57 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IGMP_FRAM = 58 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_GRE_FRAM = 65 + DPAA2_FAFE_PSR_SIZE * 8,
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 5c37701939..79a1c7f981 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -32,8 +32,9 @@ struct dpaa2_dpdmux_dev {
 	uint8_t num_ifs;   /* Number of interfaces in DPDMUX */
 };
 
-struct rte_flow {
-	struct dpdmux_rule_cfg rule;
+#define DPAA2_MUX_FLOW_MAX_RULE_NUM 8
+struct dpaa2_mux_flow {
+	struct dpdmux_rule_cfg rule[DPAA2_MUX_FLOW_MAX_RULE_NUM];
 };
 
 TAILQ_HEAD(dpdmux_dev_list, dpaa2_dpdmux_dev);
@@ -53,204 +54,287 @@ static struct dpaa2_dpdmux_dev *get_dpdmux_from_id(uint32_t dpdmux_id)
 	return dpdmux_dev;
 }
 
-struct rte_flow *
+int
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
-			      struct rte_flow_item *pattern[],
-			      struct rte_flow_action *actions[])
+	struct rte_flow_item pattern[],
+	struct rte_flow_action actions[])
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev;
+	static struct dpkg_profile_cfg s_kg_cfg;
 	struct dpkg_profile_cfg kg_cfg;
 	const struct rte_flow_action_vf *vf_conf;
 	struct dpdmux_cls_action dpdmux_action;
-	struct rte_flow *flow = NULL;
-	void *key_iova, *mask_iova, *key_cfg_iova = NULL;
+	uint8_t *key_va = NULL, *mask_va = NULL;
+	void *key_cfg_va = NULL;
+	uint64_t key_iova, mask_iova, key_cfg_iova;
 	uint8_t key_size = 0;
-	int ret;
-	static int i;
+	int ret = 0, loop = 0;
+	static int s_i;
+	struct dpkg_extract *extract;
+	struct dpdmux_rule_cfg rule;
 
-	if (!pattern || !actions || !pattern[0] || !actions[0])
-		return NULL;
+	memset(&kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
 
 	/* Find the DPDMUX from dpdmux_id in our list */
 	dpdmux_dev = get_dpdmux_from_id(dpdmux_id);
 	if (!dpdmux_dev) {
 		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
-		return NULL;
+		ret = -ENODEV;
+		goto creation_error;
 	}
 
-	key_cfg_iova = rte_zmalloc(NULL, DIST_PARAM_IOVA_SIZE,
-				   RTE_CACHE_LINE_SIZE);
-	if (!key_cfg_iova) {
-		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
-		return NULL;
+	key_cfg_va = rte_zmalloc(NULL, DIST_PARAM_IOVA_SIZE,
+				RTE_CACHE_LINE_SIZE);
+	if (!key_cfg_va) {
+		DPAA2_PMD_ERR("Unable to allocate key configure buffer");
+		ret = -ENOMEM;
+		goto creation_error;
+	}
+
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_va,
+		DIST_PARAM_IOVA_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_va);
+		ret = -ENOBUFS;
+		goto creation_error;
 	}
-	flow = rte_zmalloc(NULL, sizeof(struct rte_flow) +
-			   (2 * DIST_PARAM_IOVA_SIZE), RTE_CACHE_LINE_SIZE);
-	if (!flow) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+
+	key_va = rte_zmalloc(NULL, (2 * DIST_PARAM_IOVA_SIZE),
+		RTE_CACHE_LINE_SIZE);
+	if (!key_va) {
+		DPAA2_PMD_ERR("Unable to allocate flow dist parameter");
+		ret = -ENOMEM;
 		goto creation_error;
 	}
-	key_iova = (void *)((size_t)flow + sizeof(struct rte_flow));
-	mask_iova = (void *)((size_t)key_iova + DIST_PARAM_IOVA_SIZE);
+
+	key_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_va,
+		(2 * DIST_PARAM_IOVA_SIZE));
+	if (key_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU mapping for address(%p)",
+			__func__, key_va);
+		ret = -ENOBUFS;
+		goto creation_error;
+	}
+
+	mask_va = key_va + DIST_PARAM_IOVA_SIZE;
+	mask_iova = key_iova + DIST_PARAM_IOVA_SIZE;
 
 	/* Currently taking only IP protocol as an extract type.
-	 * This can be extended to other fields using pattern->type.
+	 * This can be extended to other fields using pattern->type.
 	 */
 	memset(&kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
 
-	switch (pattern[0]->type) {
-	case RTE_FLOW_ITEM_TYPE_IPV4:
-	{
-		const struct rte_flow_item_ipv4 *spec;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_IP;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_IP_PROTO;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_ipv4 *)pattern[0]->spec;
-		memcpy(key_iova, (const void *)(&spec->hdr.next_proto_id),
-			sizeof(uint8_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint8_t));
-		key_size = sizeof(uint8_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_VLAN:
-	{
-		const struct rte_flow_item_vlan *spec;
-
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_VLAN;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_VLAN_TCI;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FROM_FIELD;
-		kg_cfg.extracts[0].extract.from_hdr.offset = 1;
-		kg_cfg.extracts[0].extract.from_hdr.size = 1;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_vlan *)pattern[0]->spec;
-		memcpy((void *)key_iova, (const void *)(&spec->hdr.vlan_tci),
-			sizeof(uint16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_UDP:
-	{
-		const struct rte_flow_item_udp *spec;
-		uint16_t udp_dst_port;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_UDP;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_udp *)pattern[0]->spec;
-		udp_dst_port = rte_constant_bswap16(spec->hdr.dst_port);
-		memcpy((void *)key_iova, (const void *)&udp_dst_port,
-							sizeof(rte_be16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_ETH:
-	{
-		const struct rte_flow_item_eth *spec;
-		uint16_t eth_type;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_ETH;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_ETH_TYPE;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_eth *)pattern[0]->spec;
-		eth_type = rte_constant_bswap16(spec->hdr.ether_type);
-		memcpy((void *)key_iova, (const void *)&eth_type,
-							sizeof(rte_be16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_RAW:
-	{
-		const struct rte_flow_item_raw *spec;
-
-		spec = (const struct rte_flow_item_raw *)pattern[0]->spec;
-		kg_cfg.extracts[0].extract.from_data.offset = spec->offset;
-		kg_cfg.extracts[0].extract.from_data.size = spec->length;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_DATA;
-		kg_cfg.num_extracts = 1;
-		memcpy((void *)key_iova, (const void *)spec->pattern,
-							spec->length);
-		memcpy(mask_iova, pattern[0]->mask, spec->length);
-
-		key_size = spec->length;
-	}
-	break;
+	while (pattern[loop].type != RTE_FLOW_ITEM_TYPE_END) {
+		if (kg_cfg.num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+			DPAA2_PMD_ERR("Too many extracts(%d)",
+				kg_cfg.num_extracts);
+			ret = -ENOTSUP;
+			goto creation_error;
+		}
+		switch (pattern[loop].type) {
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+		{
+			const struct rte_flow_item_ipv4 *spec;
+			const struct rte_flow_item_ipv4 *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_IP;
+			extract->extract.from_hdr.field = NH_FLD_IP_PROTO;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->hdr.next_proto_id, sizeof(uint8_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->hdr.next_proto_id,
+					sizeof(uint8_t));
+			} else {
+				mask_va[key_size] = 0xff;
+			}
+			key_size += sizeof(uint8_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+		{
+			const struct rte_flow_item_vlan *spec;
+			const struct rte_flow_item_vlan *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_VLAN;
+			extract->extract.from_hdr.field = NH_FLD_VLAN_TCI;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->tci, sizeof(uint16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->tci, sizeof(uint16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(uint16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_UDP:
+		{
+			const struct rte_flow_item_udp *spec;
+			const struct rte_flow_item_udp *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_UDP;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+			extract->extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->hdr.dst_port, sizeof(rte_be16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->hdr.dst_port,
+					sizeof(rte_be16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(rte_be16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_ETH:
+		{
+			const struct rte_flow_item_eth *spec;
+			const struct rte_flow_item_eth *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_ETH;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+			extract->extract.from_hdr.field = NH_FLD_ETH_TYPE;
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->type, sizeof(rte_be16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->type, sizeof(rte_be16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(rte_be16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_RAW:
+		{
+			const struct rte_flow_item_raw *spec;
+			const struct rte_flow_item_raw *mask;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_DATA;
+			extract->extract.from_data.offset = spec->offset;
+			extract->extract.from_data.size = spec->length;
+			kg_cfg.num_extracts++;
+
+			rte_memcpy(&key_va[key_size],
+				spec->pattern, spec->length);
+			if (mask && mask->pattern) {
+				rte_memcpy(&mask_va[key_size],
+					mask->pattern, spec->length);
+			} else {
+				memset(&mask_va[key_size], 0xff, spec->length);
+			}
+
+			key_size += spec->length;
+		}
+		break;
 
-	default:
-		DPAA2_PMD_ERR("Not supported pattern type: %d",
-				pattern[0]->type);
-		goto creation_error;
+		default:
+			DPAA2_PMD_ERR("Not supported pattern[%d] type: %d",
+				loop, pattern[loop].type);
+			ret = -ENOTSUP;
+			goto creation_error;
+		}
+		loop++;
 	}
 
-	ret = dpkg_prepare_key_cfg(&kg_cfg, key_cfg_iova);
+	ret = dpkg_prepare_key_cfg(&kg_cfg, key_cfg_va);
 	if (ret) {
 		DPAA2_PMD_ERR("dpkg_prepare_key_cfg failed: err(%d)", ret);
 		goto creation_error;
 	}
 
-	/* Multiple rules with same DPKG extracts (kg_cfg.extracts) like same
-	 * offset and length values in raw is supported right now. Different
-	 * values of kg_cfg may not work.
-	 */
-	if (i == 0) {
-		ret = dpdmux_set_custom_key(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-					    dpdmux_dev->token,
-				(uint64_t)(DPAA2_VADDR_TO_IOVA(key_cfg_iova)));
+	if (!s_i) {
+		ret = dpdmux_set_custom_key(&dpdmux_dev->dpdmux,
+				CMD_PRI_LOW, dpdmux_dev->token, key_cfg_iova);
 		if (ret) {
 			DPAA2_PMD_ERR("dpdmux_set_custom_key failed: err(%d)",
-					ret);
+				ret);
+			goto creation_error;
+		}
+		rte_memcpy(&s_kg_cfg, &kg_cfg, sizeof(struct dpkg_profile_cfg));
+	} else {
+		if (memcmp(&s_kg_cfg, &kg_cfg,
+			sizeof(struct dpkg_profile_cfg))) {
+			DPAA2_PMD_ERR("%s: Single flow support only.",
+				__func__);
+			ret = -ENOTSUP;
 			goto creation_error;
 		}
 	}
-	/* As now our key extract parameters are set, let us configure
-	 * the rule.
-	 */
-	flow->rule.key_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(key_iova));
-	flow->rule.mask_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(mask_iova));
-	flow->rule.key_size = key_size;
-	flow->rule.entry_index = i++;
 
-	vf_conf = (const struct rte_flow_action_vf *)(actions[0]->conf);
+	vf_conf = actions[0].conf;
 	if (vf_conf->id == 0 || vf_conf->id > dpdmux_dev->num_ifs) {
-		DPAA2_PMD_ERR("Invalid destination id\n");
+		DPAA2_PMD_ERR("Invalid destination id(%d)", vf_conf->id);
 		goto creation_error;
 	}
 	dpdmux_action.dest_if = vf_conf->id;
 
-	ret = dpdmux_add_custom_cls_entry(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-					  dpdmux_dev->token, &flow->rule,
-					  &dpdmux_action);
+	rule.key_iova = key_iova;
+	rule.mask_iova = mask_iova;
+	rule.key_size = key_size;
+	rule.entry_index = s_i;
+	s_i++;
+
+	/* As now our key extract parameters are set, let us configure
+	 * the rule.
+	 */
+	ret = dpdmux_add_custom_cls_entry(&dpdmux_dev->dpdmux,
+			CMD_PRI_LOW, dpdmux_dev->token,
+			&rule, &dpdmux_action);
 	if (ret) {
-		DPAA2_PMD_ERR("dpdmux_add_custom_cls_entry failed: err(%d)",
-			      ret);
+		DPAA2_PMD_ERR("Add classification entry failed:err(%d)", ret);
 		goto creation_error;
 	}
 
-	return flow;
-
 creation_error:
-	rte_free((void *)key_cfg_iova);
-	rte_free((void *)flow);
-	return NULL;
+	if (key_cfg_va)
+		rte_free(key_cfg_va);
+	if (key_va)
+		rte_free(key_va);
+
+	return ret;
 }
 
 int
@@ -407,10 +491,11 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	PMD_INIT_FUNC_TRACE();
 
 	/* Allocate DPAA2 dpdmux handle */
-	dpdmux_dev = rte_malloc(NULL, sizeof(struct dpaa2_dpdmux_dev), 0);
+	dpdmux_dev = rte_zmalloc(NULL,
+		sizeof(struct dpaa2_dpdmux_dev), RTE_CACHE_LINE_SIZE);
 	if (!dpdmux_dev) {
 		DPAA2_PMD_ERR("Memory allocation failed for DPDMUX Device");
-		return -1;
+		return -ENOMEM;
 	}
 
 	/* Open the dpdmux object */
diff --git a/drivers/net/dpaa2/dpaa2_parse_dump.h b/drivers/net/dpaa2/dpaa2_parse_dump.h
index f1cdc003de..78fd3b768c 100644
--- a/drivers/net/dpaa2/dpaa2_parse_dump.h
+++ b/drivers/net/dpaa2/dpaa2_parse_dump.h
@@ -105,6 +105,8 @@ dpaa2_print_faf(struct dpaa2_fapr_array *fapr)
 			faf_bits[i].name = "IPv4 1 Present";
 		else if (i == FAF_IPV6_FRAM)
 			faf_bits[i].name = "IPv6 1 Present";
+		else if (i == FAF_IP_FRAG_FRAM)
+			faf_bits[i].name = "IP fragment Present";
 		else if (i == FAF_UDP_FRAM)
 			faf_bits[i].name = "UDP Present";
 		else if (i == FAF_TCP_FRAM)
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index f93af1c65f..237c3cd6e7 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -26,12 +26,12 @@
  *    Associated actions.
  *
  * @return
- *    A valid handle in case of success, NULL otherwise.
+ *    0 in case of success, negative value otherwise.
  */
-struct rte_flow *
+int
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
-			      struct rte_flow_item *pattern[],
-			      struct rte_flow_action *actions[]);
+	struct rte_flow_item pattern[],
+	struct rte_flow_action actions[]);
 int
 rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
 	uint16_t entry_index);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
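
A minimal sketch against the reworked signature (value arrays terminated by
RTE_FLOW_ITEM_TYPE_END, integer return). Protocol values and the function
name are illustrative; a NULL mask defaults to all-ones per the patch:

#include <netinet/in.h>
#include <rte_byteorder.h>
#include <rte_flow.h>
#include <rte_pmd_dpaa2.h>

static int
mux_flow_udp(uint32_t dpdmux_id)
{
	/* Two extractions in one rule: IP proto == UDP and UDP
	 * destination port == 4789, steered to interface 1.
	 */
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr.next_proto_id = IPPROTO_UDP,
	};
	struct rte_flow_item_udp udp_spec = {
		.hdr.dst_port = rte_cpu_to_be_16(4789),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_vf vf = { .id = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_VF, .conf = &vf },
	};

	return rte_pmd_dpaa2_mux_flow_create(dpdmux_id, pattern, actions);
}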

* [v2 00/43] DPAA2 specific patches
  2024-09-13  5:59 [v1 00/43] DPAA2 specific patches vanshika.shukla
                   ` (42 preceding siblings ...)
  2024-09-13  5:59 ` [v1 43/43] net/dpaa2: dpdmux single flow/multiple rules support vanshika.shukla
@ 2024-09-18  7:50 ` vanshika.shukla
  2024-09-18  7:50   ` [v2 01/43] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
                     ` (43 more replies)
  43 siblings, 44 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This series includes:
-> Fixes and enhancements for NXP DPAA2 drivers.
-> Upgrade with MC version 10.37
-> Enhancements in DPDMUX code
-> Fixes for coverity issues reported

V2 changes:
Fixed the broken compilation for clang in:
        "net/dpaa2: dpdmux single flow/multiple rules support" patch.
Fixed checkpatch warnings in the below patches:
        "net/dpaa2: protocol inside tunnel distribution"
        "net/dpaa2: add VXLAN distribution support"
        "bus/fslmc: dynamic IOVA mode configuration"
        "bus/fslmc: enhance MC VFIO multiprocess support"

Apeksha Gupta (2):
  net/dpaa2: add proper MTU debugging print
  net/dpaa2: store drop priority in mbuf

Brick Yang (1):
  net/dpaa2: update DPNI link status method

Gagandeep Singh (3):
  bus/fslmc: upgrade with MC version 10.37
  net/dpaa2: fix memory corruption in TM
  net/dpaa2: support software taildrop

Hemant Agrawal (2):
  net/dpaa2: add support to dump dpdmux counters
  bus/fslmc: change dpcon close as internal symbol

Jun Yang (23):
  net/dpaa2: enhance Tx scatter-gather mempool
  net/dpaa2: add new PMD API to check dpaa platform version
  bus/fslmc: improve BMAN buffer acquire
  bus/fslmc: get MC VFIO group FD directly
  bus/fslmc: enhance MC VFIO multiprocess support
  bus/fslmc: dynamic IOVA mode configuration
  bus/fslmc: remove VFIO IRQ mapping
  bus/fslmc: create dpaa2 device with it's object
  bus/fslmc: introduce VFIO DMA mapping API for fslmc
  net/dpaa2: flow API refactor
  net/dpaa2: dump Rx parser result
  net/dpaa2: enhancement of raw flow extract
  net/dpaa2: frame attribute flags parser
  net/dpaa2: add VXLAN distribution support
  net/dpaa2: protocol inside tunnel distribution
  net/dpaa2: eCPRI support by parser result
  net/dpaa2: add GTP flow support
  net/dpaa2: check if Soft parser is loaded
  net/dpaa2: soft parser flow verification
  net/dpaa2: add flow support for IPsec AH and ESP
  net/dpaa2: check IOVA before sending MC command
  net/dpaa2: add API to get endpoint name
  net/dpaa2: dpdmux single flow/multiple rules support

Rohit Raj (7):
  bus/fslmc: add close API to close DPAA2 device
  net/dpaa2: support link state for eth interfaces
  bus/fslmc: free VFIO group FD in case of add group failure
  bus/fslmc: fix coverity issue
  bus/fslmc: fix invalid error FD code
  bus/fslmc: change qbman eq desc from d to desc
  net/dpaa2: change miss flow ID macro name

Sachin Saxena (1):
  net/dpaa2: improve DPDMUX error behavior settings

Vanshika Shukla (4):
  net/dpaa2: support PTP packet one-step timestamp
  net/dpaa2: dpdmux: add support for CVLAN
  net/dpaa2: support VLAN traffic splitting
  net/dpaa2: add support for C-VLAN and MAC

 doc/guides/platform/dpaa2.rst                 |    4 +-
 drivers/bus/fslmc/bus_fslmc_driver.h          |   72 +-
 drivers/bus/fslmc/fslmc_bus.c                 |   62 +-
 drivers/bus/fslmc/fslmc_logs.h                |    5 +-
 drivers/bus/fslmc/fslmc_vfio.c                | 1628 +++-
 drivers/bus/fslmc/fslmc_vfio.h                |   39 +-
 drivers/bus/fslmc/mc/dpio.c                   |   94 +-
 drivers/bus/fslmc/mc/fsl_dpcon.h              |    6 +-
 drivers/bus/fslmc/mc/fsl_dpio.h               |   21 +-
 drivers/bus/fslmc/mc/fsl_dpio_cmd.h           |   13 +-
 drivers/bus/fslmc/mc/fsl_dpmng.h              |    4 +-
 drivers/bus/fslmc/mc/fsl_dprc_cmd.h           |    8 +-
 drivers/bus/fslmc/meson.build                 |    3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |   38 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |   38 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |   50 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h      |    3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dprc.c      |    8 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |  114 +-
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |   12 +-
 drivers/bus/fslmc/qbman/qbman_debug.c         |   49 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        |   30 +-
 drivers/bus/fslmc/version.map                 |   16 +-
 drivers/crypto/dpaa2_sec/mc/dpseci.c          |   91 +-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h      |   47 +-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h  |   19 +-
 drivers/dma/dpaa2/dpaa2_qdma.c                |    1 +
 drivers/event/dpaa2/dpaa2_hw_dpcon.c          |   38 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |    2 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |   63 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  597 +-
 drivers/net/dpaa2/dpaa2_ethdev.h              |  225 +-
 drivers/net/dpaa2/dpaa2_flow.c                | 7070 ++++++++++-------
 drivers/net/dpaa2/dpaa2_mux.c                 |  543 +-
 drivers/net/dpaa2/dpaa2_parse_dump.h          |  250 +
 drivers/net/dpaa2/dpaa2_ptp.c                 |    8 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |   32 +-
 drivers/net/dpaa2/dpaa2_sparser.c             |   27 +-
 drivers/net/dpaa2/dpaa2_tm.c                  |   72 +-
 drivers/net/dpaa2/mc/dpdmux.c                 |  205 +-
 drivers/net/dpaa2/mc/dpkg.c                   |   12 +-
 drivers/net/dpaa2/mc/dpni.c                   |  383 +-
 drivers/net/dpaa2/mc/fsl_dpdmux.h             |   99 +-
 drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h         |   83 +-
 drivers/net/dpaa2/mc/fsl_dpkg.h               |    7 +-
 drivers/net/dpaa2/mc/fsl_dpni.h               |  176 +-
 drivers/net/dpaa2/mc/fsl_dpni_cmd.h           |  125 +-
 drivers/net/dpaa2/rte_pmd_dpaa2.h             |   51 +-
 drivers/net/dpaa2/version.map                 |    6 +
 49 files changed, 8289 insertions(+), 4260 deletions(-)
 create mode 100644 drivers/net/dpaa2/dpaa2_parse_dump.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 01/43] net/dpaa2: enhance Tx scatter-gather mempool
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
  2024-09-18  7:50   ` [v2 02/43] net/dpaa2: support PTP packet one-step timestamp vanshika.shukla
                     ` (42 subsequent siblings)
  43 siblings, 1 reply; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Create the TX SG pool only in the primary process and look up
this pool in the secondary process.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 46 +++++++++++++++++++++++---------
 1 file changed, 33 insertions(+), 13 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 449bbda7ca..238533f439 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2867,6 +2867,35 @@ int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+static int dpaa2_tx_sg_pool_init(void)
+{
+	char name[RTE_MEMZONE_NAMESIZE];
+
+	if (dpaa2_tx_sg_pool)
+		return 0;
+
+	sprintf(name, "dpaa2_mbuf_tx_sg_pool");
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		dpaa2_tx_sg_pool = rte_pktmbuf_pool_create(name,
+			DPAA2_POOL_SIZE,
+			DPAA2_POOL_CACHE_SIZE, 0,
+			DPAA2_MAX_SGS * sizeof(struct qbman_sge),
+			rte_socket_id());
+		if (!dpaa2_tx_sg_pool) {
+			DPAA2_PMD_ERR("SG pool creation failed\n");
+			return -ENOMEM;
+		}
+	} else {
+		dpaa2_tx_sg_pool = rte_mempool_lookup(name);
+		if (!dpaa2_tx_sg_pool) {
+			DPAA2_PMD_ERR("SG pool lookup failed\n");
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
 static int
 rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 		struct rte_dpaa2_device *dpaa2_dev)
@@ -2921,19 +2950,10 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 
 	/* Invoke PMD device initialization function */
 	diag = dpaa2_dev_init(eth_dev);
-	if (diag == 0) {
-		if (!dpaa2_tx_sg_pool) {
-			dpaa2_tx_sg_pool =
-				rte_pktmbuf_pool_create("dpaa2_mbuf_tx_sg_pool",
-				DPAA2_POOL_SIZE,
-				DPAA2_POOL_CACHE_SIZE, 0,
-				DPAA2_MAX_SGS * sizeof(struct qbman_sge),
-				rte_socket_id());
-			if (dpaa2_tx_sg_pool == NULL) {
-				DPAA2_PMD_ERR("SG pool creation failed\n");
-				return -ENOMEM;
-			}
-		}
+	if (!diag) {
+		diag = dpaa2_tx_sg_pool_init();
+		if (diag)
+			return diag;
 		rte_eth_dev_probing_finish(eth_dev);
 		dpaa2_valid_dev++;
 		return 0;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 02/43] net/dpaa2: support PTP packet one-step timestamp
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
  2024-09-18  7:50   ` [v2 01/43] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 03/43] net/dpaa2: add proper MTU debugging print vanshika.shukla
                     ` (41 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds PTP one-step timestamping support.
The dpni_set_single_step_cfg() MC API is used, with the provided
offset, to insert the correction time into the frame.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 61 +++++++++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h  |  3 ++
 drivers/net/dpaa2/rte_pmd_dpaa2.h | 10 +++++
 drivers/net/dpaa2/version.map     |  3 ++
 4 files changed, 77 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 238533f439..596f1b4f61 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -548,6 +548,9 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	int tx_l4_csum_offload = false;
 	int ret, tc_index;
 	uint32_t max_rx_pktlen;
+#if defined(RTE_LIBRTE_IEEE1588)
+	uint16_t ptp_correction_offset;
+#endif
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -632,6 +635,11 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		dpaa2_enable_ts[dev->data->port_id] = true;
 	}
 
+#if defined(RTE_LIBRTE_IEEE1588)
+	/* By default setting ptp correction offset for Ethernet SYNC packets */
+	ptp_correction_offset = RTE_ETHER_HDR_LEN + 8;
+	rte_pmd_dpaa2_set_one_step_ts(dev->data->port_id, ptp_correction_offset, 0);
+#endif
 	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		tx_l3_csum_offload = true;
 
@@ -2867,6 +2875,59 @@ int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+#if defined(RTE_LIBRTE_IEEE1588)
+int
+rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpni = priv->eth_dev->process_private;
+	struct dpni_single_step_cfg ptp_cfg;
+	int err;
+
+	if (!mc_query)
+		return priv->ptp_correction_offset;
+
+	err = dpni_get_single_step_cfg(dpni, CMD_PRI_LOW, priv->token, &ptp_cfg);
+	if (err) {
+		DPAA2_PMD_ERR("Failed to retrieve onestep configuration");
+		return err;
+	}
+
+	if (!ptp_cfg.ptp_onestep_reg_base) {
+		DPAA2_PMD_ERR("1588 onestep reg not available");
+		return -1;
+	}
+
+	priv->ptp_correction_offset = ptp_cfg.offset;
+
+	return priv->ptp_correction_offset;
+}
+
+int
+rte_pmd_dpaa2_set_one_step_ts(uint16_t port_id, uint16_t offset, uint8_t ch_update)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpni = dev->process_private;
+	struct dpni_single_step_cfg cfg;
+	int err;
+
+	cfg.en = 1;
+	cfg.ch_update = ch_update;
+	cfg.offset = offset;
+	cfg.peer_delay = 0;
+
+	err = dpni_set_single_step_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
+	if (err)
+		return err;
+
+	priv->ptp_correction_offset = offset;
+
+	return 0;
+}
+#endif
+
 static int dpaa2_tx_sg_pool_init(void)
 {
 	char name[RTE_MEMZONE_NAMESIZE];
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 9feb631d5f..6625afaba3 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -230,6 +230,9 @@ struct dpaa2_dev_priv {
 	rte_spinlock_t lpbk_qp_lock;
 
 	uint8_t channel_inuse;
+	/* Stores correction offset for one step timestamping */
+	uint16_t ptp_correction_offset;
+
 	LIST_HEAD(, rte_flow) flows; /**< Configured flow rule handles. */
 	LIST_HEAD(nodes, dpaa2_tm_node) nodes;
 	LIST_HEAD(shaper_profiles, dpaa2_tm_shaper_profile) shaper_profiles;
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index a1152eb717..aea9bae905 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -102,4 +102,14 @@ rte_pmd_dpaa2_thread_init(void);
 __rte_experimental
 uint32_t
 rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
+
+#if defined(RTE_LIBRTE_IEEE1588)
+__rte_experimental
+int
+rte_pmd_dpaa2_set_one_step_ts(uint16_t port_id, uint16_t offset, uint8_t ch_update);
+
+__rte_experimental
+int
+rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query);
+#endif
 #endif /* _RTE_PMD_DPAA2_H */
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index ba756d26bd..2d95303e27 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -16,6 +16,9 @@ EXPERIMENTAL {
 	rte_pmd_dpaa2_thread_init;
 	# added in 21.11
 	rte_pmd_dpaa2_get_tlu_hash;
+	# added in 24.11
+	rte_pmd_dpaa2_set_one_step_ts;
+	rte_pmd_dpaa2_get_one_step_ts;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
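
A minimal usage sketch of the new APIs (offsets are illustrative: the
correctionField of a PTP SYNC frame sits 8 bytes into the PTP header, so a
VLAN-tagged packet needs L2 header + 4 extra bytes; the function name is
hypothetical):

#include <stdbool.h>
#include <rte_ether.h>
#include <rte_pmd_dpaa2.h>

static int
ptp_onestep_setup(uint16_t port_id)
{
	/* 14 (Ethernet header) + 4 (VLAN tag) + 8 (correctionField). */
	uint16_t offset = RTE_ETHER_HDR_LEN + 4 + 8;
	int ret;

	ret = rte_pmd_dpaa2_set_one_step_ts(port_id, offset, 0);
	if (ret)
		return ret;

	/* mc_query=true reads the offset back from the MC firmware
	 * instead of the cached value.
	 */
	return rte_pmd_dpaa2_get_one_step_ts(port_id, true);
}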

* [v2 03/43] net/dpaa2: add proper MTU debugging print
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
  2024-09-18  7:50   ` [v2 01/43] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
  2024-09-18  7:50   ` [v2 02/43] net/dpaa2: support PTP packet one-step timestamp vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 04/43] net/dpaa2: add support to dump dpdmux counters vanshika.shukla
                     ` (40 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Apeksha Gupta, Jun Yang

From: Apeksha Gupta <apeksha.gupta@nxp.com>

This patch adds proper debug prints to check the max-pkt-len
and the configured parameters.

It also stores the MTU.

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 596f1b4f61..efba9ef286 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -579,9 +579,11 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 			DPAA2_PMD_ERR("Unable to set mtu. check config");
 			return ret;
 		}
-		DPAA2_PMD_INFO("MTU configured for the device: %d",
+		DPAA2_PMD_DEBUG("MTU configured for the device: %d",
 				dev->data->mtu);
 	} else {
+		DPAA2_PMD_ERR("Configured mtu %d and calculated max-pkt-len is %d which should be <= %d",
+			eth_conf->rxmode.mtu, max_rx_pktlen, DPAA2_MAX_RX_PKT_LEN);
 		return -1;
 	}
 
@@ -1534,6 +1536,7 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 		DPAA2_PMD_ERR("Setting the max frame length failed");
 		return -1;
 	}
+	dev->data->mtu = mtu;
 	DPAA2_PMD_INFO("MTU configured for the device: %d", mtu);
 	return 0;
 }
@@ -2836,6 +2839,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		DPAA2_PMD_ERR("Unable to set mtu. check config");
 		goto init_err;
 	}
+	eth_dev->data->mtu = RTE_ETHER_MTU;
 
 	/*TODO To enable soft parser support DPAA2 driver needs to integrate
 	 * with external entity to receive byte code for software sequence
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 04/43] net/dpaa2: add support to dump dpdmux counters
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (2 preceding siblings ...)
  2024-09-18  7:50   ` [v2 03/43] net/dpaa2: add proper MTU debugging print vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 05/43] bus/fslmc: change dpcon close as internal symbol vanshika.shukla
                     ` (39 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Hemant Agrawal <hemant.agrawal@nxp.com>

This patch adds support to dump dpdmux counters, as they are
required to identify the reasons for packet drops in dpdmux.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     | 84 +++++++++++++++++++++++++++++++
 drivers/net/dpaa2/rte_pmd_dpaa2.h | 18 +++++++
 drivers/net/dpaa2/version.map     |  1 +
 3 files changed, 103 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 2ff1a98fda..d682a61e52 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -259,6 +259,90 @@ rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 	return ret;
 }
 
+/* dump the status of the dpaa2_mux counters on the console */
+void
+rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if)
+{
+	struct dpaa2_dpdmux_dev *dpdmux;
+	uint64_t counter;
+	int ret;
+	int if_id;
+
+	/* Find the DPDMUX from dpdmux_id in our list */
+	dpdmux = get_dpdmux_from_id(dpdmux_id);
+	if (!dpdmux) {
+		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
+		return;
+	}
+
+	for (if_id = 0; if_id < num_if; if_id++) {
+		fprintf(f, "dpdmux.%d\n", if_id);
+
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FRAME, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BYTE, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FLTR_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FLTR_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FRAME_DISCARD,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FRAME_DISCARD %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_MCAST_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_MCAST_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_MCAST_BYTE,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_MCAST_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BCAST_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BCAST_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BCAST_BYTES,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BCAST_BYTES %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_FRAME, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_BYTE, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_FRAME_DISCARD,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_FRAME_DISCARD %" PRIu64 "\n",
+				counter);
+	}
+}
+
 static int
 dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 			   struct vfio_device_info *obj_info __rte_unused,
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index aea9bae905..fd9acd841b 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -33,6 +33,24 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 			      struct rte_flow_item *pattern[],
 			      struct rte_flow_action *actions[]);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Dump demultiplex ethernet traffic counters
+ *
+ * @param f
+ *    output stream
+ * @param dpdmux_id
+ *    ID of the DPDMUX MC object.
+ * @param num_if
+ *    number of interface in dpdmux object
+ *
+ */
+__rte_experimental
+void
+rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 2d95303e27..7323fc8869 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	# added in 24.11
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
+	rte_pmd_dpaa2_mux_dump_counter;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
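
A minimal usage sketch (the dpdmux object ID and interface count are
illustrative; counters go to any stdio stream):

#include <stdio.h>
#include <rte_pmd_dpaa2.h>

static void
dump_mux_stats(void)
{
	/* Dump per-interface counters of dpdmux.0, 2 interfaces. */
	rte_pmd_dpaa2_mux_dump_counter(stdout, 0, 2);
}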

* [v2 05/43] bus/fslmc: change dpcon close as internal symbol
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (3 preceding siblings ...)
  2024-09-18  7:50   ` [v2 04/43] net/dpaa2: add support to dump dpdmux counters vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 06/43] bus/fslmc: add close API to close DPAA2 device vanshika.shukla
                     ` (38 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena

From: Hemant Agrawal <hemant.agrawal@nxp.com>

This patch marks the dpcon_close API as an internal symbol and
adds it to the version map file.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/mc/fsl_dpcon.h | 3 ++-
 drivers/bus/fslmc/version.map    | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index db72477c8a..34b30d15c2 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2021 NXP
  *
  */
 #ifndef __FSL_DPCON_H
@@ -28,6 +28,7 @@ int dpcon_open(struct fsl_mc_io *mc_io,
 	       int dpcon_id,
 	       uint16_t *token);
 
+__rte_internal
 int dpcon_close(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index e19b8d1f6b..01e28c6625 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -36,6 +36,7 @@ INTERNAL {
 	dpci_set_rx_queue;
 	dpcon_get_attributes;
 	dpcon_open;
+	dpcon_close;
 	dpdmai_close;
 	dpdmai_disable;
 	dpdmai_enable;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 06/43] bus/fslmc: add close API to close DPAA2 device
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (4 preceding siblings ...)
  2024-09-18  7:50   ` [v2 05/43] bus/fslmc: change dpcon close as internal symbol vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 07/43] net/dpaa2: dpdmux: add support for CVLAN vanshika.shukla
                     ` (37 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Add the rte_fslmc_close API to close all DPAA2 devices when the
DPDK application shuts down.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     |  3 +
 drivers/bus/fslmc/fslmc_bus.c            | 13 ++++
 drivers/bus/fslmc/fslmc_vfio.c           | 87 ++++++++++++++++++++++++
 drivers/bus/fslmc/fslmc_vfio.h           |  3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c | 31 ++++++++-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c | 32 ++++++++-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 34 +++++++++
 drivers/event/dpaa2/dpaa2_hw_dpcon.c     | 32 ++++++++-
 drivers/net/dpaa2/dpaa2_mux.c            | 18 ++++-
 drivers/net/dpaa2/rte_pmd_dpaa2.h        |  5 +-
 10 files changed, 252 insertions(+), 6 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index 7ac5fe6ff1..dc2f395f60 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -98,6 +98,8 @@ typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
 				      struct vfio_device_info *obj_info,
 				      int object_id);
 
+typedef void (*rte_dpaa2_obj_close_t)(int object_id);
+
 /**
  * A structure describing a DPAA2 object.
  */
@@ -106,6 +108,7 @@ struct rte_dpaa2_object {
 	const char *name;                   /**< Name of Object. */
 	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
 	rte_dpaa2_obj_create_t create;
+	rte_dpaa2_obj_close_t close;
 };
 
 /**
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index c155f4a2fd..7baadf99b9 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -384,6 +384,18 @@ rte_fslmc_match(struct rte_dpaa2_driver *dpaa2_drv,
 	return 1;
 }
 
+static int
+rte_fslmc_close(void)
+{
+	int ret = 0;
+
+	ret = fslmc_vfio_close_group();
+	if (ret)
+		DPAA2_BUS_ERR("Unable to close devices %d", ret);
+
+	return 0;
+}
+
 static int
 rte_fslmc_probe(void)
 {
@@ -664,6 +676,7 @@ struct rte_fslmc_bus rte_fslmc_bus = {
 	.bus = {
 		.scan = rte_fslmc_scan,
 		.probe = rte_fslmc_probe,
+		.cleanup = rte_fslmc_close,
 		.parse = rte_fslmc_parse,
 		.find_device = rte_fslmc_find_device,
 		.get_iommu_class = rte_dpaa2_get_iommu_class,
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index e12fd62f34..17163333af 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -702,6 +702,54 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 	return -1;
 }
 
+static void
+fslmc_close_iodevices(struct rte_dpaa2_device *dev)
+{
+	struct rte_dpaa2_object *object = NULL;
+	struct rte_dpaa2_driver *drv;
+	int ret, probe_all;
+
+	switch (dev->dev_type) {
+	case DPAA2_IO:
+	case DPAA2_CON:
+	case DPAA2_CI:
+	case DPAA2_BPOOL:
+	case DPAA2_MUX:
+		TAILQ_FOREACH(object, &dpaa2_obj_list, next) {
+			if (dev->dev_type == object->dev_type)
+				object->close(dev->object_id);
+			else
+				continue;
+		}
+		break;
+	case DPAA2_ETH:
+	case DPAA2_CRYPTO:
+	case DPAA2_QDMA:
+		probe_all = rte_fslmc_bus.bus.conf.scan_mode !=
+			    RTE_BUS_SCAN_ALLOWLIST;
+		TAILQ_FOREACH(drv, &rte_fslmc_bus.driver_list, next) {
+			if (drv->drv_type != dev->dev_type)
+				continue;
+			if (rte_dev_is_probed(&dev->device))
+				continue;
+			if (probe_all ||
+			    (dev->device.devargs &&
+			     dev->device.devargs->policy ==
+			     RTE_DEV_ALLOWED)) {
+				ret = drv->remove(dev);
+				if (ret)
+					DPAA2_BUS_ERR("Unable to remove");
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	DPAA2_BUS_LOG(DEBUG, "Device (%s) Closed",
+		      dev->device.name);
+}
+
 /*
  * fslmc_process_iodevices for processing only IO (ETH, CRYPTO, and possibly
  * EVENT) devices.
@@ -807,6 +855,45 @@ fslmc_process_mcp(struct rte_dpaa2_device *dev)
 	return ret;
 }
 
+int
+fslmc_vfio_close_group(void)
+{
+	struct rte_dpaa2_device *dev, *dev_temp;
+
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+		if (dev->device.devargs &&
+		    dev->device.devargs->policy == RTE_DEV_BLOCKED) {
+			DPAA2_BUS_LOG(DEBUG, "%s Blacklisted, skipping",
+				      dev->device.name);
+			TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+				continue;
+		}
+		switch (dev->dev_type) {
+		case DPAA2_ETH:
+		case DPAA2_CRYPTO:
+		case DPAA2_QDMA:
+		case DPAA2_IO:
+			fslmc_close_iodevices(dev);
+			break;
+		case DPAA2_CON:
+		case DPAA2_CI:
+		case DPAA2_BPOOL:
+		case DPAA2_MUX:
+			if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+				continue;
+
+			fslmc_close_iodevices(dev);
+			break;
+		case DPAA2_DPRTC:
+		default:
+			DPAA2_BUS_DEBUG("Device cannot be closed: Not supported (%s)",
+					dev->device.name);
+		}
+	}
+
+	return 0;
+}
+
 int
 fslmc_vfio_process_group(void)
 {
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 133606a9fd..b6677bdd18 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2019 NXP
+ *   Copyright 2016,2019-2020 NXP
  *
  */
 
@@ -55,6 +55,7 @@ int rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 
 int fslmc_vfio_setup_group(void);
 int fslmc_vfio_process_group(void);
+int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
 int fslmc_get_container_group(int *gropuid);
 int rte_fslmc_vfio_dmamap(void);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index d7f6e45b7d..bc36607e64 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016 NXP
+ *   Copyright 2016,2020 NXP
  *
  */
 
@@ -33,6 +33,19 @@ TAILQ_HEAD(dpbp_dev_list, dpaa2_dpbp_dev);
 static struct dpbp_dev_list dpbp_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpbp_dev_list); /*!< DPBP device list */
 
+static struct dpaa2_dpbp_dev *get_dpbp_from_id(uint32_t dpbp_id)
+{
+	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
+
+	/* Get DPBP dev handle from list using index */
+	TAILQ_FOREACH(dpbp_dev, &dpbp_dev_list, next) {
+		if (dpbp_dev->dpbp_id == dpbp_id)
+			break;
+	}
+
+	return dpbp_dev;
+}
+
 static int
 dpaa2_create_dpbp_device(int vdev_fd __rte_unused,
 			 struct vfio_device_info *obj_info __rte_unused,
@@ -116,9 +129,25 @@ int dpaa2_dpbp_supported(void)
 	return 0;
 }
 
+static void
+dpaa2_close_dpbp_device(int object_id)
+{
+	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
+
+	dpbp_dev = get_dpbp_from_id((uint32_t)object_id);
+
+	if (dpbp_dev) {
+		dpaa2_free_dpbp_dev(dpbp_dev);
+		dpbp_close(&dpbp_dev->dpbp, CMD_PRI_LOW, dpbp_dev->token);
+		TAILQ_REMOVE(&dpbp_dev_list, dpbp_dev, next);
+		rte_free(dpbp_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpbp_obj = {
 	.dev_type = DPAA2_BPOOL,
 	.create = dpaa2_create_dpbp_device,
+	.close = dpaa2_close_dpbp_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpbp, rte_dpaa2_dpbp_obj);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index 07256ed7ec..d7de2bca05 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017 NXP
+ *   Copyright 2017,2020 NXP
  *
  */
 
@@ -30,6 +30,19 @@ TAILQ_HEAD(dpci_dev_list, dpaa2_dpci_dev);
 static struct dpci_dev_list dpci_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpci_dev_list); /*!< DPCI device list */
 
+static struct dpaa2_dpci_dev *get_dpci_from_id(uint32_t dpci_id)
+{
+	struct dpaa2_dpci_dev *dpci_dev = NULL;
+
+	/* Get DPCI dev handle from list using index */
+	TAILQ_FOREACH(dpci_dev, &dpci_dev_list, next) {
+		if (dpci_dev->dpci_id == dpci_id)
+			break;
+	}
+
+	return dpci_dev;
+}
+
 static int
 rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
 			     struct vfio_device_info *obj_info __rte_unused,
@@ -179,9 +192,26 @@ void rte_dpaa2_free_dpci_dev(struct dpaa2_dpci_dev *dpci)
 	}
 }
 
+
+static void
+rte_dpaa2_close_dpci_device(int object_id)
+{
+	struct dpaa2_dpci_dev *dpci_dev = NULL;
+
+	dpci_dev = get_dpci_from_id((uint32_t)object_id);
+
+	if (dpci_dev) {
+		rte_dpaa2_free_dpci_dev(dpci_dev);
+		dpci_close(&dpci_dev->dpci, CMD_PRI_LOW, dpci_dev->token);
+		TAILQ_REMOVE(&dpci_dev_list, dpci_dev, next);
+		rte_free(dpci_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpci_obj = {
 	.dev_type = DPAA2_CI,
 	.create = rte_dpaa2_create_dpci_device,
+	.close = rte_dpaa2_close_dpci_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpci, rte_dpaa2_dpci_obj);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 4aec7b2cd8..8265fee497 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -86,6 +86,19 @@ static int dpaa2_cluster_sz = 2;
  * Cluster 4 (ID = x07) : CPU14, CPU15;
  */
 
+static struct dpaa2_dpio_dev *get_dpio_dev_from_id(int32_t dpio_id)
+{
+	struct dpaa2_dpio_dev *dpio_dev = NULL;
+
+	/* Get DPIO dev handle from list using index */
+	TAILQ_FOREACH(dpio_dev, &dpio_dev_list, next) {
+		if (dpio_dev->hw_id == dpio_id)
+			break;
+	}
+
+	return dpio_dev;
+}
+
 static int
 dpaa2_get_core_id(void)
 {
@@ -358,6 +371,26 @@ static void dpaa2_portal_finish(void *arg)
 	pthread_setspecific(dpaa2_portal_key, NULL);
 }
 
+static void
+dpaa2_close_dpio_device(int object_id)
+{
+	struct dpaa2_dpio_dev *dpio_dev = NULL;
+
+	dpio_dev = get_dpio_dev_from_id((int32_t)object_id);
+
+	if (dpio_dev) {
+		if (dpio_dev->dpio) {
+			dpio_disable(dpio_dev->dpio, CMD_PRI_LOW,
+				     dpio_dev->token);
+			dpio_close(dpio_dev->dpio, CMD_PRI_LOW,
+				   dpio_dev->token);
+			rte_free(dpio_dev->dpio);
+		}
+		TAILQ_REMOVE(&dpio_dev_list, dpio_dev, next);
+		rte_free(dpio_dev);
+	}
+}
+
 static int
 dpaa2_create_dpio_device(int vdev_fd,
 			 struct vfio_device_info *obj_info,
@@ -635,6 +668,7 @@ dpaa2_free_eq_descriptors(void)
 static struct rte_dpaa2_object rte_dpaa2_dpio_obj = {
 	.dev_type = DPAA2_IO,
 	.create = dpaa2_create_dpio_device,
+	.close = dpaa2_close_dpio_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpio, rte_dpaa2_dpio_obj);
diff --git a/drivers/event/dpaa2/dpaa2_hw_dpcon.c b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
index a68d3ac154..64b0136e24 100644
--- a/drivers/event/dpaa2/dpaa2_hw_dpcon.c
+++ b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017 NXP
+ *   Copyright 2017,2020 NXP
  *
  */
 
@@ -30,6 +30,19 @@ TAILQ_HEAD(dpcon_dev_list, dpaa2_dpcon_dev);
 static struct dpcon_dev_list dpcon_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpcon_dev_list); /*!< DPCON device list */
 
+static struct dpaa2_dpcon_dev *get_dpcon_from_id(uint32_t dpcon_id)
+{
+	struct dpaa2_dpcon_dev *dpcon_dev = NULL;
+
+	/* Get DPCONC dev handle from list using index */
+	TAILQ_FOREACH(dpcon_dev, &dpcon_dev_list, next) {
+		if (dpcon_dev->dpcon_id == dpcon_id)
+			break;
+	}
+
+	return dpcon_dev;
+}
+
 static int
 rte_dpaa2_create_dpcon_device(int dev_fd __rte_unused,
 			      struct vfio_device_info *obj_info __rte_unused,
@@ -105,9 +118,26 @@ void rte_dpaa2_free_dpcon_dev(struct dpaa2_dpcon_dev *dpcon)
 	}
 }
 
+
+static void
+rte_dpaa2_close_dpcon_device(int object_id)
+{
+	struct dpaa2_dpcon_dev *dpcon_dev = NULL;
+
+	dpcon_dev = get_dpcon_from_id((uint32_t)object_id);
+
+	if (dpcon_dev) {
+		rte_dpaa2_free_dpcon_dev(dpcon_dev);
+		dpcon_close(&dpcon_dev->dpcon, CMD_PRI_LOW, dpcon_dev->token);
+		TAILQ_REMOVE(&dpcon_dev_list, dpcon_dev, next);
+		rte_free(dpcon_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpcon_obj = {
 	.dev_type = DPAA2_CON,
 	.create = rte_dpaa2_create_dpcon_device,
+	.close = rte_dpaa2_close_dpcon_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpcon, rte_dpaa2_dpcon_obj);
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index d682a61e52..fa3659e452 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -44,7 +44,7 @@ static struct dpaa2_dpdmux_dev *get_dpdmux_from_id(uint32_t dpdmux_id)
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev = NULL;
 
-	/* Get DPBP dev handle from list using index */
+	/* Get DPDMUX dev handle from list using index */
 	TAILQ_FOREACH(dpdmux_dev, &dpdmux_dev_list, next) {
 		if (dpdmux_dev->dpdmux_id == dpdmux_id)
 			break;
@@ -442,9 +442,25 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	return -1;
 }
 
+static void
+dpaa2_close_dpdmux_device(int object_id)
+{
+	struct dpaa2_dpdmux_dev *dpdmux_dev;
+
+	dpdmux_dev = get_dpdmux_from_id((uint32_t)object_id);
+
+	if (dpdmux_dev) {
+		dpdmux_close(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+			     dpdmux_dev->token);
+		TAILQ_REMOVE(&dpdmux_dev_list, dpdmux_dev, next);
+		rte_free(dpdmux_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpdmux_obj = {
 	.dev_type = DPAA2_MUX,
 	.create = dpaa2_create_dpdmux_device,
+	.close = dpaa2_close_dpdmux_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpdmux, rte_dpaa2_dpdmux_obj);
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index fd9acd841b..80e5e3298b 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2024 NXP
  */
 
 #ifndef _RTE_PMD_DPAA2_H
@@ -32,6 +32,9 @@ struct rte_flow *
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 			      struct rte_flow_item *pattern[],
 			      struct rte_flow_action *actions[]);
+int
+rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
+	uint16_t entry_index);
 
 /**
  * @warning
-- 
2.25.1
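
The create/close pairing added here follows one pattern for every
object type; a condensed sketch with a hypothetical "dpfoo" object
(the name and device type are illustrative only):

#include <rte_common.h>
#include <linux/vfio.h>
#include "bus_fslmc_driver.h"

static int
dpaa2_create_dpfoo_device(int vdev_fd __rte_unused,
			  struct vfio_device_info *obj_info __rte_unused,
			  int object_id)
{
	/* allocate bookkeeping for object_id and add it to a device list */
	RTE_SET_USED(object_id);
	return 0;
}

static void
dpaa2_close_dpfoo_device(int object_id)
{
	/* look up object_id, close the MC object, free the bookkeeping */
	RTE_SET_USED(object_id);
}

static struct rte_dpaa2_object rte_dpaa2_dpfoo_obj = {
	.dev_type = DPAA2_BPOOL,	/* illustrative type */
	.create = dpaa2_create_dpfoo_device,
	.close = dpaa2_close_dpfoo_device,
};

RTE_PMD_REGISTER_DPAA2_OBJECT(dpfoo, rte_dpaa2_dpfoo_obj);

On bus cleanup, rte_fslmc_close() walks the device list and invokes
the matching ->close() callback for each object, as implemented in
fslmc_close_iodevices() above.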


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 07/43] net/dpaa2: dpdmux: add support for CVLAN
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (5 preceding siblings ...)
  2024-09-18  7:50   ` [v2 06/43] bus/fslmc: add close API to close DPAA2 device vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 08/43] bus/fslmc: upgrade with MC version 10.37 vanshika.shukla
                     ` (36 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for the DPDMUX_METHOD_C_VLAN_MAC method,
which demultiplexes traffic based on the C-VLAN ID and MAC address.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     | 59 +++++++++++++++++++++++++------
 drivers/net/dpaa2/mc/fsl_dpdmux.h | 18 +++++++++-
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  3 ++
 3 files changed, 68 insertions(+), 12 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index fa3659e452..53020e9302 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -233,6 +233,35 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	return NULL;
 }
 
+int
+rte_pmd_dpaa2_mux_flow_l2(uint32_t dpdmux_id,
+	uint8_t mac_addr[6], uint16_t vlan_id, int dest_if)
+{
+	struct dpaa2_dpdmux_dev *dpdmux_dev;
+	struct dpdmux_l2_rule rule;
+	int ret, i;
+
+	/* Find the DPDMUX from dpdmux_id in our list */
+	dpdmux_dev = get_dpdmux_from_id(dpdmux_id);
+	if (!dpdmux_dev) {
+		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
+		return -ENODEV;
+	}
+
+	for (i = 0; i < 6; i++)
+		rule.mac_addr[i] = mac_addr[i];
+	rule.vlan_id = vlan_id;
+
+	ret = dpdmux_if_add_l2_rule(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+			dpdmux_dev->token, dest_if, &rule);
+	if (ret) {
+		DPAA2_PMD_ERR("dpdmux_if_add_l2_rule failed:err(%d)", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
 int
 rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 {
@@ -353,6 +382,7 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	int ret;
 	uint16_t maj_ver;
 	uint16_t min_ver;
+	uint8_t skip_reset_flags;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -379,12 +409,18 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 		goto init_err;
 	}
 
-	ret = dpdmux_if_set_default(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-				    dpdmux_dev->token, attr.default_if);
-	if (ret) {
-		DPAA2_PMD_ERR("setting default interface failed in %s",
-			      __func__);
-		goto init_err;
+	if (attr.method != DPDMUX_METHOD_C_VLAN_MAC) {
+		ret = dpdmux_if_set_default(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+				dpdmux_dev->token, attr.default_if);
+		if (ret) {
+			DPAA2_PMD_ERR("setting default interface failed in %s",
+				      __func__);
+			goto init_err;
+		}
+		skip_reset_flags = DPDMUX_SKIP_DEFAULT_INTERFACE
+			| DPDMUX_SKIP_UNICAST_RULES | DPDMUX_SKIP_MULTICAST_RULES;
+	} else {
+		skip_reset_flags = DPDMUX_SKIP_DEFAULT_INTERFACE;
 	}
 
 	ret = dpdmux_get_api_version(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
@@ -400,10 +436,7 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	 */
 	if (maj_ver >= 6 && min_ver >= 6) {
 		ret = dpdmux_set_resetable(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-				dpdmux_dev->token,
-				DPDMUX_SKIP_DEFAULT_INTERFACE |
-				DPDMUX_SKIP_UNICAST_RULES |
-				DPDMUX_SKIP_MULTICAST_RULES);
+				dpdmux_dev->token, skip_reset_flags);
 		if (ret) {
 			DPAA2_PMD_ERR("setting default interface failed in %s",
 				      __func__);
@@ -416,7 +449,11 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 
 		memset(&mux_err_cfg, 0, sizeof(mux_err_cfg));
 		mux_err_cfg.error_action = DPDMUX_ERROR_ACTION_CONTINUE;
-		mux_err_cfg.errors = DPDMUX_ERROR_DISC;
+
+		if (attr.method != DPDMUX_METHOD_C_VLAN_MAC)
+			mux_err_cfg.errors = DPDMUX_ERROR_DISC;
+		else
+			mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
 
 		ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
 				CMD_PRI_LOW,
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 4600ea94d4..9bbac44219 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2022 NXP
  *
  */
 #ifndef __FSL_DPDMUX_H
@@ -549,6 +549,22 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
  */
 #define DPDMUX__ERROR_L4CE			0x00000001
 
+#define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
+				 DPDMUX__ERROR_L4CV | \
+				 DPDMUX__ERROR_L3CE | \
+				 DPDMUX__ERROR_L3CV | \
+				 DPDMUX_ERROR_BLE | \
+				 DPDMUX_ERROR_PHE | \
+				 DPDMUX_ERROR_ISP | \
+				 DPDMUX_ERROR_PTE | \
+				 DPDMUX_ERROR_FPE | \
+				 DPDMUX_ERROR_FLE | \
+				 DPDMUX_ERROR_PIEE | \
+				 DPDMUX_ERROR_TIDE | \
+				 DPDMUX_ERROR_MNLE | \
+				 DPDMUX_ERROR_EOFHE | \
+				 DPDMUX_ERROR_KSE)
+
 enum dpdmux_error_action {
 	DPDMUX_ERROR_ACTION_DISCARD = 0,
 	DPDMUX_ERROR_ACTION_CONTINUE = 1
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index 80e5e3298b..bebebcacdc 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -35,6 +35,9 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 int
 rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
 	uint16_t entry_index);
+int
+rte_pmd_dpaa2_mux_flow_l2(uint32_t dpdmux_id,
+	uint8_t mac_addr[6], uint16_t vlan_id, int dest_if);
 
 /**
  * @warning
-- 
2.25.1
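
A usage sketch for the new API, steering frames that match a C-VLAN
ID plus destination MAC to a downstream interface (the dpdmux ID,
MAC, VLAN and interface values are illustrative):

#include <stdint.h>
#include <rte_pmd_dpaa2.h>

static int
add_cvlan_mac_rule(void)
{
	uint8_t mac[6] = {0x00, 0x04, 0x9f, 0x01, 0x02, 0x03};

	/* dpdmux.0: route VLAN 100 + this MAC to interface 1 */
	return rte_pmd_dpaa2_mux_flow_l2(0, mac, 100, 1);
}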


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 08/43] bus/fslmc: upgrade with MC version 10.37
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (6 preceding siblings ...)
  2024-09-18  7:50   ` [v2 07/43] net/dpaa2: dpdmux: add support for CVLAN vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 09/43] net/dpaa2: support link state for eth interfaces vanshika.shukla
                     ` (35 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh; +Cc: Apeksha Gupta

From: Gagandeep Singh <g.singh@nxp.com>

This patch upgrades the MC version compatibility to 10.37.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 doc/guides/platform/dpaa2.rst                 |   4 +-
 drivers/bus/fslmc/mc/dpio.c                   |  94 ++++-
 drivers/bus/fslmc/mc/fsl_dpcon.h              |   5 +-
 drivers/bus/fslmc/mc/fsl_dpio.h               |  21 +-
 drivers/bus/fslmc/mc/fsl_dpio_cmd.h           |  13 +-
 drivers/bus/fslmc/mc/fsl_dpmng.h              |   4 +-
 drivers/bus/fslmc/mc/fsl_dprc_cmd.h           |   8 +-
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |  12 +-
 drivers/bus/fslmc/version.map                 |   7 +
 drivers/crypto/dpaa2_sec/mc/dpseci.c          |  91 ++++-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h      |  47 ++-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h  |  19 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  36 +-
 drivers/net/dpaa2/mc/dpdmux.c                 | 205 +++++++++-
 drivers/net/dpaa2/mc/dpkg.c                   |  12 +-
 drivers/net/dpaa2/mc/dpni.c                   | 383 +++++++++++++++++-
 drivers/net/dpaa2/mc/fsl_dpdmux.h             |  67 ++-
 drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h         |  83 +++-
 drivers/net/dpaa2/mc/fsl_dpkg.h               |   7 +-
 drivers/net/dpaa2/mc/fsl_dpni.h               | 176 +++++---
 drivers/net/dpaa2/mc/fsl_dpni_cmd.h           | 125 ++++--
 21 files changed, 1267 insertions(+), 152 deletions(-)
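
Among the commands added for MC 10.37 are the DPIO stashing helpers;
a sketch of setting a DPIO's stashing destination from the current
lcore id (mc_io and token come from the usual dpio_open() sequence,
omitted here; this is illustrative, not code from the patch):

#include <fsl_mc_sys.h>
#include <fsl_mc_cmd.h>
#include <fsl_dpio.h>
#include <rte_lcore.h>

static int
stash_to_this_core(struct fsl_mc_io *mc_io, uint16_t token)
{
	uint8_t core_id = (uint8_t)rte_lcore_id();

	/* new in MC 10.37: derive the stashing destination from a core id */
	return dpio_set_stashing_destination_by_core_id(mc_io,
			CMD_PRI_LOW, token, core_id);
}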

diff --git a/doc/guides/platform/dpaa2.rst b/doc/guides/platform/dpaa2.rst
index 2b0d93a976..c9ec21334f 100644
--- a/doc/guides/platform/dpaa2.rst
+++ b/doc/guides/platform/dpaa2.rst
@@ -105,8 +105,8 @@ separately:
 
 Currently supported by DPDK:
 
-- NXP SDK **LSDK 19.09++**.
-- MC Firmware version **10.18.0** and higher.
+- NXP SDK **LSDK 21.08++**.
+- MC Firmware version **10.37.0** and higher.
 - Supported architectures:  **arm64 LE**.
 
 - Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
diff --git a/drivers/bus/fslmc/mc/dpio.c b/drivers/bus/fslmc/mc/dpio.c
index a3382ed142..97c08fa713 100644
--- a/drivers/bus/fslmc/mc/dpio.c
+++ b/drivers/bus/fslmc/mc/dpio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -376,6 +376,98 @@ int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpio_set_stashing_destination_by_core_id() - Set the stashing destination
+ * using the core id.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @core_id:	Core id stashing destination
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
+					uint32_t cmd_flags,
+					uint16_t token,
+					uint8_t core_id)
+{
+	struct dpio_stashing_dest_by_core_id *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_STASHING_DEST_BY_CORE_ID,
+										cmd_flags,
+										token);
+	cmd_params = (struct dpio_stashing_dest_by_core_id  *)cmd.params;
+	cmd_params->core_id = core_id;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpio_set_stashing_destination_source() - Set the stashing destination source.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @ss:		Stashing destination source (0 manual/1 automatic)
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ss)
+{
+	struct dpio_stashing_dest_source *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_STASHING_DEST_SOURCE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpio_stashing_dest_source *)cmd.params;
+	cmd_params->ss = ss;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpio_get_stashing_destination_source() - Get the stashing destination source.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @ss:		Returns the stashing destination source (0 manual/1 automatic)
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t *ss)
+{
+	struct dpio_stashing_dest_source *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_STASHING_DEST_SOURCE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpio_stashing_dest_source *)cmd.params;
+	*ss = rsp_params->ss;
+
+	return 0;
+}
+
 /**
  * dpio_add_static_dequeue_channel() - Add a static dequeue channel.
  * @mc_io:		Pointer to MC portal's I/O object
diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index 34b30d15c2..e3a626077e 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2021, 2024 NXP
  *
  */
 #ifndef __FSL_DPCON_H
@@ -52,10 +52,12 @@ int dpcon_destroy(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint32_t obj_id);
 
+__rte_internal
 int dpcon_enable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
 
+__rte_internal
 int dpcon_disable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token);
@@ -65,6 +67,7 @@ int dpcon_is_enabled(struct fsl_mc_io *mc_io,
 		     uint16_t token,
 		     int *en);
 
+__rte_internal
 int dpcon_reset(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
diff --git a/drivers/bus/fslmc/mc/fsl_dpio.h b/drivers/bus/fslmc/mc/fsl_dpio.h
index c2db76bdf8..eddce58a5f 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPIO_H
@@ -87,11 +87,30 @@ int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint16_t token,
 				  uint8_t sdest);
 
+__rte_internal
 int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
 				  uint8_t *sdest);
 
+__rte_internal
+int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t core_id);
+
+__rte_internal
+int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ss);
+
+__rte_internal
+int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t *ss);
+
 __rte_internal
 int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				    uint32_t cmd_flags,
diff --git a/drivers/bus/fslmc/mc/fsl_dpio_cmd.h b/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
index 45ed01f809..360c68eaa5 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2019 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPIO_CMD_H
@@ -40,6 +40,9 @@
 #define DPIO_CMDID_GET_STASHING_DEST			DPIO_CMD(0x121)
 #define DPIO_CMDID_ADD_STATIC_DEQUEUE_CHANNEL		DPIO_CMD(0x122)
 #define DPIO_CMDID_REMOVE_STATIC_DEQUEUE_CHANNEL	DPIO_CMD(0x123)
+#define DPIO_CMDID_SET_STASHING_DEST_SOURCE		DPIO_CMD(0x124)
+#define DPIO_CMDID_GET_STASHING_DEST_SOURCE		DPIO_CMD(0x125)
+#define DPIO_CMDID_SET_STASHING_DEST_BY_CORE_ID		DPIO_CMD(0x126)
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPIO_MASK(field)        \
@@ -98,6 +101,14 @@ struct dpio_stashing_dest {
 	uint8_t sdest;
 };
 
+struct dpio_stashing_dest_source {
+	uint8_t ss;
+};
+
+struct dpio_stashing_dest_by_core_id {
+	uint8_t core_id;
+};
+
 struct dpio_cmd_static_dequeue_channel {
 	uint32_t dpcon_id;
 };
diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
index c6ea220df7..dfa51b3a86 100644
--- a/drivers/bus/fslmc/mc/fsl_dpmng.h
+++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2017-2022 NXP
+ * Copyright 2017-2023 NXP
  *
  */
 #ifndef __FSL_DPMNG_H
@@ -20,7 +20,7 @@ struct fsl_mc_io;
  * Management Complex firmware version information
  */
 #define MC_VER_MAJOR 10
-#define MC_VER_MINOR 32
+#define MC_VER_MINOR 37
 
 /**
  * struct mc_version
diff --git a/drivers/bus/fslmc/mc/fsl_dprc_cmd.h b/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
index 6efa5634d2..d5ba35b5f0 100644
--- a/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 
@@ -10,13 +10,17 @@
 
 /* Minimal supported DPRC Version */
 #define DPRC_VER_MAJOR			6
-#define DPRC_VER_MINOR			6
+#define DPRC_VER_MINOR			7
 
 /* Command versioning */
 #define DPRC_CMD_BASE_VERSION			1
+#define DPRC_CMD_VERSION_2			2
+#define DPRC_CMD_VERSION_3			3
 #define DPRC_CMD_ID_OFFSET			4
 
 #define DPRC_CMD(id)	((id << DPRC_CMD_ID_OFFSET) | DPRC_CMD_BASE_VERSION)
+#define DPRC_CMD_V2(id)	(((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_VERSION_2)
+#define DPRC_CMD_V3(id)	(((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_VERSION_3)
 
 /* Command IDs */
 #define DPRC_CMDID_CLOSE                        DPRC_CMD(0x800)
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 18b6a3c2e4..297d4ed4fc 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2023 NXP
  */
 #ifndef _FSL_QBMAN_DEBUG_H
 #define _FSL_QBMAN_DEBUG_H
@@ -105,16 +105,6 @@ uint32_t qbman_fq_attr_get_vfqid(struct qbman_fq_query_rslt *r);
 uint32_t qbman_fq_attr_get_erfqid(struct qbman_fq_query_rslt *r);
 uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r);
 
-/* FQ query command for non-programmable fields*/
-enum qbman_fq_schedstate_e {
-	qbman_fq_schedstate_oos = 0,
-	qbman_fq_schedstate_retired,
-	qbman_fq_schedstate_tentatively_scheduled,
-	qbman_fq_schedstate_truly_scheduled,
-	qbman_fq_schedstate_parked,
-	qbman_fq_schedstate_held_active,
-};
-
 struct qbman_fq_query_np_rslt {
 uint8_t verb;
 	uint8_t rslt;
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index 01e28c6625..df1143733d 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -37,6 +37,9 @@ INTERNAL {
 	dpcon_get_attributes;
 	dpcon_open;
 	dpcon_close;
+	dpcon_reset;
+	dpcon_enable;
+	dpcon_disable;
 	dpdmai_close;
 	dpdmai_disable;
 	dpdmai_enable;
@@ -53,7 +56,11 @@ INTERNAL {
 	dpio_open;
 	dpio_remove_static_dequeue_channel;
 	dpio_reset;
+	dpio_get_stashing_destination;
+	dpio_get_stashing_destination_source;
 	dpio_set_stashing_destination;
+	dpio_set_stashing_destination_by_core_id;
+	dpio_set_stashing_destination_source;
 	mc_get_soc_version;
 	mc_get_version;
 	mc_send_command;
diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c b/drivers/crypto/dpaa2_sec/mc/dpseci.c
index 87e0defdc6..773b4648e0 100644
--- a/drivers/crypto/dpaa2_sec/mc/dpseci.c
+++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -763,3 +763,92 @@ int dpseci_get_congestion_notification(
 
 	return 0;
 }
+
+
+/**
+ * dpseci_get_rx_queue_status() - Get queue status attributes
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue_index:	Select the queue_index
+ * @attr:	Returned queue status attributes
+ *
+ * Return:	'0' on success, error code otherwise
+ */
+int dpseci_get_rx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr)
+{
+	struct dpseci_rsp_get_queue_status *rsp_params;
+	struct dpseci_cmd_get_queue_status *cmd_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_RX_QUEUE_STATUS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpseci_cmd_get_queue_status *)cmd.params;
+	cmd_params->queue_index = cpu_to_le32(queue_index);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpseci_rsp_get_queue_status *)cmd.params;
+	attr->fqid = le32_to_cpu(rsp_params->fqid);
+	attr->schedstate = (enum qbman_fq_schedstate_e)(le16_to_cpu(rsp_params->schedstate));
+	attr->state_flags = le16_to_cpu(rsp_params->state_flags);
+	attr->frame_count = le32_to_cpu(rsp_params->frame_count);
+	attr->byte_count = le32_to_cpu(rsp_params->byte_count);
+
+	return 0;
+}
+
+/**
+ * dpseci_get_tx_queue_status() - Get queue status attributes
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue_index:	Select the queue_index
+ * @attr:	Returned queue status attributes
+ *
+ * Return:	'0' on success, error code otherwise
+ */
+int dpseci_get_tx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr)
+{
+	struct dpseci_rsp_get_queue_status *rsp_params;
+	struct dpseci_cmd_get_queue_status *cmd_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_TX_QUEUE_STATUS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpseci_cmd_get_queue_status *)cmd.params;
+	cmd_params->queue_index = cpu_to_le32(queue_index);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpseci_rsp_get_queue_status *)cmd.params;
+	attr->fqid = le32_to_cpu(rsp_params->fqid);
+	attr->schedstate = (enum qbman_fq_schedstate_e)(le16_to_cpu(rsp_params->schedstate));
+	attr->state_flags = le16_to_cpu(rsp_params->state_flags);
+	attr->frame_count = le32_to_cpu(rsp_params->frame_count);
+	attr->byte_count = le32_to_cpu(rsp_params->byte_count);
+
+	return 0;
+}
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
index c295c04f24..e371abdd64 100644
--- a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPSECI_H
@@ -429,4 +429,49 @@ int dpseci_get_congestion_notification(
 			uint16_t token,
 			struct dpseci_congestion_notification_cfg *cfg);
 
+/* Available FQ's scheduling states */
+enum qbman_fq_schedstate_e {
+	qbman_fq_schedstate_oos = 0,
+	qbman_fq_schedstate_retired,
+	qbman_fq_schedstate_tentatively_scheduled,
+	qbman_fq_schedstate_truly_scheduled,
+	qbman_fq_schedstate_parked,
+	qbman_fq_schedstate_held_active,
+};
+
+/* FQ's force eligible pending bit */
+#define DPSECI_FQ_STATE_FORCE_ELIGIBLE			0x00000001
+/* FQ's XON/XOFF state, 0: XON, 1: XOFF */
+#define DPSECI_FQ_STATE_XOFF					0x00000002
+/* FQ's retirement pending bit */
+#define DPSECI_FQ_STATE_RETIREMENT_PENDING		0x00000004
+/* FQ's overflow error bit */
+#define DPSECI_FQ_STATE_OVERFLOW_ERROR			0x00000008
+
+struct dpseci_queue_status {
+	uint32_t fqid;
+	/* FQ's scheduling states
+	 * (available scheduling states are defined in qbman_fq_schedstate_e)
+	 */
+	enum qbman_fq_schedstate_e schedstate;
+	/* FQ's state flags (available flags are defined above) */
+	uint16_t state_flags;
+	/* FQ's frame count */
+	uint32_t frame_count;
+	/* FQ's byte count */
+	uint32_t byte_count;
+};
+
+int dpseci_get_rx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr);
+
+int dpseci_get_tx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr);
+
 #endif /* __FSL_DPSECI_H */
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
index af3518a0f3..065464b701 100644
--- a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPSECI_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPSECI Version */
 #define DPSECI_VER_MAJOR		5
-#define DPSECI_VER_MINOR		3
+#define DPSECI_VER_MINOR		4
 
 /* Command versioning */
 #define DPSECI_CMD_BASE_VERSION		1
@@ -46,6 +46,9 @@
 #define DPSECI_CMDID_GET_OPR		DPSECI_CMD_V1(0x19B)
 #define DPSECI_CMDID_SET_CONGESTION_NOTIFICATION	DPSECI_CMD_V1(0x170)
 #define DPSECI_CMDID_GET_CONGESTION_NOTIFICATION	DPSECI_CMD_V1(0x171)
+#define DPSECI_CMDID_GET_RX_QUEUE_STATUS	DPSECI_CMD_V1(0x172)
+#define DPSECI_CMDID_GET_TX_QUEUE_STATUS	DPSECI_CMD_V1(0x173)
+
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPSECI_MASK(field)        \
@@ -251,5 +254,17 @@ struct dpseci_cmd_set_congestion_notification {
 	uint32_t threshold_exit;
 };
 
+struct dpseci_cmd_get_queue_status {
+	uint32_t queue_index;
+};
+
+struct dpseci_rsp_get_queue_status {
+	uint32_t fqid;
+	uint16_t schedstate;
+	uint16_t state_flags;
+	uint32_t frame_count;
+	uint32_t byte_count;
+};
+
 #pragma pack(pop)
 #endif /* _FSL_DPSECI_CMD_H */
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index efba9ef286..4dc7a82b47 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -896,6 +896,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	struct dpni_queue tx_conf_cfg;
 	struct dpni_queue tx_flow_cfg;
 	uint8_t options = 0, flow_id;
+	uint8_t ceetm_ch_idx;
 	uint16_t channel_id;
 	struct dpni_queue_id qid;
 	uint32_t tc_id;
@@ -922,20 +923,27 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	memset(&tx_conf_cfg, 0, sizeof(struct dpni_queue));
 	memset(&tx_flow_cfg, 0, sizeof(struct dpni_queue));
 
-	if (tx_queue_id == 0) {
-		/*Set tx-conf and error configuration*/
-		if (priv->flags & DPAA2_TX_CONF_ENABLE)
-			ret = dpni_set_tx_confirmation_mode(dpni, CMD_PRI_LOW,
-							    priv->token,
-							    DPNI_CONF_AFFINE);
-		else
-			ret = dpni_set_tx_confirmation_mode(dpni, CMD_PRI_LOW,
-							    priv->token,
-							    DPNI_CONF_DISABLE);
-		if (ret) {
-			DPAA2_PMD_ERR("Error in set tx conf mode settings: "
-				      "err=%d", ret);
-			return -1;
+	if (!tx_queue_id) {
+		for (ceetm_ch_idx = 0;
+			ceetm_ch_idx < priv->num_channels;
+			ceetm_ch_idx++) {
+			/*Set tx-conf and error configuration*/
+			if (priv->flags & DPAA2_TX_CONF_ENABLE) {
+				ret = dpni_set_tx_confirmation_mode(dpni,
+						CMD_PRI_LOW, priv->token,
+						ceetm_ch_idx,
+						DPNI_CONF_AFFINE);
+			} else {
+				ret = dpni_set_tx_confirmation_mode(dpni,
+						CMD_PRI_LOW, priv->token,
+						ceetm_ch_idx,
+						DPNI_CONF_DISABLE);
+			}
+			if (ret) {
+				DPAA2_PMD_ERR("Error(%d) in tx conf setting",
+					ret);
+				return ret;
+			}
 		}
 	}
 
diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
index 1bb153cad7..f4feef3840 100644
--- a/drivers/net/dpaa2/mc/dpdmux.c
+++ b/drivers/net/dpaa2/mc/dpdmux.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -287,15 +287,19 @@ int dpdmux_reset(struct fsl_mc_io *mc_io,
  * @token:	Token of DPDMUX object
  * @skip_reset_flags:	By default all are 0.
  *			By setting 1 will deactivate the reset.
- *	The flags are:
- *			DPDMUX_SKIP_DEFAULT_INTERFACE  0x01
- *			DPDMUX_SKIP_UNICAST_RULES      0x02
- *			DPDMUX_SKIP_MULTICAST_RULES    0x04
+ * The flags are:
+ *			DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE  0x01
+ *			DPDMUX_SKIP_UNICAST_RULES             0x02
+ *			DPDMUX_SKIP_MULTICAST_RULES           0x04
+ *			DPDMUX_SKIP_RESET_DEFAULT_INTERFACE   0x08
  *
  * For example, by default, through DPDMUX_RESET the default
  * interface will be restored with the one from create.
- * By setting DPDMUX_SKIP_DEFAULT_INTERFACE flag,
- * through DPDMUX_RESET the default interface will not be modified.
+ * By setting DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE flag,
+ * through DPDMUX_RESET the default interface will not be modified after reset.
+ * By setting DPDMUX_SKIP_RESET_DEFAULT_INTERFACE flag,
+ * through DPDMUX_RESET the default interface will not be reset
+ * and will continue to be functional during reset procedure.
  *
  * Return:	'0' on Success; Error code otherwise.
  */
@@ -327,10 +331,11 @@ int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
  * @token:	Token of DPDMUX object
  * @skip_reset_flags:	Get the reset flags.
  *
- *	The flags are:
- *			DPDMUX_SKIP_DEFAULT_INTERFACE  0x01
- *			DPDMUX_SKIP_UNICAST_RULES      0x02
- *			DPDMUX_SKIP_MULTICAST_RULES    0x04
+ * The flags are:
+ *			DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE  0x01
+ *			DPDMUX_SKIP_UNICAST_RULES             0x02
+ *			DPDMUX_SKIP_MULTICAST_RULES           0x04
+ *			DPDMUX_SKIP_RESET_DEFAULT_INTERFACE   0x08
  *
  * Return:	'0' on Success; Error code otherwise.
  */
@@ -1064,6 +1069,127 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpdmux_if_set_taildrop() - enable taildrop for egress interface queues.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id:	Interface Identifier
+ * @cfg: Taildrop configuration
+ */
+int dpdmux_if_set_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg)
+{
+	struct mc_command cmd = { 0 };
+	struct dpdmux_cmd_set_taildrop *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_SET_TAILDROP,
+			cmd_flags,
+			token);
+	cmd_params = (struct dpdmux_cmd_set_taildrop *)cmd.params;
+	cmd_params->if_id		= cpu_to_le16(if_id);
+	cmd_params->units		= cfg->units;
+	cmd_params->threshold	= cpu_to_le32(cfg->threshold);
+	dpdmux_set_field(cmd_params->oal_en, ENABLE, (!!cfg->enable));
+
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpdmux_if_get_taildrop() - get current taildrop configuration.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id:	Interface Identifier
+ * @cfg: Taildrop configuration
+ */
+int dpdmux_if_get_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg)
+{
+	struct mc_command cmd = {0};
+	struct dpdmux_cmd_get_taildrop *cmd_params;
+	struct dpdmux_rsp_get_taildrop *rsp_params;
+	int err = 0;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_GET_TAILDROP,
+			cmd_flags,
+			token);
+	cmd_params = (struct dpdmux_cmd_get_taildrop *)cmd.params;
+	cmd_params->if_id	= cpu_to_le16(if_id);
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpdmux_rsp_get_taildrop *)cmd.params;
+	cfg->threshold = le32_to_cpu(rsp_params->threshold);
+	cfg->units = rsp_params->units;
+	cfg->enable = dpdmux_get_field(rsp_params->oal_en, ENABLE);
+
+	return err;
+}
+
+/**
+ * dpdmux_dump_table() - Dump the content of table_type table into memory.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPSW object
+ * @table_type: The type of the table to dump
+ *	- DPDMUX_DMAT_TABLE
+ *	- DPDMUX_MISS_TABLE
+ *	- DPDMUX_PRUNE_TABLE
+ * @table_index: The index of the table to dump in case of more than one table
+ *	if table_type == DPDMUX_DMAT_TABLE
+ *		- DPDMUX_HMAP_UNICAST
+ *		- DPDMUX_HMAP_MULTICAST
+ *	else 0
+ * @iova_addr: The snapshot will be stored in this variable as a header of struct dump_table_header
+ *             followed by an array of struct dump_table_entry
+ * @iova_size: Memory size allocated for iova_addr
+ * @num_entries: Number of entries written in iova_addr
+ *
+ * Return: Completion status. '0' on Success; Error code otherwise.
+ *
+ * The memory allocated at iova_addr must be zeroed before command execution.
+ * If the table content exceeds the memory size provided, the dump will be truncated.
+ */
+int dpdmux_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	struct dpdmux_cmd_dump_table *cmd_params;
+	struct dpdmux_rsp_dump_table *rsp_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_DUMP_TABLE, cmd_flags, token);
+	cmd_params = (struct dpdmux_cmd_dump_table *)cmd.params;
+	cmd_params->table_type = cpu_to_le16(table_type);
+	cmd_params->table_index = cpu_to_le16(table_index);
+	cmd_params->iova_addr = cpu_to_le64(iova_addr);
+	cmd_params->iova_size = cpu_to_le32(iova_size);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpdmux_rsp_dump_table *)cmd.params;
+	*num_entries = le16_to_cpu(rsp_params->num_entries);
+
+	return 0;
+}
+
+
 /**
  * dpdmux_if_set_errors_behavior() - Set errors behavior
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
@@ -1100,3 +1226,60 @@ int dpdmux_if_set_errors_behavior(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 	/* send command to mc*/
 	return mc_send_command(mc_io, &cmd);
 }
+
+/* Sets up a Soft Parser Profile on this DPDMUX
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @sp_profile: Soft Parser Profile name (must be a valid name for a defined profile)
+ *			Maximum allowed length for this string is 8 characters
+ *			If this parameter is an empty string (all zeros)
+ *			then the default SP Profile is set on this dpdmux
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ */
+int dpdmux_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type)
+{
+	struct dpdmux_cmd_set_sp_profile *cmd_params;
+	struct mc_command cmd = { 0 };
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SET_SP_PROFILE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpdmux_cmd_set_sp_profile *)cmd.params;
+	for (i = 0; i < MAX_SP_PROFILE_ID_SIZE && sp_profile[i]; i++)
+		cmd_params->sp_profile[i] = sp_profile[i];
+	cmd_params->type = type;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/* Enable/Disable Soft Parser on this DPDMUX interface
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id: interface id
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ * @en: 1 to enable or 0 to disable
+ */
+int dpdmux_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t if_id, uint8_t type, uint8_t en)
+{
+	struct dpdmux_cmd_sp_enable *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SP_ENABLE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpdmux_cmd_sp_enable *)cmd.params;
+	cmd_params->if_id = if_id;
+	cmd_params->type = type;
+	cmd_params->en = en;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
diff --git a/drivers/net/dpaa2/mc/dpkg.c b/drivers/net/dpaa2/mc/dpkg.c
index 4789976b7d..5db3d092c1 100644
--- a/drivers/net/dpaa2/mc/dpkg.c
+++ b/drivers/net/dpaa2/mc/dpkg.c
@@ -1,16 +1,18 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2021, 2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
 #include <fsl_mc_cmd.h>
 #include <fsl_dpkg.h>
+#include <string.h>
 
 /**
  * dpkg_prepare_key_cfg() - function prepare extract parameters
  * @cfg: defining a full Key Generation profile (rule)
- * @key_cfg_buf: Zeroed 256 bytes of memory before mapping it to DMA
+ * @key_cfg_buf: Zeroed memory whose size is the size of
+ *		"struct dpni_ext_set_rx_tc_dist" before mapping it to DMA
  *
  * This function has to be called before the following functions:
  *	- dpni_set_rx_tc_dist()
@@ -18,7 +20,8 @@
  *	- dpkg_prepare_key_cfg()
  */
 int
-dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg, uint8_t *key_cfg_buf)
+dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
+	void *key_cfg_buf)
 {
 	int i, j;
 	struct dpni_ext_set_rx_tc_dist *dpni_ext;
@@ -27,11 +30,12 @@ dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg, uint8_t *key_cfg_buf)
 	if (cfg->num_extracts > DPKG_MAX_NUM_OF_EXTRACTS)
 		return -EINVAL;
 
-	dpni_ext = (struct dpni_ext_set_rx_tc_dist *)key_cfg_buf;
+	dpni_ext = key_cfg_buf;
 	dpni_ext->num_extracts = cfg->num_extracts;
 
 	for (i = 0; i < cfg->num_extracts; i++) {
 		extr = &dpni_ext->extracts[i];
+		memset(extr, 0, sizeof(struct dpni_dist_extract));
 
 		switch (cfg->extracts[i].type) {
 		case DPKG_EXTRACT_FROM_HDR:
diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c
index 4d97b98939..558f08dc69 100644
--- a/drivers/net/dpaa2/mc/dpni.c
+++ b/drivers/net/dpaa2/mc/dpni.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2022 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -852,6 +852,92 @@ int dpni_get_qdid(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpni_get_qdid_ex() - Extension for the function to get the Queuing Destination ID (QDID)
+ *			that should be used for enqueue operations.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @qtype:	Type of queue to receive QDID for
+ * @qdid:	Array of virtual QDID values that should be used as an argument
+ *			in all enqueue operations.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * This function must be used when dpni is created using multiple Tx channels to return one
+ * qdid for each channel.
+ */
+int dpni_get_qdid_ex(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  enum dpni_queue_type qtype,
+		  uint16_t *qdid)
+{
+	struct mc_command cmd = { 0 };
+	struct dpni_cmd_get_qdid *cmd_params;
+	struct dpni_rsp_get_qdid_ex *rsp_params;
+	int i;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QDID_EX,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_qdid *)cmd.params;
+	cmd_params->qtype = qtype;
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_qdid_ex *)cmd.params;
+	for (i = 0; i < DPNI_MAX_CHANNELS; i++)
+		qdid[i] = le16_to_cpu(rsp_params->qdid[i]);
+
+	return 0;
+}
+
+/**
+ * dpni_get_sp_info() - Get the AIOP storage profile IDs associated
+ *			with the DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @sp_info:	Returned AIOP storage-profile information
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * @warning	Only relevant for a DPNI that belongs to an AIOP container.
+ */
+int dpni_get_sp_info(struct fsl_mc_io *mc_io,
+		     uint32_t cmd_flags,
+		     uint16_t token,
+		     struct dpni_sp_info *sp_info)
+{
+	struct dpni_rsp_get_sp_info *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err, i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SP_INFO,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_sp_info *)cmd.params;
+	for (i = 0; i < DPNI_MAX_SP; i++)
+		sp_info->spids[i] = le16_to_cpu(rsp_params->spids[i]);
+
+	return 0;
+}
+
 /**
  * dpni_get_tx_data_offset() - Get the Tx data offset (from start of buffer)
  * @mc_io:	Pointer to MC portal's I/O object
@@ -1684,6 +1770,7 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
  * @mc_io:	Pointer to MC portal's I/O object
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
  * @mode:	Tx confirmation mode
  *
  * This function is useful only when 'DPNI_OPT_TX_CONF_DISABLED' is not
@@ -1701,6 +1788,7 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
 int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode mode)
 {
 	struct dpni_tx_confirmation_mode *cmd_params;
@@ -1711,6 +1799,7 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 					  cmd_flags,
 					  token);
 	cmd_params = (struct dpni_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
 	cmd_params->confirmation_mode = mode;
 
 	/* send command to mc*/
@@ -1722,6 +1811,7 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
  * @mc_io:	Pointer to MC portal's I/O object
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
  * @mode:	Tx confirmation mode
  *
  * Return:  '0' on Success; Error code otherwise.
@@ -1729,8 +1819,10 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode *mode)
 {
+	struct dpni_tx_confirmation_mode *cmd_params;
 	struct dpni_tx_confirmation_mode *rsp_params;
 	struct mc_command cmd = { 0 };
 	int err;
@@ -1738,6 +1830,8 @@ int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_CONFIRMATION_MODE,
 					cmd_flags,
 					token);
+	cmd_params = (struct dpni_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
 
 	err = mc_send_command(mc_io, &cmd);
 	if (err)
@@ -1749,6 +1843,78 @@ int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpni_set_queue_tx_confirmation_mode() - Set Tx confirmation mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
+ * @index:	queue index
+ * @mode:	Tx confirmation mode
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode mode)
+{
+	struct dpni_queue_tx_confirmation_mode *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_QUEUE_TX_CONFIRMATION_MODE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
+	cmd_params->index = index;
+	cmd_params->confirmation_mode = mode;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_queue_tx_confirmation_mode() - Get Tx confirmation mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
+ * @index:	queue index
+ * @mode:	Tx confirmation mode
+ *
+ * Return:  '0' on Success; Error code otherwise.
+ */
+int dpni_get_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode *mode)
+{
+	struct dpni_queue_tx_confirmation_mode *cmd_params;
+	struct dpni_queue_tx_confirmation_mode *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QUEUE_TX_CONFIRMATION_MODE,
+					cmd_flags,
+					token);
+	cmd_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
+	cmd_params->index = index;
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	*mode = rsp_params->confirmation_mode;
+
+	return 0;
+}
+
 /**
  * dpni_set_qos_table() - Set QoS mapping table
  * @mc_io:	Pointer to MC portal's I/O object
@@ -2291,8 +2457,7 @@ int dpni_set_congestion_notification(struct fsl_mc_io *mc_io,
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
  * @qtype:	Type of queue - Rx, Tx and Tx confirm types are supported
- * @param:	Traffic class and channel. Bits[0-7] contain traaffic class,
- *		byte[8-15] contains channel id
+ * @tc_id:	Traffic class selection (0-7)
  * @cfg:	congestion notification configuration
  *
  * Return:	'0' on Success; error code otherwise.
@@ -3114,8 +3279,216 @@ int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 
 	cmd_params = (struct dpni_cmd_set_port_cfg *)cmd.params;
 	cmd_params->flags = cpu_to_le32(flags);
-	dpni_set_field(cmd_params->bit_params,	PORT_LOOPBACK_EN,
-			!!port_cfg->loopback_en);
+	dpni_set_field(cmd_params->bit_params, PORT_LOOPBACK_EN, !!port_cfg->loopback_en);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_single_step_cfg() - return current configuration for single step PTP
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @ptp_cfg: ptp single step configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ */
+int dpni_get_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_single_step_cfg *ptp_cfg)
+{
+	struct dpni_rsp_single_step_cfg *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SINGLE_STEP_CFG,
+						cmd_flags,
+						token);
+	/* send command to mc*/
+	err =  mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* read command response */
+	rsp_params = (struct dpni_rsp_single_step_cfg *)cmd.params;
+	ptp_cfg->offset = le16_to_cpu(rsp_params->offset);
+	ptp_cfg->en = dpni_get_field(rsp_params->flags, PTP_ENABLE);
+	ptp_cfg->ch_update = dpni_get_field(rsp_params->flags, PTP_CH_UPDATE);
+	ptp_cfg->peer_delay = le32_to_cpu(rsp_params->peer_delay);
+	ptp_cfg->ptp_onestep_reg_base =
+				  le32_to_cpu(rsp_params->ptp_onestep_reg_base);
+
+	return err;
+}
+
+/**
+ * dpni_get_port_cfg() - return configuration of the physical port
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @port_cfg: Configuration data
+ *
+ * The command can be called only when the dpni is connected to a dpmac object.
+ * If the dpni is unconnected or the endpoint is not a dpmac, an error is returned.
+ */
+int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_port_cfg *port_cfg)
+{
+	struct dpni_rsp_get_port_cfg *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_PORT_CFG,
+			cmd_flags, token);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* read command response */
+	rsp_params = (struct dpni_rsp_get_port_cfg *)cmd.params;
+	port_cfg->loopback_en = dpni_get_field(rsp_params->bit_params, PORT_LOOPBACK_EN);
+
+	return 0;
+}
+
+/**
+ * dpni_set_single_step_cfg() - enable/disable and configure single step PTP
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @ptp_cfg: ptp single step configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * The function has effect only when dpni object is connected to a dpmac object. If the
+ * dpni is not connected to a dpmac the configuration will be stored inside and applied
+ * when connection is made.
+ */
+int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_single_step_cfg *ptp_cfg)
+{
+	struct dpni_cmd_single_step_cfg *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_SINGLE_STEP_CFG,
+						cmd_flags,
+						token);
+	cmd_params = (struct dpni_cmd_single_step_cfg *)cmd.params;
+	cmd_params->offset = cpu_to_le16(ptp_cfg->offset);
+	cmd_params->peer_delay = cpu_to_le32(ptp_cfg->peer_delay);
+	dpni_set_field(cmd_params->flags, PTP_ENABLE, !!ptp_cfg->en);
+	dpni_set_field(cmd_params->flags, PTP_CH_UPDATE, !!ptp_cfg->ch_update);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_dump_table() - Dump the content of table_type table into memory.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @table_type: The type of the table to dump
+ * @table_index: The index of the table to dump in case of more than one table
+ * @iova_addr: The snapshot will be stored at this address as a header of struct dump_table_header
+ *             followed by an array of struct dump_table_entry
+ * @iova_size: Memory size allocated for iova_addr
+ * @num_entries: Number of entries written in iova_addr
+ *
+ * Return: Completion status. '0' on Success; Error code otherwise.
+ *
+ * The memory allocated at iova_addr must be zeroed before command execution.
+ * If the table content exceeds the memory size provided, the dump will be truncated.
+ */
+int dpni_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	struct dpni_cmd_dump_table *cmd_params;
+	struct dpni_rsp_dump_table *rsp_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_DUMP_TABLE, cmd_flags, token);
+	cmd_params = (struct dpni_cmd_dump_table *)cmd.params;
+	cmd_params->table_type = cpu_to_le16(table_type);
+	cmd_params->table_index = cpu_to_le16(table_index);
+	cmd_params->iova_addr = cpu_to_le64(iova_addr);
+	cmd_params->iova_size = cpu_to_le32(iova_size);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpni_rsp_dump_table *)cmd.params;
+	*num_entries = le16_to_cpu(rsp_params->num_entries);
+
+	return 0;
+}
+
+/**
+ * dpni_set_sp_profile() - Set up a Soft Parser Profile on this DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @sp_profile: Soft Parser Profile name (must be a valid name for a defined profile).
+ *			Maximum allowed length for this string is 8 characters.
+ *			If this parameter is an empty string (all zeros),
+ *			the default SP Profile is set on this dpni.
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ */
+int dpni_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type)
+{
+	struct dpni_cmd_set_sp_profile *cmd_params;
+	struct mc_command cmd = { 0 };
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_SP_PROFILE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpni_cmd_set_sp_profile *)cmd.params;
+	for (i = 0; i < MAX_SP_PROFILE_ID_SIZE && sp_profile[i]; i++)
+		cmd_params->sp_profile[i] = sp_profile[i];
+	cmd_params->type = type;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_sp_enable() - Enable/Disable Soft Parser on this DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ * @en: 1 to enable or 0 to disable
+ */
+int dpni_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t type, uint8_t en)
+{
+	struct dpni_cmd_sp_enable *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SP_ENABLE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpni_cmd_sp_enable *)cmd.params;
+	cmd_params->type = type;
+	cmd_params->en = en;
 
 	/* send command to MC */
 	return mc_send_command(mc_io, &cmd);
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 9bbac44219..97b09e59f9 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2022 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #ifndef __FSL_DPDMUX_H
@@ -154,6 +154,10 @@ int dpdmux_reset(struct fsl_mc_io *mc_io,
  *Setting 1 DPDMUX_RESET will not reset multicast rules
  */
 #define DPDMUX_SKIP_MULTICAST_RULES	0x04
+/**
+ *Setting 1 DPDMUX_RESET will not reset default interface
+ */
+#define DPDMUX_SKIP_RESET_DEFAULT_INTERFACE	0x08
 
 int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
@@ -464,10 +468,50 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
 			   uint16_t *major_ver,
 			   uint16_t *minor_ver);
 
+enum dpdmux_congestion_unit {
+	DPDMUX_TAILDROP_DROP_UNIT_BYTE = 0,
+	DPDMUX_TAILDROP_DROP_UNIT_FRAMES,
+	DPDMUX_TAILDROP_DROP_UNIT_BUFFERS
+};
+
 /**
- * Discard bit. This bit must be used together with other bits in
- * DPDMUX_ERROR_ACTION_CONTINUE to disable discarding of frames containing
- * errors
+ * struct dpdmux_taildrop_cfg - interface taildrop configuration
+ * @enable - enable (1) or disable (0) taildrop
+ * @units - taildrop units
+ * @threshold - taildrop threshold
+ */
+struct dpdmux_taildrop_cfg {
+	char enable;
+	enum dpdmux_congestion_unit units;
+	uint32_t threshold;
+};
+
+int dpdmux_if_set_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg);
+
+int dpdmux_if_get_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg);
+
+#define DPDMUX_MAX_KEY_SIZE 56
+
+enum dpdmux_table_type {
+	DPDMUX_DMAT_TABLE = 1,
+	DPDMUX_MISS_TABLE = 2,
+	DPDMUX_PRUNE_TABLE = 3,
+};
+
+int dpdmux_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries);
+
+/**
+ * Discard bit. This bit must be used together with other bits in DPDMUX_ERROR_ACTION_CONTINUE
+ * to disable discarding of frames containing errors
  */
 #define DPDMUX_ERROR_DISC		0x80000000
 /**
@@ -583,4 +627,19 @@ struct dpdmux_error_cfg {
 int dpdmux_if_set_errors_behavior(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 		uint16_t token, uint16_t if_id, struct dpdmux_error_cfg *cfg);
 
+/**
+ * SP Profile on Ingress DPDMUX
+ */
+#define DPDMUX_SP_PROFILE_INGRESS 0x1
+/**
+ * SP Profile on Egress DPDMUX
+ */
+#define DPDMUX_SP_PROFILE_EGRESS	0x2
+
+int dpdmux_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type);
+
+int dpdmux_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t if_id, uint8_t type, uint8_t en);
+
 #endif /* __FSL_DPDMUX_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
index bf6b8a20d1..a94f1bf91a 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #ifndef _FSL_DPDMUX_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPDMUX Version */
 #define DPDMUX_VER_MAJOR		6
-#define DPDMUX_VER_MINOR		9
+#define DPDMUX_VER_MINOR		10
 
 #define DPDMUX_CMD_BASE_VERSION		1
 #define DPDMUX_CMD_VERSION_2		2
@@ -63,8 +63,17 @@
 
 #define DPDMUX_CMDID_SET_RESETABLE		DPDMUX_CMD(0x0ba)
 #define DPDMUX_CMDID_GET_RESETABLE		DPDMUX_CMD(0x0bb)
+
+#define DPDMUX_CMDID_IF_SET_TAILDROP		DPDMUX_CMD(0x0bc)
+#define DPDMUX_CMDID_IF_GET_TAILDROP		DPDMUX_CMD(0x0bd)
+
+#define DPDMUX_CMDID_DUMP_TABLE           DPDMUX_CMD(0x0be)
+
 #define DPDMUX_CMDID_SET_ERRORS_BEHAVIOR	DPDMUX_CMD(0x0bf)
 
+#define DPDMUX_CMDID_SET_SP_PROFILE			DPDMUX_CMD(0x0c0)
+#define DPDMUX_CMDID_SP_ENABLE				DPDMUX_CMD(0x0c1)
+
 #define DPDMUX_MASK(field)        \
 	GENMASK(DPDMUX_##field##_SHIFT + DPDMUX_##field##_SIZE - 1, \
 		DPDMUX_##field##_SHIFT)
@@ -241,7 +250,7 @@ struct dpdmux_cmd_remove_custom_cls_entry {
 };
 
 #define DPDMUX_SKIP_RESET_FLAGS_SHIFT    0
-#define DPDMUX_SKIP_RESET_FLAGS_SIZE     3
+#define DPDMUX_SKIP_RESET_FLAGS_SIZE     4
 
 struct dpdmux_cmd_set_skip_reset_flags {
 	uint8_t skip_reset_flags;
@@ -251,6 +260,61 @@ struct dpdmux_rsp_get_skip_reset_flags {
 	uint8_t skip_reset_flags;
 };
 
+struct dpdmux_cmd_set_taildrop {
+	uint32_t	pad1;
+	uint16_t	if_id;
+	uint16_t	pad2;
+	uint16_t	oal_en;
+	uint8_t		units;
+	uint8_t		pad3;
+	uint32_t	threshold;
+};
+
+struct dpdmux_cmd_get_taildrop {
+	uint32_t	pad1;
+	uint16_t	if_id;
+};
+
+struct dpdmux_rsp_get_taildrop {
+	uint16_t	pad1;
+	uint16_t	pad2;
+	uint16_t	if_id;
+	uint16_t	pad3;
+	uint16_t	oal_en;
+	uint8_t		units;
+	uint8_t		pad4;
+	uint32_t	threshold;
+};
+
+struct dpdmux_cmd_dump_table {
+	uint16_t table_type;
+	uint16_t table_index;
+	uint32_t pad0;
+	uint64_t iova_addr;
+	uint32_t iova_size;
+};
+
+struct dpdmux_rsp_dump_table {
+	uint16_t num_entries;
+};
+
+struct dpdmux_dump_table_header {
+	uint16_t table_type;
+	uint16_t table_num_entries;
+	uint16_t table_max_entries;
+	uint8_t default_action;
+	uint8_t match_type;
+	uint8_t reserved[24];
+};
+
+struct dpdmux_dump_table_entry {
+	uint8_t key[DPDMUX_MAX_KEY_SIZE];
+	uint8_t mask[DPDMUX_MAX_KEY_SIZE];
+	uint8_t key_action;
+	uint16_t result[3];
+	uint8_t reserved[21];
+};
+
 #define DPDMUX_ERROR_ACTION_SHIFT		0
 #define DPDMUX_ERROR_ACTION_SIZE		4
 
@@ -260,5 +324,18 @@ struct dpdmux_cmd_set_errors_behavior {
 	uint16_t if_id;
 };
 
+#define MAX_SP_PROFILE_ID_SIZE	8
+
+struct dpdmux_cmd_set_sp_profile {
+	uint8_t sp_profile[MAX_SP_PROFILE_ID_SIZE];
+	uint8_t type;
+};
+
+struct dpdmux_cmd_sp_enable {
+	uint16_t if_id;
+	uint8_t type;
+	uint8_t en;
+};
+
 #pragma pack(pop)
 #endif /* _FSL_DPDMUX_CMD_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dpkg.h b/drivers/net/dpaa2/mc/fsl_dpkg.h
index 70f2339ea5..834c765513 100644
--- a/drivers/net/dpaa2/mc/fsl_dpkg.h
+++ b/drivers/net/dpaa2/mc/fsl_dpkg.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  * Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPKG_H_
@@ -180,7 +180,8 @@ struct dpni_ext_set_rx_tc_dist {
 	struct dpni_dist_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
 };
 
-int dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
-			 uint8_t *key_cfg_buf);
+int
+dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
+	void *key_cfg_buf);
 
 #endif /* __FSL_DPKG_H_ */
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index ce84f4265e..3a5fcfa8a5 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPNI_H
@@ -116,6 +116,11 @@ struct fsl_mc_io;
  * Flow steering table is shared between all traffic classes
  */
 #define DPNI_OPT_SHARED_FS				0x001000
+/**
+ * FQ frame data, context and annotation stashing disable.
+ * Stashing is enabled by default.
+ */
+#define DPNI_OPT_STASHING_DIS			0x002000
 /**
  * Software sequence maximum layout size
  */
@@ -147,6 +152,7 @@ int dpni_close(struct fsl_mc_io *mc_io,
  *		DPNI_OPT_HAS_KEY_MASKING
  *		DPNI_OPT_NO_FS
  *		DPNI_OPT_SINGLE_SENDER
+ *		DPNI_OPT_STASHING_DIS
  * @fs_entries: Number of entries in the flow steering table.
  *		This table is used to select the ingress queue for
  *		ingress traffic, targeting a GPP core or another.
@@ -335,6 +341,7 @@ int dpni_clear_irq_status(struct fsl_mc_io *mc_io,
  *		DPNI_OPT_SHARED_CONGESTION
  *		DPNI_OPT_HAS_KEY_MASKING
  *		DPNI_OPT_NO_FS
+ *		DPNI_OPT_STASHING_DIS
  * @num_queues: Number of Tx and Rx queues used for traffic distribution.
  * @num_rx_tcs: Number of RX traffic classes (TCs), reserved for the DPNI.
  * @num_tx_tcs: Number of TX traffic classes (TCs), reserved for the DPNI.
@@ -394,7 +401,7 @@ int dpni_get_attributes(struct fsl_mc_io *mc_io,
  * error queue. To be used in dpni_set_errors_behavior() only if error_action
  * parameter is set to DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE.
  */
-#define DPNI_ERROR_DISC		0x80000000
+#define DPNI_ERROR_DISC			0x80000000
 
 /**
  * Extract out of frame header error
@@ -576,6 +583,8 @@ enum dpni_offload {
 	DPNI_OFF_TX_L3_CSUM,
 	DPNI_OFF_TX_L4_CSUM,
 	DPNI_FLCTYPE_HASH,
+	DPNI_HEADER_STASHING,
+	DPNI_PAYLOAD_STASHING,
 };
 
 int dpni_set_offload(struct fsl_mc_io *mc_io,
@@ -596,6 +605,26 @@ int dpni_get_qdid(struct fsl_mc_io *mc_io,
 		  enum dpni_queue_type qtype,
 		  uint16_t *qdid);
 
+int dpni_get_qdid_ex(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  enum dpni_queue_type qtype,
+		  uint16_t *qdid);
+
+/**
+ * struct dpni_sp_info - Structure representing DPNI storage-profile information
+ * (relevant only for DPNI owned by AIOP)
+ * @spids: array of storage-profiles
+ */
+struct dpni_sp_info {
+	uint16_t spids[DPNI_MAX_SP];
+};
+
+int dpni_get_sp_info(struct fsl_mc_io *mc_io,
+		     uint32_t cmd_flags,
+		     uint16_t token,
+		     struct dpni_sp_info *sp_info);
+
 int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
 			    uint32_t cmd_flags,
 			    uint16_t token,
@@ -1443,11 +1472,25 @@ enum dpni_confirmation_mode {
 int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode mode);
 
 int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
+				  enum dpni_confirmation_mode *mode);
+
+int dpni_set_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode mode);
+
+int dpni_get_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
 				  enum dpni_confirmation_mode *mode);
 
 /**
@@ -1841,6 +1884,60 @@ void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
 				     const uint8_t *sw_sequence_layout_buf);
 
 /**
+ * When used as queue_idx in dpni_set_rx_dist_default_queue(), signals the dpni
+ * to drop all unclassified frames
+ */
+#define DPNI_FS_MISS_DROP		((uint16_t)-1)
+
+/**
+ * struct dpni_rx_dist_cfg - distribution configuration
+ * @dist_size:	distribution size; supported values: 1,2,3,4,6,7,8,
+ *		12,14,16,24,28,32,48,56,64,96,112,128,192,224,256,384,448,
+ *		512,768,896,1024
+ * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
+ *		the extractions to be used for the distribution key by calling
+ *		dpkg_prepare_key_cfg(); relevant only when enable != 0, otherwise it can be '0'
+ * @enable: enable/disable the distribution.
+ * @tc: TC id for which distribution is set
+ * @fs_miss_flow_id: when a packet misses all rules from the flow steering table and hash is
+ *		disabled, it will be put into this queue id; use DPNI_FS_MISS_DROP to drop
+ *		frames. The value of this field is used only when flow steering distribution
+ *		is enabled and hash distribution is disabled
+ */
+struct dpni_rx_dist_cfg {
+	uint16_t dist_size;
+	uint64_t key_cfg_iova;
+	uint8_t enable;
+	uint8_t tc;
+	uint16_t fs_miss_flow_id;
+};
+
+int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		const struct dpni_rx_dist_cfg *cfg);
+
+int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		const struct dpni_rx_dist_cfg *cfg);
+
+int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t tpid);
+
+int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t tpid);
+
+/**
+ * struct dpni_custom_tpid_cfg - custom TPID configuration. Contains custom TPID values
+ *		used in the current dpni object to detect 802.1q frames.
+ *	@tpid1: first tag. Not used if zero.
+ *	@tpid2: second tag. Not used if zero.
+ */
+struct dpni_custom_tpid_cfg {
+	uint16_t tpid1;
+	uint16_t tpid2;
+};
+
+int dpni_get_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_custom_tpid_cfg *tpid);
+/*
  * struct dpni_ptp_cfg - configure single step PTP (IEEE 1588)
  *	@en: enable single step PTP. When enabled the PTPv1 functionality will
  *		not work. If the field is zero, offset and ch_update parameters
@@ -1858,6 +1955,7 @@ struct dpni_single_step_cfg {
 	uint8_t ch_update;
 	uint16_t offset;
 	uint32_t peer_delay;
+	uint32_t ptp_onestep_reg_base;
 };
 
 int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
@@ -1885,61 +1983,35 @@ int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 		uint16_t token, struct dpni_port_cfg *port_cfg);
 
-/**
- * When used for queue_idx in function dpni_set_rx_dist_default_queue will
- * signal to dpni to drop all unclassified frames
- */
-#define DPNI_FS_MISS_DROP		((uint16_t)-1)
-
-/**
- * struct dpni_rx_dist_cfg - distribution configuration
- * @dist_size:	distribution size; supported values: 1,2,3,4,6,7,8,
- *		12,14,16,24,28,32,48,56,64,96,112,128,192,224,256,384,448,
- *		512,768,896,1024
- * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
- *		the extractions to be used for the distribution key by calling
- *		dpkg_prepare_key_cfg() relevant only when enable!=0 otherwise
- *		it can be '0'
- * @enable: enable/disable the distribution.
- * @tc: TC id for which distribution is set
- * @fs_miss_flow_id: when packet misses all rules from flow steering table and
- *		hash is disabled it will be put into this queue id; use
- *		DPNI_FS_MISS_DROP to drop frames. The value of this field is
- *		used only when flow steering distribution is enabled and hash
- *		distribution is disabled
- */
-struct dpni_rx_dist_cfg {
-	uint16_t dist_size;
-	uint64_t key_cfg_iova;
-	uint8_t enable;
-	uint8_t tc;
-	uint16_t fs_miss_flow_id;
+enum dpni_table_type {
+	DPNI_FS_TABLE = 1,
+	DPNI_MAC_TABLE = 2,
+	DPNI_QOS_TABLE = 3,
+	DPNI_VLAN_TABLE = 4,
 };
 
-int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, const struct dpni_rx_dist_cfg *cfg);
-
-int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, const struct dpni_rx_dist_cfg *cfg);
-
-int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, uint16_t tpid);
-
-int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, uint16_t tpid);
+int dpni_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries);
 
 /**
- * struct dpni_custom_tpid_cfg - custom TPID configuration. Contains custom TPID
- *	values used in current dpni object to detect 802.1q frames.
- *	@tpid1: first tag. Not used if zero.
- *	@tpid2: second tag. Not used if zero.
+ * SP Profile on Ingress DPNI
  */
-struct dpni_custom_tpid_cfg {
-	uint16_t tpid1;
-	uint16_t tpid2;
-};
+#define DPNI_SP_PROFILE_INGRESS 0x1
+/**
+ * SP Profile on Egress DPNI
+ */
+#define DPNI_SP_PROFILE_EGRESS	0x2
+
+int dpni_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type);
 
-int dpni_get_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, struct dpni_custom_tpid_cfg *tpid);
+int dpni_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t type, uint8_t en);
 
 #endif /* __FSL_DPNI_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
index 781f936add..1152182e34 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2022 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPNI_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPNI Version */
 #define DPNI_VER_MAJOR				8
-#define DPNI_VER_MINOR				2
+#define DPNI_VER_MINOR				4
 
 #define DPNI_CMD_BASE_VERSION			1
 #define DPNI_CMD_VERSION_2			2
@@ -108,8 +108,8 @@
 #define DPNI_CMDID_GET_EARLY_DROP		DPNI_CMD_V3(0x26A)
 #define DPNI_CMDID_GET_OFFLOAD			DPNI_CMD_V2(0x26B)
 #define DPNI_CMDID_SET_OFFLOAD			DPNI_CMD_V2(0x26C)
-#define DPNI_CMDID_SET_TX_CONFIRMATION_MODE	DPNI_CMD(0x266)
-#define DPNI_CMDID_GET_TX_CONFIRMATION_MODE	DPNI_CMD(0x26D)
+#define DPNI_CMDID_SET_TX_CONFIRMATION_MODE	DPNI_CMD_V2(0x266)
+#define DPNI_CMDID_GET_TX_CONFIRMATION_MODE	DPNI_CMD_V2(0x26D)
 #define DPNI_CMDID_SET_OPR			DPNI_CMD_V2(0x26e)
 #define DPNI_CMDID_GET_OPR			DPNI_CMD_V2(0x26f)
 #define DPNI_CMDID_LOAD_SW_SEQUENCE		DPNI_CMD(0x270)
@@ -121,7 +121,16 @@
 #define DPNI_CMDID_REMOVE_CUSTOM_TPID		DPNI_CMD(0x276)
 #define DPNI_CMDID_GET_CUSTOM_TPID		DPNI_CMD(0x277)
 #define DPNI_CMDID_GET_LINK_CFG			DPNI_CMD(0x278)
+#define DPNI_CMDID_SET_SINGLE_STEP_CFG			DPNI_CMD(0x279)
+#define DPNI_CMDID_GET_SINGLE_STEP_CFG		DPNI_CMD_V2(0x27a)
 #define DPNI_CMDID_SET_PORT_CFG			DPNI_CMD(0x27B)
+#define DPNI_CMDID_GET_PORT_CFG			DPNI_CMD(0x27C)
+#define DPNI_CMDID_DUMP_TABLE           DPNI_CMD(0x27D)
+#define DPNI_CMDID_SET_SP_PROFILE		DPNI_CMD(0x27E)
+#define DPNI_CMDID_GET_QDID_EX			DPNI_CMD(0x27F)
+#define DPNI_CMDID_SP_ENABLE		    DPNI_CMD(0x280)
+#define DPNI_CMDID_SET_QUEUE_TX_CONFIRMATION_MODE	DPNI_CMD(0x281)
+#define DPNI_CMDID_GET_QUEUE_TX_CONFIRMATION_MODE	DPNI_CMD(0x282)
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPNI_MASK(field)	\
@@ -329,6 +338,10 @@ struct dpni_rsp_get_qdid {
 	uint16_t qdid;
 };
 
+struct dpni_rsp_get_qdid_ex {
+	uint16_t qdid[16];
+};
+
 struct dpni_rsp_get_sp_info {
 	uint16_t spids[2];
 };
@@ -748,7 +761,16 @@ struct dpni_cmd_set_taildrop {
 };
 
 struct dpni_tx_confirmation_mode {
-	uint32_t pad;
+	uint8_t ceetm_ch_idx;
+	uint8_t pad1;
+	uint16_t pad2;
+	uint8_t confirmation_mode;
+};
+
+struct dpni_queue_tx_confirmation_mode {
+	uint8_t ceetm_ch_idx;
+	uint8_t index;
+	uint16_t pad;
 	uint8_t confirmation_mode;
 };
 
@@ -894,6 +916,42 @@ struct dpni_sw_sequence_layout_entry {
 	uint16_t pad;
 };
 
+#define DPNI_RX_FS_DIST_ENABLE_SHIFT	0
+#define DPNI_RX_FS_DIST_ENABLE_SIZE		1
+struct dpni_cmd_set_rx_fs_dist {
+	uint16_t	dist_size;
+	uint8_t		enable;
+	uint8_t		tc;
+	uint16_t	miss_flow_id;
+	uint16_t	pad1;
+	uint64_t	key_cfg_iova;
+};
+
+#define DPNI_RX_HASH_DIST_ENABLE_SHIFT	0
+#define DPNI_RX_HASH_DIST_ENABLE_SIZE		1
+struct dpni_cmd_set_rx_hash_dist {
+	uint16_t	dist_size;
+	uint8_t		enable;
+	uint8_t		tc_id;
+	uint32_t	pad;
+	uint64_t	key_cfg_iova;
+};
+
+struct dpni_cmd_add_custom_tpid {
+	uint16_t	pad;
+	uint16_t	tpid;
+};
+
+struct dpni_cmd_remove_custom_tpid {
+	uint16_t	pad;
+	uint16_t	tpid;
+};
+
+struct dpni_rsp_get_custom_tpid {
+	uint16_t	tpid1;
+	uint16_t	tpid2;
+};
+
 #define DPNI_PTP_ENABLE_SHIFT			0
 #define DPNI_PTP_ENABLE_SIZE			1
 #define DPNI_PTP_CH_UPDATE_SHIFT		1
@@ -925,40 +983,45 @@ struct dpni_rsp_get_port_cfg {
 	uint32_t	bit_params;
 };
 
-#define DPNI_RX_FS_DIST_ENABLE_SHIFT	0
-#define DPNI_RX_FS_DIST_ENABLE_SIZE		1
-struct dpni_cmd_set_rx_fs_dist {
-	uint16_t	dist_size;
-	uint8_t		enable;
-	uint8_t		tc;
-	uint16_t	miss_flow_id;
-	uint16_t	pad1;
-	uint64_t	key_cfg_iova;
+struct dpni_cmd_dump_table {
+	uint16_t table_type;
+	uint16_t table_index;
+	uint32_t pad0;
+	uint64_t iova_addr;
+	uint32_t iova_size;
 };
 
-#define DPNI_RX_HASH_DIST_ENABLE_SHIFT	0
-#define DPNI_RX_HASH_DIST_ENABLE_SIZE		1
-struct dpni_cmd_set_rx_hash_dist {
-	uint16_t	dist_size;
-	uint8_t		enable;
-	uint8_t		tc_id;
-	uint32_t	pad;
-	uint64_t	key_cfg_iova;
+struct dpni_rsp_dump_table {
+	uint16_t num_entries;
 };
 
-struct dpni_cmd_add_custom_tpid {
-	uint16_t	pad;
-	uint16_t	tpid;
+struct dump_table_header {
+	uint16_t table_type;
+	uint16_t table_num_entries;
+	uint16_t table_max_entries;
+	uint8_t default_action;
+	uint8_t match_type;
+	uint8_t reserved[24];
 };
 
-struct dpni_cmd_remove_custom_tpid {
-	uint16_t	pad;
-	uint16_t	tpid;
+struct dump_table_entry {
+	uint8_t key[DPNI_MAX_KEY_SIZE];
+	uint8_t mask[DPNI_MAX_KEY_SIZE];
+	uint8_t key_action;
+	uint16_t result[3];
+	uint8_t reserved[21];
 };
 
-struct dpni_rsp_get_custom_tpid {
-	uint16_t	tpid1;
-	uint16_t	tpid2;
+#define MAX_SP_PROFILE_ID_SIZE	8
+
+struct dpni_cmd_set_sp_profile {
+	uint8_t sp_profile[MAX_SP_PROFILE_ID_SIZE];
+	uint8_t type;
+};
+
+struct dpni_cmd_sp_enable {
+	uint8_t type;
+	uint8_t en;
 };
 
 #pragma pack(pop)
-- 
2.25.1


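For reviewers, a minimal caller sketch for the new dpni_dump_table()
command added above. This is only a sketch: the 4 KiB buffer size is an
arbitrary example, mc_io/token come from the usual dpni_open() flow, and
struct dump_table_header/dump_table_entry are the layouts from the
internal fsl_dpni_cmd.h header in this patch:

	uint16_t num = 0, i;
	struct dump_table_header *hdr;
	struct dump_table_entry *entry;
	/* rte_zmalloc() returns zeroed memory, as the command requires */
	void *buf = rte_zmalloc(NULL, 4096, RTE_CACHE_LINE_SIZE);
	int err;

	if (!buf)
		return -ENOMEM;
	err = dpni_dump_table(mc_io, CMD_PRI_LOW, token, DPNI_FS_TABLE, 0,
			      rte_malloc_virt2iova(buf), 4096, &num);
	if (!err) {
		hdr = (struct dump_table_header *)buf;
		entry = (struct dump_table_entry *)(hdr + 1);
		for (i = 0; i < num; i++) {
			/* inspect entry[i].key, entry[i].mask,
			 * entry[i].result as needed
			 */
		}
	}
	rte_free(buf);
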
^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 09/43] net/dpaa2: support link state for eth interfaces
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (7 preceding siblings ...)
  2024-09-18  7:50   ` [v2 08/43] bus/fslmc: upgrade with MC version 10.37 vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 10/43] net/dpaa2: update DPNI link status method vanshika.shukla
                     ` (34 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

This patch adds support to update the duplex value along with the
link status and link speed after setting the link UP.
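
With this in place an application can read the duplex setting through the
standard ethdev link API; a minimal sketch (assumes a valid dpaa2 port_id
and the usual <rte_ethdev.h>/<stdio.h> includes):

	struct rte_eth_link link;

	memset(&link, 0, sizeof(link));
	rte_eth_link_get_nowait(port_id, &link);
	if (link.link_status == RTE_ETH_LINK_UP &&
	    link.link_duplex == RTE_ETH_LINK_HALF_DUPLEX)
		printf("port %u is up in half duplex\n", port_id);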

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 4dc7a82b47..9172097abf 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1985,7 +1985,7 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	if (ret) {
 		/* Unable to obtain dpni status; Not continuing */
 		DPAA2_PMD_ERR("Interface Link UP failed (%d)", ret);
-		return -EINVAL;
+		return ret;
 	}
 
 	/* Enable link if not already enabled */
@@ -1993,13 +1993,13 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 		ret = dpni_enable(dpni, CMD_PRI_LOW, priv->token);
 		if (ret) {
 			DPAA2_PMD_ERR("Interface Link UP failed (%d)", ret);
-			return -EINVAL;
+			return ret;
 		}
 	}
 	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
 	if (ret < 0) {
 		DPAA2_PMD_DEBUG("Unable to get link state (%d)", ret);
-		return -1;
+		return ret;
 	}
 
 	/* changing tx burst function to start enqueues */
@@ -2007,10 +2007,15 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	dev->data->dev_link.link_status = state.up;
 	dev->data->dev_link.link_speed = state.rate;
 
+	if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
+		dev->data->dev_link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	else
+		dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+
 	if (state.up)
-		DPAA2_PMD_INFO("Port %d Link is Up", dev->data->port_id);
+		DPAA2_PMD_DEBUG("Port %d Link is Up", dev->data->port_id);
 	else
-		DPAA2_PMD_INFO("Port %d Link is Down", dev->data->port_id);
+		DPAA2_PMD_DEBUG("Port %d Link is Down", dev->data->port_id);
 	return ret;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 10/43] net/dpaa2: update DPNI link status method
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (8 preceding siblings ...)
  2024-09-18  7:50   ` [v2 09/43] net/dpaa2: support link state for eth interfaces vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 11/43] net/dpaa2: add new PMD API to check dpaa platform version vanshika.shukla
                     ` (33 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Brick Yang, Rohit Raj

From: Brick Yang <brick.yang@nxp.com>

If an SFP module is not connected to the port and flow control is
configured using the flow control API, the link shows DOWN even after
the SFP module and fiber cable are connected.

The issue cannot be reproduced if only the SFP module is connected and
the fiber cable is disconnected before configuring flow control, even
though the link is down in that case too.

This patch fixes the issue by reading the configuration values through
the dpni_get_link_cfg API, which returns the static configuration data,
instead of the dpni_get_link_state API.
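
For reference, the affected path is the one exercised by the ethdev
flow-control API; a minimal reproducer sketch (assumes a valid dpaa2
port_id):

	struct rte_eth_fc_conf fc_conf;
	int ret;

	memset(&fc_conf, 0, sizeof(fc_conf));
	fc_conf.mode = RTE_ETH_FC_FULL;	/* pause on both RX and TX */
	ret = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
	if (!ret)
		/* with this patch, the readback reflects the configured
		 * options even while the link is down
		 */
		ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);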

Signed-off-by: Brick Yang <brick.yang@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 9172097abf..2fb9b8ea95 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2084,7 +2084,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	int ret = -EINVAL;
 	struct dpaa2_dev_priv *priv;
 	struct fsl_mc_io *dpni;
-	struct dpni_link_state state = {0};
+	struct dpni_link_cfg cfg = {0};
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -2096,14 +2096,14 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
+	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("error: dpni_get_link_state %d", ret);
+		DPAA2_PMD_ERR("error: dpni_get_link_cfg %d", ret);
 		return ret;
 	}
 
 	memset(fc_conf, 0, sizeof(struct rte_eth_fc_conf));
-	if (state.options & DPNI_LINK_OPT_PAUSE) {
+	if (cfg.options & DPNI_LINK_OPT_PAUSE) {
 		/* DPNI_LINK_OPT_PAUSE set
 		 *  if ASYM_PAUSE not set,
 		 *	RX Side flow control (handle received Pause frame)
@@ -2112,7 +2112,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	RX Side flow control (handle received Pause frame)
 		 *	No TX side flow control (send Pause frame disabled)
 		 */
-		if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
+		if (!(cfg.options & DPNI_LINK_OPT_ASYM_PAUSE))
 			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
 			fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
@@ -2124,7 +2124,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *  if ASYM_PAUSE not set,
 		 *	Flow control disabled
 		 */
-		if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
+		if (cfg.options & DPNI_LINK_OPT_ASYM_PAUSE)
 			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
 			fc_conf->mode = RTE_ETH_FC_NONE;
@@ -2139,7 +2139,6 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	int ret = -EINVAL;
 	struct dpaa2_dev_priv *priv;
 	struct fsl_mc_io *dpni;
-	struct dpni_link_state state = {0};
 	struct dpni_link_cfg cfg = {0};
 
 	PMD_INIT_FUNC_TRACE();
@@ -2152,23 +2151,19 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	/* It is necessary to obtain the current state before setting fc_conf
+	/* It is necessary to obtain the current cfg before setting fc_conf
 	 * as MC would return error in case rate, autoneg or duplex values are
 	 * different.
 	 */
-	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
+	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("Unable to get link state (err=%d)", ret);
+		DPAA2_PMD_ERR("Unable to get link cfg (err=%d)", ret);
 		return -1;
 	}
 
 	/* Disable link before setting configuration */
 	dpaa2_dev_set_link_down(dev);
 
-	/* Based on fc_conf, update cfg */
-	cfg.rate = state.rate;
-	cfg.options = state.options;
-
 	/* update cfg with fc_conf */
 	switch (fc_conf->mode) {
 	case RTE_ETH_FC_FULL:
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 11/43] net/dpaa2: add new PMD API to check dpaa platform version
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (9 preceding siblings ...)
  2024-09-18  7:50   ` [v2 10/43] net/dpaa2: update DPNI link status method vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 12/43] bus/fslmc: improve BMAN buffer acquire vanshika.shukla
                     ` (32 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

This patch adds support to check the DPAA platform type from
applications.
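
A sketch of the intended use from an application (names as in the diff
below):

	uint16_t port_id;

	RTE_ETH_FOREACH_DEV(port_id) {
		if (rte_pmd_dpaa2_dev_is_dpaa2(port_id))
			printf("port %u is a dpaa2 port\n", port_id);
	}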

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 16 +++++++++++++---
 drivers/net/dpaa2/dpaa2_flow.c    |  5 ++---
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  4 ++++
 drivers/net/dpaa2/version.map     |  1 +
 4 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 2fb9b8ea95..f0b4843472 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2158,7 +2158,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
 		DPAA2_PMD_ERR("Unable to get link cfg (err=%d)", ret);
-		return -1;
+		return ret;
 	}
 
 	/* Disable link before setting configuration */
@@ -2200,7 +2200,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	default:
 		DPAA2_PMD_ERR("Incorrect Flow control flag (%d)",
 			      fc_conf->mode);
-		return -1;
+		return -EINVAL;
 	}
 
 	ret = dpni_set_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
@@ -2882,8 +2882,18 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
-int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
+int
+rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 {
+	struct rte_eth_dev *dev;
+
+	if (eth_id >= RTE_MAX_ETHPORTS)
+		return false;
+
+	dev = &rte_eth_devices[eth_id];
+	if (!dev->device)
+		return false;
+
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 6c7bac4d48..15f3343db4 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -3300,14 +3300,13 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 	if (idx >= 0) {
 		if (!rte_eth_dev_is_valid_port(idx))
 			return NULL;
+		if (!rte_pmd_dpaa2_dev_is_dpaa2(idx))
+			return NULL;
 		dest_dev = &rte_eth_devices[idx];
 	} else {
 		dest_dev = priv->eth_dev;
 	}
 
-	if (!dpaa2_dev_is_dpaa2(dest_dev))
-		return NULL;
-
 	return dest_dev;
 }
 
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index bebebcacdc..fc52a9218e 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -127,6 +127,10 @@ __rte_experimental
 uint32_t
 rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
 
+__rte_experimental
+int
+rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id);
+
 #if defined(RTE_LIBRTE_IEEE1588)
 __rte_experimental
 int
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 7323fc8869..233c6e6b2c 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -17,6 +17,7 @@ EXPERIMENTAL {
 	# added in 21.11
 	rte_pmd_dpaa2_get_tlu_hash;
 	# added in 24.11
+	rte_pmd_dpaa2_dev_is_dpaa2;
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
 	rte_pmd_dpaa2_mux_dump_counter;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 12/43] bus/fslmc: improve BMAN buffer acquire
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (10 preceding siblings ...)
  2024-09-18  7:50   ` [v2 11/43] net/dpaa2: add new PMD API to check dpaa platform version vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 13/43] bus/fslmc: get MC VFIO group FD directly vanshika.shukla
                     ` (31 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Ignore the reserved bits of the BMan acquire response's buffer count.
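
Only the low three bits of the response's 'num' field carry the number
of buffers actually acquired; the reserved bits are masked off before
the value is used (pattern as in the diff below):

	num = r->num & BMAN_VALID_RSLT_NUM_MASK;	/* mask is 0x7 */
	QBMAN_BUG_ON(num > num_buffers);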

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 1f24cdce7e..3fdca9761d 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2020,2023-2024 NXP
  *
  */
 
@@ -42,6 +42,8 @@
 /* opaque token for static dequeues */
 #define QMAN_SDQCR_TOKEN    0xbb
 
+#define BMAN_VALID_RSLT_NUM_MASK 0x7
+
 enum qbman_sdqcr_dct {
 	qbman_sdqcr_dct_null = 0,
 	qbman_sdqcr_dct_prio_ics,
@@ -2628,7 +2630,7 @@ struct qbman_acquire_rslt {
 	uint16_t reserved;
 	uint8_t num;
 	uint8_t reserved2[3];
-	uint64_t buf[7];
+	uint64_t buf[BMAN_VALID_RSLT_NUM_MASK];
 };
 
 static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
@@ -2636,8 +2638,9 @@ static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
 {
 	struct qbman_acquire_desc *p;
 	struct qbman_acquire_rslt *r;
+	int num;
 
-	if (!num_buffers || (num_buffers > 7))
+	if (!num_buffers || (num_buffers > BMAN_VALID_RSLT_NUM_MASK))
 		return -EINVAL;
 
 	/* Start the management command */
@@ -2668,12 +2671,13 @@ static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
 		return -EIO;
 	}
 
-	QBMAN_BUG_ON(r->num > num_buffers);
+	num = r->num & BMAN_VALID_RSLT_NUM_MASK;
+	QBMAN_BUG_ON(num > num_buffers);
 
 	/* Copy the acquired buffers to the caller's array */
-	u64_from_le32_copy(buffers, &r->buf[0], r->num);
+	u64_from_le32_copy(buffers, &r->buf[0], num);
 
-	return (int)r->num;
+	return num;
 }
 
 static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
@@ -2681,8 +2685,9 @@ static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
 {
 	struct qbman_acquire_desc *p;
 	struct qbman_acquire_rslt *r;
+	int num;
 
-	if (!num_buffers || (num_buffers > 7))
+	if (!num_buffers || (num_buffers > BMAN_VALID_RSLT_NUM_MASK))
 		return -EINVAL;
 
 	/* Start the management command */
@@ -2713,12 +2718,13 @@ static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
 		return -EIO;
 	}
 
-	QBMAN_BUG_ON(r->num > num_buffers);
+	num = r->num & BMAN_VALID_RSLT_NUM_MASK;
+	QBMAN_BUG_ON(num > num_buffers);
 
 	/* Copy the acquired buffers to the caller's array */
-	u64_from_le32_copy(buffers, &r->buf[0], r->num);
+	u64_from_le32_copy(buffers, &r->buf[0], num);
 
-	return (int)r->num;
+	return num;
 }
 
 int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 13/43] bus/fslmc: get MC VFIO group FD directly
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (11 preceding siblings ...)
  2024-09-18  7:50   ` [v2 12/43] bus/fslmc: improve BMAN buffer acquire vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 14/43] bus/fslmc: enhance MC VFIO multiprocess support vanshika.shukla
                     ` (30 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Get the VFIO group fd directly from the file system instead of
from the RTE API, to avoid conflicting with PCIe VFIO.
FSL MC VFIO should have its own logic which does NOT depend on
RTE VFIO.
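
The direct-open path boils down to the following (sketch; VFIO_GROUP_FMT
is the "/dev/vfio/%u" format string from the EAL's eal_vfio.h):

	char filename[PATH_MAX];
	int vfio_group_fd;

	snprintf(filename, sizeof(filename), VFIO_GROUP_FMT,
		 iommu_group_num);
	vfio_group_fd = open(filename, O_RDWR);
	if (vfio_group_fd < 0)
		DPAA2_BUS_ERR("Open VFIO group(%s) failed", filename);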

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 88 ++++++++++++++++++++++++++--------
 drivers/bus/fslmc/meson.build  |  3 +-
 2 files changed, 71 insertions(+), 20 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 17163333af..1cc256f849 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2021 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -30,6 +30,7 @@
 #include <rte_kvargs.h>
 #include <dev_driver.h>
 #include <rte_eal_memconfig.h>
+#include <eal_vfio.h>
 
 #include "private.h"
 #include "fslmc_vfio.h"
@@ -440,6 +441,59 @@ int rte_fslmc_vfio_dmamap(void)
 	return 0;
 }
 
+static int
+fslmc_vfio_open_group_fd(int iommu_group_num)
+{
+	int vfio_group_fd;
+	char filename[PATH_MAX];
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
+
+	/* if primary, try to open the group */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		/* try regular group format */
+		snprintf(filename, sizeof(filename),
+			VFIO_GROUP_FMT, iommu_group_num);
+		vfio_group_fd = open(filename, O_RDWR);
+		if (vfio_group_fd <= 0) {
+			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
+				filename, vfio_group_fd);
+		}
+
+		return vfio_group_fd;
+	}
+	/* if we're in a secondary process, request group fd from the primary
+	 * process via mp channel.
+	 */
+	p->req = SOCKET_REQ_GROUP;
+	p->group_num = iommu_group_num;
+	strcpy(mp_req.name, EAL_VFIO_MP);
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_group_fd = -1;
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+	    mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		p = (struct vfio_mp_param *)mp_rep->param;
+		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+			vfio_group_fd = mp_rep->fds[0];
+		} else if (p->result == SOCKET_NO_FD) {
+			DPAA2_BUS_ERR("Bad VFIO group fd");
+			vfio_group_fd = 0;
+		}
+	}
+
+	free(mp_reply.msgs);
+	if (vfio_group_fd < 0) {
+		DPAA2_BUS_ERR("Cannot request group fd(%d)",
+			vfio_group_fd);
+	}
+	return vfio_group_fd;
+}
+
 static int
 fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		int *vfio_dev_fd, struct vfio_device_info *device_info)
@@ -455,7 +509,7 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		return -1;
 
 	/* get the actual group fd */
-	vfio_group_fd = rte_vfio_get_group_fd(iommu_group_no);
+	vfio_group_fd = vfio_group.fd;
 	if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT)
 		return -1;
 
@@ -891,6 +945,11 @@ fslmc_vfio_close_group(void)
 		}
 	}
 
+	if (vfio_group.fd > 0) {
+		close(vfio_group.fd);
+		vfio_group.fd = 0;
+	}
+
 	return 0;
 }
 
@@ -1081,7 +1140,6 @@ fslmc_vfio_setup_group(void)
 {
 	int groupid;
 	int ret;
-	int vfio_container_fd;
 	struct vfio_group_status status = { .argsz = sizeof(status) };
 
 	/* if already done once */
@@ -1100,16 +1158,9 @@ fslmc_vfio_setup_group(void)
 		return 0;
 	}
 
-	ret = rte_vfio_container_create();
-	if (ret < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return ret;
-	}
-	vfio_container_fd = ret;
-
 	/* Get the actual group fd */
-	ret = rte_vfio_container_group_bind(vfio_container_fd, groupid);
-	if (ret < 0)
+	ret = fslmc_vfio_open_group_fd(groupid);
+	if (ret <= 0)
 		return ret;
 	vfio_group.fd = ret;
 
@@ -1118,14 +1169,14 @@ fslmc_vfio_setup_group(void)
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO error getting group status");
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return ret;
 	}
 
 	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
 		DPAA2_BUS_ERR("VFIO group not viable");
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return -EPERM;
 	}
 	/* Since Group is VIABLE, Store the groupid */
@@ -1136,11 +1187,10 @@ fslmc_vfio_setup_group(void)
 		/* Now connect this IOMMU group to given container */
 		ret = vfio_connect_container();
 		if (ret) {
-			DPAA2_BUS_ERR(
-				"Error connecting container with groupid %d",
-				groupid);
+			DPAA2_BUS_ERR("vfio group(%d) connect failed(%d)",
+				groupid, ret);
 			close(vfio_group.fd);
-			rte_vfio_clear_group(vfio_group.fd);
+			vfio_group.fd = 0;
 			return ret;
 		}
 	}
@@ -1151,7 +1201,7 @@ fslmc_vfio_setup_group(void)
 		DPAA2_BUS_ERR("Error getting device %s fd from group %d",
 			      fslmc_container, vfio_group.groupid);
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return ret;
 	}
 	container_device_fd = ret;
diff --git a/drivers/bus/fslmc/meson.build b/drivers/bus/fslmc/meson.build
index 162ca286fe..70098ad778 100644
--- a/drivers/bus/fslmc/meson.build
+++ b/drivers/bus/fslmc/meson.build
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2018,2021 NXP
+# Copyright 2018-2023 NXP
 
 if not is_linux
     build = false
@@ -27,3 +27,4 @@ sources = files(
 )
 
 includes += include_directories('mc', 'qbman/include', 'portal')
+includes += include_directories('../../../lib/eal/linux')
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 14/43] bus/fslmc: enhance MC VFIO multiprocess support
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (12 preceding siblings ...)
  2024-09-18  7:50   ` [v2 13/43] bus/fslmc: get MC VFIO group FD directly vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 15/43] bus/fslmc: free VFIO group FD in case of add group failure vanshika.shukla
                     ` (29 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

MC VFIO is not registered with RTE VFIO. The primary process registers
an MC VFIO multiprocess action for secondary processes to request;
the VFIO group and container handles are passed back via CMSG.
The primary process is responsible for connecting the MC VFIO group
to the container.

In addition, the MC VFIO code is refactored according to the
container/group logic. In general, a VFIO container can hold multiple
groups per process. For now only a single MC group (dprc.x) is supported
per process, but the logic to connect multiple MC groups to a container
is already in place.
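
The secondary-process flow follows the standard EAL multiprocess
pattern; a sketch of the primary side (the handler name is illustrative,
the rest follows the code in this patch):

	static int
	fslmc_vfio_mp_primary(const struct rte_mp_msg *msg, const void *peer)
	{
		struct rte_mp_msg reply;

		memset(&reply, 0, sizeof(reply));
		strcpy(reply.name, FSLMC_VFIO_MP);
		/* look up the group/container fd requested in msg->param
		 * and pass it back as a file descriptor over CMSG
		 */
		reply.num_fds = 1;
		reply.fds[0] = fslmc_vfio_container_fd();
		return rte_mp_reply(&reply, (const char *)peer);
	}

	/* registered once during primary-process init */
	rte_mp_action_register(FSLMC_VFIO_MP, fslmc_vfio_mp_primary);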

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/fslmc_bus.c  |  14 +-
 drivers/bus/fslmc/fslmc_vfio.c | 996 ++++++++++++++++++++++-----------
 drivers/bus/fslmc/fslmc_vfio.h |  35 +-
 drivers/bus/fslmc/version.map  |   1 +
 4 files changed, 694 insertions(+), 352 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 7baadf99b9..654726dbe6 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -318,6 +318,7 @@ rte_fslmc_scan(void)
 	struct dirent *entry;
 	static int process_once;
 	int groupid;
+	char *group_name;
 
 	if (process_once) {
 		DPAA2_BUS_DEBUG("Fslmc bus already scanned. Not rescanning");
@@ -325,12 +326,19 @@ rte_fslmc_scan(void)
 	}
 	process_once = 1;
 
-	ret = fslmc_get_container_group(&groupid);
+	/* Now we only support single group per process.*/
+	group_name = getenv("DPRC");
+	if (!group_name) {
+		DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
+		return -EINVAL;
+	}
+
+	ret = fslmc_get_container_group(group_name, &groupid);
 	if (ret != 0)
 		goto scan_fail;
 
 	/* Scan devices on the group */
-	sprintf(fslmc_dirpath, "%s/%s", SYSFS_FSL_MC_DEVICES, fslmc_container);
+	sprintf(fslmc_dirpath, "%s/%s", SYSFS_FSL_MC_DEVICES, group_name);
 	dir = opendir(fslmc_dirpath);
 	if (!dir) {
 		DPAA2_BUS_ERR("Unable to open VFIO group directory");
@@ -338,7 +346,7 @@ rte_fslmc_scan(void)
 	}
 
 	/* Scan the DPRC container object */
-	ret = scan_one_fslmc_device(fslmc_container);
+	ret = scan_one_fslmc_device(group_name);
 	if (ret != 0) {
 		/* Error in parsing directory - exit gracefully */
 		goto scan_fail_cleanup;
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 1cc256f849..15d2930cf0 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -42,12 +42,14 @@
 
 #define FSLMC_CONTAINER_MAX_LEN 8 /**< Of the format dprc.XX */
 
-/* Number of VFIO containers & groups with in */
-static struct fslmc_vfio_group vfio_group;
-static struct fslmc_vfio_container vfio_container;
-static int container_device_fd;
-char *fslmc_container;
-static int fslmc_iommu_type;
+#define FSLMC_VFIO_MP "fslmc_vfio_mp_sync"
+
+/* Container is composed by multiple groups, however,
+ * now each process only supports single group with in container.
+ */
+static struct fslmc_vfio_container s_vfio_container;
+/* Currently we only support single group/process. */
+const char *fslmc_group; /* dprc.x*/
 static uint32_t *msi_intr_vaddr;
 void *(*rte_mcp_ptr_list);
 
@@ -72,108 +74,547 @@ rte_fslmc_object_register(struct rte_dpaa2_object *object)
 	TAILQ_INSERT_TAIL(&dpaa2_obj_list, object, next);
 }
 
-int
-fslmc_get_container_group(int *groupid)
+static const char *
+fslmc_vfio_get_group_name(void)
 {
-	int ret;
-	char *container;
+	return fslmc_group;
+}
+
+static void
+fslmc_vfio_set_group_name(const char *group_name)
+{
+	fslmc_group = group_name;
+}
+
+static int
+fslmc_vfio_add_group(int vfio_group_fd,
+	int iommu_group_num, const char *group_name)
+{
+	struct fslmc_vfio_group *group;
+
+	group = rte_zmalloc(NULL, sizeof(struct fslmc_vfio_group), 0);
+	if (!group)
+		return -ENOMEM;
+	group->fd = vfio_group_fd;
+	group->groupid = iommu_group_num;
+	strcpy(group->group_name, group_name);
+	if (rte_vfio_noiommu_is_enabled() > 0)
+		group->iommu_type = RTE_VFIO_NOIOMMU;
+	else
+		group->iommu_type = VFIO_TYPE1_IOMMU;
+	LIST_INSERT_HEAD(&s_vfio_container.groups, group, next);
+
+	return 0;
+}
+
+static int
+fslmc_vfio_clear_group(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+	struct fslmc_vfio_device *dev;
+	int clear = 0;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			LIST_FOREACH(dev, &group->vfio_devices, next)
+				LIST_REMOVE(dev, next);
+
+			close(vfio_group_fd);
+			LIST_REMOVE(group, next);
+			rte_free(group);
+			clear = 1;
 
-	if (!fslmc_container) {
-		container = getenv("DPRC");
-		if (container == NULL) {
-			DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
-			return -EINVAL;
+			break;
 		}
+	}
 
-		if (strlen(container) >= FSLMC_CONTAINER_MAX_LEN) {
-			DPAA2_BUS_ERR("Invalid container name: %s", container);
-			return -1;
+	if (LIST_EMPTY(&s_vfio_container.groups)) {
+		if (s_vfio_container.fd > 0)
+			close(s_vfio_container.fd);
+
+		s_vfio_container.fd = -1;
+	}
+	if (clear)
+		return 0;
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_connect_container(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			group->connected = 1;
+
+			return 0;
+		}
+	}
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_container_connected(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			if (group->connected)
+				return 1;
+		}
+	}
+	return 0;
+}
+
+static int
+fslmc_vfio_iommu_type(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd)
+			return group->iommu_type;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_fd_by_name(const char *group_name)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (!strcmp(group->group_name, group_name))
+			return group->fd;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_fd_by_id(int group_id)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->groupid == group_id)
+			return group->fd;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_add_dev(int vfio_group_fd,
+	int dev_fd, const char *name)
+{
+	struct fslmc_vfio_group *group;
+	struct fslmc_vfio_device *dev;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			dev = rte_zmalloc(NULL,
+				sizeof(struct fslmc_vfio_device), 0);
+			dev->fd = dev_fd;
+			strcpy(dev->dev_name, name);
+			LIST_INSERT_HEAD(&group->vfio_devices, dev, next);
+			return 0;
 		}
+	}
+	return -ENODEV;
+}
 
-		fslmc_container = strdup(container);
-		if (!fslmc_container) {
-			DPAA2_BUS_ERR("Mem alloc failure; Container name");
-			return -ENOMEM;
+static int
+fslmc_vfio_group_remove_dev(int vfio_group_fd,
+	const char *name)
+{
+	struct fslmc_vfio_group *group = NULL;
+	struct fslmc_vfio_device *dev;
+	int removed = 0;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd)
+			break;
+	}
+
+	if (group) {
+		LIST_FOREACH(dev, &group->vfio_devices, next) {
+			if (!strcmp(dev->dev_name, name)) {
+				LIST_REMOVE(dev, next);
+				removed = 1;
+				break;
+			}
 		}
 	}
 
-	fslmc_iommu_type = (rte_vfio_noiommu_is_enabled() == 1) ?
-		RTE_VFIO_NOIOMMU : VFIO_TYPE1_IOMMU;
+	if (removed)
+		return 0;
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_container_fd(void)
+{
+	return s_vfio_container.fd;
+}
+
+static int
+fslmc_get_group_id(const char *group_name,
+	int *groupid)
+{
+	int ret;
 
 	/* get group number */
 	ret = rte_vfio_get_group_num(SYSFS_FSL_MC_DEVICES,
-				     fslmc_container, groupid);
+			group_name, groupid);
 	if (ret <= 0) {
-		DPAA2_BUS_ERR("Unable to find %s IOMMU group", fslmc_container);
-		return -1;
+		DPAA2_BUS_ERR("Unable to find %s IOMMU group", group_name);
+		if (ret < 0)
+			return ret;
+
+		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("Container: %s has VFIO iommu group id = %d",
-			fslmc_container, *groupid);
+	DPAA2_BUS_DEBUG("GROUP(%s) has VFIO iommu group id = %d",
+		group_name, *groupid);
 
 	return 0;
 }
 
 static int
-vfio_connect_container(void)
+fslmc_vfio_open_group_fd(const char *group_name)
 {
-	int fd, ret;
+	int vfio_group_fd;
+	char filename[PATH_MAX];
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
+	int iommu_group_num, ret;
 
-	if (vfio_container.used) {
-		DPAA2_BUS_DEBUG("No container available");
-		return -1;
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd > 0)
+		return vfio_group_fd;
+
+	ret = fslmc_get_group_id(group_name, &iommu_group_num);
+	if (ret)
+		return ret;
+	/* if primary, try to open the group */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		/* try regular group format */
+		snprintf(filename, sizeof(filename),
+			VFIO_GROUP_FMT, iommu_group_num);
+		vfio_group_fd = open(filename, O_RDWR);
+
+		goto add_vfio_group;
+	}
+	/* if we're in a secondary process, request group fd from the primary
+	 * process via mp channel.
+	 */
+	p->req = SOCKET_REQ_GROUP;
+	p->group_num = iommu_group_num;
+	strcpy(mp_req.name, FSLMC_VFIO_MP);
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_group_fd = -1;
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+	    mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		p = (struct vfio_mp_param *)mp_rep->param;
+		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+			vfio_group_fd = mp_rep->fds[0];
+		} else if (p->result == SOCKET_NO_FD) {
+			DPAA2_BUS_ERR("Bad VFIO group fd");
+			vfio_group_fd = 0;
+		}
 	}
 
-	/* Try connecting to vfio container if already created */
-	if (!ioctl(vfio_group.fd, VFIO_GROUP_SET_CONTAINER,
-		&vfio_container.fd)) {
-		DPAA2_BUS_DEBUG(
-		    "Container pre-exists with FD[0x%x] for this group",
-		    vfio_container.fd);
-		vfio_group.container = &vfio_container;
+	free(mp_reply.msgs);
+
+add_vfio_group:
+	if (vfio_group_fd <= 0) {
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
+				filename, vfio_group_fd);
+		} else {
+			DPAA2_BUS_ERR("Cannot request group fd(%d)",
+				vfio_group_fd);
+		}
+	} else {
+		ret = fslmc_vfio_add_group(vfio_group_fd, iommu_group_num,
+			group_name);
+		if (ret)
+			return ret;
+	}
+
+	return vfio_group_fd;
+}
+
+static int
+fslmc_vfio_check_extensions(int vfio_container_fd)
+{
+	int ret;
+	uint32_t idx, n_extensions = 0;
+	static const int type_id[] = {RTE_VFIO_TYPE1, RTE_VFIO_SPAPR,
+		RTE_VFIO_NOIOMMU};
+	static const char * const type_id_nm[] = {"Type 1",
+		"sPAPR", "No-IOMMU"};
+
+	for (idx = 0; idx < RTE_DIM(type_id); idx++) {
+		ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION,
+			type_id[idx]);
+		if (ret < 0) {
+			DPAA2_BUS_ERR("Could not get IOMMU type, error %i (%s)",
+				errno, strerror(errno));
+			close(vfio_container_fd);
+			return -errno;
+		} else if (ret == 1) {
+			/* we found a supported extension */
+			n_extensions++;
+		}
+		DPAA2_BUS_DEBUG("IOMMU type %d (%s) is %s",
+			type_id[idx], type_id_nm[idx],
+			ret ? "supported" : "not supported");
+	}
+
+	/* if we didn't find any supported IOMMU types, fail */
+	if (!n_extensions) {
+		close(vfio_container_fd);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static int
+fslmc_vfio_open_container_fd(void)
+{
+	int ret, vfio_container_fd;
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (void *)mp_req.param;
+
+	if (fslmc_vfio_container_fd() > 0)
+		return fslmc_vfio_container_fd();
+
+	/* if we're in a primary process, try to open the container */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR);
+		if (vfio_container_fd < 0) {
+			DPAA2_BUS_ERR("Cannot open VFIO container(%s), err(%d)",
+				VFIO_CONTAINER_PATH, vfio_container_fd);
+			ret = vfio_container_fd;
+			goto err_exit;
+		}
+
+		/* check VFIO API version */
+		ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION);
+		if (ret < 0) {
+			DPAA2_BUS_ERR("Could not get VFIO API version(%d)",
+				ret);
+		} else if (ret != VFIO_API_VERSION) {
+			DPAA2_BUS_ERR("Unsupported VFIO API version(%d)",
+				ret);
+			ret = -ENOTSUP;
+		}
+		if (ret < 0) {
+			close(vfio_container_fd);
+			goto err_exit;
+		}
+
+		ret = fslmc_vfio_check_extensions(vfio_container_fd);
+		if (ret) {
+			DPAA2_BUS_ERR("No supported IOMMU extensions found(%d)",
+				ret);
+			close(vfio_container_fd);
+			goto err_exit;
+		}
+
+		goto success_exit;
+	}
+	/*
+	 * if we're in a secondary process, request container fd from the
+	 * primary process via mp channel
+	 */
+	p->req = SOCKET_REQ_CONTAINER;
+	strcpy(mp_req.name, FSLMC_VFIO_MP);
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_container_fd = -1;
+	ret = rte_mp_request_sync(&mp_req, &mp_reply, &ts);
+	if (ret)
+		goto err_exit;
+
+	if (mp_reply.nb_received != 1) {
+		ret = -EIO;
+		goto err_exit;
+	}
+
+	mp_rep = &mp_reply.msgs[0];
+	p = (void *)mp_rep->param;
+	if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+		vfio_container_fd = mp_rep->fds[0];
+		free(mp_reply.msgs);
+	}
+
+success_exit:
+	s_vfio_container.fd = vfio_container_fd;
+
+	return vfio_container_fd;
+
+err_exit:
+	if (mp_reply.msgs)
+		free(mp_reply.msgs);
+	DPAA2_BUS_ERR("Cannot request container fd err(%d)", ret);
+	return ret;
+}
+
+int
+fslmc_get_container_group(const char *group_name,
+	int *groupid)
+{
+	int ret;
+
+	if (!group_name) {
+		DPAA2_BUS_ERR("No group name provided!");
+
+		return -EINVAL;
+	}
+	ret = fslmc_get_group_id(group_name, groupid);
+	if (ret)
+		return ret;
+
+	fslmc_vfio_set_group_name(group_name);
+
+	return 0;
+}
+
+static int
+fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
+	const void *peer)
+{
+	int fd = -1;
+	int ret;
+	struct rte_mp_msg reply;
+	struct vfio_mp_param *r = (void *)reply.param;
+	const struct vfio_mp_param *m = (const void *)msg->param;
+
+	if (msg->len_param != sizeof(*m)) {
+		DPAA2_BUS_ERR("fslmc vfio received invalid message!");
+		return -EINVAL;
+	}
+
+	memset(&reply, 0, sizeof(reply));
+
+	switch (m->req) {
+	case SOCKET_REQ_GROUP:
+		r->req = SOCKET_REQ_GROUP;
+		r->group_num = m->group_num;
+		fd = fslmc_vfio_group_fd_by_id(m->group_num);
+		if (fd < 0) {
+			r->result = SOCKET_ERR;
+		} else if (!fd) {
+			/* if group exists but isn't bound to VFIO driver */
+			r->result = SOCKET_NO_FD;
+		} else {
+			/* if group exists and is bound to VFIO driver */
+			r->result = SOCKET_OK;
+			reply.num_fds = 1;
+			reply.fds[0] = fd;
+		}
+		break;
+	case SOCKET_REQ_CONTAINER:
+		r->req = SOCKET_REQ_CONTAINER;
+		fd = fslmc_vfio_container_fd();
+		if (fd <= 0) {
+			r->result = SOCKET_ERR;
+		} else {
+			r->result = SOCKET_OK;
+			reply.num_fds = 1;
+			reply.fds[0] = fd;
+		}
+		break;
+	default:
+		DPAA2_BUS_ERR("fslmc vfio received invalid message(%08x)",
+			m->req);
+		return -ENOTSUP;
+	}
+
+	strcpy(reply.name, FSLMC_VFIO_MP);
+	reply.len_param = sizeof(*r);
+	ret = rte_mp_reply(&reply, peer);
+
+	return ret;
+}
+
+static int
+fslmc_vfio_mp_sync_setup(void)
+{
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		ret = rte_mp_action_register(FSLMC_VFIO_MP,
+			fslmc_vfio_mp_primary);
+		if (ret && rte_errno != ENOTSUP)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int
+vfio_connect_container(int vfio_container_fd,
+	int vfio_group_fd)
+{
+	int ret;
+	int iommu_type;
+
+	if (fslmc_vfio_container_connected(vfio_group_fd)) {
+		DPAA2_BUS_WARN("VFIO FD(%d) has connected to container",
+			vfio_group_fd);
 		return 0;
 	}
 
-	/* Opens main vfio file descriptor which represents the "container" */
-	fd = rte_vfio_get_container_fd();
-	if (fd < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return -errno;
+	iommu_type = fslmc_vfio_iommu_type(vfio_group_fd);
+	if (iommu_type < 0) {
+		DPAA2_BUS_ERR("Failed to get iommu type(%d)",
+			iommu_type);
+
+		return iommu_type;
 	}
 
 	/* Check whether support for SMMU type IOMMU present or not */
-	if (ioctl(fd, VFIO_CHECK_EXTENSION, fslmc_iommu_type)) {
+	if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type)) {
 		/* Connect group to container */
-		ret = ioctl(vfio_group.fd, VFIO_GROUP_SET_CONTAINER, &fd);
+		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
+			&vfio_container_fd);
 		if (ret) {
 			DPAA2_BUS_ERR("Failed to setup group container");
-			close(fd);
 			return -errno;
 		}
 
-		ret = ioctl(fd, VFIO_SET_IOMMU, fslmc_iommu_type);
+		ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
 		if (ret) {
 			DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-			close(fd);
 			return -errno;
 		}
 	} else {
 		DPAA2_BUS_ERR("No supported IOMMU available");
-		close(fd);
 		return -EINVAL;
 	}
 
-	vfio_container.used = 1;
-	vfio_container.fd = fd;
-	vfio_container.group = &vfio_group;
-	vfio_group.container = &vfio_container;
-
-	return 0;
+	return fslmc_vfio_connect_container(vfio_group_fd);
 }
 
-static int vfio_map_irq_region(struct fslmc_vfio_group *group)
+static int vfio_map_irq_region(void)
 {
-	int ret;
+	int ret, fd;
 	unsigned long *vaddr = NULL;
 	struct vfio_iommu_type1_dma_map map = {
 		.argsz = sizeof(map),
@@ -182,9 +623,23 @@ static int vfio_map_irq_region(struct fslmc_vfio_group *group)
 		.iova = 0x6030000,
 		.size = 0x1000,
 	};
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (!fslmc_vfio_container_connected(fd)) {
+		DPAA2_BUS_ERR("Container is not connected");
+		return -EIO;
+	}
 
 	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
-		PROT_READ, MAP_SHARED, container_device_fd, 0x6030000);
+		PROT_READ, MAP_SHARED, fd, 0x6030000);
 	if (vaddr == MAP_FAILED) {
 		DPAA2_BUS_INFO("Unable to map region (errno = %d)", errno);
 		return -errno;
@@ -192,8 +647,8 @@ static int vfio_map_irq_region(struct fslmc_vfio_group *group)
 
 	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
 	map.vaddr = (unsigned long)vaddr;
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA, &map);
-	if (ret == 0)
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA, &map);
+	if (!ret)
 		return 0;
 
 	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
@@ -204,8 +659,8 @@ static int fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
 static int fslmc_unmap_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
 
 static void
-fslmc_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
-		void *arg __rte_unused)
+fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
+	size_t len, void *arg __rte_unused)
 {
 	struct rte_memseg_list *msl;
 	struct rte_memseg *ms;
@@ -262,44 +717,54 @@ fslmc_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
 }
 
 static int
-fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr __rte_unused, size_t len)
+fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr,
+	size_t len)
 {
-	struct fslmc_vfio_group *group;
 	struct vfio_iommu_type1_dma_map dma_map = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
 		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
 	};
-	int ret;
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
+	int ret, fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
 		return 0;
 	}
 
 	dma_map.size = len;
 	dma_map.vaddr = vaddr;
-
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
 	dma_map.iova = iovaddr;
-#else
-	dma_map.iova = dma_map.vaddr;
+
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+	if (vaddr != iovaddr) {
+		DPAA2_BUS_WARN("vaddr(0x%lx) != iovaddr(0x%lx)",
+			vaddr, iovaddr);
+	}
 #endif
 
 	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-
-	if (!group->container) {
+	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected ");
-		return -1;
+		return -EIO;
 	}
 
 	DPAA2_BUS_DEBUG("--> Map address: 0x%"PRIx64", size: %"PRIu64"",
 			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA, &dma_map);
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA,
+		&dma_map);
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO_IOMMU_MAP_DMA API(errno = %d)",
 				errno);
-		return -1;
+		return ret;
 	}
 
 	return 0;
@@ -308,14 +773,22 @@ fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr __rte_unused, size_t len)
 static int
 fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 {
-	struct fslmc_vfio_group *group;
 	struct vfio_iommu_type1_dma_unmap dma_unmap = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_unmap),
 		.flags = 0,
 	};
-	int ret;
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
+	int ret, fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
 		return 0;
 	}
@@ -324,16 +797,15 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 	dma_unmap.iova = vaddr;
 
 	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-
-	if (!group->container) {
+	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected ");
-		return -1;
+		return -EIO;
 	}
 
 	DPAA2_BUS_DEBUG("--> Unmap address: 0x%"PRIx64", size: %"PRIu64"",
 			(uint64_t)dma_unmap.iova, (uint64_t)dma_unmap.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap);
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_UNMAP_DMA,
+		&dma_unmap);
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO_IOMMU_UNMAP_DMA API(errno = %d)",
 				errno);
@@ -367,41 +839,13 @@ fslmc_dmamap_seg(const struct rte_memseg_list *msl __rte_unused,
 int
 rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
 {
-	int ret;
-	struct fslmc_vfio_group *group;
-	struct vfio_iommu_type1_dma_map dma_map = {
-		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
-		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
-	};
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
-		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
-		return 0;
-	}
-
-	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-	if (!group->container) {
-		DPAA2_BUS_ERR("Container is not connected");
-		return -1;
-	}
-
-	dma_map.size = size;
-	dma_map.vaddr = vaddr;
-	dma_map.iova = iova;
-
-	DPAA2_BUS_DEBUG("VFIOdmamap 0x%"PRIx64":0x%"PRIx64",size 0x%"PRIx64"\n",
-			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.iova,
-			(uint64_t)dma_map.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA,
-		    &dma_map);
-	if (ret) {
-		DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)",
-			errno);
-		return ret;
-	}
+	return fslmc_map_dma(vaddr, iova, size);
+}
 
-	return 0;
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
+{
+	return fslmc_unmap_dma(iova, 0, size);
 }
 
 int rte_fslmc_vfio_dmamap(void)
@@ -431,7 +875,7 @@ int rte_fslmc_vfio_dmamap(void)
 	 * the interrupt region to SMMU. This should be removed once the
 	 * support is added in the Kernel.
 	 */
-	vfio_map_irq_region(&vfio_group);
+	vfio_map_irq_region();
 
 	/* Existing segments have been mapped and memory callback for hotplug
 	 * has been installed.
@@ -442,149 +886,19 @@ int rte_fslmc_vfio_dmamap(void)
 }
 
 static int
-fslmc_vfio_open_group_fd(int iommu_group_num)
-{
-	int vfio_group_fd;
-	char filename[PATH_MAX];
-	struct rte_mp_msg mp_req, *mp_rep;
-	struct rte_mp_reply mp_reply = {0};
-	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
-	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
-
-	/* if primary, try to open the group */
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		/* try regular group format */
-		snprintf(filename, sizeof(filename),
-			VFIO_GROUP_FMT, iommu_group_num);
-		vfio_group_fd = open(filename, O_RDWR);
-		if (vfio_group_fd <= 0) {
-			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
-				filename, vfio_group_fd);
-		}
-
-		return vfio_group_fd;
-	}
-	/* if we're in a secondary process, request group fd from the primary
-	 * process via mp channel.
-	 */
-	p->req = SOCKET_REQ_GROUP;
-	p->group_num = iommu_group_num;
-	strcpy(mp_req.name, EAL_VFIO_MP);
-	mp_req.len_param = sizeof(*p);
-	mp_req.num_fds = 0;
-
-	vfio_group_fd = -1;
-	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
-	    mp_reply.nb_received == 1) {
-		mp_rep = &mp_reply.msgs[0];
-		p = (struct vfio_mp_param *)mp_rep->param;
-		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
-			vfio_group_fd = mp_rep->fds[0];
-		} else if (p->result == SOCKET_NO_FD) {
-			DPAA2_BUS_ERR("Bad VFIO group fd");
-			vfio_group_fd = 0;
-		}
-	}
-
-	free(mp_reply.msgs);
-	if (vfio_group_fd < 0) {
-		DPAA2_BUS_ERR("Cannot request group fd(%d)",
-			vfio_group_fd);
-	}
-	return vfio_group_fd;
-}
-
-static int
-fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
-		int *vfio_dev_fd, struct vfio_device_info *device_info)
+fslmc_vfio_setup_device(const char *dev_addr,
+	int *vfio_dev_fd, struct vfio_device_info *device_info)
 {
 	struct vfio_group_status group_status = {
 			.argsz = sizeof(group_status)
 	};
-	int vfio_group_fd, vfio_container_fd, iommu_group_no, ret;
+	int vfio_group_fd, ret;
+	const char *group_name = fslmc_vfio_get_group_name();
 
-	/* get group number */
-	ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_no);
-	if (ret < 0)
-		return -1;
-
-	/* get the actual group fd */
-	vfio_group_fd = vfio_group.fd;
-	if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT)
-		return -1;
-
-	/*
-	 * if vfio_group_fd == -ENOENT, that means the device
-	 * isn't managed by VFIO
-	 */
-	if (vfio_group_fd == -ENOENT) {
-		DPAA2_BUS_WARN(" %s not managed by VFIO driver, skipping",
-				dev_addr);
-		return 1;
-	}
-
-	/* Opens main vfio file descriptor which represents the "container" */
-	vfio_container_fd = rte_vfio_get_container_fd();
-	if (vfio_container_fd < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return -errno;
-	}
-
-	/* check if the group is viable */
-	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status);
-	if (ret) {
-		DPAA2_BUS_ERR("  %s cannot get group status, "
-				"error %i (%s)\n", dev_addr,
-				errno, strerror(errno));
-		close(vfio_group_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
-	} else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
-		DPAA2_BUS_ERR("  %s VFIO group is not viable!\n", dev_addr);
-		close(vfio_group_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
-	}
-	/* At this point, we know that this group is viable (meaning,
-	 * all devices are either bound to VFIO or not bound to anything)
-	 */
-
-	/* check if group does not have a container yet */
-	if (!(group_status.flags & VFIO_GROUP_FLAGS_CONTAINER_SET)) {
-
-		/* add group to a container */
-		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
-				&vfio_container_fd);
-		if (ret) {
-			DPAA2_BUS_ERR("  %s cannot add VFIO group to container, "
-					"error %i (%s)\n", dev_addr,
-					errno, strerror(errno));
-			close(vfio_group_fd);
-			close(vfio_container_fd);
-			rte_vfio_clear_group(vfio_group_fd);
-			return -1;
-		}
-
-		/*
-		 * set an IOMMU type for container
-		 *
-		 */
-		if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION,
-			  fslmc_iommu_type)) {
-			ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU,
-				    fslmc_iommu_type);
-			if (ret) {
-				DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-				close(vfio_group_fd);
-				close(vfio_container_fd);
-				return -errno;
-			}
-		} else {
-			DPAA2_BUS_ERR("No supported IOMMU available");
-			close(vfio_group_fd);
-			close(vfio_container_fd);
-			return -EINVAL;
-		}
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (!fslmc_vfio_container_connected(vfio_group_fd)) {
+		DPAA2_BUS_ERR("Container is not connected");
+		return -EIO;
 	}
 
 	/* get a file descriptor for the device */
@@ -594,26 +908,21 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		 * the VFIO group or the container not having IOMMU configured.
 		 */
 
-		DPAA2_BUS_WARN("Getting a vfio_dev_fd for %s failed", dev_addr);
-		close(vfio_group_fd);
-		close(vfio_container_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
+		DPAA2_BUS_ERR("Getting a vfio_dev_fd for %s from %s failed",
+			dev_addr, group_name);
+		return -EIO;
 	}
 
 	/* test and setup the device */
 	ret = ioctl(*vfio_dev_fd, VFIO_DEVICE_GET_INFO, device_info);
 	if (ret) {
-		DPAA2_BUS_ERR("  %s cannot get device info, error %i (%s)",
-				dev_addr, errno, strerror(errno));
-		close(*vfio_dev_fd);
-		close(vfio_group_fd);
-		close(vfio_container_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
+		DPAA2_BUS_ERR("%s cannot get device info err(%d)(%s)",
+			dev_addr, errno, strerror(errno));
+		return ret;
 	}
 
-	return 0;
+	return fslmc_vfio_group_add_dev(vfio_group_fd, *vfio_dev_fd,
+			dev_addr);
 }
 
 static intptr_t vfio_map_mcp_obj(const char *mcp_obj)
@@ -625,8 +934,7 @@ static intptr_t vfio_map_mcp_obj(const char *mcp_obj)
 	struct vfio_device_info d_info = { .argsz = sizeof(d_info) };
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info) };
 
-	fslmc_vfio_setup_device(SYSFS_FSL_MC_DEVICES, mcp_obj,
-			&mc_fd, &d_info);
+	fslmc_vfio_setup_device(mcp_obj, &mc_fd, &d_info);
 
 	/* getting device region info*/
 	ret = ioctl(mc_fd, VFIO_DEVICE_GET_REGION_INFO, &reg_info);
@@ -757,7 +1065,8 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 }
 
 static void
-fslmc_close_iodevices(struct rte_dpaa2_device *dev)
+fslmc_close_iodevices(struct rte_dpaa2_device *dev,
+	int vfio_fd)
 {
 	struct rte_dpaa2_object *object = NULL;
 	struct rte_dpaa2_driver *drv;
@@ -800,6 +1109,11 @@ fslmc_close_iodevices(struct rte_dpaa2_device *dev)
 		break;
 	}
 
+	ret = fslmc_vfio_group_remove_dev(vfio_fd, dev->device.name);
+	if (ret) {
+		DPAA2_BUS_ERR("Failed to remove %s from vfio",
+			dev->device.name);
+	}
 	DPAA2_BUS_LOG(DEBUG, "Device (%s) Closed",
 		      dev->device.name);
 }
@@ -811,17 +1125,21 @@ fslmc_close_iodevices(struct rte_dpaa2_device *dev)
 static int
 fslmc_process_iodevices(struct rte_dpaa2_device *dev)
 {
-	int dev_fd;
+	int dev_fd, ret;
 	struct vfio_device_info device_info = { .argsz = sizeof(device_info) };
 	struct rte_dpaa2_object *object = NULL;
 
-	fslmc_vfio_setup_device(SYSFS_FSL_MC_DEVICES, dev->device.name,
-			&dev_fd, &device_info);
+	ret = fslmc_vfio_setup_device(dev->device.name, &dev_fd,
+			&device_info);
+	if (ret)
+		return ret;
 
 	switch (dev->dev_type) {
 	case DPAA2_ETH:
-		rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
-					  device_info.num_irqs);
+		ret = rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
+				device_info.num_irqs);
+		if (ret)
+			return ret;
 		break;
 	case DPAA2_CON:
 	case DPAA2_IO:
@@ -913,6 +1231,10 @@ int
 fslmc_vfio_close_group(void)
 {
 	struct rte_dpaa2_device *dev, *dev_temp;
+	int vfio_group_fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
 		if (dev->device.devargs &&
@@ -927,7 +1249,7 @@ fslmc_vfio_close_group(void)
 		case DPAA2_CRYPTO:
 		case DPAA2_QDMA:
 		case DPAA2_IO:
-			fslmc_close_iodevices(dev);
+			fslmc_close_iodevices(dev, vfio_group_fd);
 			break;
 		case DPAA2_CON:
 		case DPAA2_CI:
@@ -936,7 +1258,7 @@ fslmc_vfio_close_group(void)
 			if (rte_eal_process_type() == RTE_PROC_SECONDARY)
 				continue;
 
-			fslmc_close_iodevices(dev);
+			fslmc_close_iodevices(dev, vfio_group_fd);
 			break;
 		case DPAA2_DPRTC:
 		default:
@@ -945,10 +1267,7 @@ fslmc_vfio_close_group(void)
 		}
 	}
 
-	if (vfio_group.fd > 0) {
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
-	}
+	fslmc_vfio_clear_group(vfio_group_fd);
 
 	return 0;
 }
@@ -1138,75 +1457,84 @@ fslmc_vfio_process_group(void)
 int
 fslmc_vfio_setup_group(void)
 {
-	int groupid;
-	int ret;
+	int vfio_group_fd, vfio_container_fd, ret;
 	struct vfio_group_status status = { .argsz = sizeof(status) };
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	/* MC VFIO setup entry */
+	vfio_container_fd = fslmc_vfio_container_fd();
+	if (vfio_container_fd <= 0) {
+		vfio_container_fd = fslmc_vfio_open_container_fd();
+		if (vfio_container_fd <= 0) {
+			DPAA2_BUS_ERR("Failed to create MC VFIO container");
+			return -rte_errno;
+		}
+	}
 
-	/* if already done once */
-	if (container_device_fd)
-		return 0;
-
-	ret = fslmc_get_container_group(&groupid);
-	if (ret)
-		return ret;
-
-	/* In case this group was already opened, continue without any
-	 * processing.
-	 */
-	if (vfio_group.groupid == groupid) {
-		DPAA2_BUS_ERR("groupid already exists %d", groupid);
-		return 0;
+	if (!group_name) {
+		DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
+		return -EINVAL;
 	}
 
-	/* Get the actual group fd */
-	ret = fslmc_vfio_open_group_fd(groupid);
-	if (ret <= 0)
-		return ret;
-	vfio_group.fd = ret;
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd <= 0) {
+		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
+		if (vfio_group_fd <= 0) {
+			DPAA2_BUS_ERR("Failed to create MC VFIO group");
+			return -rte_errno;
+		}
+	}
 
 	/* Check group viability */
-	ret = ioctl(vfio_group.fd, VFIO_GROUP_GET_STATUS, &status);
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &status);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO error getting group status");
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		DPAA2_BUS_ERR("VFIO(%s:fd=%d) error getting group status(%d)",
+			group_name, vfio_group_fd, ret);
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return ret;
 	}
 
 	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
 		DPAA2_BUS_ERR("VFIO group not viable");
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return -EPERM;
 	}
-	/* Since Group is VIABLE, Store the groupid */
-	vfio_group.groupid = groupid;
 
 	/* check if group does not have a container yet */
 	if (!(status.flags & VFIO_GROUP_FLAGS_CONTAINER_SET)) {
 		/* Now connect this IOMMU group to given container */
-		ret = vfio_connect_container();
-		if (ret) {
-			DPAA2_BUS_ERR("vfio group(%d) connect failed(%d)",
-				groupid, ret);
-			close(vfio_group.fd);
-			vfio_group.fd = 0;
-			return ret;
-		}
+		ret = vfio_connect_container(vfio_container_fd,
+			vfio_group_fd);
+	} else {
+		/* Expected in a secondary process: the group was already
+		 * connected to the container by the primary process.
+		 */
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+			DPAA2_BUS_WARN("Group already connected to a container?");
+		ret = fslmc_vfio_connect_container(vfio_group_fd);
+	}
+	if (ret) {
+		DPAA2_BUS_ERR("vfio group connect failed(%d)", ret);
+		fslmc_vfio_clear_group(vfio_group_fd);
+		return ret;
 	}
 
 	/* Get Device information */
-	ret = ioctl(vfio_group.fd, VFIO_GROUP_GET_DEVICE_FD, fslmc_container);
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_DEVICE_FD, group_name);
 	if (ret < 0) {
-		DPAA2_BUS_ERR("Error getting device %s fd from group %d",
-			      fslmc_container, vfio_group.groupid);
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		DPAA2_BUS_ERR("Error getting device %s fd", group_name);
+		fslmc_vfio_clear_group(vfio_group_fd);
+		return ret;
+	}
+
+	ret = fslmc_vfio_mp_sync_setup();
+	if (ret) {
+		DPAA2_BUS_ERR("VFIO MP sync setup failed!");
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return ret;
 	}
-	container_device_fd = ret;
-	DPAA2_BUS_DEBUG("VFIO Container FD is [0x%X]",
-			container_device_fd);
+
+	DPAA2_BUS_DEBUG("VFIO GROUP FD is %d", vfio_group_fd);
 
 	return 0;
 }
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index b6677bdd18..1695b6c078 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2019-2020 NXP
+ *   Copyright 2016,2019-2023 NXP
  *
  */
 
@@ -20,26 +20,28 @@
 #define DPAA2_MC_DPBP_DEVID	10
 #define DPAA2_MC_DPCI_DEVID	11
 
-typedef struct fslmc_vfio_device {
+struct fslmc_vfio_device {
+	LIST_ENTRY(fslmc_vfio_device) next;
 	int fd; /* fslmc root container device ?? */
 	int index; /*index of child object */
+	char dev_name[64];
 	struct fslmc_vfio_device *child; /* Child object */
-} fslmc_vfio_device;
+};
 
-typedef struct fslmc_vfio_group {
+struct fslmc_vfio_group {
+	LIST_ENTRY(fslmc_vfio_group) next;
 	int fd; /* /dev/vfio/"groupid" */
 	int groupid;
-	struct fslmc_vfio_container *container;
-	int object_index;
-	struct fslmc_vfio_device *vfio_device;
-} fslmc_vfio_group;
+	int connected;
+	char group_name[64]; /* dprc.x*/
+	int iommu_type;
+	LIST_HEAD(, fslmc_vfio_device) vfio_devices;
+};
 
-typedef struct fslmc_vfio_container {
+struct fslmc_vfio_container {
 	int fd; /* /dev/vfio/vfio */
-	int used;
-	int index; /* index in group list */
-	struct fslmc_vfio_group *group;
-} fslmc_vfio_container;
+	LIST_HEAD(, fslmc_vfio_group) groups;
+};
 
 extern char *fslmc_container;
 
@@ -57,8 +59,11 @@ int fslmc_vfio_setup_group(void);
 int fslmc_vfio_process_group(void);
 int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
-int fslmc_get_container_group(int *gropuid);
+int fslmc_get_container_group(const char *group_name, int *groupid);
 int rte_fslmc_vfio_dmamap(void);
-int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova,
+		uint64_t size);
+int rte_fslmc_vfio_mem_dmaunmap(uint64_t iova,
+		uint64_t size);
 
 #endif /* _FSLMC_VFIO_H_ */
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index df1143733d..b49bc0a62c 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -118,6 +118,7 @@ INTERNAL {
 	rte_fslmc_get_device_count;
 	rte_fslmc_object_register;
 	rte_global_active_dqs_list;
+	rte_fslmc_vfio_mem_dmaunmap;
 
 	local: *;
 };
-- 
2.25.1
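
For context, a minimal usage sketch of the map/unmap pair exported
above. This is not taken from the series; the buffer, IOVA and length
are hypothetical, and only the two prototypes declared in fslmc_vfio.h
above are assumed:

/* Sketch: map a driver-owned buffer into the fslmc VFIO container,
 * run DMA against it, then unmap. All values are illustrative only.
 */
#include <stdint.h>
#include <fslmc_vfio.h>

static int
dma_window_example(void *buf, uint64_t iova, uint64_t len)
{
	int ret;

	/* Make [iova, iova + len) reachable by DPAA2 hardware. */
	ret = rte_fslmc_vfio_mem_dmamap((uint64_t)(uintptr_t)buf, iova, len);
	if (ret)
		return ret;

	/* ... hand 'iova' to the hardware and run the DMA ... */

	/* Tear the window down once the hardware is done with it. */
	return rte_fslmc_vfio_mem_dmaunmap(iova, len);
}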


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 15/43] bus/fslmc: free VFIO group FD in case of add group failure
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (13 preceding siblings ...)
  2024-09-18  7:50   ` [v2 14/43] bus/fslmc: enhance MC VFIO multiprocess support vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 16/43] bus/fslmc: dynamic IOVA mode configuration vanshika.shukla
                     ` (28 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Free vfio_group_fd if adding the group fails, to avoid a resource leak.
NXP coverity-id: 26661846
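
The fix follows the usual close-on-error discipline; a generic sketch
(plain C, not the patch itself -- register_fd() is a hypothetical
stand-in for fslmc_vfio_add_group()):

#include <fcntl.h>
#include <unistd.h>

extern int register_fd(int fd); /* hypothetical registration step */

static int
open_and_register(const char *path)
{
	int fd = open(path, O_RDWR);

	if (fd < 0)
		return fd;
	if (register_fd(fd)) {
		close(fd);	/* release the descriptor on failure */
		return -1;
	}
	return fd;
}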

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 15d2930cf0..45dac61d97 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -347,8 +347,10 @@ fslmc_vfio_open_group_fd(const char *group_name)
 	} else {
 		ret = fslmc_vfio_add_group(vfio_group_fd, iommu_group_num,
 			group_name);
-		if (ret)
+		if (ret) {
+			close(vfio_group_fd);
 			return ret;
+		}
 	}
 
 	return vfio_group_fd;
@@ -1480,6 +1482,8 @@ fslmc_vfio_setup_group(void)
 	if (vfio_group_fd <= 0) {
 		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
 		if (vfio_group_fd <= 0) {
+			if (!vfio_group_fd)
+				close(vfio_group_fd);
 			DPAA2_BUS_ERR("Failed to create MC VFIO group");
 			return -rte_errno;
 		}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 16/43] bus/fslmc: dynamic IOVA mode configuration
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (14 preceding siblings ...)
  2024-09-18  7:50   ` [v2 15/43] bus/fslmc: free VFIO group FD in case of add group failure vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 17/43] bus/fslmc: remove VFIO IRQ mapping vanshika.shukla
                     ` (27 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh
  Cc: Jun Yang, Vanshika Shukla

From: Jun Yang <jun.yang@nxp.com>

IOVA mode should not be configured with CFLAGS because
1) Users can pass "--iova-mode" to configure IOVA.
2) IOVA mode is determined by negotiation between multiple devices.
   EAL is in VA mode only when all devices support VA mode.

Hence:
1) Remove the RTE_LIBRTE_DPAA2_USE_PHYS_IOVA cflag.
   Instead, use the rte_eal_iova_mode API to identify VA or PA mode,
   as in the sketch following this list.
2) Support memory IOMMU mapping and I/O IOMMU mapping (PCI space).
3) For memory IOMMU, in VA mode, IOVA:VA = 1:1;
   in PA mode, IOVA:VA = PA:VA. The mapping policy is determined by
   the EAL memory driver.
4) For I/O IOMMU, IOVA:VA is up to the I/O driver's configuration.
   In general, it is aligned with the memory IOMMU mapping.
5) Memory and I/O IOVA tables are created and updated when the DMA
   mapping is set up, and they take the place of the dpaax IOVA table.
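
A minimal sketch of point 1) above, assuming only the public EAL APIs
rte_eal_iova_mode() and rte_malloc_virt2iova(); the buffer size and
alignment are illustrative:

/* Query the negotiated IOVA mode at run time instead of baking it
 * in at build time with RTE_LIBRTE_DPAA2_USE_PHYS_IOVA.
 */
#include <inttypes.h>
#include <stdio.h>
#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_memory.h>

static void
show_negotiated_iova(void)
{
	void *va = rte_malloc(NULL, 4096, 4096);
	rte_iova_t iova;

	if (va == NULL)
		return;
	iova = rte_malloc_virt2iova(va);
	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
		/* VA mode: IOVA:VA is 1:1, matching point 3). */
		printf("VA mode: va=%p iova=0x%" PRIx64 "\n", va, iova);
	} else {
		/* PA mode: the IOVA is the buffer's physical address. */
		printf("PA mode: va=%p iova=0x%" PRIx64 "\n", va, iova);
	}
	rte_free(va);
}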

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     |  29 +-
 drivers/bus/fslmc/fslmc_bus.c            |  33 +-
 drivers/bus/fslmc/fslmc_logs.h           |   5 +-
 drivers/bus/fslmc/fslmc_vfio.c           | 668 ++++++++++++++++++-----
 drivers/bus/fslmc/fslmc_vfio.h           |   4 +
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c |   3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c |  10 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h |   3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h  | 111 ++--
 drivers/bus/fslmc/version.map            |   7 +-
 drivers/dma/dpaa2/dpaa2_qdma.c           |   1 +
 11 files changed, 619 insertions(+), 255 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index dc2f395f60..11eebd560c 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -37,9 +37,6 @@ extern "C" {
 
 #include <fslmc_vfio.h>
 
-#include "portal/dpaa2_hw_pvt.h"
-#include "portal/dpaa2_hw_dpio.h"
-
 #define FSLMC_OBJECT_MAX_LEN 32   /**< Length of each device on bus */
 
 #define DPAA2_INVALID_MBUF_SEQN        0
@@ -149,6 +146,32 @@ struct rte_dpaa2_driver {
 	rte_dpaa2_remove_t remove;
 };
 
+int
+rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+__rte_internal
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size);
+__rte_internal
+uint64_t
+rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
+	uint64_t size);
+__rte_internal
+void *
+rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
+	uint64_t size);
+__rte_internal
+__hot uint64_t
+rte_fslmc_mem_vaddr_to_iova(void *vaddr);
+__rte_internal
+__hot void *
+rte_fslmc_mem_iova_to_vaddr(uint64_t iova);
+__rte_internal
+uint64_t
+rte_fslmc_io_vaddr_to_iova(void *vaddr);
+__rte_internal
+void *
+rte_fslmc_io_iova_to_vaddr(uint64_t iova);
+
 /**
  * Register a DPAA2 driver.
  *
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 654726dbe6..ce87b4ddbd 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -27,7 +27,6 @@
 #define FSLMC_BUS_NAME	fslmc
 
 struct rte_fslmc_bus rte_fslmc_bus;
-uint8_t dpaa2_virt_mode;
 
 #define DPAA2_SEQN_DYNFIELD_NAME "dpaa2_seqn_dynfield"
 int dpaa2_seqn_dynfield_offset = -1;
@@ -457,22 +456,6 @@ rte_fslmc_probe(void)
 
 	probe_all = rte_fslmc_bus.bus.conf.scan_mode != RTE_BUS_SCAN_ALLOWLIST;
 
-	/* In case of PA, the FD addresses returned by qbman APIs are physical
-	 * addresses, which need conversion into equivalent VA address for
-	 * rte_mbuf. For that, a table (a serial array, in memory) is used to
-	 * increase translation efficiency.
-	 * This has to be done before probe as some device initialization
-	 * (during) probe allocate memory (dpaa2_sec) which needs to be pinned
-	 * to this table.
-	 *
-	 * Error is ignored as relevant logs are handled within dpaax and
-	 * handling for unavailable dpaax table too is transparent to caller.
-	 *
-	 * And, the IOVA table is only applicable in case of PA mode.
-	 */
-	if (rte_eal_iova_mode() == RTE_IOVA_PA)
-		dpaax_iova_table_populate();
-
 	TAILQ_FOREACH(dev, &rte_fslmc_bus.device_list, next) {
 		TAILQ_FOREACH(drv, &rte_fslmc_bus.driver_list, next) {
 			ret = rte_fslmc_match(drv, dev);
@@ -507,9 +490,6 @@ rte_fslmc_probe(void)
 		}
 	}
 
-	if (rte_eal_iova_mode() == RTE_IOVA_VA)
-		dpaa2_virt_mode = 1;
-
 	return 0;
 }
 
@@ -558,12 +538,6 @@ rte_fslmc_driver_register(struct rte_dpaa2_driver *driver)
 void
 rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver)
 {
-	/* Cleanup the PA->VA Translation table; From wherever this function
-	 * is called from.
-	 */
-	if (rte_eal_iova_mode() == RTE_IOVA_PA)
-		dpaax_iova_table_depopulate();
-
 	TAILQ_REMOVE(&rte_fslmc_bus.driver_list, driver, next);
 }
 
@@ -599,13 +573,12 @@ rte_dpaa2_get_iommu_class(void)
 	bool is_vfio_noiommu_enabled = 1;
 	bool has_iova_va;
 
+	if (rte_eal_iova_mode() == RTE_IOVA_PA)
+		return RTE_IOVA_PA;
+
 	if (TAILQ_EMPTY(&rte_fslmc_bus.device_list))
 		return RTE_IOVA_DC;
 
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-	return RTE_IOVA_PA;
-#endif
-
 	/* check if all devices on the bus support Virtual addressing or not */
 	has_iova_va = fslmc_all_device_support_iova();
 
diff --git a/drivers/bus/fslmc/fslmc_logs.h b/drivers/bus/fslmc/fslmc_logs.h
index e15c603426..d6abffc566 100644
--- a/drivers/bus/fslmc/fslmc_logs.h
+++ b/drivers/bus/fslmc/fslmc_logs.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2016 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -10,7 +10,8 @@
 extern int dpaa2_logtype_bus;
 
 #define DPAA2_BUS_LOG(level, fmt, args...) \
-	rte_log(RTE_LOG_ ## level, dpaa2_logtype_bus, "fslmc: " fmt "\n", \
+	rte_log(RTE_LOG_ ## level, dpaa2_logtype_bus, \
+		"fslmc " # level ": " fmt "\n", \
 		##args)
 
 /* Debug logs are with Function names */
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 45dac61d97..fe18429f42 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -19,6 +19,7 @@
 #include <libgen.h>
 #include <dirent.h>
 #include <sys/eventfd.h>
+#include <ctype.h>
 
 #include <eal_filesystem.h>
 #include <rte_mbuf.h>
@@ -49,9 +50,41 @@
  */
 static struct fslmc_vfio_container s_vfio_container;
 /* Currently we only support single group/process. */
-const char *fslmc_group; /* dprc.x*/
+static const char *fslmc_group; /* dprc.x*/
 static uint32_t *msi_intr_vaddr;
-void *(*rte_mcp_ptr_list);
+static void *(*rte_mcp_ptr_list);
+
+struct fslmc_dmaseg {
+	uint64_t vaddr;
+	uint64_t iova;
+	uint64_t size;
+
+	TAILQ_ENTRY(fslmc_dmaseg) next;
+};
+
+TAILQ_HEAD(fslmc_dmaseg_list, fslmc_dmaseg);
+
+struct fslmc_dmaseg_list fslmc_memsegs =
+		TAILQ_HEAD_INITIALIZER(fslmc_memsegs);
+struct fslmc_dmaseg_list fslmc_iosegs =
+		TAILQ_HEAD_INITIALIZER(fslmc_iosegs);
+
+static uint64_t fslmc_mem_va2iova = RTE_BAD_IOVA;
+static int fslmc_mem_map_num;
+
+struct fslmc_mem_param {
+	struct vfio_mp_param mp_param;
+	struct fslmc_dmaseg_list memsegs;
+	struct fslmc_dmaseg_list iosegs;
+	uint64_t mem_va2iova;
+	int mem_map_num;
+};
+
+enum {
+	FSLMC_VFIO_SOCKET_REQ_CONTAINER = 0x100,
+	FSLMC_VFIO_SOCKET_REQ_GROUP,
+	FSLMC_VFIO_SOCKET_REQ_MEM
+};
 
 void *
 dpaa2_get_mcp_ptr(int portal_idx)
@@ -65,6 +98,64 @@ dpaa2_get_mcp_ptr(int portal_idx)
 static struct rte_dpaa2_object_list dpaa2_obj_list =
 	TAILQ_HEAD_INITIALIZER(dpaa2_obj_list);
 
+static uint64_t
+fslmc_io_virt2phy(const void *virtaddr)
+{
+	FILE *fp = fopen("/proc/self/maps", "r");
+	char *line = NULL;
+	size_t linesz = 0; /* getdelim() expects *n == 0 when line == NULL */
+	uint64_t start, end, phy;
+	const uint64_t va = (const uint64_t)virtaddr;
+	char tmp[1024];
+	int ret;
+
+	if (!fp)
+		return RTE_BAD_IOVA;
+	while (getdelim(&line, &linesz, '\n', fp) > 0) {
+		char *ptr = line;
+		int n;
+
+		/** Parse virtual address range.*/
+		n = 0;
+		while (*ptr && !isspace(*ptr)) {
+			tmp[n] = *ptr;
+			ptr++;
+			n++;
+		}
+		tmp[n] = 0;
+		ret = sscanf(tmp, "%" SCNx64 "-%" SCNx64, &start, &end);
+		if (ret != 2)
+			continue;
+		if (va < start || va >= end)
+			continue;
+
+		/** This virtual address is in this segment.*/
+		while (*ptr == ' ' || *ptr == 'r' ||
+			*ptr == 'w' || *ptr == 's' ||
+			*ptr == 'p' || *ptr == 'x' ||
+			*ptr == '-')
+			ptr++;
+
+		/** Extract phy address*/
+		n = 0;
+		while (*ptr && !isspace(*ptr)) {
+			tmp[n] = *ptr;
+			ptr++;
+			n++;
+		}
+		tmp[n] = 0;
+		phy = strtoul(tmp, 0, 16);
+		if (!phy)
+			continue;
+
+		fclose(fp);
+		return phy + va - start;
+	}
+
+	fclose(fp);
+	return RTE_BAD_IOVA;
+}
+
 /*register a fslmc bus based dpaa2 driver */
 void
 rte_fslmc_object_register(struct rte_dpaa2_object *object)
@@ -271,7 +362,7 @@ fslmc_get_group_id(const char *group_name,
 	ret = rte_vfio_get_group_num(SYSFS_FSL_MC_DEVICES,
 			group_name, groupid);
 	if (ret <= 0) {
-		DPAA2_BUS_ERR("Unable to find %s IOMMU group", group_name);
+		DPAA2_BUS_ERR("Find %s IOMMU group", group_name);
 		if (ret < 0)
 			return ret;
 
@@ -314,7 +405,7 @@ fslmc_vfio_open_group_fd(const char *group_name)
 	/* if we're in a secondary process, request group fd from the primary
 	 * process via mp channel.
 	 */
-	p->req = SOCKET_REQ_GROUP;
+	p->req = FSLMC_VFIO_SOCKET_REQ_GROUP;
 	p->group_num = iommu_group_num;
 	strcpy(mp_req.name, FSLMC_VFIO_MP);
 	mp_req.len_param = sizeof(*p);
@@ -408,7 +499,7 @@ fslmc_vfio_open_container_fd(void)
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR);
 		if (vfio_container_fd < 0) {
-			DPAA2_BUS_ERR("Cannot open VFIO container(%s), err(%d)",
+			DPAA2_BUS_ERR("Open VFIO container(%s), err(%d)",
 				VFIO_CONTAINER_PATH, vfio_container_fd);
 			ret = vfio_container_fd;
 			goto err_exit;
@@ -417,7 +508,7 @@ fslmc_vfio_open_container_fd(void)
 		/* check VFIO API version */
 		ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION);
 		if (ret < 0) {
-			DPAA2_BUS_ERR("Could not get VFIO API version(%d)",
+			DPAA2_BUS_ERR("Get VFIO API version(%d)",
 				ret);
 		} else if (ret != VFIO_API_VERSION) {
 			DPAA2_BUS_ERR("Unsupported VFIO API version(%d)",
@@ -431,7 +522,7 @@ fslmc_vfio_open_container_fd(void)
 
 		ret = fslmc_vfio_check_extensions(vfio_container_fd);
 		if (ret) {
-			DPAA2_BUS_ERR("No supported IOMMU extensions found(%d)",
+			DPAA2_BUS_ERR("Unsupported IOMMU extensions found(%d)",
 				ret);
 			close(vfio_container_fd);
 			goto err_exit;
@@ -443,7 +534,7 @@ fslmc_vfio_open_container_fd(void)
 	 * if we're in a secondary process, request container fd from the
 	 * primary process via mp channel
 	 */
-	p->req = SOCKET_REQ_CONTAINER;
+	p->req = FSLMC_VFIO_SOCKET_REQ_CONTAINER;
 	strcpy(mp_req.name, FSLMC_VFIO_MP);
 	mp_req.len_param = sizeof(*p);
 	mp_req.num_fds = 0;
@@ -473,7 +564,7 @@ fslmc_vfio_open_container_fd(void)
 err_exit:
 	if (mp_reply.msgs)
 		free(mp_reply.msgs);
-	DPAA2_BUS_ERR("Cannot request container fd err(%d)", ret);
+	DPAA2_BUS_ERR("Open container fd err(%d)", ret);
 	return ret;
 }
 
@@ -506,17 +597,19 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 	struct rte_mp_msg reply;
 	struct vfio_mp_param *r = (void *)reply.param;
 	const struct vfio_mp_param *m = (const void *)msg->param;
+	struct fslmc_mem_param *map;
 
 	if (msg->len_param != sizeof(*m)) {
-		DPAA2_BUS_ERR("fslmc vfio received invalid message!");
+		DPAA2_BUS_ERR("Invalid msg size(%d) for req(%d)",
+			msg->len_param, m->req);
 		return -EINVAL;
 	}
 
 	memset(&reply, 0, sizeof(reply));
 
 	switch (m->req) {
-	case SOCKET_REQ_GROUP:
-		r->req = SOCKET_REQ_GROUP;
+	case FSLMC_VFIO_SOCKET_REQ_GROUP:
+		r->req = FSLMC_VFIO_SOCKET_REQ_GROUP;
 		r->group_num = m->group_num;
 		fd = fslmc_vfio_group_fd_by_id(m->group_num);
 		if (fd < 0) {
@@ -530,9 +623,10 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 			reply.num_fds = 1;
 			reply.fds[0] = fd;
 		}
+		reply.len_param = sizeof(*r);
 		break;
-	case SOCKET_REQ_CONTAINER:
-		r->req = SOCKET_REQ_CONTAINER;
+	case FSLMC_VFIO_SOCKET_REQ_CONTAINER:
+		r->req = FSLMC_VFIO_SOCKET_REQ_CONTAINER;
 		fd = fslmc_vfio_container_fd();
 		if (fd <= 0) {
 			r->result = SOCKET_ERR;
@@ -541,20 +635,73 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 			reply.num_fds = 1;
 			reply.fds[0] = fd;
 		}
+		reply.len_param = sizeof(*r);
+		break;
+	case FSLMC_VFIO_SOCKET_REQ_MEM:
+		map = (void *)reply.param;
+		r = &map->mp_param;
+		r->req = FSLMC_VFIO_SOCKET_REQ_MEM;
+		r->result = SOCKET_OK;
+		rte_memcpy(&map->memsegs, &fslmc_memsegs,
+			sizeof(struct fslmc_dmaseg_list));
+		rte_memcpy(&map->iosegs, &fslmc_iosegs,
+			sizeof(struct fslmc_dmaseg_list));
+		map->mem_va2iova = fslmc_mem_va2iova;
+		map->mem_map_num = fslmc_mem_map_num;
+		reply.len_param = sizeof(struct fslmc_mem_param);
 		break;
 	default:
-		DPAA2_BUS_ERR("fslmc vfio received invalid message(%08x)",
+		DPAA2_BUS_ERR("VFIO received invalid message(%08x)",
 			m->req);
 		return -ENOTSUP;
 	}
 
 	strcpy(reply.name, FSLMC_VFIO_MP);
-	reply.len_param = sizeof(*r);
 	ret = rte_mp_reply(&reply, peer);
 
 	return ret;
 }
 
+static int
+fslmc_vfio_mp_sync_mem_req(void)
+{
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	int ret = 0;
+	struct vfio_mp_param *mp_param;
+	struct fslmc_mem_param *mem_rsp;
+
+	mp_param = (void *)mp_req.param;
+	memset(&mp_req, 0, sizeof(struct rte_mp_msg));
+	mp_param->req = FSLMC_VFIO_SOCKET_REQ_MEM;
+	strcpy(mp_req.name, FSLMC_VFIO_MP);
+	mp_req.len_param = sizeof(struct vfio_mp_param);
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+		mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		mem_rsp = (struct fslmc_mem_param *)mp_rep->param;
+		if (mem_rsp->mp_param.result == SOCKET_OK) {
+			rte_memcpy(&fslmc_memsegs,
+				&mem_rsp->memsegs,
+				sizeof(struct fslmc_dmaseg_list));
+			rte_memcpy(&fslmc_memsegs,
+				&mem_rsp->memsegs,
+				sizeof(struct fslmc_dmaseg_list));
+			fslmc_mem_va2iova = mem_rsp->mem_va2iova;
+			fslmc_mem_map_num = mem_rsp->mem_map_num;
+		} else {
+			DPAA2_BUS_ERR("Bad MEM SEG");
+			ret = -EINVAL;
+		}
+	} else {
+		ret = -EINVAL;
+	}
+	free(mp_reply.msgs);
+
+	return ret;
+}
+
 static int
 fslmc_vfio_mp_sync_setup(void)
 {
@@ -565,6 +712,10 @@ fslmc_vfio_mp_sync_setup(void)
 			fslmc_vfio_mp_primary);
 		if (ret && rte_errno != ENOTSUP)
 			return ret;
+	} else {
+		ret = fslmc_vfio_mp_sync_mem_req();
+		if (ret)
+			return ret;
 	}
 
 	return 0;
@@ -585,30 +736,34 @@ vfio_connect_container(int vfio_container_fd,
 
 	iommu_type = fslmc_vfio_iommu_type(vfio_group_fd);
 	if (iommu_type < 0) {
-		DPAA2_BUS_ERR("Failed to get iommu type(%d)",
-			iommu_type);
+		DPAA2_BUS_ERR("Get iommu type(%d)", iommu_type);
 
 		return iommu_type;
 	}
 
 	/* Check whether support for SMMU type IOMMU present or not */
-	if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type)) {
-		/* Connect group to container */
-		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
+	ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type);
+	if (ret <= 0) {
+		DPAA2_BUS_ERR("Unsupported IOMMU type(%d) ret(%d), err(%d)",
+			iommu_type, ret, -errno);
+		return -EINVAL;
+	}
+
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
 			&vfio_container_fd);
-		if (ret) {
-			DPAA2_BUS_ERR("Failed to setup group container");
-			return -errno;
-		}
+	if (ret) {
+		DPAA2_BUS_ERR("Set group container ret(%d), err(%d)",
+			ret, -errno);
 
-		ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
-		if (ret) {
-			DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-			return -errno;
-		}
-	} else {
-		DPAA2_BUS_ERR("No supported IOMMU available");
-		return -EINVAL;
+		return ret;
+	}
+
+	ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
+	if (ret) {
+		DPAA2_BUS_ERR("Set iommu ret(%d), err(%d)",
+			ret, -errno);
+
+		return ret;
 	}
 
 	return fslmc_vfio_connect_container(vfio_group_fd);
@@ -629,11 +784,11 @@ static int vfio_map_irq_region(void)
 
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected");
@@ -643,8 +798,8 @@ static int vfio_map_irq_region(void)
 	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
 		PROT_READ, MAP_SHARED, fd, 0x6030000);
 	if (vaddr == MAP_FAILED) {
-		DPAA2_BUS_INFO("Unable to map region (errno = %d)", errno);
-		return -errno;
+		DPAA2_BUS_ERR("Unable to map region (errno = %d)", errno);
+		return -ENOMEM;
 	}
 
 	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
@@ -654,141 +809,200 @@ static int vfio_map_irq_region(void)
 		return 0;
 
 	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
-	return -errno;
-}
-
-static int fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
-static int fslmc_unmap_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
-
-static void
-fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
-	size_t len, void *arg __rte_unused)
-{
-	struct rte_memseg_list *msl;
-	struct rte_memseg *ms;
-	size_t cur_len = 0, map_len = 0;
-	uint64_t virt_addr;
-	rte_iova_t iova_addr;
-	int ret;
-
-	msl = rte_mem_virt2memseg_list(addr);
-
-	while (cur_len < len) {
-		const void *va = RTE_PTR_ADD(addr, cur_len);
-
-		ms = rte_mem_virt2memseg(va, msl);
-		iova_addr = ms->iova;
-		virt_addr = ms->addr_64;
-		map_len = ms->len;
-
-		DPAA2_BUS_DEBUG("Request for %s, va=%p, "
-				"virt_addr=0x%" PRIx64 ", "
-				"iova=0x%" PRIx64 ", map_len=%zu",
-				type == RTE_MEM_EVENT_ALLOC ?
-					"alloc" : "dealloc",
-				va, virt_addr, iova_addr, map_len);
-
-		/* iova_addr may be set to RTE_BAD_IOVA */
-		if (iova_addr == RTE_BAD_IOVA) {
-			DPAA2_BUS_DEBUG("Segment has invalid iova, skipping\n");
-			cur_len += map_len;
-			continue;
-		}
-
-		if (type == RTE_MEM_EVENT_ALLOC)
-			ret = fslmc_map_dma(virt_addr, iova_addr, map_len);
-		else
-			ret = fslmc_unmap_dma(virt_addr, iova_addr, map_len);
-
-		if (ret != 0) {
-			DPAA2_BUS_ERR("DMA Mapping/Unmapping failed. "
-					"Map=%d, addr=%p, len=%zu, err:(%d)",
-					type, va, map_len, ret);
-			return;
-		}
-
-		cur_len += map_len;
-	}
-
-	if (type == RTE_MEM_EVENT_ALLOC)
-		DPAA2_BUS_DEBUG("Total Mapped: addr=%p, len=%zu",
-				addr, len);
-	else
-		DPAA2_BUS_DEBUG("Total Unmapped: addr=%p, len=%zu",
-				addr, len);
+	return ret;
 }
 
 static int
-fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr,
-	size_t len)
+fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len)
 {
 	struct vfio_iommu_type1_dma_map dma_map = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
 		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
 	};
-	int ret, fd;
+	int ret, fd, is_io = 0;
 	const char *group_name = fslmc_vfio_get_group_name();
+	struct fslmc_dmaseg *dmaseg = NULL;
+	uint64_t phy = 0;
+
+	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
+		if (vaddr != iovaddr) {
+			DPAA2_BUS_ERR("IOVA:VA(%" PRIx64 " : %" PRIx64 ") %s",
+				iovaddr, vaddr,
+				"should be 1:1 for VA mode");
+
+			return -EINVAL;
+		}
+	}
 
+	phy = rte_mem_virt2phy((const void *)(uintptr_t)vaddr);
+	if (phy == RTE_BAD_IOVA) {
+		phy = fslmc_io_virt2phy((const void *)(uintptr_t)vaddr);
+		if (phy == RTE_BAD_IOVA)
+			return -ENOMEM;
+		is_io = 1;
+	} else if (fslmc_mem_va2iova != RTE_BAD_IOVA &&
+		fslmc_mem_va2iova != (iovaddr - vaddr)) {
+		DPAA2_BUS_WARN("Multiple MEM PA<->VA conversions.");
+	}
+	DPAA2_BUS_DEBUG("%s(%zu): VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+		is_io ? "DMA IO map size" : "DMA MEM map size",
+		len, vaddr, iovaddr, phy);
+
+	if (is_io)
+		goto io_mapping_check;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (!((vaddr + len) <= dmaseg->vaddr ||
+			(dmaseg->vaddr + dmaseg->size) <= vaddr)) {
+			DPAA2_BUS_ERR("MEM: New VA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				vaddr, vaddr + len);
+			DPAA2_BUS_ERR("MEM: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->vaddr,
+				dmaseg->vaddr + dmaseg->size);
+			return -EEXIST;
+		}
+		if (!((iovaddr + len) <= dmaseg->iova ||
+			(dmaseg->iova + dmaseg->size) <= iovaddr)) {
+			DPAA2_BUS_ERR("MEM: New IOVA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				iovaddr, iovaddr + len);
+			DPAA2_BUS_ERR("MEM: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->iova,
+				dmaseg->iova + dmaseg->size);
+			return -EEXIST;
+		}
+	}
+	goto start_mapping;
+
+io_mapping_check:
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if (!((vaddr + len) <= dmaseg->vaddr ||
+			(dmaseg->vaddr + dmaseg->size) <= vaddr)) {
+			DPAA2_BUS_ERR("IO: New VA Range (%" PRIx64 " ~ %" PRIx64 ")",
+				vaddr, vaddr + len);
+			DPAA2_BUS_ERR("IO: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->vaddr,
+				dmaseg->vaddr + dmaseg->size);
+			return -EEXIST;
+		}
+		if (!((iovaddr + len) <= dmaseg->iova ||
+			(dmaseg->iova + dmaseg->size) <= iovaddr)) {
+			DPAA2_BUS_ERR("IO: New IOVA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				iovaddr, iovaddr + len);
+			DPAA2_BUS_ERR("IO: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->iova,
+				dmaseg->iova + dmaseg->size);
+			return -EEXIST;
+		}
+	}
+
+start_mapping:
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
-		return 0;
+		if (phy != iovaddr) {
+			DPAA2_BUS_ERR("IOVA should support with IOMMU");
+			return -EIO;
+		}
+		goto end_mapping;
 	}
 
 	dma_map.size = len;
 	dma_map.vaddr = vaddr;
 	dma_map.iova = iovaddr;
 
-#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-	if (vaddr != iovaddr) {
-		DPAA2_BUS_WARN("vaddr(0x%lx) != iovaddr(0x%lx)",
-			vaddr, iovaddr);
-	}
-#endif
-
 	/* SET DMA MAP for IOMMU */
 	if (!fslmc_vfio_container_connected(fd)) {
-		DPAA2_BUS_ERR("Container is not connected ");
+		DPAA2_BUS_ERR("Container is not connected");
 		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("--> Map address: 0x%"PRIx64", size: %"PRIu64"",
-			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.size);
 	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA,
 		&dma_map);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO_IOMMU_MAP_DMA API(errno = %d)",
-				errno);
+		DPAA2_BUS_ERR("%s(%d) VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+			is_io ? "DMA IO map err" : "DMA MEM map err",
+			errno, vaddr, iovaddr, phy);
 		return ret;
 	}
 
+end_mapping:
+	dmaseg = malloc(sizeof(struct fslmc_dmaseg));
+	if (!dmaseg) {
+		DPAA2_BUS_ERR("DMA segment malloc failed!");
+		return -ENOMEM;
+	}
+	dmaseg->vaddr = vaddr;
+	dmaseg->iova = iovaddr;
+	dmaseg->size = len;
+	if (is_io) {
+		TAILQ_INSERT_TAIL(&fslmc_iosegs, dmaseg, next);
+	} else {
+		fslmc_mem_map_num++;
+		if (fslmc_mem_map_num == 1)
+			fslmc_mem_va2iova = iovaddr - vaddr;
+		else
+			fslmc_mem_va2iova = RTE_BAD_IOVA;
+		TAILQ_INSERT_TAIL(&fslmc_memsegs, dmaseg, next);
+	}
+	DPAA2_BUS_LOG(NOTICE,
+		"%s(%zx): VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+		is_io ? "DMA I/O map size" : "DMA MEM map size",
+		len, vaddr, iovaddr, phy);
+
 	return 0;
 }
 
 static int
-fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
+fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr, size_t len)
 {
 	struct vfio_iommu_type1_dma_unmap dma_unmap = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_unmap),
 		.flags = 0,
 	};
-	int ret, fd;
+	int ret, fd, is_io = 0;
 	const char *group_name = fslmc_vfio_get_group_name();
+	struct fslmc_dmaseg *dmaseg = NULL;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (((vaddr && dmaseg->vaddr == vaddr) || !vaddr) &&
+			dmaseg->iova == iovaddr &&
+			dmaseg->size == len) {
+			is_io = 0;
+			break;
+		}
+	}
+
+	if (!dmaseg) {
+		TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+			if (((vaddr && dmaseg->vaddr == vaddr) || !vaddr) &&
+				dmaseg->iova == iovaddr &&
+				dmaseg->size == len) {
+				is_io = 1;
+				break;
+			}
+		}
+	}
+
+	if (!dmaseg) {
+		DPAA2_BUS_ERR("IOVA(%" PRIx64 ") with length(%zx) not mapped",
+			iovaddr, len);
+		return 0;
+	}
 
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
@@ -796,7 +1010,7 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 	}
 
 	dma_unmap.size = len;
-	dma_unmap.iova = vaddr;
+	dma_unmap.iova = iovaddr;
 
 	/* SET DMA MAP for IOMMU */
 	if (!fslmc_vfio_container_connected(fd)) {
@@ -804,19 +1018,164 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("--> Unmap address: 0x%"PRIx64", size: %"PRIu64"",
-			(uint64_t)dma_unmap.iova, (uint64_t)dma_unmap.size);
 	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_UNMAP_DMA,
 		&dma_unmap);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO_IOMMU_UNMAP_DMA API(errno = %d)",
-				errno);
-		return -1;
+		DPAA2_BUS_ERR("DMA un-map IOVA(%" PRIx64 " ~ %" PRIx64 ") err(%d)",
+			iovaddr, iovaddr + len, errno);
+		return ret;
+	}
+
+	if (is_io) {
+		TAILQ_REMOVE(&fslmc_iosegs, dmaseg, next);
+	} else {
+		TAILQ_REMOVE(&fslmc_memsegs, dmaseg, next);
+		fslmc_mem_map_num--;
+		if (TAILQ_EMPTY(&fslmc_memsegs))
+			fslmc_mem_va2iova = RTE_BAD_IOVA;
 	}
 
+	free(dmaseg);
+
 	return 0;
 }
 
+uint64_t
+rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
+	uint64_t size)
+{
+	struct fslmc_dmaseg *dmaseg;
+	uint64_t va;
+
+	va = (uint64_t)vaddr;
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (va >= dmaseg->vaddr &&
+			(va + size) < (dmaseg->vaddr + dmaseg->size)) {
+			return dmaseg->iova + va - dmaseg->vaddr;
+		}
+	}
+
+	return RTE_BAD_IOVA;
+}
+
+void *
+rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
+	uint64_t size)
+{
+	struct fslmc_dmaseg *dmaseg;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (iova >= dmaseg->iova &&
+			(iova + size) < (dmaseg->iova + dmaseg->size))
+			return (void *)((uintptr_t)dmaseg->vaddr
+				+ (uintptr_t)(iova - dmaseg->iova));
+	}
+
+	return NULL;
+}
+
+__hot uint64_t
+rte_fslmc_mem_vaddr_to_iova(void *vaddr)
+{
+	if (likely(fslmc_mem_va2iova != RTE_BAD_IOVA))
+		return (uint64_t)vaddr + fslmc_mem_va2iova;
+
+	return rte_fslmc_cold_mem_vaddr_to_iova(vaddr, 0);
+}
+
+__hot void *
+rte_fslmc_mem_iova_to_vaddr(uint64_t iova)
+{
+	if (likely(fslmc_mem_va2iova != RTE_BAD_IOVA))
+		return (void *)((uintptr_t)iova - (uintptr_t)fslmc_mem_va2iova);
+
+	return rte_fslmc_cold_mem_iova_to_vaddr(iova, 0);
+}
+
+uint64_t
+rte_fslmc_io_vaddr_to_iova(void *vaddr)
+{
+	struct fslmc_dmaseg *dmaseg = NULL;
+	uint64_t va = (uint64_t)vaddr;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if ((va >= dmaseg->vaddr) &&
+			va < dmaseg->vaddr + dmaseg->size)
+			return dmaseg->iova + va - dmaseg->vaddr;
+	}
+
+	return RTE_BAD_IOVA;
+}
+
+void *
+rte_fslmc_io_iova_to_vaddr(uint64_t iova)
+{
+	struct fslmc_dmaseg *dmaseg = NULL;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if ((iova >= dmaseg->iova) &&
+			iova < dmaseg->iova + dmaseg->size)
+			return (void *)((uintptr_t)dmaseg->vaddr
+				+ (uintptr_t)(iova - dmaseg->iova));
+	}
+
+	return NULL;
+}
+
+static void
+fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
+	size_t len, void *arg __rte_unused)
+{
+	struct rte_memseg_list *msl;
+	struct rte_memseg *ms;
+	size_t cur_len = 0, map_len = 0;
+	uint64_t virt_addr;
+	rte_iova_t iova_addr;
+	int ret;
+
+	msl = rte_mem_virt2memseg_list(addr);
+
+	while (cur_len < len) {
+		const void *va = RTE_PTR_ADD(addr, cur_len);
+
+		ms = rte_mem_virt2memseg(va, msl);
+		iova_addr = ms->iova;
+		virt_addr = ms->addr_64;
+		map_len = ms->len;
+
+		DPAA2_BUS_DEBUG("%s, va=%p, virt=%" PRIx64 ", iova=%" PRIx64 ", len=%zu",
+			type == RTE_MEM_EVENT_ALLOC ? "alloc" : "dealloc",
+			va, virt_addr, iova_addr, map_len);
+
+		/* iova_addr may be set to RTE_BAD_IOVA */
+		if (iova_addr == RTE_BAD_IOVA) {
+			DPAA2_BUS_DEBUG("Segment has invalid iova, skipping\n");
+			cur_len += map_len;
+			continue;
+		}
+
+		if (type == RTE_MEM_EVENT_ALLOC)
+			ret = fslmc_map_dma(virt_addr, iova_addr, map_len);
+		else
+			ret = fslmc_unmap_dma(virt_addr, iova_addr, map_len);
+
+		if (ret != 0) {
+			DPAA2_BUS_ERR("%s: Map=%d, addr=%p, len=%zu, err:(%d)",
+				type == RTE_MEM_EVENT_ALLOC ?
+				"DMA Mapping failed. " :
+				"DMA Unmapping failed. ",
+				type, va, map_len, ret);
+			return;
+		}
+
+		cur_len += map_len;
+	}
+
+	DPAA2_BUS_DEBUG("Total %s: addr=%p, len=%zu",
+		type == RTE_MEM_EVENT_ALLOC ? "Mapped" : "Unmapped",
+		addr, len);
+}
+
 static int
 fslmc_dmamap_seg(const struct rte_memseg_list *msl __rte_unused,
 		const struct rte_memseg *ms, void *arg)
@@ -847,7 +1206,7 @@ rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
 int
 rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
 {
-	return fslmc_unmap_dma(iova, 0, size);
+	return fslmc_unmap_dma(0, iova, size);
 }
 
 int rte_fslmc_vfio_dmamap(void)
@@ -857,9 +1216,10 @@ int rte_fslmc_vfio_dmamap(void)
 	/* Lock before parsing and registering callback to memory subsystem */
 	rte_mcfg_mem_read_lock();
 
-	if (rte_memseg_walk(fslmc_dmamap_seg, &i) < 0) {
+	ret = rte_memseg_walk(fslmc_dmamap_seg, &i);
+	if (ret) {
 		rte_mcfg_mem_read_unlock();
-		return -1;
+		return ret;
 	}
 
 	ret = rte_mem_event_callback_register("fslmc_memevent_clb",
@@ -898,6 +1258,14 @@ fslmc_vfio_setup_device(const char *dev_addr,
 	const char *group_name = fslmc_vfio_get_group_name();
 
 	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd <= 0) {
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, vfio_group_fd);
+		if (vfio_group_fd < 0)
+			return vfio_group_fd;
+		return -EIO;
+	}
+
 	if (!fslmc_vfio_container_connected(vfio_group_fd)) {
 		DPAA2_BUS_ERR("Container is not connected");
 		return -EIO;
@@ -1006,8 +1374,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
 	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
 	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 	if (ret)
-		DPAA2_BUS_ERR(
-			"Error disabling dpaa2 interrupts for fd %d",
+		DPAA2_BUS_ERR("Error disabling dpaa2 interrupts for fd %d",
 			rte_intr_fd_get(intr_handle));
 
 	return ret;
@@ -1032,7 +1399,7 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 		if (ret < 0) {
 			DPAA2_BUS_ERR("Cannot get IRQ(%d) info, error %i (%s)",
 				      i, errno, strerror(errno));
-			return -1;
+			return ret;
 		}
 
 		/* if this vector cannot be used with eventfd,
@@ -1046,8 +1413,8 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 		fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
 		if (fd < 0) {
 			DPAA2_BUS_ERR("Cannot set up eventfd, error %i (%s)",
-				      errno, strerror(errno));
-			return -1;
+				errno, strerror(errno));
+			return fd;
 		}
 
 		if (rte_intr_fd_set(intr_handle, fd))
@@ -1063,7 +1430,7 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 	}
 
 	/* if we're here, we haven't found a suitable interrupt vector */
-	return -1;
+	return -EIO;
 }
 
 static void
@@ -1237,6 +1604,13 @@ fslmc_vfio_close_group(void)
 	const char *group_name = fslmc_vfio_get_group_name();
 
 	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd <= 0) {
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, vfio_group_fd);
+		if (vfio_group_fd < 0)
+			return vfio_group_fd;
+		return -EIO;
+	}
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
 		if (dev->device.devargs &&
@@ -1328,7 +1702,7 @@ fslmc_vfio_process_group(void)
 				ret = fslmc_process_mcp(dev);
 				if (ret) {
 					DPAA2_BUS_ERR("Unable to map MC Portal");
-					return -1;
+					return ret;
 				}
 				found_mportal = 1;
 			}
@@ -1345,7 +1719,7 @@ fslmc_vfio_process_group(void)
 	/* Cannot continue if there is not even a single mportal */
 	if (!found_mportal) {
 		DPAA2_BUS_ERR("No MC Portal device found. Not continuing");
-		return -1;
+		return -EIO;
 	}
 
 	/* Search for DPRC device next as it updates endpoint of
@@ -1357,7 +1731,7 @@ fslmc_vfio_process_group(void)
 			ret = fslmc_process_iodevices(dev);
 			if (ret) {
 				DPAA2_BUS_ERR("Unable to process dprc");
-				return -1;
+				return ret;
 			}
 			TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
 		}
@@ -1414,7 +1788,7 @@ fslmc_vfio_process_group(void)
 			if (ret) {
 				DPAA2_BUS_DEBUG("Dev (%s) init failed",
 						dev->device.name);
-				return -1;
+				return ret;
 			}
 
 			break;
@@ -1438,7 +1812,7 @@ fslmc_vfio_process_group(void)
 			if (ret) {
 				DPAA2_BUS_DEBUG("Dev (%s) init failed",
 						dev->device.name);
-				return -1;
+				return ret;
 			}
 
 			break;
@@ -1467,9 +1841,9 @@ fslmc_vfio_setup_group(void)
 	vfio_container_fd = fslmc_vfio_container_fd();
 	if (vfio_container_fd <= 0) {
 		vfio_container_fd = fslmc_vfio_open_container_fd();
-		if (vfio_container_fd <= 0) {
+		if (vfio_container_fd < 0) {
 			DPAA2_BUS_ERR("Failed to create MC VFIO container");
-			return -rte_errno;
+			return vfio_container_fd;
 		}
 	}
 
@@ -1482,6 +1856,8 @@ fslmc_vfio_setup_group(void)
 	if (vfio_group_fd <= 0) {
 		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
 		if (vfio_group_fd <= 0) {
+			DPAA2_BUS_ERR("%s: open group name(%s) failed(%d)",
+				__func__, group_name, vfio_group_fd);
 			if (!vfio_group_fd)
 				close(vfio_group_fd);
 			DPAA2_BUS_ERR("Failed to create MC VFIO group");
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 1695b6c078..408b35680d 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -11,6 +11,10 @@
 #include <rte_compat.h>
 #include <rte_vfio.h>
 
+#ifndef __hot
+#define __hot __attribute__((hot))
+#endif
+
 /* Pathname of FSL-MC devices directory. */
 #define SYSFS_FSL_MC_DEVICES	"/sys/bus/fsl-mc/devices"
 #define DPAA2_MC_DPNI_DEVID	7
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index bc36607e64..85e4c16c03 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2020 NXP
+ *   Copyright 2016,2020-2023 NXP
  *
  */
 
@@ -28,7 +28,6 @@
 #include "portal/dpaa2_hw_pvt.h"
 #include "portal/dpaa2_hw_dpio.h"
 
-
 TAILQ_HEAD(dpbp_dev_list, dpaa2_dpbp_dev);
 static struct dpbp_dev_list dpbp_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpbp_dev_list); /*!< DPBP device list */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 8265fee497..b52a8c8ba5 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -332,9 +332,8 @@ dpaa2_affine_qbman_swp(void)
 		}
 		RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
 
-		DPAA2_BUS_INFO(
-			"DPAA Portal=%p (%d) is affined to thread %" PRIu64,
-			dpio_dev, dpio_dev->index, tid);
+		DPAA2_BUS_DEBUG("Portal[%d] is affined to thread %" PRIu64,
+			dpio_dev->index, tid);
 	}
 	return 0;
 }
@@ -354,9 +353,8 @@ dpaa2_affine_qbman_ethrx_swp(void)
 		}
 		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
 
-		DPAA2_BUS_INFO(
-			"DPAA Portal=%p (%d) is affined for eth rx to thread %"
-			PRIu64, dpio_dev, dpio_dev->index, tid);
+		DPAA2_BUS_DEBUG("Portal_eth_rx[%d] is affined to thread %" PRIu64,
+			dpio_dev->index, tid);
 	}
 	return 0;
 }
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 7407f8d38d..328e1e788a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -12,6 +12,7 @@
 #include <mc/fsl_mc_sys.h>
 
 #include <rte_compat.h>
+#include <dpaa2_hw_pvt.h>
 
 struct dpaa2_io_portal_t {
 	struct dpaa2_dpio_dev *dpio_dev;
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 169c7917ea..c5900bd06a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -14,6 +14,7 @@
 
 #include <mc/fsl_mc_sys.h>
 #include <fsl_qbman_portal.h>
+#include <bus_fslmc_driver.h>
 
 #ifndef false
 #define false      0
@@ -80,6 +81,8 @@
 #define DPAA2_PACKET_LAYOUT_ALIGN	64 /*changing from 256 */
 
 #define DPAA2_DPCI_MAX_QUEUES 2
+#define DPAA2_INVALID_FLOW_ID 0xffff
+#define DPAA2_INVALID_CGID 0xff
 
 struct dpaa2_queue;
 
@@ -365,83 +368,63 @@ enum qbman_fd_format {
  */
 #define DPAA2_EQ_RESP_ALWAYS		1
 
-/* Various structures representing contiguous memory maps */
-struct dpaa2_memseg {
-	TAILQ_ENTRY(dpaa2_memseg) next;
-	char *vaddr;
-	rte_iova_t iova;
-	size_t len;
-};
-
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-extern uint8_t dpaa2_virt_mode;
-static void *dpaa2_mem_ptov(phys_addr_t paddr) __rte_unused;
-
-static void *dpaa2_mem_ptov(phys_addr_t paddr)
+static inline uint64_t
+dpaa2_mem_va_to_iova(void *va)
 {
-	void *va;
-
-	if (dpaa2_virt_mode)
-		return (void *)(size_t)paddr;
-
-	va = (void *)dpaax_iova_table_get_va(paddr);
-	if (likely(va != NULL))
-		return va;
-
-	/* If not, Fallback to full memseg list searching */
-	va = rte_mem_iova2virt(paddr);
+	if (likely(rte_eal_iova_mode() == RTE_IOVA_VA))
+		return (uint64_t)va;
 
-	return va;
+	return rte_fslmc_mem_vaddr_to_iova(va);
 }
 
-static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr) __rte_unused;
-
-static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
+static inline void *
+dpaa2_mem_iova_to_va(uint64_t iova)
 {
-	const struct rte_memseg *memseg;
-
-	if (dpaa2_virt_mode)
-		return vaddr;
+	if (likely(rte_eal_iova_mode() == RTE_IOVA_VA))
+		return (void *)(uintptr_t)iova;
 
-	memseg = rte_mem_virt2memseg((void *)(uintptr_t)vaddr, NULL);
-	if (memseg)
-		return memseg->iova + RTE_PTR_DIFF(vaddr, memseg->addr);
-	return (size_t)NULL;
+	return rte_fslmc_mem_iova_to_vaddr(iova);
 }
 
-/**
- * When we are using Physical addresses as IO Virtual Addresses,
- * Need to call conversion routines dpaa2_mem_vtop & dpaa2_mem_ptov
- * wherever required.
- * These routines are called with help of below MACRO's
- */
-
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_iova)
-
-/**
- * macro to convert Virtual address to IOVA
- */
-#define DPAA2_VADDR_TO_IOVA(_vaddr) dpaa2_mem_vtop((size_t)(_vaddr))
-
-/**
- * macro to convert IOVA to Virtual address
- */
-#define DPAA2_IOVA_TO_VADDR(_iova) dpaa2_mem_ptov((size_t)(_iova))
-
-/**
- * macro to convert modify the memory containing IOVA to Virtual address
- */
+#define DPAA2_VADDR_TO_IOVA(_vaddr) \
+	dpaa2_mem_va_to_iova((void *)(uintptr_t)_vaddr)
+#define DPAA2_IOVA_TO_VADDR(_iova) \
+	dpaa2_mem_iova_to_va((uint64_t)_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type) \
-	{_mem = (_type)(dpaa2_mem_ptov((size_t)(_mem))); }
+	{_mem = (_type)DPAA2_IOVA_TO_VADDR(_mem); }
+
+#define DPAA2_VAMODE_VADDR_TO_IOVA(_vaddr) ((uint64_t)_vaddr)
+#define DPAA2_VAMODE_IOVA_TO_VADDR(_iova) ((void *)_iova)
+#define DPAA2_VAMODE_MODIFY_IOVA_TO_VADDR(_mem, _type) \
+	{_mem = (_type)(_mem); }
+
+#define DPAA2_PAMODE_VADDR_TO_IOVA(_vaddr) \
+	rte_fslmc_mem_vaddr_to_iova((void *)_vaddr)
+#define DPAA2_PAMODE_IOVA_TO_VADDR(_iova) \
+	rte_fslmc_mem_iova_to_vaddr((uint64_t)_iova)
+#define DPAA2_PAMODE_MODIFY_IOVA_TO_VADDR(_mem, _type) \
+	{_mem = (_type)rte_fslmc_mem_iova_to_vaddr(_mem); }
+
+static inline uint64_t
+dpaa2_mem_va_to_iova_check(void *va, uint64_t size)
+{
+	uint64_t iova = rte_fslmc_cold_mem_vaddr_to_iova(va, size);
 
-#else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
+	if (iova == RTE_BAD_IOVA)
+		return RTE_BAD_IOVA;
 
-#define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
-#define DPAA2_VADDR_TO_IOVA(_vaddr) (phys_addr_t)(_vaddr)
-#define DPAA2_IOVA_TO_VADDR(_iova) (void *)(_iova)
-#define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
+	/** Double check the iova is valid.*/
+	if (iova != rte_mem_virt2iova(va))
+		return RTE_BAD_IOVA;
+
+	return iova;
+}
 
-#endif /* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
+#define DPAA2_VADDR_TO_IOVA_AND_CHECK(_vaddr, size) \
+	dpaa2_mem_va_to_iova_check(_vaddr, size)
+#define DPAA2_IOVA_TO_VADDR_AND_CHECK(_iova, size) \
+	rte_fslmc_cold_mem_iova_to_vaddr(_iova, size)
 
 static inline
 int check_swp_active_dqs(uint16_t dpio_index)
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index b49bc0a62c..2c36895285 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -24,7 +24,6 @@ INTERNAL {
 	dpaa2_seqn_dynfield_offset;
 	dpaa2_seqn;
 	dpaa2_svr_family;
-	dpaa2_virt_mode;
 	dpbp_disable;
 	dpbp_enable;
 	dpbp_get_attributes;
@@ -119,6 +118,12 @@ INTERNAL {
 	rte_fslmc_object_register;
 	rte_global_active_dqs_list;
 	rte_fslmc_vfio_mem_dmaunmap;
+	rte_fslmc_cold_mem_vaddr_to_iova;
+	rte_fslmc_cold_mem_iova_to_vaddr;
+	rte_fslmc_mem_vaddr_to_iova;
+	rte_fslmc_mem_iova_to_vaddr;
+	rte_fslmc_io_vaddr_to_iova;
+	rte_fslmc_io_iova_to_vaddr;
 
 	local: *;
 };
diff --git a/drivers/dma/dpaa2/dpaa2_qdma.c b/drivers/dma/dpaa2/dpaa2_qdma.c
index 2c91ceec13..99b8881c5d 100644
--- a/drivers/dma/dpaa2/dpaa2_qdma.c
+++ b/drivers/dma/dpaa2/dpaa2_qdma.c
@@ -10,6 +10,7 @@
 
 #include <mc/fsl_dpdmai.h>
 
+#include <dpaa2_hw_dpio.h>
 #include "rte_pmd_dpaa2_qdma.h"
 #include "dpaa2_qdma.h"
 #include "dpaa2_qdma_logs.h"
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 17/43] bus/fslmc: remove VFIO IRQ mapping
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (15 preceding siblings ...)
  2024-09-18  7:50   ` [v2 16/43] bus/fslmc: dynamic IOVA mode configuration vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 18/43] bus/fslmc: create dpaa2 device with its object vanshika.shukla
                     ` (26 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Remove the unused GITS translator VFIO mapping; it was a workaround
for VFIO not mapping the interrupt region to the SMMU (see the TODO
comment removed below).

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 50 ----------------------------------
 1 file changed, 50 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index fe18429f42..733423faa0 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -51,7 +51,6 @@
 static struct fslmc_vfio_container s_vfio_container;
 /* Currently we only support single group/process. */
 static const char *fslmc_group; /* dprc.x*/
-static uint32_t *msi_intr_vaddr;
 static void *(*rte_mcp_ptr_list);
 
 struct fslmc_dmaseg {
@@ -769,49 +768,6 @@ vfio_connect_container(int vfio_container_fd,
 	return fslmc_vfio_connect_container(vfio_group_fd);
 }
 
-static int vfio_map_irq_region(void)
-{
-	int ret, fd;
-	unsigned long *vaddr = NULL;
-	struct vfio_iommu_type1_dma_map map = {
-		.argsz = sizeof(map),
-		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
-		.vaddr = 0x6030000,
-		.iova = 0x6030000,
-		.size = 0x1000,
-	};
-	const char *group_name = fslmc_vfio_get_group_name();
-
-	fd = fslmc_vfio_group_fd_by_name(group_name);
-	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
-			__func__, group_name, fd);
-		if (fd < 0)
-			return fd;
-		return -EIO;
-	}
-	if (!fslmc_vfio_container_connected(fd)) {
-		DPAA2_BUS_ERR("Container is not connected");
-		return -EIO;
-	}
-
-	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
-		PROT_READ, MAP_SHARED, fd, 0x6030000);
-	if (vaddr == MAP_FAILED) {
-		DPAA2_BUS_ERR("Unable to map region (errno = %d)", errno);
-		return -ENOMEM;
-	}
-
-	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
-	map.vaddr = (unsigned long)vaddr;
-	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA, &map);
-	if (!ret)
-		return 0;
-
-	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
-	return ret;
-}
-
 static int
 fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len)
 {
@@ -1233,12 +1189,6 @@ int rte_fslmc_vfio_dmamap(void)
 
 	DPAA2_BUS_DEBUG("Total %d segments found.", i);
 
-	/* TODO - This is a W.A. as VFIO currently does not add the mapping of
-	 * the interrupt region to SMMU. This should be removed once the
-	 * support is added in the Kernel.
-	 */
-	vfio_map_irq_region();
-
 	/* Existing segments have been mapped and memory callback for hotplug
 	 * has been installed.
 	 */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 18/43] bus/fslmc: create dpaa2 device with its object
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (16 preceding siblings ...)
  2024-09-18  7:50   ` [v2 17/43] bus/fslmc: remove VFIO IRQ mapping vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 19/43] bus/fslmc: fix coverity issue vanshika.shukla
                     ` (25 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Create the dpaa2 device with its object instead of only the object ID.
Assign each dpaa2 object to its container.
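
For illustration, a minimal sketch of the new callback shape; the
function name is hypothetical, and only the signature and the two
fields it reads come from this patch:

/* Illustrative sketch, not part of this patch: an object create()
 * callback now receives the rte_dpaa2_device itself, so both the
 * object ID and the container assigned during the DPRC scan are
 * available to it.
 */
static int
example_create_device(int vdev_fd __rte_unused,
	struct vfio_device_info *obj_info __rte_unused,
	struct rte_dpaa2_device *obj)
{
	int object_id = obj->object_id;
	struct dpaa2_dprc_dev *container = obj->container;

	/* ... probe the object using object_id and container ... */
	(void)object_id;
	(void)container;

	return 0;
}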

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     | 39 ++++++++++++------------
 drivers/bus/fslmc/fslmc_vfio.c           |  3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c |  6 ++--
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c |  8 ++---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c |  6 ++--
 drivers/bus/fslmc/portal/dpaa2_hw_dprc.c |  8 +++--
 drivers/event/dpaa2/dpaa2_hw_dpcon.c     |  8 ++---
 drivers/net/dpaa2/dpaa2_mux.c            |  6 ++--
 drivers/net/dpaa2/dpaa2_ptp.c            |  8 ++---
 9 files changed, 47 insertions(+), 45 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index 11eebd560c..462bf2113e 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -89,25 +89,6 @@ enum rte_dpaa2_dev_type {
 	DPAA2_DEVTYPE_MAX,
 };
 
-TAILQ_HEAD(rte_dpaa2_object_list, rte_dpaa2_object);
-
-typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
-				      struct vfio_device_info *obj_info,
-				      int object_id);
-
-typedef void (*rte_dpaa2_obj_close_t)(int object_id);
-
-/**
- * A structure describing a DPAA2 object.
- */
-struct rte_dpaa2_object {
-	TAILQ_ENTRY(rte_dpaa2_object) next; /**< Next in list. */
-	const char *name;                   /**< Name of Object. */
-	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
-	rte_dpaa2_obj_create_t create;
-	rte_dpaa2_obj_close_t close;
-};
-
 /**
  * A structure describing a DPAA2 device.
  */
@@ -123,6 +104,7 @@ struct rte_dpaa2_device {
 	enum rte_dpaa2_dev_type dev_type;   /**< Device Type */
 	uint16_t object_id;                 /**< DPAA2 Object ID */
 	enum rte_dpaa2_dev_type ep_dev_type;   /**< Endpoint Device Type */
+	struct dpaa2_dprc_dev *container;
 	uint16_t ep_object_id;                 /**< Endpoint DPAA2 Object ID */
 	char ep_name[RTE_DEV_NAME_MAX_LEN];
 	struct rte_intr_handle *intr_handle; /**< Interrupt handle */
@@ -130,10 +112,29 @@ struct rte_dpaa2_device {
 	char name[FSLMC_OBJECT_MAX_LEN];    /**< DPAA2 Object name*/
 };
 
+typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
+				      struct vfio_device_info *obj_info,
+				      struct rte_dpaa2_device *dev);
+
+typedef void (*rte_dpaa2_obj_close_t)(int object_id);
+
 typedef int (*rte_dpaa2_probe_t)(struct rte_dpaa2_driver *dpaa2_drv,
 				 struct rte_dpaa2_device *dpaa2_dev);
 typedef int (*rte_dpaa2_remove_t)(struct rte_dpaa2_device *dpaa2_dev);
 
+TAILQ_HEAD(rte_dpaa2_object_list, rte_dpaa2_object);
+
+/**
+ * A structure describing a DPAA2 object.
+ */
+struct rte_dpaa2_object {
+	TAILQ_ENTRY(rte_dpaa2_object) next; /**< Next in list. */
+	const char *name;                   /**< Name of Object. */
+	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
+	rte_dpaa2_obj_create_t create;
+	rte_dpaa2_obj_close_t close;
+};
+
 /**
  * A structure describing a DPAA2 driver.
  */
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 733423faa0..9a1e53f2ee 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1469,8 +1469,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
 	case DPAA2_DPRC:
 		TAILQ_FOREACH(object, &dpaa2_obj_list, next) {
 			if (dev->dev_type == object->dev_type)
-				object->create(dev_fd, &device_info,
-					       dev->object_id);
+				object->create(dev_fd, &device_info, dev);
 			else
 				continue;
 		}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index 85e4c16c03..0ca3b2b2e4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -47,11 +47,11 @@ static struct dpaa2_dpbp_dev *get_dpbp_from_id(uint32_t dpbp_id)
 
 static int
 dpaa2_create_dpbp_device(int vdev_fd __rte_unused,
-			 struct vfio_device_info *obj_info __rte_unused,
-			 int dpbp_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpbp_dev *dpbp_node;
-	int ret;
+	int ret, dpbp_id = obj->object_id;
 	static int register_once;
 
 	/* Allocate DPAA2 dpbp handle */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index d7de2bca05..03c2c82f66 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017,2020 NXP
+ *   Copyright 2017, 2020, 2023 NXP
  *
  */
 
@@ -45,15 +45,15 @@ static struct dpaa2_dpci_dev *get_dpci_from_id(uint32_t dpci_id)
 
 static int
 rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
-			     struct vfio_device_info *obj_info __rte_unused,
-			     int dpci_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpci_dev *dpci_node;
 	struct dpci_attr attr;
 	struct dpci_rx_queue_cfg rx_queue_cfg;
 	struct dpci_rx_queue_attr rx_attr;
 	struct dpci_tx_queue_attr tx_attr;
-	int ret, i;
+	int ret, i, dpci_id = obj->object_id;
 
 	/* Allocate DPAA2 dpci handle */
 	dpci_node = rte_malloc(NULL, sizeof(struct dpaa2_dpci_dev), 0);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index b52a8c8ba5..346092a6b4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -391,14 +391,14 @@ dpaa2_close_dpio_device(int object_id)
 
 static int
 dpaa2_create_dpio_device(int vdev_fd,
-			 struct vfio_device_info *obj_info,
-			 int object_id)
+	struct vfio_device_info *obj_info,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info)};
 	struct qbman_swp_desc p_des;
 	struct dpio_attr attr;
-	int ret;
+	int ret, object_id = obj->object_id;
 
 	if (obj_info->num_regions < NUM_DPIO_REGIONS) {
 		DPAA2_BUS_ERR("Not sufficient number of DPIO regions");
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c b/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
index 65e2d799c3..a057cb1309 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
@@ -23,13 +23,13 @@ static struct dprc_dev_list dprc_dev_list
 
 static int
 rte_dpaa2_create_dprc_device(int vdev_fd __rte_unused,
-			     struct vfio_device_info *obj_info __rte_unused,
-			     int dprc_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dprc_dev *dprc_node;
 	struct dprc_endpoint endpoint1, endpoint2;
 	struct rte_dpaa2_device *dev, *dev_tmp;
-	int ret;
+	int ret, dprc_id = obj->object_id;
 
 	/* Allocate DPAA2 dprc handle */
 	dprc_node = rte_malloc(NULL, sizeof(struct dpaa2_dprc_dev), 0);
@@ -50,6 +50,8 @@ rte_dpaa2_create_dprc_device(int vdev_fd __rte_unused,
 	}
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_tmp) {
+		/** DPRC is always created before it's children are created.*/
+		dev->container = dprc_node;
 		if (dev->dev_type == DPAA2_ETH) {
 			int link_state;
 
diff --git a/drivers/event/dpaa2/dpaa2_hw_dpcon.c b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
index 64b0136e24..ea5b0d4b85 100644
--- a/drivers/event/dpaa2/dpaa2_hw_dpcon.c
+++ b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017,2020 NXP
+ *   Copyright 2017, 2020, 2023 NXP
  *
  */
 
@@ -45,12 +45,12 @@ static struct dpaa2_dpcon_dev *get_dpcon_from_id(uint32_t dpcon_id)
 
 static int
 rte_dpaa2_create_dpcon_device(int dev_fd __rte_unused,
-			      struct vfio_device_info *obj_info __rte_unused,
-			      int dpcon_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpcon_dev *dpcon_node;
 	struct dpcon_attr attr;
-	int ret;
+	int ret, dpcon_id = obj->object_id;
 
 	/* Allocate DPAA2 dpcon handle */
 	dpcon_node = rte_malloc(NULL, sizeof(struct dpaa2_dpcon_dev), 0);
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 53020e9302..4390be9789 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -374,12 +374,12 @@ rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if)
 
 static int
 dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
-			   struct vfio_device_info *obj_info __rte_unused,
-			   int dpdmux_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev;
 	struct dpdmux_attr attr;
-	int ret;
+	int ret, dpdmux_id = obj->object_id;
 	uint16_t maj_ver;
 	uint16_t min_ver;
 	uint8_t skip_reset_flags;
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index c08aa0f3bf..751e558c73 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2019 NXP
+ * Copyright 2019, 2023 NXP
  */
 
 #include <sys/queue.h>
@@ -134,11 +134,11 @@ int dpaa2_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
 #if defined(RTE_LIBRTE_IEEE1588)
 static int
 dpaa2_create_dprtc_device(int vdev_fd __rte_unused,
-			   struct vfio_device_info *obj_info __rte_unused,
-			   int dprtc_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dprtc_attr attr;
-	int ret;
+	int ret, dprtc_id = obj->object_id;
 
 	PMD_INIT_FUNC_TRACE();
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 19/43] bus/fslmc: fix coverity issue
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (17 preceding siblings ...)
  2024-09-18  7:50   ` [v2 18/43] bus/fslmc: create dpaa2 device with its object vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 20/43] bus/fslmc: fix invalid error FD code vanshika.shukla
                     ` (24 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Fix issues reported by Coverity (NXP internal Coverity): the qbman
query helpers dereferenced the management-command response before
NULL-checking it (and then checked the caller's output pointer
instead), and a WRED threshold computation could overflow 32 bits.
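
The repeated pattern and its fix, as a simplified fragment mirroring
the bp-query case in the diff below (not standalone code):

/* Before: the response was copied through the cast immediately, so a
 * NULL response was dereferenced, and the subsequent check tested the
 * output pointer 'r', which is never NULL.
 *
 * After (sketch): keep the response pointer, check it first, copy last.
 */
bp_query_rslt = (struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s,
					p, QBMAN_BP_QUERY);
if (!bp_query_rslt) {
	pr_err("qbman: Query BPID %d failed, no response\n", bpid);
	return -EIO;
}
*r = *bp_query_rslt;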

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 49 +++++++++++++++++----------
 1 file changed, 32 insertions(+), 17 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index eea06988ff..0e471ec3fd 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2020,2022 NXP
  */
 
 #include "compat.h"
@@ -37,6 +37,7 @@ int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
 		   struct qbman_bp_query_rslt *r)
 {
 	struct qbman_bp_query_desc *p;
+	struct qbman_bp_query_rslt *bp_query_rslt;
 
 	/* Start the management command */
 	p = (struct qbman_bp_query_desc *)qbman_swp_mc_start(s);
@@ -47,14 +48,16 @@ int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
 	p->bpid = bpid;
 
 	/* Complete the management command */
-	*r = *(struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s, p,
-						 QBMAN_BP_QUERY);
-	if (!r) {
+	bp_query_rslt = (struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s,
+						p, QBMAN_BP_QUERY);
+	if (!bp_query_rslt) {
 		pr_err("qbman: Query BPID %d failed, no response\n",
 			bpid);
 		return -EIO;
 	}
 
+	*r = *bp_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_BP_QUERY);
 
@@ -202,20 +205,23 @@ int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
 		   struct qbman_fq_query_rslt *r)
 {
 	struct qbman_fq_query_desc *p;
+	struct qbman_fq_query_rslt *fq_query_rslt;
 
 	p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->fqid = fqid;
-	*r = *(struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s, p,
-					  QBMAN_FQ_QUERY);
-	if (!r) {
+	fq_query_rslt = (struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s,
+					p, QBMAN_FQ_QUERY);
+	if (!fq_query_rslt) {
 		pr_err("qbman: Query FQID %d failed, no response\n",
 			fqid);
 		return -EIO;
 	}
 
+	*r = *fq_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY);
 
@@ -398,20 +404,23 @@ int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
 		    struct qbman_cgr_query_rslt *r)
 {
 	struct qbman_cgr_query_desc *p;
+	struct qbman_cgr_query_rslt *cgr_query_rslt;
 
 	p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->cgid = cgid;
-	*r = *(struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_CGR_QUERY);
-	if (!r) {
+	cgr_query_rslt = (struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s,
+					p, QBMAN_CGR_QUERY);
+	if (!cgr_query_rslt) {
 		pr_err("qbman: Query CGID %d failed, no response\n",
 			cgid);
 		return -EIO;
 	}
 
+	*r = *cgr_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_CGR_QUERY);
 
@@ -473,20 +482,23 @@ int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
 			struct qbman_wred_query_rslt *r)
 {
 	struct qbman_cgr_query_desc *p;
+	struct qbman_wred_query_rslt *wred_query_rslt;
 
 	p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->cgid = cgid;
-	*r = *(struct qbman_wred_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_WRED_QUERY);
-	if (!r) {
+	wred_query_rslt = (struct qbman_wred_query_rslt *)qbman_swp_mc_complete(
+					s, p, QBMAN_WRED_QUERY);
+	if (!wred_query_rslt) {
 		pr_err("qbman: Query CGID WRED %d failed, no response\n",
 			cgid);
 		return -EIO;
 	}
 
+	*r = *wred_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WRED_QUERY);
 
@@ -527,7 +539,7 @@ void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
 	if (mn == 0)
 		*maxth = ma;
 	else
-		*maxth = ((ma+256) * (1<<(mn-1)));
+		*maxth = ((uint64_t)(ma+256) * (1<<(mn-1)));
 
 	if (step_s == 0)
 		*minth = *maxth - step_i;
@@ -630,6 +642,7 @@ int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
 		       struct qbman_wqchan_query_rslt *r)
 {
 	struct qbman_wqchan_query_desc *p;
+	struct qbman_wqchan_query_rslt *wqchan_query_rslt;
 
 	/* Start the management command */
 	p = (struct qbman_wqchan_query_desc *)qbman_swp_mc_start(s);
@@ -640,14 +653,16 @@ int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
 	p->chid = chanid;
 
 	/* Complete the management command */
-	*r = *(struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_WQ_QUERY);
-	if (!r) {
+	wqchan_query_rslt = (struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(
+						s, p, QBMAN_WQ_QUERY);
+	if (!wqchan_query_rslt) {
 		pr_err("qbman: Query WQ Channel %d failed, no response\n",
 			chanid);
 		return -EIO;
 	}
 
+	*r = *wqchan_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WQ_QUERY);
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 20/43] bus/fslmc: fix invalid error FD code
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (18 preceding siblings ...)
  2024-09-18  7:50   ` [v2 19/43] bus/fslmc: fix coverity issue vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 21/43] bus/fslmc: change qbman eq desc from d to desc vanshika.shukla
                     ` (23 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

The error code was being set to 0 on failure, but 0 is a valid fd,
which caused a memory leak.
Fix this by returning a valid negative error code instead of zero.
CID: 26661848
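
A standalone toy example of the underlying problem (nothing here is
driver code; names are made up):

#include <errno.h>
#include <unistd.h>

/* 0 is a valid file descriptor, so returning 0 on failure is
 * indistinguishable from returning an open fd; the caller keeps a
 * bogus descriptor and the bookkeeping for the real one leaks.
 * Return a negative errno instead and test for fd < 0.
 */
static int
example_get_fd(int available)
{
	if (!available)
		return -ENOENT;		/* was effectively: return 0 */
	return dup(STDIN_FILENO);	/* stand-in for a real group fd */
}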

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 9a1e53f2ee..5b5fd2e6ca 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2023 NXP
+ *   Copyright 2016-2024 NXP
  *
  */
 
@@ -41,8 +41,6 @@
 #include "portal/dpaa2_hw_pvt.h"
 #include "portal/dpaa2_hw_dpio.h"
 
-#define FSLMC_CONTAINER_MAX_LEN 8 /**< Of the format dprc.XX */
-
 #define FSLMC_VFIO_MP "fslmc_vfio_mp_sync"
 
 /* Container is composed by multiple groups, however,
@@ -415,18 +413,16 @@ fslmc_vfio_open_group_fd(const char *group_name)
 	    mp_reply.nb_received == 1) {
 		mp_rep = &mp_reply.msgs[0];
 		p = (struct vfio_mp_param *)mp_rep->param;
-		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+		if (p->result == SOCKET_OK && mp_rep->num_fds == 1)
 			vfio_group_fd = mp_rep->fds[0];
-		} else if (p->result == SOCKET_NO_FD) {
+		else if (p->result == SOCKET_NO_FD)
 			DPAA2_BUS_ERR("Bad VFIO group fd");
-			vfio_group_fd = 0;
-		}
 	}
 
 	free(mp_reply.msgs);
 
 add_vfio_group:
-	if (vfio_group_fd <= 0) {
+	if (vfio_group_fd < 0) {
 		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
 				filename, vfio_group_fd);
@@ -1802,14 +1798,11 @@ fslmc_vfio_setup_group(void)
 	}
 
 	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
-	if (vfio_group_fd <= 0) {
+	if (vfio_group_fd < 0) {
 		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
-		if (vfio_group_fd <= 0) {
+		if (vfio_group_fd < 0) {
 			DPAA2_BUS_ERR("%s: open group name(%s) failed(%d)",
 				__func__, group_name, vfio_group_fd);
-			if (!vfio_group_fd)
-				close(vfio_group_fd);
-			DPAA2_BUS_ERR("Failed to create MC VFIO group");
 			return -rte_errno;
 		}
 	}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 21/43] bus/fslmc: change qbman eq desc from d to desc
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (19 preceding siblings ...)
  2024-09-18  7:50   ` [v2 20/43] bus/fslmc: fix invalid error FD code vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 22/43] bus/fslmc: introduce VFIO DMA mapping API for fslmc vanshika.shukla
                     ` (22 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Rename the local qbman_eq_desc pointer from 'd' to 'desc' to avoid
redefining a variable of the same name in an inner scope.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 3fdca9761d..5d0cedc136 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1008,9 +1008,9 @@ static int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
 				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
 		p[0] = cl[0] | s->eqcr.pi_vb;
 		if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
-			struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+			struct qbman_eq_desc *desc = (struct qbman_eq_desc *)p;
 
-			d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+			desc->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
 				((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
 		}
 		eqcr_pi++;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 22/43] bus/fslmc: introduce VFIO DMA mapping API for fslmc
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (20 preceding siblings ...)
  2024-09-18  7:50   ` [v2 21/43] bus/fslmc: change qbman eq desc from d to desc vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 23/43] net/dpaa2: change miss flow ID macro name vanshika.shukla
                     ` (21 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Declare rte_fslmc_vfio_mem_dmamap and rte_fslmc_vfio_mem_dmaunmap
in bus_fslmc_driver.h for external usage.
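
For illustration, a usage sketch of the two exported declarations
(the helper name is made up; the API signatures are from this patch):

#include <stdint.h>

#include <bus_fslmc_driver.h>

/* Map an externally allocated buffer for DMA through the fslmc VFIO
 * container, use it, then unmap it again.
 */
static int
example_dma_window(void *va, uint64_t iova, uint64_t len)
{
	int ret;

	ret = rte_fslmc_vfio_mem_dmamap((uint64_t)(uintptr_t)va, iova, len);
	if (ret)
		return ret;

	/* ... issue DMA against 'iova' here ... */

	return rte_fslmc_vfio_mem_dmaunmap(iova, len);
}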

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     | 7 ++++++-
 drivers/bus/fslmc/fslmc_bus.c            | 2 +-
 drivers/bus/fslmc/fslmc_vfio.c           | 3 ++-
 drivers/bus/fslmc/fslmc_vfio.h           | 7 +------
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 2 +-
 5 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index 462bf2113e..7479fd35e0 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2016,2021 NXP
+ *   Copyright 2016,2021-2023 NXP
  *
  */
 
@@ -135,6 +135,11 @@ struct rte_dpaa2_object {
 	rte_dpaa2_obj_close_t close;
 };
 
+int
+rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size);
+
 /**
  * A structure describing a DPAA2 driver.
  */
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index ce87b4ddbd..6590b2305f 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -438,7 +438,7 @@ rte_fslmc_probe(void)
 	 * install callback handler.
 	 */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		ret = rte_fslmc_vfio_dmamap();
+		ret = fslmc_vfio_dmamap();
 		if (ret) {
 			DPAA2_BUS_ERR("Unable to DMA map existing VAs: (%d)",
 				      ret);
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 5b5fd2e6ca..8fca1af322 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1161,7 +1161,8 @@ rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
 	return fslmc_unmap_dma(0, iova, size);
 }
 
-int rte_fslmc_vfio_dmamap(void)
+int
+fslmc_vfio_dmamap(void)
 {
 	int i = 0, ret;
 
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 408b35680d..11efcc036e 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -64,10 +64,5 @@ int fslmc_vfio_process_group(void);
 int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
 int fslmc_get_container_group(const char *group_name, int *gropuid);
-int rte_fslmc_vfio_dmamap(void);
-int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova,
-		uint64_t size);
-int rte_fslmc_vfio_mem_dmaunmap(uint64_t iova,
-		uint64_t size);
-
+int fslmc_vfio_dmamap(void);
 #endif /* _FSLMC_VFIO_H_ */
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 42e17d984c..cfa71751d8 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -23,7 +23,7 @@
 #include <dev_driver.h>
 #include "rte_dpaa2_mempool.h"
 
-#include "fslmc_vfio.h"
+#include <bus_fslmc_driver.h>
 #include <fslmc_logs.h>
 #include <mc/fsl_dpbp.h>
 #include <portal/dpaa2_hw_pvt.h>
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 23/43] net/dpaa2: change miss flow ID macro name
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (21 preceding siblings ...)
  2024-09-18  7:50   ` [v2 22/43] bus/fslmc: introduce VFIO DMA mapping API for fslmc vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 24/43] net/dpaa2: flow API refactor vanshika.shukla
                     ` (20 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Stop initialising the miss flow ID with the DPNI_FS_MISS_DROP macro,
since the name conflicts with the enum value. Also, set the default
miss flow ID to 0.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 15f3343db4..c30c5225c7 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -30,8 +30,7 @@
 int mc_l4_port_identification;
 
 static char *dpaa2_flow_control_log;
-static uint16_t dpaa2_flow_miss_flow_id =
-	DPNI_FS_MISS_DROP;
+static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
 #define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
 
@@ -3994,7 +3993,7 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
 		struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 		dpaa2_flow_miss_flow_id =
-			atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
+			(uint16_t)atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
 		if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
 			DPAA2_PMD_ERR(
 				"The missed flow ID %d exceeds the max flow ID %d",
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 24/43] net/dpaa2: flow API refactor
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (22 preceding siblings ...)
  2024-09-18  7:50   ` [v2 23/43] net/dpaa2: change miss flow ID macro name vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 25/43] net/dpaa2: dump Rx parser result vanshika.shukla
                     ` (19 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

1) Gather redundant code with the same logic from the various protocol
   handlers into common functions.
2) struct dpaa2_key_profile is used to describe each extract's offset
   within the rule and its size, which makes it easy to insert a new
   extract before the IP address extract (a minimal lookup sketch
   follows this list).
3) The IP address profile describes the IPv4/IPv6 address extracts
   located at the end of the rule.
4) The L4 ports profile describes the positions and offsets of the
   ports within the rule.
5) Once the extracts of a QoS/FS table are updated, go through all
   existing flows of this table to update their rule data.
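
As a minimal sketch of point 2 (the helper name is hypothetical; the
fields used are those of struct dpaa2_key_profile in the diff below):

/* Look up the byte offset of one protocol field inside the packed
 * rule, using the profile built by the common extract code.
 */
static int
example_profile_field_offset(const struct dpaa2_key_profile *p,
	enum net_prot prot, uint32_t field)
{
	int i;

	for (i = 0; i < p->num; i++) {
		if (p->prot_field[i].prot == prot &&
		    p->prot_field[i].key_field == field)
			return p->key_offset[i];
	}

	return -1; /* this field is not extracted by the table */
}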

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c |   27 +-
 drivers/net/dpaa2/dpaa2_ethdev.h |   90 +-
 drivers/net/dpaa2/dpaa2_flow.c   | 4839 ++++++++++++------------------
 3 files changed, 2030 insertions(+), 2926 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index f0b4843472..533effd72b 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2805,39 +2805,20 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	/* Init fields w.r.t. classification */
 	memset(&priv->extract.qos_key_extract, 0,
 		sizeof(struct dpaa2_key_extract));
-	priv->extract.qos_extract_param = (size_t)rte_malloc(NULL, 256, 64);
+	priv->extract.qos_extract_param = rte_malloc(NULL, 256, 64);
 	if (!priv->extract.qos_extract_param) {
-		DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow "
-			    " classification ", ret);
+		DPAA2_PMD_ERR("Memory alloc failed");
 		goto init_err;
 	}
-	priv->extract.qos_key_extract.key_info.ipv4_src_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv4_dst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv6_src_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv6_dst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
 
 	for (i = 0; i < MAX_TCS; i++) {
 		memset(&priv->extract.tc_key_extract[i], 0,
 			sizeof(struct dpaa2_key_extract));
-		priv->extract.tc_extract_param[i] =
-			(size_t)rte_malloc(NULL, 256, 64);
+		priv->extract.tc_extract_param[i] = rte_malloc(NULL, 256, 64);
 		if (!priv->extract.tc_extract_param[i]) {
-			DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow classification",
-				     ret);
+			DPAA2_PMD_ERR("Memory alloc failed");
 			goto init_err;
 		}
-		priv->extract.tc_key_extract[i].key_info.ipv4_src_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv4_dst_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv6_src_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv6_dst_offset =
-			IP_ADDRESS_OFFSET_INVALID;
 	}
 
 	ret = dpni_set_max_frame_length(dpni_dev, CMD_PRI_LOW, priv->token,
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 6625afaba3..ea1c1b5117 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -145,14 +145,6 @@ extern bool dpaa2_enable_ts[];
 extern uint64_t dpaa2_timestamp_rx_dynflag;
 extern int dpaa2_timestamp_dynfield_offset;
 
-#define DPAA2_QOS_TABLE_RECONFIGURE	1
-#define DPAA2_FS_TABLE_RECONFIGURE	2
-
-#define DPAA2_QOS_TABLE_IPADDR_EXTRACT 4
-#define DPAA2_FS_TABLE_IPADDR_EXTRACT 8
-
-#define DPAA2_FLOW_MAX_KEY_SIZE		16
-
 /* Externally defined */
 extern const struct rte_flow_ops dpaa2_flow_ops;
 
@@ -160,29 +152,85 @@ extern const struct rte_tm_ops dpaa2_tm_ops;
 
 extern bool dpaa2_enable_err_queue;
 
-#define IP_ADDRESS_OFFSET_INVALID (-1)
+struct ipv4_sd_addr_extract_rule {
+	uint32_t ipv4_src;
+	uint32_t ipv4_dst;
+};
+
+struct ipv6_sd_addr_extract_rule {
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+};
 
-struct dpaa2_key_info {
+struct ipv4_ds_addr_extract_rule {
+	uint32_t ipv4_dst;
+	uint32_t ipv4_src;
+};
+
+struct ipv6_ds_addr_extract_rule {
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+union ip_addr_extract_rule {
+	struct ipv4_sd_addr_extract_rule ipv4_sd_addr;
+	struct ipv6_sd_addr_extract_rule ipv6_sd_addr;
+	struct ipv4_ds_addr_extract_rule ipv4_ds_addr;
+	struct ipv6_ds_addr_extract_rule ipv6_ds_addr;
+};
+
+union ip_src_addr_extract_rule {
+	uint32_t ipv4_src;
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+union ip_dst_addr_extract_rule {
+	uint32_t ipv4_dst;
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+enum ip_addr_extract_type {
+	IP_NONE_ADDR_EXTRACT,
+	IP_SRC_EXTRACT,
+	IP_DST_EXTRACT,
+	IP_SRC_DST_EXTRACT,
+	IP_DST_SRC_EXTRACT
+};
+
+struct key_prot_field {
+	enum net_prot prot;
+	uint32_t key_field;
+};
+
+struct dpaa2_key_profile {
+	uint8_t num;
 	uint8_t key_offset[DPKG_MAX_NUM_OF_EXTRACTS];
 	uint8_t key_size[DPKG_MAX_NUM_OF_EXTRACTS];
-	/* Special for IP address. */
-	int ipv4_src_offset;
-	int ipv4_dst_offset;
-	int ipv6_src_offset;
-	int ipv6_dst_offset;
-	uint8_t key_total_size;
+
+	enum ip_addr_extract_type ip_addr_type;
+	uint8_t ip_addr_extract_pos;
+	uint8_t ip_addr_extract_off;
+
+	uint8_t l4_src_port_present;
+	uint8_t l4_src_port_pos;
+	uint8_t l4_src_port_offset;
+	uint8_t l4_dst_port_present;
+	uint8_t l4_dst_port_pos;
+	uint8_t l4_dst_port_offset;
+	struct key_prot_field prot_field[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint16_t key_max_size;
 };
 
 struct dpaa2_key_extract {
 	struct dpkg_profile_cfg dpkg;
-	struct dpaa2_key_info key_info;
+	struct dpaa2_key_profile key_profile;
 };
 
 struct extract_s {
 	struct dpaa2_key_extract qos_key_extract;
 	struct dpaa2_key_extract tc_key_extract[MAX_TCS];
-	uint64_t qos_extract_param;
-	uint64_t tc_extract_param[MAX_TCS];
+	uint8_t *qos_extract_param;
+	uint8_t *tc_extract_param[MAX_TCS];
 };
 
 struct dpaa2_dev_priv {
@@ -233,7 +281,8 @@ struct dpaa2_dev_priv {
 	/* Stores correction offset for one step timestamping */
 	uint16_t ptp_correction_offset;
 
-	LIST_HEAD(, rte_flow) flows; /**< Configured flow rule handles. */
+	struct dpaa2_dev_flow *curr; /**< Flow being configured. */
+	LIST_HEAD(, dpaa2_dev_flow) flows; /**< Configured flow rule handles. */
 	LIST_HEAD(nodes, dpaa2_tm_node) nodes;
 	LIST_HEAD(shaper_profiles, dpaa2_tm_shaper_profile) shaper_profiles;
 };
@@ -292,7 +341,6 @@ uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
 void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci, struct dpaa2_queue *dpaa2_q);
 void dpaa2_flow_clean(struct rte_eth_dev *dev);
 uint16_t dpaa2_dev_tx_conf(void *queue)  __rte_unused;
-int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev);
 
 int dpaa2_timesync_enable(struct rte_eth_dev *dev);
 int dpaa2_timesync_disable(struct rte_eth_dev *dev);
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index c30c5225c7..0522fdb026 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2022 NXP
  */
 
 #include <sys/queue.h>
@@ -27,41 +27,40 @@
  * MC/WRIOP are not able to identify
  * the l4 protocol with l4 ports.
  */
-int mc_l4_port_identification;
+static int mc_l4_port_identification;
 
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
-#define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
-
-enum flow_rule_ipaddr_type {
-	FLOW_NONE_IPADDR,
-	FLOW_IPV4_ADDR,
-	FLOW_IPV6_ADDR
+enum dpaa2_flow_entry_size {
+	DPAA2_FLOW_ENTRY_MIN_SIZE = (DPNI_MAX_KEY_SIZE / 2),
+	DPAA2_FLOW_ENTRY_MAX_SIZE = DPNI_MAX_KEY_SIZE
 };
 
-struct flow_rule_ipaddr {
-	enum flow_rule_ipaddr_type ipaddr_type;
-	int qos_ipsrc_offset;
-	int qos_ipdst_offset;
-	int fs_ipsrc_offset;
-	int fs_ipdst_offset;
+enum dpaa2_flow_dist_type {
+	DPAA2_FLOW_QOS_TYPE = 1 << 0,
+	DPAA2_FLOW_FS_TYPE = 1 << 1
 };
 
-struct rte_flow {
-	LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
+#define DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT	16
+#define DPAA2_FLOW_MAX_KEY_SIZE			16
+
+struct dpaa2_dev_flow {
+	LIST_ENTRY(dpaa2_dev_flow) next;
 	struct dpni_rule_cfg qos_rule;
+	uint8_t *qos_key_addr;
+	uint8_t *qos_mask_addr;
+	uint16_t qos_rule_size;
 	struct dpni_rule_cfg fs_rule;
 	uint8_t qos_real_key_size;
 	uint8_t fs_real_key_size;
+	uint8_t *fs_key_addr;
+	uint8_t *fs_mask_addr;
+	uint16_t fs_rule_size;
 	uint8_t tc_id; /** Traffic Class ID. */
 	uint8_t tc_index; /** index within this Traffic Class. */
-	enum rte_flow_action_type action;
-	/* Special for IP address to specify the offset
-	 * in key/mask.
-	 */
-	struct flow_rule_ipaddr ipaddr_rule;
-	struct dpni_fs_action_cfg action_cfg;
+	enum rte_flow_action_type action_type;
+	struct dpni_fs_action_cfg fs_action_cfg;
 };
 
 static const
@@ -94,9 +93,6 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
 	RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
 };
 
-/* Max of enum rte_flow_item_type + 1, for both IPv4 and IPv6*/
-#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1)
-
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
 	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
@@ -155,11 +151,12 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
-
 #endif
 
-static inline void dpaa2_prot_field_string(
-	enum net_prot prot, uint32_t field,
+#define DPAA2_FLOW_DUMP printf
+
+static inline void
+dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 	char *string)
 {
 	if (!dpaa2_flow_control_log)
@@ -234,60 +231,84 @@ static inline void dpaa2_prot_field_string(
 	}
 }
 
-static inline void dpaa2_flow_qos_table_extracts_log(
-	const struct dpaa2_dev_priv *priv, FILE *f)
+static inline void
+dpaa2_flow_qos_extracts_log(const struct dpaa2_dev_priv *priv)
 {
 	int idx;
 	char string[32];
+	const struct dpkg_profile_cfg *dpkg =
+		&priv->extract.qos_key_extract.dpkg;
+	const struct dpkg_extract *extract;
+	enum dpkg_extract_type type;
+	enum net_prot prot;
+	uint32_t field;
 
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "Setup QoS table: number of extracts: %d\r\n",
-			priv->extract.qos_key_extract.dpkg.num_extracts);
-	for (idx = 0; idx < priv->extract.qos_key_extract.dpkg.num_extracts;
-		idx++) {
-		dpaa2_prot_field_string(priv->extract.qos_key_extract.dpkg
-			.extracts[idx].extract.from_hdr.prot,
-			priv->extract.qos_key_extract.dpkg.extracts[idx]
-			.extract.from_hdr.field,
-			string);
-		fprintf(f, "%s", string);
-		if ((idx + 1) < priv->extract.qos_key_extract.dpkg.num_extracts)
-			fprintf(f, " / ");
-	}
-	fprintf(f, "\r\n");
+	DPAA2_FLOW_DUMP("QoS table: %d extracts\r\n",
+		dpkg->num_extracts);
+	for (idx = 0; idx < dpkg->num_extracts; idx++) {
+		extract = &dpkg->extracts[idx];
+		type = extract->type;
+		if (type == DPKG_EXTRACT_FROM_HDR) {
+			prot = extract->extract.from_hdr.prot;
+			field = extract->extract.from_hdr.field;
+			dpaa2_prot_field_string(prot, field,
+				string);
+		} else if (type == DPKG_EXTRACT_FROM_DATA) {
+			snprintf(string, sizeof(string),
+				"raw offset/len: %d/%d",
+				extract->extract.from_data.offset,
+				extract->extract.from_data.size);
+		}
+		DPAA2_FLOW_DUMP("%s", string);
+		if ((idx + 1) < dpkg->num_extracts)
+			DPAA2_FLOW_DUMP(" / ");
+	}
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_fs_table_extracts_log(
-	const struct dpaa2_dev_priv *priv, int tc_id, FILE *f)
+static inline void
+dpaa2_flow_fs_extracts_log(const struct dpaa2_dev_priv *priv,
+	int tc_id)
 {
 	int idx;
 	char string[32];
+	const struct dpkg_profile_cfg *dpkg =
+		&priv->extract.tc_key_extract[tc_id].dpkg;
+	const struct dpkg_extract *extract;
+	enum dpkg_extract_type type;
+	enum net_prot prot;
+	uint32_t field;
 
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "Setup FS table: number of extracts of TC[%d]: %d\r\n",
-			tc_id, priv->extract.tc_key_extract[tc_id]
-			.dpkg.num_extracts);
-	for (idx = 0; idx < priv->extract.tc_key_extract[tc_id]
-		.dpkg.num_extracts; idx++) {
-		dpaa2_prot_field_string(priv->extract.tc_key_extract[tc_id]
-			.dpkg.extracts[idx].extract.from_hdr.prot,
-			priv->extract.tc_key_extract[tc_id].dpkg.extracts[idx]
-			.extract.from_hdr.field,
-			string);
-		fprintf(f, "%s", string);
-		if ((idx + 1) < priv->extract.tc_key_extract[tc_id]
-			.dpkg.num_extracts)
-			fprintf(f, " / ");
-	}
-	fprintf(f, "\r\n");
+	DPAA2_FLOW_DUMP("FS table: %d extracts in TC[%d]\r\n",
+		dpkg->num_extracts, tc_id);
+	for (idx = 0; idx < dpkg->num_extracts; idx++) {
+		extract = &dpkg->extracts[idx];
+		type = extract->type;
+		if (type == DPKG_EXTRACT_FROM_HDR) {
+			prot = extract->extract.from_hdr.prot;
+			field = extract->extract.from_hdr.field;
+			dpaa2_prot_field_string(prot, field,
+				string);
+		} else if (type == DPKG_EXTRACT_FROM_DATA) {
+			snprintf(string, sizeof(string),
+				"raw offset/len: %d/%d",
+				extract->extract.from_data.offset,
+				extract->extract.from_data.size);
+		}
+		DPAA2_FLOW_DUMP("%s", string);
+		if ((idx + 1) < dpkg->num_extracts)
+			DPAA2_FLOW_DUMP(" / ");
+	}
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_qos_entry_log(
-	const char *log_info, const struct rte_flow *flow, int qos_index, FILE *f)
+static inline void
+dpaa2_flow_qos_entry_log(const char *log_info,
+	const struct dpaa2_dev_flow *flow, int qos_index)
 {
 	int idx;
 	uint8_t *key, *mask;
@@ -295,27 +316,34 @@ static inline void dpaa2_flow_qos_entry_log(
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "\r\n%s QoS entry[%d] for TC[%d], extracts size is %d\r\n",
-		log_info, qos_index, flow->tc_id, flow->qos_real_key_size);
-
-	key = (uint8_t *)(size_t)flow->qos_rule.key_iova;
-	mask = (uint8_t *)(size_t)flow->qos_rule.mask_iova;
+	if (qos_index >= 0) {
+		DPAA2_FLOW_DUMP("%s QoS entry[%d](size %d/%d) for TC[%d]\r\n",
+			log_info, qos_index, flow->qos_rule_size,
+			flow->qos_rule.key_size,
+			flow->tc_id);
+	} else {
+		DPAA2_FLOW_DUMP("%s QoS entry(size %d/%d) for TC[%d]\r\n",
+			log_info, flow->qos_rule_size,
+			flow->qos_rule.key_size,
+			flow->tc_id);
+	}
 
-	fprintf(f, "key:\r\n");
-	for (idx = 0; idx < flow->qos_real_key_size; idx++)
-		fprintf(f, "%02x ", key[idx]);
+	key = flow->qos_key_addr;
+	mask = flow->qos_mask_addr;
 
-	fprintf(f, "\r\nmask:\r\n");
-	for (idx = 0; idx < flow->qos_real_key_size; idx++)
-		fprintf(f, "%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("key:\r\n");
+	for (idx = 0; idx < flow->qos_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", key[idx]);
 
-	fprintf(f, "\r\n%s QoS ipsrc: %d, ipdst: %d\r\n", log_info,
-		flow->ipaddr_rule.qos_ipsrc_offset,
-		flow->ipaddr_rule.qos_ipdst_offset);
+	DPAA2_FLOW_DUMP("\r\nmask:\r\n");
+	for (idx = 0; idx < flow->qos_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_fs_entry_log(
-	const char *log_info, const struct rte_flow *flow, FILE *f)
+static inline void
+dpaa2_flow_fs_entry_log(const char *log_info,
+	const struct dpaa2_dev_flow *flow)
 {
 	int idx;
 	uint8_t *key, *mask;
@@ -323,187 +351,432 @@ static inline void dpaa2_flow_fs_entry_log(
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "\r\n%s FS/TC entry[%d] of TC[%d], extracts size is %d\r\n",
-		log_info, flow->tc_index, flow->tc_id, flow->fs_real_key_size);
+	DPAA2_FLOW_DUMP("%s FS/TC entry[%d](size %d/%d) of TC[%d]\r\n",
+		log_info, flow->tc_index,
+		flow->fs_rule_size, flow->fs_rule.key_size,
+		flow->tc_id);
+
+	key = flow->fs_key_addr;
+	mask = flow->fs_mask_addr;
+
+	DPAA2_FLOW_DUMP("key:\r\n");
+	for (idx = 0; idx < flow->fs_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", key[idx]);
+
+	DPAA2_FLOW_DUMP("\r\nmask:\r\n");
+	for (idx = 0; idx < flow->fs_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("\r\n");
+}
 
-	key = (uint8_t *)(size_t)flow->fs_rule.key_iova;
-	mask = (uint8_t *)(size_t)flow->fs_rule.mask_iova;
+static int
+dpaa2_flow_ip_address_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_IPV4 &&
+		(field == NH_FLD_IPV4_SRC_IP ||
+		field == NH_FLD_IPV4_DST_IP))
+		return true;
+	else if (prot == NET_PROT_IPV6 &&
+		(field == NH_FLD_IPV6_SRC_IP ||
+		field == NH_FLD_IPV6_DST_IP))
+		return true;
+	else if (prot == NET_PROT_IP &&
+		(field == NH_FLD_IP_SRC ||
+		field == NH_FLD_IP_DST))
+		return true;
 
-	fprintf(f, "key:\r\n");
-	for (idx = 0; idx < flow->fs_real_key_size; idx++)
-		fprintf(f, "%02x ", key[idx]);
+	return false;
+}
 
-	fprintf(f, "\r\nmask:\r\n");
-	for (idx = 0; idx < flow->fs_real_key_size; idx++)
-		fprintf(f, "%02x ", mask[idx]);
+static int
+dpaa2_flow_l4_src_port_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_TCP &&
+		field == NH_FLD_TCP_PORT_SRC)
+		return true;
+	else if (prot == NET_PROT_UDP &&
+		field == NH_FLD_UDP_PORT_SRC)
+		return true;
+	else if (prot == NET_PROT_SCTP &&
+		field == NH_FLD_SCTP_PORT_SRC)
+		return true;
+
+	return false;
+}
 
-	fprintf(f, "\r\n%s FS ipsrc: %d, ipdst: %d\r\n", log_info,
-		flow->ipaddr_rule.fs_ipsrc_offset,
-		flow->ipaddr_rule.fs_ipdst_offset);
+static int
+dpaa2_flow_l4_dst_port_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_TCP &&
+		field == NH_FLD_TCP_PORT_DST)
+		return true;
+	else if (prot == NET_PROT_UDP &&
+		field == NH_FLD_UDP_PORT_DST)
+		return true;
+	else if (prot == NET_PROT_SCTP &&
+		field == NH_FLD_SCTP_PORT_DST)
+		return true;
+
+	return false;
 }
 
-static inline void dpaa2_flow_extract_key_set(
-	struct dpaa2_key_info *key_info, int index, uint8_t size)
+static int
+dpaa2_flow_add_qos_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow)
 {
-	key_info->key_size[index] = size;
-	if (index > 0) {
-		key_info->key_offset[index] =
-			key_info->key_offset[index - 1] +
-			key_info->key_size[index - 1];
-	} else {
-		key_info->key_offset[index] = 0;
+	uint16_t qos_index;
+	int ret;
+	struct fsl_mc_io *dpni = priv->hw;
+
+	if (priv->num_rx_tc <= 1 &&
+		flow->action_type != RTE_FLOW_ACTION_TYPE_RSS) {
+		DPAA2_PMD_WARN("No QoS Table for FS");
+		return -EINVAL;
 	}
-	key_info->key_total_size += size;
+
+	/* A QoS entry is effective only when multiple TCs are in use. */
+	qos_index = flow->tc_id * priv->fs_entries + flow->tc_index;
+	if (qos_index >= priv->qos_entries) {
+		DPAA2_PMD_ERR("QoS table full(%d >= %d)",
+			qos_index, priv->qos_entries);
+		return -EINVAL;
+	}
+
+	dpaa2_flow_qos_entry_log("Start add", flow, qos_index);
+
+	ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
+			priv->token, &flow->qos_rule,
+			flow->tc_id, qos_index,
+			0, 0);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("Add entry(%d) to table(%d) failed",
+			qos_index, flow->tc_id);
+		return ret;
+	}
+
+	return 0;
 }
 
-static int dpaa2_flow_extract_add(
-	struct dpaa2_key_extract *key_extract,
-	enum net_prot prot,
-	uint32_t field, uint8_t field_size)
+static int
+dpaa2_flow_add_fs_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow)
 {
-	int index, ip_src = -1, ip_dst = -1;
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
+	int ret;
+	struct fsl_mc_io *dpni = priv->hw;
 
-	if (dpkg->num_extracts >=
-		DPKG_MAX_NUM_OF_EXTRACTS) {
-		DPAA2_PMD_WARN("Number of extracts overflows");
-		return -1;
+	if (flow->tc_index >= priv->fs_entries) {
+		DPAA2_PMD_ERR("FS table full(%d >= %d)",
+			flow->tc_index, priv->fs_entries);
+		return -EINVAL;
 	}
-	/* Before reorder, the IP SRC and IP DST are already last
-	 * extract(s).
-	 */
-	for (index = 0; index < dpkg->num_extracts; index++) {
-		if (dpkg->extracts[index].extract.from_hdr.prot ==
-			NET_PROT_IP) {
-			if (dpkg->extracts[index].extract.from_hdr.field ==
-				NH_FLD_IP_SRC) {
-				ip_src = index;
-			}
-			if (dpkg->extracts[index].extract.from_hdr.field ==
-				NH_FLD_IP_DST) {
-				ip_dst = index;
+
+	dpaa2_flow_fs_entry_log("Start add", flow);
+
+	ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
+			priv->token, flow->tc_id,
+			flow->tc_index, &flow->fs_rule,
+			&flow->fs_action_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("Add rule(%d) to FS table(%d) failed",
+			flow->tc_index, flow->tc_id);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_flow_rule_insert_hole(struct dpaa2_dev_flow *flow,
+	int offset, int size,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int end;
+
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		end = flow->qos_rule_size;
+		if (end > offset) {
+			memmove(flow->qos_key_addr + offset + size,
+					flow->qos_key_addr + offset,
+					end - offset);
+			memset(flow->qos_key_addr + offset,
+					0, size);
+
+			memmove(flow->qos_mask_addr + offset + size,
+					flow->qos_mask_addr + offset,
+					end - offset);
+			memset(flow->qos_mask_addr + offset,
+					0, size);
+		}
+		flow->qos_rule_size += size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		end = flow->fs_rule_size;
+		if (end > offset) {
+			memmove(flow->fs_key_addr + offset + size,
+					flow->fs_key_addr + offset,
+					end - offset);
+			memset(flow->fs_key_addr + offset,
+					0, size);
+
+			memmove(flow->fs_mask_addr + offset + size,
+					flow->fs_mask_addr + offset,
+					end - offset);
+			memset(flow->fs_mask_addr + offset,
+					0, size);
+		}
+		flow->fs_rule_size += size;
+	}
+
+	return 0;
+}
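
A minimal standalone sketch (hypothetical helper name, plain buffers) of
the byte shuffle each branch above performs on a key or mask buffer:

	#include <stdint.h>
	#include <string.h>

	/* Shift bytes in [offset, end) right by 'size' and zero the
	 * freed window so the caller can write the new field into it.
	 */
	static void
	sketch_insert_hole(uint8_t *buf, int end, int offset, int size)
	{
		memmove(buf + offset + size, buf + offset, end - offset);
		memset(buf + offset, 0, size);
	}
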
+
+static int
+dpaa2_flow_rule_add_all(struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type,
+	uint16_t entry_size, uint8_t tc_id)
+{
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	int ret;
+
+	while (curr) {
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			if (priv->num_rx_tc > 1 ||
+				curr->action_type ==
+				RTE_FLOW_ACTION_TYPE_RSS) {
+				curr->qos_rule.key_size = entry_size;
+				ret = dpaa2_flow_add_qos_rule(priv, curr);
+				if (ret)
+					return ret;
 			}
 		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE &&
+			curr->tc_id == tc_id) {
+			curr->fs_rule.key_size = entry_size;
+			ret = dpaa2_flow_add_fs_rule(priv, curr);
+			if (ret)
+				return ret;
+		}
+		curr = LIST_NEXT(curr, next);
 	}
 
-	if (ip_src >= 0)
-		RTE_ASSERT((ip_src + 2) >= dpkg->num_extracts);
+	return 0;
+}
 
-	if (ip_dst >= 0)
-		RTE_ASSERT((ip_dst + 2) >= dpkg->num_extracts);
+static int
+dpaa2_flow_qos_rule_insert_hole(struct dpaa2_dev_priv *priv,
+	int offset, int size)
+{
+	struct dpaa2_dev_flow *curr;
+	int ret;
 
-	if (prot == NET_PROT_IP &&
-		(field == NH_FLD_IP_SRC ||
-		field == NH_FLD_IP_DST)) {
-		index = dpkg->num_extracts;
+	curr = priv->curr;
+	if (!curr) {
+		DPAA2_PMD_ERR("Current qos flow insert hole failed.");
+		return -EINVAL;
 	} else {
-		if (ip_src >= 0 && ip_dst >= 0)
-			index = dpkg->num_extracts - 2;
-		else if (ip_src >= 0 || ip_dst >= 0)
-			index = dpkg->num_extracts - 1;
-		else
-			index = dpkg->num_extracts;
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	dpkg->extracts[index].type =	DPKG_EXTRACT_FROM_HDR;
-	dpkg->extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-	dpkg->extracts[index].extract.from_hdr.prot = prot;
-	dpkg->extracts[index].extract.from_hdr.field = field;
-	if (prot == NET_PROT_IP &&
-		(field == NH_FLD_IP_SRC ||
-		field == NH_FLD_IP_DST)) {
-		dpaa2_flow_extract_key_set(key_info, index, 0);
+	curr = LIST_FIRST(&priv->flows);
+	while (curr) {
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		curr = LIST_NEXT(curr, next);
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_flow_fs_rule_insert_hole(struct dpaa2_dev_priv *priv,
+	int offset, int size, int tc_id)
+{
+	struct dpaa2_dev_flow *curr;
+	int ret;
+
+	curr = priv->curr;
+	if (!curr || curr->tc_id != tc_id) {
+		DPAA2_PMD_ERR("Current flow insert hole failed.");
+		return -EINVAL;
 	} else {
-		dpaa2_flow_extract_key_set(key_info, index, field_size);
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	if (prot == NET_PROT_IP) {
-		if (field == NH_FLD_IP_SRC) {
-			if (key_info->ipv4_dst_offset >= 0) {
-				key_info->ipv4_src_offset =
-					key_info->ipv4_dst_offset +
-					NH_FLD_IPV4_ADDR_SIZE;
-			} else {
-				key_info->ipv4_src_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-			if (key_info->ipv6_dst_offset >= 0) {
-				key_info->ipv6_src_offset =
-					key_info->ipv6_dst_offset +
-					NH_FLD_IPV6_ADDR_SIZE;
-			} else {
-				key_info->ipv6_src_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-		} else if (field == NH_FLD_IP_DST) {
-			if (key_info->ipv4_src_offset >= 0) {
-				key_info->ipv4_dst_offset =
-					key_info->ipv4_src_offset +
-					NH_FLD_IPV4_ADDR_SIZE;
-			} else {
-				key_info->ipv4_dst_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-			if (key_info->ipv6_src_offset >= 0) {
-				key_info->ipv6_dst_offset =
-					key_info->ipv6_src_offset +
-					NH_FLD_IPV6_ADDR_SIZE;
-			} else {
-				key_info->ipv6_dst_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
+	curr = LIST_FIRST(&priv->flows);
+
+	while (curr) {
+		if (curr->tc_id != tc_id) {
+			curr = LIST_NEXT(curr, next);
+			continue;
 		}
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		curr = LIST_NEXT(curr, next);
 	}
 
-	if (index == dpkg->num_extracts) {
-		dpkg->num_extracts++;
-		return 0;
+	return 0;
+}
+
+/* Insert new extracts ahead of any IP address extract.
+ * Current MC/WRIOP supports only the generic IP extract, whose address
+ * size is not fixed (4 bytes for IPv4, 16 for IPv6), so IP address
+ * extracts must stay at the end of the extract list; otherwise the
+ * offsets of the extracts following them could not be identified.
+ */
+static int
+dpaa2_flow_key_profile_advance(enum net_prot prot,
+	uint32_t field, uint8_t field_size,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s only for none IP address extract",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off += field_size;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, field_size);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, field_size, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = field_size;
+	key_profile->prot_field[pos].prot = prot;
+	key_profile->prot_field[pos].key_field = field;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	if (dpaa2_flow_l4_src_port_extract(prot, field)) {
+		key_profile->l4_src_port_present = 1;
+		key_profile->l4_src_port_pos = pos;
+		key_profile->l4_src_port_offset =
+			key_profile->key_offset[pos];
+	} else if (dpaa2_flow_l4_dst_port_extract(prot, field)) {
+		key_profile->l4_dst_port_present = 1;
+		key_profile->l4_dst_port_pos = pos;
+		key_profile->l4_dst_port_offset =
+			key_profile->key_offset[pos];
+	}
+	key_profile->key_max_size += field_size;
+
+	return pos;
+}
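
A worked layout example for the advance above (hypothetical sizes),
assuming an IP address extract already sits at the tail when a 2-byte
field is added:

	/* before: | eth_type(2) | ip_addr(N) |    ip_addr_extract_off = 2
	 * after:  | eth_type(2) | new(2) | ip_addr(N) |  off = 4, pos = 1
	 *
	 * The insert-hole pass shifts the address bytes of every stored
	 * rule right by field_size, so existing keys stay aligned with
	 * the updated profile.
	 */
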
+
+static int
+dpaa2_flow_extract_add_hdr(enum net_prot prot,
+	uint32_t field, uint8_t field_size,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s only for none IP address extract",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
 	}
 
-	if (ip_src >= 0) {
-		ip_src++;
-		dpkg->extracts[ip_src].type =
-			DPKG_EXTRACT_FROM_HDR;
-		dpkg->extracts[ip_src].extract.from_hdr.type =
-			DPKG_FULL_FIELD;
-		dpkg->extracts[ip_src].extract.from_hdr.prot =
-			NET_PROT_IP;
-		dpkg->extracts[ip_src].extract.from_hdr.field =
-			NH_FLD_IP_SRC;
-		dpaa2_flow_extract_key_set(key_info, ip_src, 0);
-		key_info->ipv4_src_offset += field_size;
-		key_info->ipv6_src_offset += field_size;
-	}
-	if (ip_dst >= 0) {
-		ip_dst++;
-		dpkg->extracts[ip_dst].type =
-			DPKG_EXTRACT_FROM_HDR;
-		dpkg->extracts[ip_dst].extract.from_hdr.type =
-			DPKG_FULL_FIELD;
-		dpkg->extracts[ip_dst].extract.from_hdr.prot =
-			NET_PROT_IP;
-		dpkg->extracts[ip_dst].extract.from_hdr.field =
-			NH_FLD_IP_DST;
-		dpaa2_flow_extract_key_set(key_info, ip_dst, 0);
-		key_info->ipv4_dst_offset += field_size;
-		key_info->ipv6_dst_offset += field_size;
+	pos = dpaa2_flow_key_profile_advance(prot,
+			field, field_size, priv,
+			dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last position: IP address extract(s) follow. */
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
 	}
 
+	extracts[pos].type = DPKG_EXTRACT_FROM_HDR;
+	extracts[pos].extract.from_hdr.prot = prot;
+	extracts[pos].extract.from_hdr.type = DPKG_FULL_FIELD;
+	extracts[pos].extract.from_hdr.field = field;
+
 	dpkg->num_extracts++;
 
 	return 0;
 }
 
-static int dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
-				      int size)
+static int
+dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
+	int size)
 {
 	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
+	struct dpaa2_key_profile *key_info = &key_extract->key_profile;
 	int last_extract_size, index;
 
 	if (dpkg->num_extracts != 0 && dpkg->extracts[0].type !=
@@ -531,83 +804,58 @@ static int dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
 			DPAA2_FLOW_MAX_KEY_SIZE * index;
 	}
 
-	key_info->key_total_size = size;
+	key_info->key_max_size = size;
 	return 0;
 }
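
A worked example of the raw-key chunking above (assuming the 16-byte
DPAA2_FLOW_MAX_KEY_SIZE defined earlier): a 40-byte raw extract splits
into three chunks at fixed 16-byte strides.

	/* size = 40:
	 * extracts[0]: offset 0,  len 16
	 * extracts[1]: offset 16, len 16
	 * extracts[2]: offset 32, len 8   (last_extract_size)
	 * key_max_size = 40
	 */
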
 
-/* Protocol discrimination.
- * Discriminate IPv4/IPv6/vLan by Eth type.
- * Discriminate UDP/TCP/ICMP by next proto of IP.
- */
 static inline int
-dpaa2_flow_proto_discrimination_extract(
-	struct dpaa2_key_extract *key_extract,
-	enum rte_flow_item_type type)
+dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t key_field)
 {
-	if (type == RTE_FLOW_ITEM_TYPE_ETH) {
-		return dpaa2_flow_extract_add(
-				key_extract, NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				sizeof(rte_be16_t));
-	} else if (type == (enum rte_flow_item_type)
-		DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
-		return dpaa2_flow_extract_add(
-				key_extract, NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				NH_FLD_IP_PROTO_SIZE);
-	}
-
-	return -1;
-}
+	int pos;
+	struct key_prot_field *prot_field;
 
-static inline int dpaa2_flow_extract_search(
-	struct dpkg_profile_cfg *dpkg,
-	enum net_prot prot, uint32_t field)
-{
-	int i;
+	if (dpaa2_flow_ip_address_extract(prot, key_field)) {
+		DPAA2_PMD_ERR("%s only for none IP address extract",
+			__func__);
+		return -EINVAL;
+	}
 
-	for (i = 0; i < dpkg->num_extracts; i++) {
-		if (dpkg->extracts[i].extract.from_hdr.prot == prot &&
-			dpkg->extracts[i].extract.from_hdr.field == field) {
-			return i;
+	prot_field = key_profile->prot_field;
+	for (pos = 0; pos < key_profile->num; pos++) {
+		if (prot_field[pos].prot == prot &&
+			prot_field[pos].key_field == key_field) {
+			return pos;
 		}
 	}
 
-	return -1;
+	if (dpaa2_flow_l4_src_port_extract(prot, key_field)) {
+		if (key_profile->l4_src_port_present)
+			return key_profile->l4_src_port_pos;
+	} else if (dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
+		if (key_profile->l4_dst_port_present)
+			return key_profile->l4_dst_port_pos;
+	}
+
+	return -ENXIO;
 }
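
The fall-through above lets TCP, UDP and SCTP share a single L4 port
extract. A usage sketch (values hypothetical): if the TCP source port
was extracted first, a later lookup for the UDP source port resolves to
the same key position:

	pos = dpaa2_flow_extract_search(key_profile,
			NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
	/* pos == key_profile->l4_src_port_pos if any src port extract
	 * is already present; -ENXIO only when none exists.
	 */
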
 
-static inline int dpaa2_flow_extract_key_offset(
-	struct dpaa2_key_extract *key_extract,
-	enum net_prot prot, uint32_t field)
+static inline int
+dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t key_field)
 {
 	int i;
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
 
-	if (prot == NET_PROT_IPV4 ||
-		prot == NET_PROT_IPV6)
-		i = dpaa2_flow_extract_search(dpkg, NET_PROT_IP, field);
+	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
+
+	if (i >= 0)
+		return key_profile->key_offset[i];
 	else
-		i = dpaa2_flow_extract_search(dpkg, prot, field);
-
-	if (i >= 0) {
-		if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_SRC)
-			return key_info->ipv4_src_offset;
-		else if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_DST)
-			return key_info->ipv4_dst_offset;
-		else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_SRC)
-			return key_info->ipv6_src_offset;
-		else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_DST)
-			return key_info->ipv6_dst_offset;
-		else
-			return key_info->key_offset[i];
-	} else {
-		return -1;
-	}
+		return i;
 }
 
-struct proto_discrimination {
-	enum rte_flow_item_type type;
+struct prev_proto_field_id {
+	enum net_prot prot;
 	union {
 		rte_be16_t eth_type;
 		uint8_t ip_proto;
@@ -615,103 +863,134 @@ struct proto_discrimination {
 };
 
 static int
-dpaa2_flow_proto_discrimination_rule(
-	struct dpaa2_dev_priv *priv, struct rte_flow *flow,
-	struct proto_discrimination proto, int group)
+dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct prev_proto_field_id *prev_proto,
+	int group,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	enum net_prot prot;
-	uint32_t field;
 	int offset;
-	size_t key_iova;
-	size_t mask_iova;
+	uint8_t *key_addr;
+	uint8_t *mask_addr;
+	uint32_t field = 0;
 	rte_be16_t eth_type;
 	uint8_t ip_proto;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
 
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		prot = NET_PROT_ETH;
+	if (prev_proto->prot == NET_PROT_ETH) {
 		field = NH_FLD_ETH_TYPE;
-	} else if (proto.type == DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
-		prot = NET_PROT_IP;
+	} else if (prev_proto->prot == NET_PROT_IP) {
 		field = NH_FLD_IP_PROTO;
 	} else {
-		DPAA2_PMD_ERR(
-			"Only Eth and IP support to discriminate next proto.");
-		return -1;
-	}
-
-	offset = dpaa2_flow_extract_key_offset(&priv->extract.qos_key_extract,
-			prot, field);
-	if (offset < 0) {
-		DPAA2_PMD_ERR("QoS prot %d field %d extract failed",
-				prot, field);
-		return -1;
-	}
-	key_iova = flow->qos_rule.key_iova + offset;
-	mask_iova = flow->qos_rule.mask_iova + offset;
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		eth_type = proto.eth_type;
-		memcpy((void *)key_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-		eth_type = 0xffff;
-		memcpy((void *)mask_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-	} else {
-		ip_proto = proto.ip_proto;
-		memcpy((void *)key_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-		ip_proto = 0xff;
-		memcpy((void *)mask_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-	}
-
-	offset = dpaa2_flow_extract_key_offset(
-			&priv->extract.tc_key_extract[group],
-			prot, field);
-	if (offset < 0) {
-		DPAA2_PMD_ERR("FS prot %d field %d extract failed",
-				prot, field);
-		return -1;
+		DPAA2_PMD_ERR("Prev proto(%d) not support!",
+			prev_proto->prot);
+		return -EINVAL;
 	}
-	key_iova = flow->fs_rule.key_iova + offset;
-	mask_iova = flow->fs_rule.mask_iova + offset;
 
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		eth_type = proto.eth_type;
-		memcpy((void *)key_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-		eth_type = 0xffff;
-		memcpy((void *)mask_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-	} else {
-		ip_proto = proto.ip_proto;
-		memcpy((void *)key_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-		ip_proto = 0xff;
-		memcpy((void *)mask_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		key_extract = &priv->extract.qos_key_extract;
+		key_profile = &key_extract->key_profile;
+
+		offset = dpaa2_flow_extract_key_offset(key_profile,
+				prev_proto->prot, field);
+		if (offset < 0) {
+			DPAA2_PMD_ERR("%s QoS key extract failed", __func__);
+			return -EINVAL;
+		}
+		key_addr = flow->qos_key_addr + offset;
+		mask_addr = flow->qos_mask_addr + offset;
+		if (prev_proto->prot == NET_PROT_ETH) {
+			eth_type = prev_proto->eth_type;
+			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
+			eth_type = 0xffff;
+			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
+			flow->qos_rule_size += sizeof(rte_be16_t);
+		} else if (prev_proto->prot == NET_PROT_IP) {
+			ip_proto = prev_proto->ip_proto;
+			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
+			ip_proto = 0xff;
+			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
+			flow->qos_rule_size += sizeof(uint8_t);
+		} else {
+			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
+				prev_proto->prot);
+			return -EINVAL;
+		}
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		key_extract = &priv->extract.tc_key_extract[group];
+		key_profile = &key_extract->key_profile;
+
+		offset = dpaa2_flow_extract_key_offset(key_profile,
+				prev_proto->prot, field);
+		if (offset < 0) {
+			DPAA2_PMD_ERR("%s TC[%d] key extract failed",
+				__func__, group);
+			return -EINVAL;
+		}
+		key_addr = flow->fs_key_addr + offset;
+		mask_addr = flow->fs_mask_addr + offset;
+
+		if (prev_proto->prot == NET_PROT_ETH) {
+			eth_type = prev_proto->eth_type;
+			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
+			eth_type = 0xffff;
+			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
+			flow->fs_rule_size += sizeof(rte_be16_t);
+		} else if (prev_proto->prot == NET_PROT_IP) {
+			ip_proto = prev_proto->ip_proto;
+			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
+			ip_proto = 0xff;
+			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
+			flow->fs_rule_size += sizeof(uint8_t);
+		} else {
+			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
+				prev_proto->prot);
+			return -EINVAL;
+		}
 	}
 
 	return 0;
 }
 
 static inline int
-dpaa2_flow_rule_data_set(
-	struct dpaa2_key_extract *key_extract,
-	struct dpni_rule_cfg *rule,
-	enum net_prot prot, uint32_t field,
-	const void *key, const void *mask, int size)
+dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t field, int size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int offset = dpaa2_flow_extract_key_offset(key_extract,
-				prot, field);
+	int offset;
 
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s only for none IP address extract",
+			__func__);
+		return -EINVAL;
+	}
+
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			prot, field);
 	if (offset < 0) {
-		DPAA2_PMD_ERR("prot %d, field %d extract failed",
+		DPAA2_PMD_ERR("P(%d)/F(%d) does not exist!",
 			prot, field);
-		return -1;
+		return -EINVAL;
 	}
 
-	memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
-	memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, size);
+		memcpy((flow->qos_mask_addr + offset), mask, size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size = offset + size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, size);
+		memcpy((flow->fs_mask_addr + offset), mask, size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size = offset + size;
+	}
 
 	return 0;
 }
@@ -728,145 +1007,13 @@ dpaa2_flow_rule_data_set_raw(struct dpni_rule_cfg *rule,
 	return 0;
 }
 
-static inline int
-_dpaa2_flow_rule_move_ipaddr_tail(
-	struct dpaa2_key_extract *key_extract,
-	struct dpni_rule_cfg *rule, int src_offset,
-	uint32_t field, bool ipv4)
-{
-	size_t key_src;
-	size_t mask_src;
-	size_t key_dst;
-	size_t mask_dst;
-	int dst_offset, len;
-	enum net_prot prot;
-	char tmp[NH_FLD_IPV6_ADDR_SIZE];
-
-	if (field != NH_FLD_IP_SRC &&
-		field != NH_FLD_IP_DST) {
-		DPAA2_PMD_ERR("Field of IP addr reorder must be IP SRC/DST");
-		return -1;
-	}
-	if (ipv4)
-		prot = NET_PROT_IPV4;
-	else
-		prot = NET_PROT_IPV6;
-	dst_offset = dpaa2_flow_extract_key_offset(key_extract,
-				prot, field);
-	if (dst_offset < 0) {
-		DPAA2_PMD_ERR("Field %d reorder extract failed", field);
-		return -1;
-	}
-	key_src = rule->key_iova + src_offset;
-	mask_src = rule->mask_iova + src_offset;
-	key_dst = rule->key_iova + dst_offset;
-	mask_dst = rule->mask_iova + dst_offset;
-	if (ipv4)
-		len = sizeof(rte_be32_t);
-	else
-		len = NH_FLD_IPV6_ADDR_SIZE;
-
-	memcpy(tmp, (char *)key_src, len);
-	memset((char *)key_src, 0, len);
-	memcpy((char *)key_dst, tmp, len);
-
-	memcpy(tmp, (char *)mask_src, len);
-	memset((char *)mask_src, 0, len);
-	memcpy((char *)mask_dst, tmp, len);
-
-	return 0;
-}
-
-static inline int
-dpaa2_flow_rule_move_ipaddr_tail(
-	struct rte_flow *flow, struct dpaa2_dev_priv *priv,
-	int fs_group)
+static int
+dpaa2_flow_extract_support(const uint8_t *mask_src,
+	enum rte_flow_item_type type)
 {
-	int ret;
-	enum net_prot prot;
-
-	if (flow->ipaddr_rule.ipaddr_type == FLOW_NONE_IPADDR)
-		return 0;
-
-	if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR)
-		prot = NET_PROT_IPV4;
-	else
-		prot = NET_PROT_IPV6;
-
-	if (flow->ipaddr_rule.qos_ipsrc_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				flow->ipaddr_rule.qos_ipsrc_offset,
-				NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS src address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_SRC);
-	}
-
-	if (flow->ipaddr_rule.qos_ipdst_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				flow->ipaddr_rule.qos_ipdst_offset,
-				NH_FLD_IP_DST, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS dst address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_DST);
-	}
-
-	if (flow->ipaddr_rule.fs_ipsrc_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.tc_key_extract[fs_group],
-				&flow->fs_rule,
-				flow->ipaddr_rule.fs_ipsrc_offset,
-				NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("FS src address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.fs_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[fs_group],
-				prot, NH_FLD_IP_SRC);
-	}
-	if (flow->ipaddr_rule.fs_ipdst_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.tc_key_extract[fs_group],
-				&flow->fs_rule,
-				flow->ipaddr_rule.fs_ipdst_offset,
-				NH_FLD_IP_DST, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("FS dst address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.fs_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[fs_group],
-				prot, NH_FLD_IP_DST);
-	}
-
-	return 0;
-}
-
-static int
-dpaa2_flow_extract_support(
-	const uint8_t *mask_src,
-	enum rte_flow_item_type type)
-{
-	char mask[64];
-	int i, size = 0;
-	const char *mask_support = 0;
+	char mask[64];
+	int i, size = 0;
+	const char *mask_support = 0;
 
 	switch (type) {
 	case RTE_FLOW_ITEM_TYPE_ETH:
@@ -906,7 +1053,7 @@ dpaa2_flow_extract_support(
 		size = sizeof(struct rte_flow_item_gre);
 		break;
 	default:
-		return -1;
+		return -EINVAL;
 	}
 
 	memcpy(mask, mask_support, size);
@@ -921,491 +1068,444 @@ dpaa2_flow_extract_support(
 }
 
 static int
-dpaa2_configure_flow_eth(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct prev_proto_field_id *prev_prot,
+	enum dpaa2_flow_dist_type dist_type,
+	int group, int *recfg)
 {
-	int index, ret;
-	int local_cfg = 0;
-	uint32_t group;
-	const struct rte_flow_item_eth *spec, *mask;
-
-	/* TODO: Currently upper bound of range parameter is not implemented */
-	const struct rte_flow_item_eth *last __rte_unused;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
-
-	group = attr->group;
-
-	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_eth *)pattern->spec;
-	last    = (const struct rte_flow_item_eth *)pattern->last;
-	mask    = (const struct rte_flow_item_eth *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_eth_mask);
-	if (!spec) {
-		/* Don't care any field of eth header,
-		 * only care eth protocol.
-		 */
-		DPAA2_PMD_WARN("No pattern spec for Eth flow, just skip");
-		return 0;
-	}
-
-	/* Get traffic class index and flow id to be configured */
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
-
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
-		DPAA2_PMD_WARN("Extract field(s) of ethernet not support.");
-
-		return -1;
-	}
-
-	if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_SA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_SA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_SA failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_SA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_SA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_SA failed.");
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	int ret, index, local_cfg = 0, size = 0;
+	struct dpaa2_key_extract *extract;
+	struct dpaa2_key_profile *key_profile;
+	enum net_prot prot = prev_prot->prot;
+	uint32_t key_field = 0;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH_SA rule set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_SA,
-				&spec->hdr.src_addr.addr_bytes,
-				&mask->hdr.src_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_SA,
-				&spec->hdr.src_addr.addr_bytes,
-				&mask->hdr.src_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
-			return -1;
-		}
+	if (prot == NET_PROT_ETH) {
+		key_field = NH_FLD_ETH_TYPE;
+		size = sizeof(rte_be16_t);
+	} else if (prot == NET_PROT_IP) {
+		key_field = NH_FLD_IP_PROTO;
+		size = sizeof(uint8_t);
+	} else if (prot == NET_PROT_IPV4) {
+		prot = NET_PROT_IP;
+		key_field = NH_FLD_IP_PROTO;
+		size = sizeof(uint8_t);
+	} else if (prot == NET_PROT_IPV6) {
+		prot = NET_PROT_IP;
+		key_field = NH_FLD_IP_PROTO;
+		size = sizeof(uint8_t);
+	} else {
+		DPAA2_PMD_ERR("Invalid Prev prot(%d)", prot);
+		return -EINVAL;
 	}
 
-	if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_DA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_DA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_DA failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		extract = &priv->extract.qos_key_extract;
+		key_profile = &extract->key_profile;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_DA);
+		index = dpaa2_flow_extract_search(key_profile,
+				prot, key_field);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_DA,
-					RTE_ETHER_ADDR_LEN);
+			ret = dpaa2_flow_extract_add_hdr(prot,
+					key_field, size, priv,
+					DPAA2_FLOW_QOS_TYPE, group,
+					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_DA failed.");
+				DPAA2_PMD_ERR("QOS prev extract add failed");
 
-				return -1;
+				return -EINVAL;
 			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH DA rule set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_DA,
-				&spec->hdr.dst_addr.addr_bytes,
-				&mask->hdr.dst_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
-			return -1;
+			local_cfg |= DPAA2_FLOW_QOS_TYPE;
 		}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_DA,
-				&spec->hdr.dst_addr.addr_bytes,
-				&mask->hdr.dst_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
+		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
-			return -1;
+			DPAA2_PMD_ERR("QoS prev rule set failed");
+			return -EINVAL;
 		}
 	}
 
-	if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_TYPE,
-					RTE_ETHER_TYPE_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_TYPE failed.");
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		extract = &priv->extract.tc_key_extract[group];
+		key_profile = &extract->key_profile;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
+		index = dpaa2_flow_extract_search(key_profile,
+				prot, key_field);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_TYPE,
-					RTE_ETHER_TYPE_LEN);
+			ret = dpaa2_flow_extract_add_hdr(prot,
+					key_field, size, priv,
+					DPAA2_FLOW_FS_TYPE, group,
+					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_TYPE failed.");
+				DPAA2_PMD_ERR("FS[%d] prev extract add failed",
+					group);
 
-				return -1;
+				return -EINVAL;
 			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH TYPE rule set failed");
-				return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				&spec->hdr.ether_type,
-				&mask->hdr.ether_type,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
-			return -1;
+			local_cfg |= DPAA2_FLOW_FS_TYPE;
 		}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				&spec->hdr.ether_type,
-				&mask->hdr.ether_type,
-				sizeof(rte_be16_t));
+		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
-			return -1;
+			DPAA2_PMD_ERR("FS[%d] prev rule set failed",
+				group);
+			return -EINVAL;
 		}
 	}
 
-	(*device_configured) |= local_cfg;
+	if (recfg)
+		*recfg = local_cfg;
 
 	return 0;
 }
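
A usage sketch of the helper above (values hypothetical, mirroring the
VLAN path that classifies by the previous protocol's EtherType):

	struct prev_proto_field_id prev = {
		.prot = NET_PROT_ETH,
		.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN),
	};

	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev,
			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
			attr->group, &local_cfg);
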
 
 static int
-dpaa2_configure_flow_vlan(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
+	enum net_prot prot, uint32_t field,
+	const void *key, const void *mask, int size,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int index, ret;
-	int local_cfg = 0;
-	uint32_t group;
-	const struct rte_flow_item_vlan *spec, *mask;
-
-	const struct rte_flow_item_vlan *last __rte_unused;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-
-	group = attr->group;
+	int index, ret, local_cfg = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
 
-	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_vlan *)pattern->spec;
-	last    = (const struct rte_flow_item_vlan *)pattern->last;
-	mask    = (const struct rte_flow_item_vlan *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_vlan_mask);
+	if (dpaa2_flow_ip_address_extract(prot, field))
+		return -EINVAL;
 
-	/* Get traffic class index and flow id to be configured */
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
 
-	if (!spec) {
-		/* Don't care any field of vlan header,
-		 * only care vlan protocol.
-		 */
-		/* Eth type is actually used for vLan classification.
-		 */
-		struct proto_discrimination proto;
+	key_profile = &key_extract->key_profile;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-						&priv->extract.qos_key_extract,
-						RTE_FLOW_ITEM_TYPE_ETH);
-			if (ret) {
-				DPAA2_PMD_ERR(
-				"QoS Ext ETH_TYPE to discriminate vLan failed");
+	index = dpaa2_flow_extract_search(key_profile,
+			prot, field);
+	if (index < 0) {
+		ret = dpaa2_flow_extract_add_hdr(prot,
+				field, size, priv,
+				dist_type, tc_id, NULL);
+		if (ret) {
+			DPAA2_PMD_ERR("QoS Extract P(%d)/F(%d) failed",
+				prot, field);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+			return ret;
 		}
+		local_cfg |= dist_type;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					RTE_FLOW_ITEM_TYPE_ETH);
-			if (ret) {
-				DPAA2_PMD_ERR(
-				"FS Ext ETH_TYPE to discriminate vLan failed.");
+	ret = dpaa2_flow_hdr_rule_data_set(flow, key_profile,
+			prot, field, size, key, mask, dist_type);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS P(%d)/F(%d) rule data set failed",
+			prot, field);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		return ret;
+	}
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"Move ipaddr before vLan discrimination set failed");
-			return -1;
-		}
+	if (recfg)
+		*recfg |= local_cfg;
 
-		proto.type = RTE_FLOW_ITEM_TYPE_ETH;
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("vLan discrimination rule set failed");
-			return -1;
-		}
+	return 0;
+}
 
-		(*device_configured) |= local_cfg;
+static int
+dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
+	enum net_prot prot, uint32_t field,
+	const void *key, const void *mask, int size,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int local_cfg = 0, num, ipaddr_extract_len = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
+	struct dpkg_profile_cfg *dpkg;
+	uint8_t *key_addr, *mask_addr;
+	union ip_addr_extract_rule *ip_addr_data;
+	union ip_addr_extract_rule *ip_addr_mask;
+	enum net_prot orig_prot;
+	uint32_t orig_field;
+
+	if (prot != NET_PROT_IPV4 && prot != NET_PROT_IPV6)
+		return -EINVAL;
 
-		return 0;
+	if (prot == NET_PROT_IPV4 && field != NH_FLD_IPV4_SRC_IP &&
+		field != NH_FLD_IPV4_DST_IP) {
+		return -EINVAL;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VLAN)) {
-		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
-
-		return -1;
+	if (prot == NET_PROT_IPV6 && field != NH_FLD_IPV6_SRC_IP &&
+		field != NH_FLD_IPV6_DST_IP) {
+		return -EINVAL;
 	}
 
-	if (!mask->hdr.vlan_tci)
-		return 0;
-
-	index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_VLAN, NH_FLD_VLAN_TCI);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-						&priv->extract.qos_key_extract,
-						NET_PROT_VLAN,
-						NH_FLD_VLAN_TCI,
-						sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract add VLAN_TCI failed.");
+	orig_prot = prot;
+	orig_field = field;
 
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+	if (prot == NET_PROT_IPV4 &&
+		field == NH_FLD_IPV4_SRC_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_SRC;
+	} else if (prot == NET_PROT_IPV4 &&
+		field == NH_FLD_IPV4_DST_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_DST;
+	} else if (prot == NET_PROT_IPV6 &&
+		field == NH_FLD_IPV6_SRC_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_SRC;
+	} else if (prot == NET_PROT_IPV6 &&
+		field == NH_FLD_IPV6_DST_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_DST;
+	} else {
+		DPAA2_PMD_ERR("Inval P(%d)/F(%d) to extract ip address",
+			prot, field);
+		return -EINVAL;
 	}
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_VLAN, NH_FLD_VLAN_TCI);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.tc_key_extract[group],
-				NET_PROT_VLAN,
-				NH_FLD_VLAN_TCI,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract add VLAN_TCI failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		key_extract = &priv->extract.qos_key_extract;
+		key_profile = &key_extract->key_profile;
+		dpkg = &key_extract->dpkg;
+		num = key_profile->num;
+		key_addr = flow->qos_key_addr;
+		mask_addr = flow->qos_mask_addr;
+	} else {
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+		key_profile = &key_extract->key_profile;
+		dpkg = &key_extract->dpkg;
+		num = key_profile->num;
+		key_addr = flow->fs_key_addr;
+		mask_addr = flow->fs_mask_addr;
 	}
 
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before VLAN TCI rule set failed");
-		return -1;
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_VLAN,
-				NH_FLD_VLAN_TCI,
-				&spec->hdr.vlan_tci,
-				&mask->hdr.vlan_tci,
-				sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
-		return -1;
+	if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT) {
+		if (field == NH_FLD_IP_SRC)
+			key_profile->ip_addr_type = IP_SRC_EXTRACT;
+		else
+			key_profile->ip_addr_type = IP_DST_EXTRACT;
+		ipaddr_extract_len = size;
+
+		key_profile->ip_addr_extract_pos = num;
+		if (num > 0) {
+			key_profile->ip_addr_extract_off =
+				key_profile->key_offset[num - 1] +
+				key_profile->key_size[num - 1];
+		} else {
+			key_profile->ip_addr_extract_off = 0;
+		}
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	} else if (key_profile->ip_addr_type == IP_SRC_EXTRACT) {
+		if (field == NH_FLD_IP_SRC) {
+			ipaddr_extract_len = size;
+			goto rule_configure;
+		}
+		key_profile->ip_addr_type = IP_SRC_DST_EXTRACT;
+		ipaddr_extract_len = size * 2;
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	} else if (key_profile->ip_addr_type == IP_DST_EXTRACT) {
+		if (field == NH_FLD_IP_DST) {
+			ipaddr_extract_len = size;
+			goto rule_configure;
+		}
+		key_profile->ip_addr_type = IP_DST_SRC_EXTRACT;
+		ipaddr_extract_len = size * 2;
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	}
+	key_profile->num++;
+
+	dpkg->extracts[num].extract.from_hdr.prot = prot;
+	dpkg->extracts[num].extract.from_hdr.field = field;
+	dpkg->extracts[num].extract.from_hdr.type = DPKG_FULL_FIELD;
+	dpkg->num_extracts++;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		local_cfg = DPAA2_FLOW_QOS_TYPE;
+	else
+		local_cfg = DPAA2_FLOW_FS_TYPE;
+
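+	/* Jumped to directly when the address extract already exists;
+	 * in either case only the key/mask bytes are written below.
+	 */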
+rule_configure:
+	key_addr += key_profile->ip_addr_extract_off;
+	ip_addr_data = (union ip_addr_extract_rule *)key_addr;
+	mask_addr += key_profile->ip_addr_extract_off;
+	ip_addr_mask = (union ip_addr_extract_rule *)mask_addr;
+
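+	/* The sd/ds union variants reflect whether the source or the
+	 * destination address was extracted first.
+	 */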
+	if (orig_prot == NET_PROT_IPV4 &&
+		orig_field == NH_FLD_IPV4_SRC_IP) {
+		if (key_profile->ip_addr_type == IP_SRC_EXTRACT ||
+			key_profile->ip_addr_type == IP_SRC_DST_EXTRACT) {
+			memcpy(&ip_addr_data->ipv4_sd_addr.ipv4_src,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_sd_addr.ipv4_src,
+				mask, size);
+		} else {
+			memcpy(&ip_addr_data->ipv4_ds_addr.ipv4_src,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_ds_addr.ipv4_src,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV4 &&
+		orig_field == NH_FLD_IPV4_DST_IP) {
+		if (key_profile->ip_addr_type == IP_DST_EXTRACT ||
+			key_profile->ip_addr_type == IP_DST_SRC_EXTRACT) {
+			memcpy(&ip_addr_data->ipv4_ds_addr.ipv4_dst,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_ds_addr.ipv4_dst,
+				mask, size);
+		} else {
+			memcpy(&ip_addr_data->ipv4_sd_addr.ipv4_dst,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_sd_addr.ipv4_dst,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV6 &&
+		orig_field == NH_FLD_IPV6_SRC_IP) {
+		if (key_profile->ip_addr_type == IP_SRC_EXTRACT ||
+			key_profile->ip_addr_type == IP_SRC_DST_EXTRACT) {
+			memcpy(ip_addr_data->ipv6_sd_addr.ipv6_src,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_sd_addr.ipv6_src,
+				mask, size);
+		} else {
+			memcpy(ip_addr_data->ipv6_ds_addr.ipv6_src,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_ds_addr.ipv6_src,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV6 &&
+		orig_field == NH_FLD_IPV6_DST_IP) {
+		if (key_profile->ip_addr_type == IP_DST_EXTRACT ||
+			key_profile->ip_addr_type == IP_DST_SRC_EXTRACT) {
+			memcpy(ip_addr_data->ipv6_ds_addr.ipv6_dst,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_ds_addr.ipv6_dst,
+				mask, size);
+		} else {
+			memcpy(ip_addr_data->ipv6_sd_addr.ipv6_dst,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_sd_addr.ipv6_dst,
+				mask, size);
+		}
 	}
 
-	ret = dpaa2_flow_rule_data_set(
-			&priv->extract.tc_key_extract[group],
-			&flow->fs_rule,
-			NET_PROT_VLAN,
-			NH_FLD_VLAN_TCI,
-			&spec->hdr.vlan_tci,
-			&mask->hdr.vlan_tci,
-			sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
-		return -1;
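+	/* IP addresses sit at the tail of the key, so the rule size is
+	 * the tail offset plus the length actually extracted.
+	 */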
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		flow->qos_rule_size =
+			key_profile->ip_addr_extract_off + ipaddr_extract_len;
+	} else {
+		flow->fs_rule_size =
+			key_profile->ip_addr_extract_off + ipaddr_extract_len;
 	}
 
-	(*device_configured) |= local_cfg;
+	if (recfg)
+		*recfg |= local_cfg;
 
 	return 0;
 }
 
 static int
-dpaa2_configure_flow_ip_discrimation(
-	struct dpaa2_dev_priv *priv, struct rte_flow *flow,
+dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
-	int *local_cfg,	int *device_configured,
-	uint32_t group)
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	struct proto_discrimination proto;
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_eth *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.qos_key_extract.dpkg,
-			NET_PROT_ETH, NH_FLD_ETH_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.qos_key_extract,
-				RTE_FLOW_ITEM_TYPE_ETH);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"QoS Extract ETH_TYPE to discriminate IP failed.");
-			return -1;
-		}
-		(*local_cfg) |= DPAA2_QOS_TABLE_RECONFIGURE;
-	}
+	group = attr->group;
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_ETH, NH_FLD_ETH_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				RTE_FLOW_ITEM_TYPE_ETH);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"FS Extract ETH_TYPE to discriminate IP failed.");
-			return -1;
-		}
-		(*local_cfg) |= DPAA2_FS_TABLE_RECONFIGURE;
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+			pattern->mask : &dpaa2_flow_item_eth_mask;
+	if (!spec) {
+		DPAA2_PMD_WARN("No pattern spec for Eth flow");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before IP discrimination set failed");
-		return -1;
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH)) {
+		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
+
+		return -EINVAL;
 	}
 
-	proto.type = RTE_FLOW_ITEM_TYPE_ETH;
-	if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4)
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
-	else
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	ret = dpaa2_flow_proto_discrimination_rule(priv, flow, proto, group);
-	if (ret) {
-		DPAA2_PMD_ERR("IP discrimination rule set failed");
-		return -1;
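+	/* Each matched field is added to both the QoS key extract (TC
+	 * selection) and this TC's FS key extract (flow steering).
+	 */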
+	if (memcmp((const char *)&mask->src,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_SA, &spec->src.addr_bytes,
+			&mask->src.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_SA, &spec->src.addr_bytes,
+			&mask->src.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->dst,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_DA, &spec->dst.addr_bytes,
+			&mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_DA, &spec->dst.addr_bytes,
+			&mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->type,
+		zero_cmp, sizeof(rte_be16_t))) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_TYPE, &spec->type,
+			&mask->type, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_TYPE, &spec->type,
+			&mask->type, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	(*device_configured) |= (*local_cfg);
+	(*device_configured) |= local_cfg;
 
 	return 0;
 }
 
-
 static int
-dpaa2_configure_flow_generic_ip(
-	struct rte_flow *flow,
+dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
@@ -1413,419 +1513,338 @@ dpaa2_configure_flow_generic_ip(
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
-	const struct rte_flow_item_ipv4 *spec_ipv4 = 0,
-		*mask_ipv4 = 0;
-	const struct rte_flow_item_ipv6 *spec_ipv6 = 0,
-		*mask_ipv6 = 0;
-	const void *key, *mask;
-	enum net_prot prot;
-
+	const struct rte_flow_item_vlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
-	int size;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
-		spec_ipv4 = (const struct rte_flow_item_ipv4 *)pattern->spec;
-		mask_ipv4 = (const struct rte_flow_item_ipv4 *)
-			(pattern->mask ? pattern->mask :
-					&dpaa2_flow_item_ipv4_mask);
-	} else {
-		spec_ipv6 = (const struct rte_flow_item_ipv6 *)pattern->spec;
-		mask_ipv6 = (const struct rte_flow_item_ipv6 *)
-			(pattern->mask ? pattern->mask :
-					&dpaa2_flow_item_ipv6_mask);
-	}
+	spec = pattern->spec;
+	mask = pattern->mask ? pattern->mask : &dpaa2_flow_item_vlan_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	ret = dpaa2_configure_flow_ip_discrimation(priv,
-			flow, pattern, &local_cfg,
-			device_configured, group);
-	if (ret) {
-		DPAA2_PMD_ERR("IP discrimination failed!");
-		return -1;
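+	/* No spec: match VLAN presence only, identified by the TPID of
+	 * the preceding Ethernet header.
+	 */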
+	if (!spec) {
+		struct prev_proto_field_id prev_proto;
+
+		prev_proto.prot = NET_PROT_ETH;
+		prev_proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_proto,
+				DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+				       RTE_FLOW_ITEM_TYPE_VLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
+		return -EINVAL;
 	}
 
-	if (!spec_ipv4 && !spec_ipv6)
+	if (!mask->tci)
 		return 0;
 
-	if (mask_ipv4) {
-		if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
-			RTE_FLOW_ITEM_TYPE_IPV4)) {
-			DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
+					      NH_FLD_VLAN_TCI, &spec->tci,
+					      &mask->tci, sizeof(rte_be16_t),
+					      priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
 
-			return -1;
-		}
-	}
-
-	if (mask_ipv6) {
-		if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
-			RTE_FLOW_ITEM_TYPE_IPV6)) {
-			DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
-
-			return -1;
-		}
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
+					      NH_FLD_VLAN_TCI, &spec->tci,
+					      &mask->tci, sizeof(rte_be16_t),
+					      priv, group, &local_cfg,
+					      DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
 
-	if (mask_ipv4 && (mask_ipv4->hdr.src_addr ||
-		mask_ipv4->hdr.dst_addr)) {
-		flow->ipaddr_rule.ipaddr_type = FLOW_IPV4_ADDR;
-	} else if (mask_ipv6 &&
-		(memcmp((const char *)mask_ipv6->hdr.src_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE) ||
-		memcmp((const char *)mask_ipv6->hdr.dst_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		flow->ipaddr_rule.ipaddr_type = FLOW_IPV6_ADDR;
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.src_addr) ||
-		(mask_ipv6 &&
-			memcmp((const char *)mask_ipv6->hdr.src_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_IP,
-					NH_FLD_IP_SRC,
-					0);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_SRC failed.");
+	(*device_configured) |= local_cfg;
+	return 0;
+}
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+static int
+dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
+			  const struct rte_flow_attr *attr,
+			  const struct rte_flow_item *pattern,
+			  const struct rte_flow_action actions[] __rte_unused,
+			  struct rte_flow_error *error __rte_unused,
+			  int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ipv4 *spec_ipv4 = NULL, *mask_ipv4 = NULL;
+	const void *key, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	int size;
+	struct prev_proto_field_id prev_prot;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_SRC,
-					0);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_SRC failed.");
+	group = attr->group;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	/* Parse pattern list to get the matching parameters */
+	spec_ipv4 = pattern->spec;
+	mask_ipv4 = pattern->mask ?
+		    pattern->mask : &dpaa2_flow_item_ipv4_mask;
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.src_addr;
-		else
-			key = &spec_ipv6->hdr.src_addr[0];
-		if (mask_ipv4) {
-			mask = &mask_ipv4->hdr.src_addr;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-			prot = NET_PROT_IPV4;
-		} else {
-			mask = &mask_ipv6->hdr.src_addr[0];
-			size = NH_FLD_IPV6_ADDR_SIZE;
-			prot = NET_PROT_IPV6;
-		}
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				prot, NH_FLD_IP_SRC,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_SRC rule data set failed");
-			return -1;
-		}
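+	/* Identify IPv4 by the EtherType of the preceding header instead
+	 * of adding a dedicated IPv4 extract.
+	 */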
+	prev_prot.prot = NET_PROT_ETH;
+	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				prot, NH_FLD_IP_SRC,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_SRC rule data set failed");
-			return -1;
-		}
+	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE, group,
+			&local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("IPv4 identification failed!");
+		return ret;
+	}
 
-		flow->ipaddr_rule.qos_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_SRC);
-		flow->ipaddr_rule.fs_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[group],
-				prot, NH_FLD_IP_SRC);
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.dst_addr) ||
-		(mask_ipv6 &&
-			memcmp((const char *)mask_ipv6->hdr.dst_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_DST);
-		if (index < 0) {
-			if (mask_ipv4)
-				size = NH_FLD_IPV4_ADDR_SIZE;
-			else
-				size = NH_FLD_IPV6_ADDR_SIZE;
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_IP,
-					NH_FLD_IP_DST,
-					size);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
+	if (!spec_ipv4)
+		return 0;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
+				       RTE_FLOW_ITEM_TYPE_IPV4)) {
+		DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
+		return -EINVAL;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_DST);
-		if (index < 0) {
-			if (mask_ipv4)
-				size = NH_FLD_IPV4_ADDR_SIZE;
-			else
-				size = NH_FLD_IPV6_ADDR_SIZE;
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_DST,
-					size);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_DST failed.");
+	if (mask_ipv4->hdr.src_addr) {
+		key = &spec_ipv4->hdr.src_addr;
+		mask = &mask_ipv4->hdr.src_addr;
+		size = sizeof(rte_be32_t);
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv4->hdr.dst_addr) {
+		key = &spec_ipv4->hdr.dst_addr;
+		mask = &mask_ipv4->hdr.dst_addr;
+		size = sizeof(rte_be32_t);
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv4->hdr.next_proto_id) {
+		key = &spec_ipv4->hdr.next_proto_id;
+		mask = &mask_ipv4->hdr.next_proto_id;
+		size = sizeof(uint8_t);
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	(*device_configured) |= local_cfg;
+	return 0;
+}
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.dst_addr;
-		else
-			key = spec_ipv6->hdr.dst_addr;
-		if (mask_ipv4) {
-			mask = &mask_ipv4->hdr.dst_addr;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-			prot = NET_PROT_IPV4;
-		} else {
-			mask = &mask_ipv6->hdr.dst_addr[0];
-			size = NH_FLD_IPV6_ADDR_SIZE;
-			prot = NET_PROT_IPV6;
-		}
+static int
+dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
+			  const struct rte_flow_attr *attr,
+			  const struct rte_flow_item *pattern,
+			  const struct rte_flow_action actions[] __rte_unused,
+			  struct rte_flow_error *error __rte_unused,
+			  int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ipv6 *spec_ipv6 = NULL, *mask_ipv6 = NULL;
+	const void *key, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
+	int size;
+	struct prev_proto_field_id prev_prot;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				prot, NH_FLD_IP_DST,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_DST rule data set failed");
-			return -1;
-		}
+	group = attr->group;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				prot, NH_FLD_IP_DST,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_DST rule data set failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_DST);
-		flow->ipaddr_rule.fs_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[group],
-				prot, NH_FLD_IP_DST);
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.next_proto_id) ||
-		(mask_ipv6 && mask_ipv6->hdr.proto)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-				&priv->extract.qos_key_extract,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				NH_FLD_IP_PROTO_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
+	/* Parse pattern list to get the matching parameters */
+	spec_ipv6 = pattern->spec;
+	mask_ipv6 = pattern->mask ? pattern->mask : &dpaa2_flow_item_ipv6_mask;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_PROTO,
-					NH_FLD_IP_PROTO_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_DST failed.");
+	prev_prot.prot = NET_PROT_ETH;
+	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("IPv6 identification failed!");
+		return ret;
+	}
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr after NH_FLD_IP_PROTO rule set failed");
-			return -1;
-		}
+	if (!spec_ipv6)
+		return 0;
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.next_proto_id;
-		else
-			key = &spec_ipv6->hdr.proto;
-		if (mask_ipv4)
-			mask = &mask_ipv4->hdr.next_proto_id;
-		else
-			mask = &mask_ipv6->hdr.proto;
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				key,	mask, NH_FLD_IP_PROTO_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_PROTO rule data set failed");
-			return -1;
-		}
+	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
+				       RTE_FLOW_ITEM_TYPE_IPV6)) {
+		DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
+		return -EINVAL;
+	}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				key,	mask, NH_FLD_IP_PROTO_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_PROTO rule data set failed");
-			return -1;
-		}
+	if (memcmp(mask_ipv6->hdr.src_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
+		key = &spec_ipv6->hdr.src_addr[0];
+		mask = &mask_ipv6->hdr.src_addr[0];
+		size = NH_FLD_IPV6_ADDR_SIZE;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp(mask_ipv6->hdr.dst_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
+		key = &spec_ipv6->hdr.dst_addr[0];
+		mask = &mask_ipv6->hdr.dst_addr[0];
+		size = NH_FLD_IPV6_ADDR_SIZE;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv6->hdr.proto) {
+		key = &spec_ipv6->hdr.proto;
+		mask = &mask_ipv6->hdr.proto;
+		size = sizeof(uint8_t);
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
-
 	return 0;
 }
 
 static int
-dpaa2_configure_flow_icmp(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_icmp *spec, *mask;
-
-	const struct rte_flow_item_icmp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_icmp *)pattern->spec;
-	last    = (const struct rte_flow_item_icmp *)pattern->last;
-	mask    = (const struct rte_flow_item_icmp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_icmp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_icmp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		/* Don't care any field of ICMP header,
-		 * only care ICMP protocol.
-		 * Example: flow create 0 ingress pattern icmp /
-		 */
 		/* Next proto of Generical IP is actually used
 		 * for ICMP identification.
+		 * Example: flow create 0 ingress pattern icmp
 		 */
-		struct proto_discrimination proto;
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate ICMP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate ICMP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before ICMP discrimination set failed");
-			return -1;
-		}
+		struct prev_proto_field_id prev_proto;
 
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_ICMP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("ICMP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_ICMP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
-
 		return 0;
 	}
 
@@ -1833,145 +1852,39 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_ICMP)) {
 		DPAA2_PMD_WARN("Extract field(s) of ICMP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.icmp_type) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_TYPE,
-					NH_FLD_ICMP_TYPE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ICMP_TYPE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_TYPE,
-					NH_FLD_ICMP_TYPE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ICMP_TYPE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ICMP TYPE set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_TYPE, &spec->hdr.icmp_type,
+			&mask->hdr.icmp_type, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_TYPE,
-				&spec->hdr.icmp_type,
-				&mask->hdr.icmp_type,
-				NH_FLD_ICMP_TYPE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ICMP_TYPE rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_TYPE,
-				&spec->hdr.icmp_type,
-				&mask->hdr.icmp_type,
-				NH_FLD_ICMP_TYPE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ICMP_TYPE rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_TYPE, &spec->hdr.icmp_type,
+			&mask->hdr.icmp_type, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.icmp_code) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_CODE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_CODE,
-					NH_FLD_ICMP_CODE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ICMP_CODE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_CODE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_CODE,
-					NH_FLD_ICMP_CODE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ICMP_CODE failed.");
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_CODE, &spec->hdr.icmp_code,
+			&mask->hdr.icmp_code, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr after ICMP CODE set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_CODE,
-				&spec->hdr.icmp_code,
-				&mask->hdr.icmp_code,
-				NH_FLD_ICMP_CODE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ICMP_CODE rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_CODE,
-				&spec->hdr.icmp_code,
-				&mask->hdr.icmp_code,
-				NH_FLD_ICMP_CODE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ICMP_CODE rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_CODE, &spec->hdr.icmp_code,
+			&mask->hdr.icmp_code, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -1980,84 +1893,41 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_udp(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_udp *spec, *mask;
-
-	const struct rte_flow_item_udp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_udp *)pattern->spec;
-	last    = (const struct rte_flow_item_udp *)pattern->last;
-	mask    = (const struct rte_flow_item_udp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_udp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_udp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate UDP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate UDP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before UDP discrimination set failed");
-			return -1;
-		}
+		struct prev_proto_field_id prev_proto;
 
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_UDP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("UDP discrimination rule set failed");
-			return -1;
-		}
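+		/* No spec, or the MC cannot identify L4 ports: match UDP
+		 * via the IP next-protocol field instead.
+		 */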
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_UDP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2069,149 +1939,40 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_UDP)) {
 		DPAA2_PMD_WARN("Extract field(s) of UDP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add UDP_SRC failed.");
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_SRC,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add UDP_SRC failed.");
+	if (mask->hdr.dst_port) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before UDP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_UDP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_UDP_PORT_SRC rule data set failed");
-			return -1;
-		}
-	}
-
-	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_DST,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add UDP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_DST,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add UDP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before UDP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_UDP_PORT_DST rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_UDP_PORT_DST rule data set failed");
-			return -1;
-		}
-	}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
 	(*device_configured) |= local_cfg;
 
@@ -2219,84 +1980,41 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_tcp(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_tcp *spec, *mask;
-
-	const struct rte_flow_item_tcp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_tcp *)pattern->spec;
-	last    = (const struct rte_flow_item_tcp *)pattern->last;
-	mask    = (const struct rte_flow_item_tcp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_tcp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_tcp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate TCP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate TCP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before TCP discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_TCP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("TCP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_TCP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2308,149 +2026,39 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_TCP)) {
 		DPAA2_PMD_WARN("Extract field(s) of TCP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_SRC,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add TCP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_SRC,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add TCP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before TCP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_TCP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_TCP_PORT_SRC rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_DST,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add TCP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_DST,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add TCP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before TCP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_TCP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_TCP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -2459,85 +2067,41 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_sctp(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_sctp *spec, *mask;
-
-	const struct rte_flow_item_sctp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_sctp *)pattern->spec;
-	last    = (const struct rte_flow_item_sctp *)pattern->last;
-	mask    = (const struct rte_flow_item_sctp *)
-			(pattern->mask ? pattern->mask :
-				&dpaa2_flow_item_sctp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_sctp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate SCTP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate SCTP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_SCTP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("SCTP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_SCTP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2553,145 +2117,35 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_SRC,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add SCTP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_SRC,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add SCTP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_SCTP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_SCTP_PORT_SRC rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_DST,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add SCTP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_DST,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add SCTP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_SCTP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_SCTP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -2700,88 +2154,46 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_gre(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_gre *spec, *mask;
-
-	const struct rte_flow_item_gre *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_gre *)pattern->spec;
-	last    = (const struct rte_flow_item_gre *)pattern->last;
-	mask    = (const struct rte_flow_item_gre *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_gre_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_gre_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate GRE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate GRE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before GRE discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_GRE;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("GRE discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_GRE;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
 		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2794,74 +2206,19 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 	if (!mask->protocol)
 		return 0;
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.qos_key_extract.dpkg,
-			NET_PROT_GRE, NH_FLD_GRE_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.qos_key_extract,
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract add GRE_TYPE failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-	}
-
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_GRE, NH_FLD_GRE_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.tc_key_extract[group],
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract add GRE_TYPE failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-	}
-
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before GRE_TYPE set failed");
-		return -1;
-	}
-
-	ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				&spec->protocol,
-				&mask->protocol,
-				sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"QoS NH_FLD_GRE_TYPE rule data set failed");
-		return -1;
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GRE,
+			NH_FLD_GRE_TYPE, &spec->protocol,
+			&mask->protocol, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_rule_data_set(
-			&priv->extract.tc_key_extract[group],
-			&flow->fs_rule,
-			NET_PROT_GRE,
-			NH_FLD_GRE_TYPE,
-			&spec->protocol,
-			&mask->protocol,
-			sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"FS NH_FLD_GRE_TYPE rule data set failed");
-		return -1;
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GRE,
+			NH_FLD_GRE_TYPE, &spec->protocol,
+			&mask->protocol, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
 
 	(*device_configured) |= local_cfg;
 
@@ -2869,404 +2226,109 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_raw(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_raw *spec = pattern->spec;
 	const struct rte_flow_item_raw *mask = pattern->mask;
 	int prev_key_size =
-		priv->extract.qos_key_extract.key_info.key_total_size;
+		priv->extract.qos_key_extract.key_profile.key_max_size;
 	int local_cfg = 0, ret;
 	uint32_t group;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
-		DPAA2_PMD_ERR("spec or mask not present.");
-		return -EINVAL;
-	}
-	/* Only supports non-relative with offset 0 */
-	if (spec->relative || spec->offset != 0 ||
-	    spec->search || spec->limit) {
-		DPAA2_PMD_ERR("relative and non zero offset not supported.");
-		return -EINVAL;
-	}
-	/* Spec len and mask len should be same */
-	if (spec->length != mask->length) {
-		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
-		return -EINVAL;
-	}
-
-	/* Get traffic class index and flow id to be configured */
-	group = attr->group;
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
-
-	if (prev_key_size <= spec->length) {
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
-						 spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-
-		ret = dpaa2_flow_extract_add_raw(
-					&priv->extract.tc_key_extract[group],
-					spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-	}
-
-	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
-					   mask->pattern, spec->length);
-	if (ret) {
-		DPAA2_PMD_ERR("QoS RAW rule data set failed");
-		return -1;
-	}
-
-	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
-					   mask->pattern, spec->length);
-	if (ret) {
-		DPAA2_PMD_ERR("FS RAW rule data set failed");
-		return -1;
-	}
-
-	(*device_configured) |= local_cfg;
-
-	return 0;
-}
-
-static inline int
-dpaa2_fs_action_supported(enum rte_flow_action_type action)
-{
-	int i;
-
-	for (i = 0; i < (int)(sizeof(dpaa2_supported_fs_action_type) /
-					sizeof(enum rte_flow_action_type)); i++) {
-		if (action == dpaa2_supported_fs_action_type[i])
-			return 1;
-	}
-
-	return 0;
-}
-/* The existing QoS/FS entry with IP address(es)
- * needs update after
- * new extract(s) are inserted before IP
- * address(es) extract(s).
- */
-static int
-dpaa2_flow_entry_update(
-	struct dpaa2_dev_priv *priv, uint8_t tc_id)
-{
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
-	int ret;
-	int qos_ipsrc_offset = -1, qos_ipdst_offset = -1;
-	int fs_ipsrc_offset = -1, fs_ipdst_offset = -1;
-	struct dpaa2_key_extract *qos_key_extract =
-		&priv->extract.qos_key_extract;
-	struct dpaa2_key_extract *tc_key_extract =
-		&priv->extract.tc_key_extract[tc_id];
-	char ipsrc_key[NH_FLD_IPV6_ADDR_SIZE];
-	char ipdst_key[NH_FLD_IPV6_ADDR_SIZE];
-	char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE];
-	char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE];
-	int extend = -1, extend1, size = -1;
-	uint16_t qos_index;
-
-	while (curr) {
-		if (curr->ipaddr_rule.ipaddr_type ==
-			FLOW_NONE_IPADDR) {
-			curr = LIST_NEXT(curr, next);
-			continue;
-		}
-
-		if (curr->ipaddr_rule.ipaddr_type ==
-			FLOW_IPV4_ADDR) {
-			qos_ipsrc_offset =
-				qos_key_extract->key_info.ipv4_src_offset;
-			qos_ipdst_offset =
-				qos_key_extract->key_info.ipv4_dst_offset;
-			fs_ipsrc_offset =
-				tc_key_extract->key_info.ipv4_src_offset;
-			fs_ipdst_offset =
-				tc_key_extract->key_info.ipv4_dst_offset;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-		} else {
-			qos_ipsrc_offset =
-				qos_key_extract->key_info.ipv6_src_offset;
-			qos_ipdst_offset =
-				qos_key_extract->key_info.ipv6_dst_offset;
-			fs_ipsrc_offset =
-				tc_key_extract->key_info.ipv6_src_offset;
-			fs_ipdst_offset =
-				tc_key_extract->key_info.ipv6_dst_offset;
-			size = NH_FLD_IPV6_ADDR_SIZE;
-		}
-
-		qos_index = curr->tc_id * priv->fs_entries +
-			curr->tc_index;
-
-		dpaa2_flow_qos_entry_log("Before update", curr, qos_index, stdout);
-
-		if (priv->num_rx_tc > 1) {
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
-					priv->token, &curr->qos_rule);
-			if (ret) {
-				DPAA2_PMD_ERR("Qos entry remove failed.");
-				return -1;
-			}
-		}
-
-		extend = -1;
-
-		if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
-			RTE_ASSERT(qos_ipsrc_offset >=
-				curr->ipaddr_rule.qos_ipsrc_offset);
-			extend1 = qos_ipsrc_offset -
-				curr->ipaddr_rule.qos_ipsrc_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-
-			memcpy(ipsrc_key,
-				(char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				0, size);
-
-			memcpy(ipsrc_mask,
-				(char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				0, size);
-
-			curr->ipaddr_rule.qos_ipsrc_offset = qos_ipsrc_offset;
-		}
-
-		if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
-			RTE_ASSERT(qos_ipdst_offset >=
-				curr->ipaddr_rule.qos_ipdst_offset);
-			extend1 = qos_ipdst_offset -
-				curr->ipaddr_rule.qos_ipdst_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-
-			memcpy(ipdst_key,
-				(char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				0, size);
-
-			memcpy(ipdst_mask,
-				(char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				0, size);
-
-			curr->ipaddr_rule.qos_ipdst_offset = qos_ipdst_offset;
-		}
-
-		if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-			memcpy((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				ipsrc_key,
-				size);
-			memcpy((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				ipsrc_mask,
-				size);
-		}
-		if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-			memcpy((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				ipdst_key,
-				size);
-			memcpy((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				ipdst_mask,
-				size);
-		}
-
-		if (extend >= 0)
-			curr->qos_real_key_size += extend;
-
-		curr->qos_rule.key_size = FIXED_ENTRY_SIZE;
-
-		dpaa2_flow_qos_entry_log("Start update", curr, qos_index, stdout);
-
-		if (priv->num_rx_tc > 1) {
-			ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
-					priv->token, &curr->qos_rule,
-					curr->tc_id, qos_index,
-					0, 0);
-			if (ret) {
-				DPAA2_PMD_ERR("Qos entry update failed.");
-				return -1;
-			}
-		}
-
-		if (!dpaa2_fs_action_supported(curr->action)) {
-			curr = LIST_NEXT(curr, next);
-			continue;
-		}
+		DPAA2_PMD_ERR("spec or mask not present.");
+		return -EINVAL;
+	}
+	/* Only supports non-relative with offset 0 */
+	if (spec->relative || spec->offset != 0 ||
+	    spec->search || spec->limit) {
+		DPAA2_PMD_ERR("relative and non zero offset not supported.");
+		return -EINVAL;
+	}
+	/* Spec len and mask len should be same */
+	if (spec->length != mask->length) {
+		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
+		return -EINVAL;
+	}
 
-		dpaa2_flow_fs_entry_log("Before update", curr, stdout);
-		extend = -1;
+	/* Get traffic class index and flow id to be configured */
+	group = attr->group;
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW,
-				priv->token, curr->tc_id, &curr->fs_rule);
+	if (prev_key_size <= spec->length) {
+		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
+						 spec->length);
 		if (ret) {
-			DPAA2_PMD_ERR("FS entry remove failed.");
+			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
 			return -1;
 		}
+		local_cfg |= DPAA2_FLOW_QOS_TYPE;
 
-		if (curr->ipaddr_rule.fs_ipsrc_offset >= 0 &&
-			tc_id == curr->tc_id) {
-			RTE_ASSERT(fs_ipsrc_offset >=
-				curr->ipaddr_rule.fs_ipsrc_offset);
-			extend1 = fs_ipsrc_offset -
-				curr->ipaddr_rule.fs_ipsrc_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			memcpy(ipsrc_key,
-				(char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				0, size);
-
-			memcpy(ipsrc_mask,
-				(char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				0, size);
-
-			curr->ipaddr_rule.fs_ipsrc_offset = fs_ipsrc_offset;
+		ret = dpaa2_flow_extract_add_raw(&priv->extract.tc_key_extract[group],
+					spec->length);
+		if (ret) {
+			DPAA2_PMD_ERR("FS Extract RAW add failed.");
+			return -1;
 		}
+		local_cfg |= DPAA2_FLOW_FS_TYPE;
+	}
 
-		if (curr->ipaddr_rule.fs_ipdst_offset >= 0 &&
-			tc_id == curr->tc_id) {
-			RTE_ASSERT(fs_ipdst_offset >=
-				curr->ipaddr_rule.fs_ipdst_offset);
-			extend1 = fs_ipdst_offset -
-				curr->ipaddr_rule.fs_ipdst_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			memcpy(ipdst_key,
-				(char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				0, size);
-
-			memcpy(ipdst_mask,
-				(char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				0, size);
-
-			curr->ipaddr_rule.fs_ipdst_offset = fs_ipdst_offset;
-		}
+	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
+					   mask->pattern, spec->length);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS RAW rule data set failed");
+		return -1;
+	}
 
-		if (curr->ipaddr_rule.fs_ipsrc_offset >= 0) {
-			memcpy((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				ipsrc_key,
-				size);
-			memcpy((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				ipsrc_mask,
-				size);
-		}
-		if (curr->ipaddr_rule.fs_ipdst_offset >= 0) {
-			memcpy((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				ipdst_key,
-				size);
-			memcpy((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				ipdst_mask,
-				size);
-		}
+	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
+					   mask->pattern, spec->length);
+	if (ret) {
+		DPAA2_PMD_ERR("FS RAW rule data set failed");
+		return -1;
+	}
 
-		if (extend >= 0)
-			curr->fs_real_key_size += extend;
-		curr->fs_rule.key_size = FIXED_ENTRY_SIZE;
+	(*device_configured) |= local_cfg;
 
-		dpaa2_flow_fs_entry_log("Start update", curr, stdout);
+	return 0;
+}
 
-		ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
-				priv->token, curr->tc_id, curr->tc_index,
-				&curr->fs_rule, &curr->action_cfg);
-		if (ret) {
-			DPAA2_PMD_ERR("FS entry update failed.");
-			return -1;
-		}
+static inline int
+dpaa2_fs_action_supported(enum rte_flow_action_type action)
+{
+	int i;
+	int action_num = sizeof(dpaa2_supported_fs_action_type) /
+		sizeof(enum rte_flow_action_type);
 
-		curr = LIST_NEXT(curr, next);
+	for (i = 0; i < action_num; i++) {
+		if (action == dpaa2_supported_fs_action_type[i])
+			return true;
 	}
 
-	return 0;
+	return false;
 }
 
 static inline int
-dpaa2_flow_verify_attr(
-	struct dpaa2_dev_priv *priv,
+dpaa2_flow_verify_attr(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr)
 {
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
 
 	while (curr) {
 		if (curr->tc_id == attr->group &&
 			curr->tc_index == attr->priority) {
-			DPAA2_PMD_ERR(
-				"Flow with group %d and priority %d already exists.",
+			DPAA2_PMD_ERR("Flow(TC[%d].entry[%d] exists",
 				attr->group, attr->priority);
 
-			return -1;
+			return -EINVAL;
 		}
 		curr = LIST_NEXT(curr, next);
 	}
@@ -3279,18 +2341,16 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_action *action)
 {
 	const struct rte_flow_action_port_id *port_id;
+	const struct rte_flow_action_ethdev *ethdev;
 	int idx = -1;
 	struct rte_eth_dev *dest_dev;
 
 	if (action->type == RTE_FLOW_ACTION_TYPE_PORT_ID) {
-		port_id = (const struct rte_flow_action_port_id *)
-					action->conf;
+		port_id = action->conf;
 		if (!port_id->original)
 			idx = port_id->id;
 	} else if (action->type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT) {
-		const struct rte_flow_action_ethdev *ethdev;
-
-		ethdev = (const struct rte_flow_action_ethdev *)action->conf;
+		ethdev = action->conf;
 		idx = ethdev->port_id;
 	} else {
 		return NULL;
@@ -3310,8 +2370,7 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 }
 
 static inline int
-dpaa2_flow_verify_action(
-	struct dpaa2_dev_priv *priv,
+dpaa2_flow_verify_action(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_action actions[])
 {
@@ -3323,15 +2382,14 @@ dpaa2_flow_verify_action(
 	while (!end_of_list) {
 		switch (actions[j].type) {
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
-			dest_queue = (const struct rte_flow_action_queue *)
-					(actions[j].conf);
+			dest_queue = actions[j].conf;
 			rxq = priv->rx_vq[dest_queue->index];
 			if (attr->group != rxq->tc_index) {
-				DPAA2_PMD_ERR(
-					"RXQ[%d] does not belong to the group %d",
-					dest_queue->index, attr->group);
+				DPAA2_PMD_ERR("FSQ(%d.%d) not in TC[%d]",
+					rxq->tc_index, rxq->flow_id,
+					attr->group);
 
-				return -1;
+				return -ENOTSUP;
 			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
@@ -3345,20 +2403,17 @@ dpaa2_flow_verify_action(
 			rss_conf = (const struct rte_flow_action_rss *)
 					(actions[j].conf);
 			if (rss_conf->queue_num > priv->dist_queues) {
-				DPAA2_PMD_ERR(
-					"RSS number exceeds the distribution size");
+				DPAA2_PMD_ERR("RSS number too large");
 				return -ENOTSUP;
 			}
 			for (i = 0; i < (int)rss_conf->queue_num; i++) {
 				if (rss_conf->queue[i] >= priv->nb_rx_queues) {
-					DPAA2_PMD_ERR(
-						"RSS queue index exceeds the number of RXQs");
+					DPAA2_PMD_ERR("RSS queue not in range");
 					return -ENOTSUP;
 				}
 				rxq = priv->rx_vq[rss_conf->queue[i]];
 				if (rxq->tc_index != attr->group) {
-					DPAA2_PMD_ERR(
-						"Queue/Group combination are not supported\n");
+					DPAA2_PMD_ERR("RSS queue not in group");
 					return -ENOTSUP;
 				}
 			}
@@ -3378,28 +2433,248 @@ dpaa2_flow_verify_action(
 }
 
 static int
-dpaa2_generic_flow_set(struct rte_flow *flow,
-		       struct rte_eth_dev *dev,
-		       const struct rte_flow_attr *attr,
-		       const struct rte_flow_item pattern[],
-		       const struct rte_flow_action actions[],
-		       struct rte_flow_error *error)
+dpaa2_configure_flow_fs_action(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct rte_flow_action *rte_action)
 {
+	struct rte_eth_dev *dest_dev;
+	struct dpaa2_dev_priv *dest_priv;
 	const struct rte_flow_action_queue *dest_queue;
+	struct dpaa2_queue *dest_q;
+
+	memset(&flow->fs_action_cfg, 0,
+		sizeof(struct dpni_fs_action_cfg));
+	flow->action_type = rte_action->type;
+
+	if (flow->action_type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		dest_queue = rte_action->conf;
+		dest_q = priv->rx_vq[dest_queue->index];
+		flow->fs_action_cfg.flow_id = dest_q->flow_id;
+	} else if (flow->action_type == RTE_FLOW_ACTION_TYPE_PORT_ID ||
+		   flow->action_type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT) {
+		dest_dev = dpaa2_flow_redirect_dev(priv, rte_action);
+		if (!dest_dev) {
+			DPAA2_PMD_ERR("Invalid device to redirect");
+			return -EINVAL;
+		}
+
+		dest_priv = dest_dev->data->dev_private;
+		dest_q = dest_priv->tx_vq[0];
+		flow->fs_action_cfg.options =
+			DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
+		flow->fs_action_cfg.redirect_obj_token =
+			dest_priv->token;
+		flow->fs_action_cfg.flow_id = dest_q->flow_id;
+	}
+
+	return 0;
+}
+
+static inline uint16_t
+dpaa2_flow_entry_size(uint16_t key_max_size)
+{
+	if (key_max_size > DPAA2_FLOW_ENTRY_MAX_SIZE) {
+		DPAA2_PMD_ERR("Key size(%d) > max(%d)",
+			key_max_size,
+			DPAA2_FLOW_ENTRY_MAX_SIZE);
+
+		return 0;
+	}
+
+	if (key_max_size > DPAA2_FLOW_ENTRY_MIN_SIZE)
+		return DPAA2_FLOW_ENTRY_MAX_SIZE;
+
+	/* Current MC only supports a fixed entry size (56). */
+	return DPAA2_FLOW_ENTRY_MAX_SIZE;
+}
+
+static inline int
+dpaa2_flow_clear_fs_table(struct dpaa2_dev_priv *priv,
+	uint8_t tc_id)
+{
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	int need_clear = 0, ret;
+	struct fsl_mc_io *dpni = priv->hw;
+
+	while (curr) {
+		if (curr->tc_id == tc_id) {
+			need_clear = 1;
+			break;
+		}
+		curr = LIST_NEXT(curr, next);
+	}
+
+	if (need_clear) {
+		ret = dpni_clear_fs_entries(dpni, CMD_PRI_LOW,
+				priv->token, tc_id);
+		if (ret) {
+			DPAA2_PMD_ERR("TC[%d] clear failed", tc_id);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_configure_fs_rss_table(struct dpaa2_dev_priv *priv,
+	uint8_t tc_id, uint16_t dist_size, int rss_dist)
+{
+	struct dpaa2_key_extract *tc_extract;
+	uint8_t *key_cfg_buf;
+	uint64_t key_cfg_iova;
+	int ret;
+	struct dpni_rx_dist_cfg tc_cfg;
+	struct fsl_mc_io *dpni = priv->hw;
+	uint16_t entry_size;
+	uint16_t key_max_size;
+
+	ret = dpaa2_flow_clear_fs_table(priv, tc_id);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] clear failed", tc_id);
+		return ret;
+	}
+
+	tc_extract = &priv->extract.tc_key_extract[tc_id];
+	key_cfg_buf = priv->extract.tc_extract_param[tc_id];
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+
+	key_max_size = tc_extract->key_profile.key_max_size;
+	entry_size = dpaa2_flow_entry_size(key_max_size);
+
+	dpaa2_flow_fs_extracts_log(priv, tc_id);
+	ret = dpkg_prepare_key_cfg(&tc_extract->dpkg,
+			key_cfg_buf);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] prepare key failed", tc_id);
+		return ret;
+	}
+
+	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
+	tc_cfg.dist_size = dist_size;
+	tc_cfg.key_cfg_iova = key_cfg_iova;
+	if (rss_dist)
+		tc_cfg.enable = true;
+	else
+		tc_cfg.enable = false;
+	tc_cfg.tc = tc_id;
+	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
+			priv->token, &tc_cfg);
+	if (ret < 0) {
+		if (rss_dist) {
+			DPAA2_PMD_ERR("RSS TC[%d] set failed",
+				tc_id);
+		} else {
+			DPAA2_PMD_ERR("FS TC[%d] hash disable failed",
+				tc_id);
+		}
+
+		return ret;
+	}
+
+	if (rss_dist)
+		return 0;
+
+	tc_cfg.enable = true;
+	tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
+	ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
+			priv->token, &tc_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] FS configured failed", tc_id);
+		return ret;
+	}
+
+	ret = dpaa2_flow_rule_add_all(priv, DPAA2_FLOW_FS_TYPE,
+			entry_size, tc_id);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
+	int rss_dist)
+{
+	struct dpaa2_key_extract *qos_extract;
+	uint8_t *key_cfg_buf;
+	uint64_t key_cfg_iova;
+	int ret;
+	struct dpni_qos_tbl_cfg qos_cfg;
+	struct fsl_mc_io *dpni = priv->hw;
+	uint16_t entry_size;
+	uint16_t key_max_size;
+
+	if (!rss_dist && priv->num_rx_tc <= 1) {
+		/* QoS table is effective for FS with multiple TCs or RSS. */
+		return 0;
+	}
+
+	if (LIST_FIRST(&priv->flows)) {
+		ret = dpni_clear_qos_table(dpni, CMD_PRI_LOW,
+				priv->token);
+		if (ret < 0) {
+			DPAA2_PMD_ERR("QoS table clear failed");
+			return ret;
+		}
+	}
+
+	qos_extract = &priv->extract.qos_key_extract;
+	key_cfg_buf = priv->extract.qos_extract_param;
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+
+	key_max_size = qos_extract->key_profile.key_max_size;
+	entry_size = dpaa2_flow_entry_size(key_max_size);
+
+	dpaa2_flow_qos_extracts_log(priv);
+
+	ret = dpkg_prepare_key_cfg(&qos_extract->dpkg,
+			key_cfg_buf);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("QoS prepare extract failed");
+		return ret;
+	}
+	memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
+	qos_cfg.keep_entries = true;
+	qos_cfg.key_cfg_iova = key_cfg_iova;
+	if (rss_dist) {
+		qos_cfg.discard_on_miss = true;
+	} else {
+		qos_cfg.discard_on_miss = false;
+		qos_cfg.default_tc = 0;
+	}
+
+	ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
+			priv->token, &qos_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("QoS table set failed");
+		return ret;
+	}
+
+	ret = dpaa2_flow_rule_add_all(priv, DPAA2_FLOW_QOS_TYPE,
+			entry_size, 0);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item pattern[],
+	const struct rte_flow_action actions[],
+	struct rte_flow_error *error)
+{
 	const struct rte_flow_action_rss *rss_conf;
 	int is_keycfg_configured = 0, end_of_list = 0;
 	int ret = 0, i = 0, j = 0;
-	struct dpni_rx_dist_cfg tc_cfg;
-	struct dpni_qos_tbl_cfg qos_cfg;
-	struct dpni_fs_action_cfg action;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct dpaa2_queue *dest_q;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
-	size_t param;
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
-	uint16_t qos_index;
-	struct rte_eth_dev *dest_dev;
-	struct dpaa2_dev_priv *dest_priv;
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	uint16_t dist_size, key_size;
+	struct dpaa2_key_extract *qos_key_extract;
+	struct dpaa2_key_extract *tc_key_extract;
 
 	ret = dpaa2_flow_verify_attr(priv, attr);
 	if (ret)
@@ -3417,7 +2692,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("ETH flow configuration failed!");
+				DPAA2_PMD_ERR("ETH flow config failed!");
 				return ret;
 			}
 			break;
@@ -3426,17 +2701,25 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("vLan flow configuration failed!");
+				DPAA2_PMD_ERR("vLan flow config failed!");
 				return ret;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
+			ret = dpaa2_configure_flow_ipv4(flow,
+					dev, attr, &pattern[i], actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("IPV4 flow config failed!");
+				return ret;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
-			ret = dpaa2_configure_flow_generic_ip(flow,
+			ret = dpaa2_configure_flow_ipv6(flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("IP flow configuration failed!");
+				DPAA2_PMD_ERR("IPV6 flow config failed!");
 				return ret;
 			}
 			break;
@@ -3445,7 +2728,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("ICMP flow configuration failed!");
+				DPAA2_PMD_ERR("ICMP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3454,7 +2737,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("UDP flow configuration failed!");
+				DPAA2_PMD_ERR("UDP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3463,7 +2746,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("TCP flow configuration failed!");
+				DPAA2_PMD_ERR("TCP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3472,7 +2755,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("SCTP flow configuration failed!");
+				DPAA2_PMD_ERR("SCTP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3481,17 +2764,17 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("GRE flow configuration failed!");
+				DPAA2_PMD_ERR("GRE flow config failed!");
 				return ret;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow,
-						       dev, attr, &pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					dev, attr, &pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("RAW flow configuration failed!");
+				DPAA2_PMD_ERR("RAW flow config failed!");
 				return ret;
 			}
 			break;
@@ -3506,6 +2789,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 		i++;
 	}
 
+	qos_key_extract = &priv->extract.qos_key_extract;
+	key_size = qos_key_extract->key_profile.key_max_size;
+	flow->qos_rule.key_size = dpaa2_flow_entry_size(key_size);
+
+	tc_key_extract = &priv->extract.tc_key_extract[flow->tc_id];
+	key_size = tc_key_extract->key_profile.key_max_size;
+	flow->fs_rule.key_size = dpaa2_flow_entry_size(key_size);
+
 	/* Let's parse action on matching traffic */
 	end_of_list = 0;
 	while (!end_of_list) {
@@ -3513,150 +2804,33 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
 		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
 		case RTE_FLOW_ACTION_TYPE_PORT_ID:
-			memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
-			flow->action = actions[j].type;
-
-			if (actions[j].type == RTE_FLOW_ACTION_TYPE_QUEUE) {
-				dest_queue = (const struct rte_flow_action_queue *)
-								(actions[j].conf);
-				dest_q = priv->rx_vq[dest_queue->index];
-				action.flow_id = dest_q->flow_id;
-			} else {
-				dest_dev = dpaa2_flow_redirect_dev(priv,
-								   &actions[j]);
-				if (!dest_dev) {
-					DPAA2_PMD_ERR("Invalid destination device to redirect!");
-					return -1;
-				}
-
-				dest_priv = dest_dev->data->dev_private;
-				dest_q = dest_priv->tx_vq[0];
-				action.options =
-						DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
-				action.redirect_obj_token = dest_priv->token;
-				action.flow_id = dest_q->flow_id;
-			}
+			ret = dpaa2_configure_flow_fs_action(priv, flow,
+							     &actions[j]);
+			if (ret)
+				return ret;
 
 			/* Configure FS table first*/
-			if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
-				dpaa2_flow_fs_table_extracts_log(priv,
-							flow->tc_id, stdout);
-				if (dpkg_prepare_key_cfg(
-				&priv->extract.tc_key_extract[flow->tc_id].dpkg,
-				(uint8_t *)(size_t)priv->extract
-				.tc_extract_param[flow->tc_id]) < 0) {
-					DPAA2_PMD_ERR(
-					"Unable to prepare extract parameters");
-					return -1;
-				}
-
-				memset(&tc_cfg, 0,
-					sizeof(struct dpni_rx_dist_cfg));
-				tc_cfg.dist_size = priv->nb_rx_queues / priv->num_rx_tc;
-				tc_cfg.key_cfg_iova =
-					(uint64_t)priv->extract.tc_extract_param[flow->tc_id];
-				tc_cfg.tc = flow->tc_id;
-				tc_cfg.enable = false;
-				ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
-						priv->token, &tc_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"TC hash cannot be disabled.(%d)",
-						ret);
-					return -1;
-				}
-				tc_cfg.enable = true;
-				tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
-				ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
-							 priv->token, &tc_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"TC distribution cannot be configured.(%d)",
-						ret);
-					return -1;
-				}
+			dist_size = priv->nb_rx_queues / priv->num_rx_tc;
+			if (is_keycfg_configured & DPAA2_FLOW_FS_TYPE) {
+				ret = dpaa2_configure_fs_rss_table(priv,
+								   flow->tc_id,
+								   dist_size,
+								   false);
+				if (ret)
+					return ret;
 			}
 
 			/* Configure QoS table then.*/
-			if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
-				dpaa2_flow_qos_table_extracts_log(priv, stdout);
-				if (dpkg_prepare_key_cfg(
-					&priv->extract.qos_key_extract.dpkg,
-					(uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
-					DPAA2_PMD_ERR(
-						"Unable to prepare extract parameters");
-					return -1;
-				}
-
-				memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
-				qos_cfg.discard_on_miss = false;
-				qos_cfg.default_tc = 0;
-				qos_cfg.keep_entries = true;
-				qos_cfg.key_cfg_iova =
-					(size_t)priv->extract.qos_extract_param;
-				/* QoS table is effective for multiple TCs. */
-				if (priv->num_rx_tc > 1) {
-					ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
-						priv->token, &qos_cfg);
-					if (ret < 0) {
-						DPAA2_PMD_ERR(
-						"RSS QoS table can not be configured(%d)\n",
-							ret);
-						return -1;
-					}
-				}
-			}
-
-			flow->qos_real_key_size = priv->extract
-				.qos_key_extract.key_info.key_total_size;
-			if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) {
-				if (flow->ipaddr_rule.qos_ipdst_offset >=
-					flow->ipaddr_rule.qos_ipsrc_offset) {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipdst_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				} else {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipsrc_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				}
-			} else if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV6_ADDR) {
-				if (flow->ipaddr_rule.qos_ipdst_offset >=
-					flow->ipaddr_rule.qos_ipsrc_offset) {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipdst_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				} else {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipsrc_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				}
+			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
+				ret = dpaa2_configure_qos_table(priv, false);
+				if (ret)
+					return ret;
 			}
 
-			/* QoS entry added is only effective for multiple TCs.*/
 			if (priv->num_rx_tc > 1) {
-				qos_index = flow->tc_id * priv->fs_entries +
-					flow->tc_index;
-				if (qos_index >= priv->qos_entries) {
-					DPAA2_PMD_ERR("QoS table with %d entries full",
-						priv->qos_entries);
-					return -1;
-				}
-				flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
-
-				dpaa2_flow_qos_entry_log("Start add", flow,
-							qos_index, stdout);
-
-				ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
-						priv->token, &flow->qos_rule,
-						flow->tc_id, qos_index,
-						0, 0);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"Error in adding entry to QoS table(%d)", ret);
+				ret = dpaa2_flow_add_qos_rule(priv, flow);
+				if (ret)
 					return ret;
-				}
 			}
 
 			if (flow->tc_index >= priv->fs_entries) {
@@ -3665,140 +2839,47 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 				return -1;
 			}
 
-			flow->fs_real_key_size =
-				priv->extract.tc_key_extract[flow->tc_id]
-				.key_info.key_total_size;
-
-			if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV4_ADDR) {
-				if (flow->ipaddr_rule.fs_ipdst_offset >=
-					flow->ipaddr_rule.fs_ipsrc_offset) {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipdst_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				} else {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipsrc_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				}
-			} else if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV6_ADDR) {
-				if (flow->ipaddr_rule.fs_ipdst_offset >=
-					flow->ipaddr_rule.fs_ipsrc_offset) {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipdst_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				} else {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipsrc_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				}
-			}
-
-			flow->fs_rule.key_size = FIXED_ENTRY_SIZE;
-
-			dpaa2_flow_fs_entry_log("Start add", flow, stdout);
-
-			ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token,
-						flow->tc_id, flow->tc_index,
-						&flow->fs_rule, &action);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"Error in adding entry to FS table(%d)", ret);
+			ret = dpaa2_flow_add_fs_rule(priv, flow);
+			if (ret)
 				return ret;
-			}
-			memcpy(&flow->action_cfg, &action,
-				sizeof(struct dpni_fs_action_cfg));
+
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
-			rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf);
+			rss_conf = actions[j].conf;
+			flow->action_type = RTE_FLOW_ACTION_TYPE_RSS;
 
-			flow->action = RTE_FLOW_ACTION_TYPE_RSS;
 			ret = dpaa2_distset_to_dpkg_profile_cfg(rss_conf->types,
-					&priv->extract.tc_key_extract[flow->tc_id].dpkg);
+					&tc_key_extract->dpkg);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"unable to set flow distribution.please check queue config\n");
+				DPAA2_PMD_ERR("TC[%d] distset RSS failed",
+					      flow->tc_id);
 				return ret;
 			}
 
-			/* Allocate DMA'ble memory to write the rules */
-			param = (size_t)rte_malloc(NULL, 256, 64);
-			if (!param) {
-				DPAA2_PMD_ERR("Memory allocation failure\n");
-				return -1;
-			}
-
-			if (dpkg_prepare_key_cfg(
-				&priv->extract.tc_key_extract[flow->tc_id].dpkg,
-				(uint8_t *)param) < 0) {
-				DPAA2_PMD_ERR(
-				"Unable to prepare extract parameters");
-				rte_free((void *)param);
-				return -1;
-			}
-
-			memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
-			tc_cfg.dist_size = rss_conf->queue_num;
-			tc_cfg.key_cfg_iova = (size_t)param;
-			tc_cfg.enable = true;
-			tc_cfg.tc = flow->tc_id;
-			ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
-						 priv->token, &tc_cfg);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"RSS TC table cannot be configured: %d\n",
-					ret);
-				rte_free((void *)param);
-				return -1;
+			dist_size = rss_conf->queue_num;
+			if (is_keycfg_configured & DPAA2_FLOW_FS_TYPE) {
+				ret = dpaa2_configure_fs_rss_table(priv,
+								   flow->tc_id,
+								   dist_size,
+								   true);
+				if (ret)
+					return ret;
 			}
 
-			rte_free((void *)param);
-			if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
-				if (dpkg_prepare_key_cfg(
-					&priv->extract.qos_key_extract.dpkg,
-					(uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
-					DPAA2_PMD_ERR(
-					"Unable to prepare extract parameters");
-					return -1;
-				}
-				memset(&qos_cfg, 0,
-					sizeof(struct dpni_qos_tbl_cfg));
-				qos_cfg.discard_on_miss = true;
-				qos_cfg.keep_entries = true;
-				qos_cfg.key_cfg_iova =
-					(size_t)priv->extract.qos_extract_param;
-				ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
-							 priv->token, &qos_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-					"RSS QoS dist can't be configured-%d\n",
-					ret);
-					return -1;
-				}
+			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
+				ret = dpaa2_configure_qos_table(priv, true);
+				if (ret)
+					return ret;
 			}
 
-			/* Add Rule into QoS table */
-			qos_index = flow->tc_id * priv->fs_entries +
-				flow->tc_index;
-			if (qos_index >= priv->qos_entries) {
-				DPAA2_PMD_ERR("QoS table with %d entries full",
-					priv->qos_entries);
-				return -1;
-			}
+			ret = dpaa2_flow_add_qos_rule(priv, flow);
+			if (ret)
+				return ret;
 
-			flow->qos_real_key_size =
-			  priv->extract.qos_key_extract.key_info.key_total_size;
-			flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
-			ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-						&flow->qos_rule, flow->tc_id,
-						qos_index, 0, 0);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"Error in entry addition in QoS table(%d)",
-				ret);
+			ret = dpaa2_flow_add_fs_rule(priv, flow);
+			if (ret)
 				return ret;
-			}
+
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
@@ -3812,16 +2893,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 	}
 
 	if (!ret) {
-		if (is_keycfg_configured &
-			(DPAA2_QOS_TABLE_RECONFIGURE |
-			DPAA2_FS_TABLE_RECONFIGURE)) {
-			ret = dpaa2_flow_entry_update(priv, flow->tc_id);
-			if (ret) {
-				DPAA2_PMD_ERR("Flow entry update failed.");
-
-				return -1;
-			}
-		}
 		/* New rules are inserted. */
 		if (!curr) {
 			LIST_INSERT_HEAD(&priv->flows, flow, next);
@@ -3836,7 +2907,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 
 static inline int
 dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
-		      const struct rte_flow_attr *attr)
+	const struct rte_flow_attr *attr)
 {
 	int ret = 0;
 
@@ -3910,18 +2981,18 @@ dpaa2_dev_verify_actions(const struct rte_flow_action actions[])
 	}
 	for (j = 0; actions[j].type != RTE_FLOW_ACTION_TYPE_END; j++) {
 		if (actions[j].type != RTE_FLOW_ACTION_TYPE_DROP &&
-				!actions[j].conf)
+		    !actions[j].conf)
 			ret = -EINVAL;
 	}
 	return ret;
 }
 
-static
-int dpaa2_flow_validate(struct rte_eth_dev *dev,
-			const struct rte_flow_attr *flow_attr,
-			const struct rte_flow_item pattern[],
-			const struct rte_flow_action actions[],
-			struct rte_flow_error *error)
+static int
+dpaa2_flow_validate(struct rte_eth_dev *dev,
+	const struct rte_flow_attr *flow_attr,
+	const struct rte_flow_item pattern[],
+	const struct rte_flow_action actions[],
+	struct rte_flow_error *error)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpni_attr dpni_attr;
@@ -3975,127 +3046,128 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static
-struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
-				   const struct rte_flow_attr *attr,
-				   const struct rte_flow_item pattern[],
-				   const struct rte_flow_action actions[],
-				   struct rte_flow_error *error)
+static struct rte_flow *
+dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+		  const struct rte_flow_item pattern[],
+		  const struct rte_flow_action actions[],
+		  struct rte_flow_error *error)
 {
-	struct rte_flow *flow = NULL;
-	size_t key_iova = 0, mask_iova = 0;
+	struct dpaa2_dev_flow *flow = NULL;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int ret;
 
 	dpaa2_flow_control_log =
 		getenv("DPAA2_FLOW_CONTROL_LOG");
 
 	if (getenv("DPAA2_FLOW_CONTROL_MISS_FLOW")) {
-		struct dpaa2_dev_priv *priv = dev->data->dev_private;
-
 		dpaa2_flow_miss_flow_id =
 			(uint16_t)atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
 		if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
-			DPAA2_PMD_ERR(
-				"The missed flow ID %d exceeds the max flow ID %d",
-				dpaa2_flow_miss_flow_id,
-				priv->dist_queues - 1);
+			DPAA2_PMD_ERR("Missed flow ID %d >= dist size(%d)",
+				      dpaa2_flow_miss_flow_id,
+				      priv->dist_queues);
 			return NULL;
 		}
 	}
 
-	flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
+	flow = rte_zmalloc(NULL, sizeof(struct dpaa2_dev_flow),
+			   RTE_CACHE_LINE_SIZE);
 	if (!flow) {
 		DPAA2_PMD_ERR("Failure to allocate memory for flow");
 		goto mem_failure;
 	}
-	/* Allocate DMA'ble memory to write the rules */
-	key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!key_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+
+	/* Allocate DMA'ble memory to write the qos rules */
+	flow->qos_key_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->qos_key_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!mask_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+	flow->qos_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->qos_key_addr);
+
+	flow->qos_mask_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->qos_mask_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
+	flow->qos_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->qos_mask_addr);
 
-	flow->qos_rule.key_iova = key_iova;
-	flow->qos_rule.mask_iova = mask_iova;
-
-	/* Allocate DMA'ble memory to write the rules */
-	key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!key_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+	/* Allocate DMA'ble memory to write the FS rules */
+	flow->fs_key_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->fs_key_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!mask_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+	flow->fs_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->fs_key_addr);
+
+	flow->fs_mask_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->fs_mask_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
+	flow->fs_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->fs_mask_addr);
 
-	flow->fs_rule.key_iova = key_iova;
-	flow->fs_rule.mask_iova = mask_iova;
-
-	flow->ipaddr_rule.ipaddr_type = FLOW_NONE_IPADDR;
-	flow->ipaddr_rule.qos_ipsrc_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.qos_ipdst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.fs_ipsrc_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.fs_ipdst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
+	priv->curr = flow;
 
-	ret = dpaa2_generic_flow_set(flow, dev, attr, pattern,
-			actions, error);
+	ret = dpaa2_generic_flow_set(flow, dev, attr, pattern, actions, error);
 	if (ret < 0) {
 		if (error && error->type > RTE_FLOW_ERROR_TYPE_ACTION)
 			rte_flow_error_set(error, EPERM,
-					RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					attr, "unknown");
-		DPAA2_PMD_ERR("Failure to create flow, return code (%d)", ret);
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   attr, "unknown");
+		DPAA2_PMD_ERR("Create flow failed (%d)", ret);
 		goto creation_error;
 	}
 
-	return flow;
+	priv->curr = NULL;
+	return (struct rte_flow *)flow;
+
 mem_failure:
-	rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-			   NULL, "memory alloc");
+	rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "memory alloc");
+
 creation_error:
-	rte_free((void *)flow);
-	rte_free((void *)key_iova);
-	rte_free((void *)mask_iova);
+	if (flow) {
+		if (flow->qos_key_addr)
+			rte_free(flow->qos_key_addr);
+		if (flow->qos_mask_addr)
+			rte_free(flow->qos_mask_addr);
+		if (flow->fs_key_addr)
+			rte_free(flow->fs_key_addr);
+		if (flow->fs_mask_addr)
+			rte_free(flow->fs_mask_addr);
+		rte_free(flow);
+	}
+	priv->curr = NULL;
 
 	return NULL;
 }
 
-static
-int dpaa2_flow_destroy(struct rte_eth_dev *dev,
-		       struct rte_flow *flow,
-		       struct rte_flow_error *error)
+static int
+dpaa2_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *_flow,
+		   struct rte_flow_error *error)
 {
 	int ret = 0;
+	struct dpaa2_dev_flow *flow;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
+	struct fsl_mc_io *dpni = priv->hw;
 
-	switch (flow->action) {
+	flow = (struct dpaa2_dev_flow *)_flow;
+
+	switch (flow->action_type) {
 	case RTE_FLOW_ACTION_TYPE_QUEUE:
 	case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
 	case RTE_FLOW_ACTION_TYPE_PORT_ID:
 		if (priv->num_rx_tc > 1) {
 			/* Remove entry from QoS table first */
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-					&flow->qos_rule);
+			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+						    priv->token,
+						    &flow->qos_rule);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"Error in removing entry from QoS table(%d)", ret);
+				DPAA2_PMD_ERR("Remove FS QoS entry failed");
+				dpaa2_flow_qos_entry_log("Delete failed", flow,
+							 -1);
 				goto error;
 			}
 		}
@@ -4104,34 +3176,37 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
 		ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW, priv->token,
 					   flow->tc_id, &flow->fs_rule);
 		if (ret < 0) {
-			DPAA2_PMD_ERR(
-				"Error in removing entry from FS table(%d)", ret);
+			DPAA2_PMD_ERR("Remove entry from FS[%d] failed",
+				      flow->tc_id);
 			goto error;
 		}
 		break;
 	case RTE_FLOW_ACTION_TYPE_RSS:
 		if (priv->num_rx_tc > 1) {
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-					&flow->qos_rule);
+			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+						    priv->token,
+						    &flow->qos_rule);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"Error in entry addition in QoS table(%d)", ret);
+				DPAA2_PMD_ERR("Remove RSS QoS entry failed");
 				goto error;
 			}
 		}
 		break;
 	default:
-		DPAA2_PMD_ERR(
-		"Action type (%d) is not supported", flow->action);
+		DPAA2_PMD_ERR("Action(%d) not supported", flow->action_type);
 		ret = -ENOTSUP;
 		break;
 	}
 
 	LIST_REMOVE(flow, next);
-	rte_free((void *)(size_t)flow->qos_rule.key_iova);
-	rte_free((void *)(size_t)flow->qos_rule.mask_iova);
-	rte_free((void *)(size_t)flow->fs_rule.key_iova);
-	rte_free((void *)(size_t)flow->fs_rule.mask_iova);
+	if (flow->qos_key_addr)
+		rte_free(flow->qos_key_addr);
+	if (flow->qos_mask_addr)
+		rte_free(flow->qos_mask_addr);
+	if (flow->fs_key_addr)
+		rte_free(flow->fs_key_addr);
+	if (flow->fs_mask_addr)
+		rte_free(flow->fs_mask_addr);
 	/* Now free the flow */
 	rte_free(flow);
 
@@ -4156,12 +3231,12 @@ dpaa2_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct rte_flow *flow = LIST_FIRST(&priv->flows);
+	struct dpaa2_dev_flow *flow = LIST_FIRST(&priv->flows);
 
 	while (flow) {
-		struct rte_flow *next = LIST_NEXT(flow, next);
+		struct dpaa2_dev_flow *next = LIST_NEXT(flow, next);
 
-		dpaa2_flow_destroy(dev, flow, error);
+		dpaa2_flow_destroy(dev, (struct rte_flow *)flow, error);
 		flow = next;
 	}
 	return 0;
@@ -4169,10 +3244,10 @@ dpaa2_flow_flush(struct rte_eth_dev *dev,
 
 static int
 dpaa2_flow_query(struct rte_eth_dev *dev __rte_unused,
-		struct rte_flow *flow __rte_unused,
-		const struct rte_flow_action *actions __rte_unused,
-		void *data __rte_unused,
-		struct rte_flow_error *error __rte_unused)
+	struct rte_flow *_flow __rte_unused,
+	const struct rte_flow_action *actions __rte_unused,
+	void *data __rte_unused,
+	struct rte_flow_error *error __rte_unused)
 {
 	return 0;
 }
@@ -4189,11 +3264,11 @@ dpaa2_flow_query(struct rte_eth_dev *dev __rte_unused,
 void
 dpaa2_flow_clean(struct rte_eth_dev *dev)
 {
-	struct rte_flow *flow;
+	struct dpaa2_dev_flow *flow;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	while ((flow = LIST_FIRST(&priv->flows)))
-		dpaa2_flow_destroy(dev, flow, NULL);
+		dpaa2_flow_destroy(dev, (struct rte_flow *)flow, NULL);
 }
 
 const struct rte_flow_ops dpaa2_flow_ops = {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 25/43] net/dpaa2: dump Rx parser result
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (23 preceding siblings ...)
  2024-09-18  7:50   ` [v2 24/43] net/dpaa2: flow API refactor vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 26/43] net/dpaa2: enhancement of raw flow extract vanshika.shukla
                     ` (18 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Setting "export DPAA2_PRINT_RX_PARSER_RESULT=1" in the environment
dumps the Rx parser result and the frame attribute flags generated
by the hardware parser and the soft parser.
The parser results are converted to big endian as described in the RM.
The areas set by the soft parser are dumped as well.
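
A minimal usage sketch (the testpmd invocation is illustrative, not
part of this patch):

  export DPAA2_PRINT_RX_PARSER_RESULT=1
  dpdk-testpmd -- -i

With the variable set, the PMD prints the Frame Annotation Flags (FAF)
and parse-result offsets of received frames to stdout.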

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c     |   5 +
 drivers/net/dpaa2/dpaa2_ethdev.h     |  90 ++++++++++
 drivers/net/dpaa2/dpaa2_parse_dump.h | 248 +++++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_rxtx.c       |   7 +
 4 files changed, 350 insertions(+)
 create mode 100644 drivers/net/dpaa2/dpaa2_parse_dump.h

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 533effd72b..000d7da85c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -75,6 +75,8 @@ int dpaa2_timestamp_dynfield_offset = -1;
 /* Enable error queue */
 bool dpaa2_enable_err_queue;
 
+bool dpaa2_print_parser_result;
+
 #define MAX_NB_RX_DESC		11264
 int total_nb_rx_desc;
 
@@ -2727,6 +2729,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		DPAA2_PMD_INFO("Enable error queue");
 	}
 
+	if (getenv("DPAA2_PRINT_RX_PARSER_RESULT"))
+		dpaa2_print_parser_result = 1;
+
 	/* Allocate memory for hardware structure for queues */
 	ret = dpaa2_alloc_rx_tx_queues(eth_dev);
 	if (ret) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index ea1c1b5117..c864859b3f 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -19,6 +19,8 @@
 #include <mc/fsl_dpni.h>
 #include <mc/fsl_mc_sys.h>
 
+#include "base/dpaa2_hw_dpni_annot.h"
+
 #define DPAA2_MIN_RX_BUF_SIZE 512
 #define DPAA2_MAX_RX_PKT_LEN  10240 /*WRIOP support*/
 #define NET_DPAA2_PMD_DRIVER_NAME net_dpaa2
@@ -152,6 +154,88 @@ extern const struct rte_tm_ops dpaa2_tm_ops;
 
 extern bool dpaa2_enable_err_queue;
 
+extern bool dpaa2_print_parser_result;
+
+#define DPAA2_FAPR_SIZE \
+	(sizeof(struct dpaa2_annot_hdr) - \
+	offsetof(struct dpaa2_annot_hdr, word3))
+
+#define DPAA2_PR_NXTHDR_OFFSET 0
+
+#define DPAA2_FAFE_PSR_OFFSET 2
+#define DPAA2_FAFE_PSR_SIZE 2
+
+#define DPAA2_FAF_PSR_OFFSET 4
+#define DPAA2_FAF_PSR_SIZE 12
+
+#define DPAA2_FAF_TOTAL_SIZE \
+	(DPAA2_FAFE_PSR_SIZE + DPAA2_FAF_PSR_SIZE)
+
+/* Just the most popular Frame attribute flags (FAF) here. */
+enum dpaa2_rx_faf_offset {
+	/* Set by SP start*/
+	FAFE_VXLAN_IN_VLAN_FRAM = 0,
+	FAFE_VXLAN_IN_IPV4_FRAM = 1,
+	FAFE_VXLAN_IN_IPV6_FRAM = 2,
+	FAFE_VXLAN_IN_UDP_FRAM = 3,
+	FAFE_VXLAN_IN_TCP_FRAM = 4,
+	/* Set by SP end*/
+
+	FAF_GTP_PRIMED_FRAM = 1 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_PTP_FRAM = 3 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_VXLAN_FRAM = 4 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ETH_FRAM = 10 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_LLC_SNAP_FRAM = 18 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_VLAN_FRAM = 21 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_PPPOE_PPP_FRAM = 25 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_MPLS_FRAM = 27 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ARP_FRAM = 30 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPV4_FRAM = 34 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPV6_FRAM = 42 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IP_FRAM = 48 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ICMP_FRAM = 57 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IGMP_FRAM = 58 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_GRE_FRAM = 65 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_UDP_FRAM = 70 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_TCP_FRAM = 72 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_FRAM = 77 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_ESP_FRAM = 78 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_AH_FRAM = 79 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_SCTP_FRAM = 81 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_DCCP_FRAM = 83 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_GTP_FRAM = 87 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ESP_FRAM = 89 + DPAA2_FAFE_PSR_SIZE * 8,
+};
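+
+/*
+ * Bit layout illustration: a FAF bit index i above maps to byte
+ * (i / 8 + DPAA2_FAFE_PSR_OFFSET) of the parse result and is tested
+ * MSB first, i.e. with mask (1 << (7 - i % 8)); e.g. FAF_ETH_FRAM
+ * (10 + 16 = 26) sits in byte 5 under mask 0x20. This mirrors the
+ * walk done by dpaa2_print_faf() in dpaa2_parse_dump.h.
+ */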
+
+#define DPAA2_PR_ETH_OFF_OFFSET 19
+#define DPAA2_PR_TCI_OFF_OFFSET 21
+#define DPAA2_PR_LAST_ETYPE_OFFSET 23
+#define DPAA2_PR_L3_OFF_OFFSET 27
+#define DPAA2_PR_L4_OFF_OFFSET 30
+#define DPAA2_PR_L5_OFF_OFFSET 31
+#define DPAA2_PR_NXTHDR_OFF_OFFSET 34
+
+/* Set by SP for vxlan distribution start*/
+#define DPAA2_VXLAN_IN_TCI_OFFSET 16
+
+#define DPAA2_VXLAN_IN_DADDR0_OFFSET 20
+#define DPAA2_VXLAN_IN_DADDR1_OFFSET 22
+#define DPAA2_VXLAN_IN_DADDR2_OFFSET 24
+#define DPAA2_VXLAN_IN_DADDR3_OFFSET 25
+#define DPAA2_VXLAN_IN_DADDR4_OFFSET 26
+#define DPAA2_VXLAN_IN_DADDR5_OFFSET 28
+
+#define DPAA2_VXLAN_IN_SADDR0_OFFSET 29
+#define DPAA2_VXLAN_IN_SADDR1_OFFSET 32
+#define DPAA2_VXLAN_IN_SADDR2_OFFSET 33
+#define DPAA2_VXLAN_IN_SADDR3_OFFSET 35
+#define DPAA2_VXLAN_IN_SADDR4_OFFSET 41
+#define DPAA2_VXLAN_IN_SADDR5_OFFSET 42
+
+#define DPAA2_VXLAN_VNI_OFFSET 43
+#define DPAA2_VXLAN_IN_TYPE_OFFSET 46
+/* Set by SP for vxlan distribution end*/
+
 struct ipv4_sd_addr_extract_rule {
 	uint32_t ipv4_src;
 	uint32_t ipv4_dst;
@@ -197,7 +281,13 @@ enum ip_addr_extract_type {
 	IP_DST_SRC_EXTRACT
 };
 
+enum key_prot_type {
+	DPAA2_NET_PROT_KEY,
+	DPAA2_FAF_KEY
+};
+
 struct key_prot_field {
+	enum key_prot_type type;
 	enum net_prot prot;
 	uint32_t key_field;
 };
diff --git a/drivers/net/dpaa2/dpaa2_parse_dump.h b/drivers/net/dpaa2/dpaa2_parse_dump.h
new file mode 100644
index 0000000000..f1cdc003de
--- /dev/null
+++ b/drivers/net/dpaa2/dpaa2_parse_dump.h
@@ -0,0 +1,248 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ *   Copyright 2022 NXP
+ *
+ */
+
+#ifndef _DPAA2_PARSE_DUMP_H
+#define _DPAA2_PARSE_DUMP_H
+
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_pmd_dpaa2.h>
+
+#include <dpaa2_hw_pvt.h>
+#include "dpaa2_tm.h"
+
+#include <mc/fsl_dpni.h>
+#include <mc/fsl_mc_sys.h>
+
+#include "base/dpaa2_hw_dpni_annot.h"
+
+#define DPAA2_PR_PRINT printf
+
+struct dpaa2_faf_bit_info {
+	const char *name;
+	int position;
+};
+
+struct dpaa2_fapr_field_info {
+	const char *name;
+	uint16_t value;
+};
+
+struct dpaa2_fapr_array {
+	union {
+		uint64_t pr_64[DPAA2_FAPR_SIZE / 8];
+		uint8_t pr[DPAA2_FAPR_SIZE];
+	};
+};
+
+#define NEXT_HEADER_NAME "Next Header"
+#define ETH_OFF_NAME "ETH Offset"
+#define VLAN_TCI_OFF_NAME "VLAN TCI Offset"
+#define LAST_ENTRY_OFF_NAME "Last EType Offset"
+#define L3_OFF_NAME "L3 Offset"
+#define L4_OFF_NAME "L4 Offset"
+#define L5_OFF_NAME "L5 Offset"
+#define NEXT_HEADER_OFF_NAME "Next Header Offset"
+
+static const
+struct dpaa2_fapr_field_info support_dump_fields[] = {
+	{
+		.name = NEXT_HEADER_NAME,
+	},
+	{
+		.name = ETH_OFF_NAME,
+	},
+	{
+		.name = VLAN_TCI_OFF_NAME,
+	},
+	{
+		.name = LAST_ENTRY_OFF_NAME,
+	},
+	{
+		.name = L3_OFF_NAME,
+	},
+	{
+		.name = L4_OFF_NAME,
+	},
+	{
+		.name = L5_OFF_NAME,
+	},
+	{
+		.name = NEXT_HEADER_OFF_NAME,
+	}
+};
+
+static inline void
+dpaa2_print_faf(struct dpaa2_fapr_array *fapr)
+{
+	const int faf_bit_len = DPAA2_FAF_TOTAL_SIZE * 8;
+	struct dpaa2_faf_bit_info faf_bits[faf_bit_len];
+	int i, byte_pos, bit_pos, vxlan = 0, vxlan_vlan = 0;
+	struct rte_ether_hdr vxlan_in_eth;
+	uint16_t vxlan_vlan_tci;
+
+	for (i = 0; i < faf_bit_len; i++) {
+		faf_bits[i].position = i;
+		if (i == FAFE_VXLAN_IN_VLAN_FRAM)
+			faf_bits[i].name = "VXLAN VLAN Present";
+		else if (i == FAFE_VXLAN_IN_IPV4_FRAM)
+			faf_bits[i].name = "VXLAN IPV4 Present";
+		else if (i == FAFE_VXLAN_IN_IPV6_FRAM)
+			faf_bits[i].name = "VXLAN IPV6 Present";
+		else if (i == FAFE_VXLAN_IN_UDP_FRAM)
+			faf_bits[i].name = "VXLAN UDP Present";
+		else if (i == FAFE_VXLAN_IN_TCP_FRAM)
+			faf_bits[i].name = "VXLAN TCP Present";
+		else if (i == FAF_VXLAN_FRAM)
+			faf_bits[i].name = "VXLAN Present";
+		else if (i == FAF_ETH_FRAM)
+			faf_bits[i].name = "Ethernet MAC Present";
+		else if (i == FAF_VLAN_FRAM)
+			faf_bits[i].name = "VLAN 1 Present";
+		else if (i == FAF_IPV4_FRAM)
+			faf_bits[i].name = "IPv4 1 Present";
+		else if (i == FAF_IPV6_FRAM)
+			faf_bits[i].name = "IPv6 1 Present";
+		else if (i == FAF_UDP_FRAM)
+			faf_bits[i].name = "UDP Present";
+		else if (i == FAF_TCP_FRAM)
+			faf_bits[i].name = "TCP Present";
+		else
+			faf_bits[i].name = "Check RM for this unusual frame";
+	}
+
+	DPAA2_PR_PRINT("Frame Annotation Flags:\r\n");
+	for (i = 0; i < faf_bit_len; i++) {
+		byte_pos = i / 8 + DPAA2_FAFE_PSR_OFFSET;
+		bit_pos = i % 8;
+		if (fapr->pr[byte_pos] & (1 << (7 - bit_pos))) {
+			DPAA2_PR_PRINT("FAF bit %d : %s\r\n",
+				faf_bits[i].position, faf_bits[i].name);
+			if (i == FAF_VXLAN_FRAM)
+				vxlan = 1;
+		}
+	}
+
+	if (vxlan) {
+		vxlan_in_eth.dst_addr.addr_bytes[0] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR0_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[1] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR1_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[2] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR2_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[3] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR3_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[4] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR4_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[5] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR5_OFFSET];
+
+		vxlan_in_eth.src_addr.addr_bytes[0] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR0_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[1] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR1_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[2] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR2_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[3] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR3_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[4] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR4_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[5] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR5_OFFSET];
+
+		vxlan_in_eth.ether_type =
+			fapr->pr[DPAA2_VXLAN_IN_TYPE_OFFSET];
+		vxlan_in_eth.ether_type =
+			vxlan_in_eth.ether_type << 8;
+		vxlan_in_eth.ether_type |=
+			fapr->pr[DPAA2_VXLAN_IN_TYPE_OFFSET + 1];
+
+		if (vxlan_in_eth.ether_type == RTE_ETHER_TYPE_VLAN)
+			vxlan_vlan = 1;
+		DPAA2_PR_PRINT("VXLAN inner eth:\r\n");
+		DPAA2_PR_PRINT("dst addr: ");
+		for (i = 0; i < RTE_ETHER_ADDR_LEN; i++) {
+			if (i != 0)
+				DPAA2_PR_PRINT(":");
+			DPAA2_PR_PRINT("%02x",
+				vxlan_in_eth.dst_addr.addr_bytes[i]);
+		}
+		DPAA2_PR_PRINT("\r\n");
+		DPAA2_PR_PRINT("src addr: ");
+		for (i = 0; i < RTE_ETHER_ADDR_LEN; i++) {
+			if (i != 0)
+				DPAA2_PR_PRINT(":");
+			DPAA2_PR_PRINT("%02x",
+				vxlan_in_eth.src_addr.addr_bytes[i]);
+		}
+		DPAA2_PR_PRINT("\r\n");
+		DPAA2_PR_PRINT("type: 0x%04x\r\n",
+			vxlan_in_eth.ether_type);
+		if (vxlan_vlan) {
+			vxlan_vlan_tci = fapr->pr[DPAA2_VXLAN_IN_TCI_OFFSET];
+			vxlan_vlan_tci = vxlan_vlan_tci << 8;
+			vxlan_vlan_tci |=
+				fapr->pr[DPAA2_VXLAN_IN_TCI_OFFSET + 1];
+
+			DPAA2_PR_PRINT("vlan tci: 0x%04x\r\n",
+				vxlan_vlan_tci);
+		}
+	}
+}
+
+static inline void
+dpaa2_print_parse_result(struct dpaa2_annot_hdr *annotation)
+{
+	struct dpaa2_fapr_array fapr;
+	struct dpaa2_fapr_field_info
+		fapr_fields[sizeof(support_dump_fields) /
+		sizeof(struct dpaa2_fapr_field_info)];
+	uint64_t len, i;
+
+	memcpy(&fapr, &annotation->word3, DPAA2_FAPR_SIZE);
+	for (i = 0; i < (DPAA2_FAPR_SIZE / 8); i++)
+		fapr.pr_64[i] = rte_cpu_to_be_64(fapr.pr_64[i]);
+
+	memcpy(fapr_fields, support_dump_fields,
+		sizeof(support_dump_fields));
+
+	for (i = 0;
+		i < sizeof(fapr_fields) /
+		sizeof(struct dpaa2_fapr_field_info);
+		i++) {
+		if (!strcmp(fapr_fields[i].name, NEXT_HEADER_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_NXTHDR_OFFSET];
+			fapr_fields[i].value = fapr_fields[i].value << 8;
+			fapr_fields[i].value |=
+				fapr.pr[DPAA2_PR_NXTHDR_OFFSET + 1];
+		} else if (!strcmp(fapr_fields[i].name, ETH_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_ETH_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, VLAN_TCI_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_TCI_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, LAST_ENTRY_OFF_NAME)) {
+			fapr_fields[i].value =
+				fapr.pr[DPAA2_PR_LAST_ETYPE_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L3_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L3_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L4_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L4_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L5_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L5_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, NEXT_HEADER_OFF_NAME)) {
+			fapr_fields[i].value =
+				fapr.pr[DPAA2_PR_NXTHDR_OFF_OFFSET];
+		}
+	}
+
+	len = sizeof(fapr_fields) / sizeof(struct dpaa2_fapr_field_info);
+	DPAA2_PR_PRINT("Parse Result:\r\n");
+	for (i = 0; i < len; i++) {
+		DPAA2_PR_PRINT("%21s : 0x%02x\r\n",
+			fapr_fields[i].name, fapr_fields[i].value);
+	}
+	dpaa2_print_faf(&fapr);
+}
+
+#endif
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 23f7c4132d..4bb785aa49 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -25,6 +25,7 @@
 #include "dpaa2_pmd_logs.h"
 #include "dpaa2_ethdev.h"
 #include "base/dpaa2_hw_dpni_annot.h"
+#include "dpaa2_parse_dump.h"
 
 static inline uint32_t __rte_hot
 dpaa2_dev_rx_parse_slow(struct rte_mbuf *mbuf,
@@ -57,6 +58,9 @@ dpaa2_dev_rx_parse_new(struct rte_mbuf *m, const struct qbman_fd *fd,
 	struct dpaa2_annot_hdr *annotation =
 			(struct dpaa2_annot_hdr *)hw_annot_addr;
 
+	if (unlikely(dpaa2_print_parser_result))
+		dpaa2_print_parse_result(annotation);
+
 	m->packet_type = RTE_PTYPE_UNKNOWN;
 	switch (frc) {
 	case DPAA2_PKT_TYPE_ETHER:
@@ -252,6 +256,9 @@ dpaa2_dev_rx_parse(struct rte_mbuf *mbuf, void *hw_annot_addr)
 	else
 		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
+	if (unlikely(dpaa2_print_parser_result))
+		dpaa2_print_parse_result(annotation);
+
 	if (dpaa2_enable_ts[mbuf->port]) {
 		*dpaa2_timestamp_dynfield(mbuf) = annotation->word2;
 		mbuf->ol_flags |= dpaa2_timestamp_rx_dynflag;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 26/43] net/dpaa2: enhancement of raw flow extract
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (24 preceding siblings ...)
  2024-09-18  7:50   ` [v2 25/43] net/dpaa2: dump Rx parser result vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 27/43] net/dpaa2: frame attribute flags parser vanshika.shukla
                     ` (17 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support combining RAW extracts with header extracts.
A RAW extract can now start from any absolute offset.

TBD: relative offset support.
To support an offset relative to a previous L3 protocol item,
the extracts would need to be expanded to identify whether the
frame is VLAN or non-VLAN.

To support an offset relative to a previous L4 protocol item,
the extracts would need to be expanded to identify whether the
frame is VLAN/IPv4, VLAN/IPv6, non-VLAN/IPv4 or non-VLAN/IPv6.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
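Note: a hypothetical testpmd usage sketch (the offset, pattern and
queue values are illustrative only), matching bytes at a non-zero
absolute offset, which this patch now allows:

flow create 0 ingress pattern raw relative is 0 search is 0 offset is 16 limit is 0 pattern is abcd / end actions queue index 2 / end
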
 drivers/net/dpaa2/dpaa2_ethdev.h |  10 +
 drivers/net/dpaa2/dpaa2_flow.c   | 385 ++++++++++++++++++++++++++-----
 2 files changed, 340 insertions(+), 55 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index c864859b3f..8f548467a4 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -292,6 +292,11 @@ struct key_prot_field {
 	uint32_t key_field;
 };
 
+struct dpaa2_raw_region {
+	uint8_t raw_start;
+	uint8_t raw_size;
+};
+
 struct dpaa2_key_profile {
 	uint8_t num;
 	uint8_t key_offset[DPKG_MAX_NUM_OF_EXTRACTS];
@@ -301,6 +306,10 @@ struct dpaa2_key_profile {
 	uint8_t ip_addr_extract_pos;
 	uint8_t ip_addr_extract_off;
 
+	uint8_t raw_extract_pos;
+	uint8_t raw_extract_off;
+	uint8_t raw_extract_num;
+
 	uint8_t l4_src_port_present;
 	uint8_t l4_src_port_pos;
 	uint8_t l4_src_port_offset;
@@ -309,6 +318,7 @@ struct dpaa2_key_profile {
 	uint8_t l4_dst_port_offset;
 	struct key_prot_field prot_field[DPKG_MAX_NUM_OF_EXTRACTS];
 	uint16_t key_max_size;
+	struct dpaa2_raw_region raw_region;
 };
 
 struct dpaa2_key_extract {
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 0522fdb026..fe3c9f6d7d 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -772,42 +772,272 @@ dpaa2_flow_extract_add_hdr(enum net_prot prot,
 }
 
 static int
-dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
-	int size)
+dpaa2_flow_extract_new_raw(struct dpaa2_dev_priv *priv,
+	int offset, int size,
+	enum dpaa2_flow_dist_type dist_type, int tc_id)
 {
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_profile *key_info = &key_extract->key_profile;
-	int last_extract_size, index;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpaa2_key_profile *key_profile;
+	int last_extract_size, index, pos, item_size;
+	uint8_t num_extracts;
+	uint32_t field;
 
-	if (dpkg->num_extracts != 0 && dpkg->extracts[0].type !=
-	    DPKG_EXTRACT_FROM_DATA) {
-		DPAA2_PMD_WARN("RAW extract cannot be combined with others");
-		return -1;
-	}
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	key_profile = &key_extract->key_profile;
+
+	key_profile->raw_region.raw_start = 0;
+	key_profile->raw_region.raw_size = 0;
 
 	last_extract_size = (size % DPAA2_FLOW_MAX_KEY_SIZE);
-	dpkg->num_extracts = (size / DPAA2_FLOW_MAX_KEY_SIZE);
+	num_extracts = (size / DPAA2_FLOW_MAX_KEY_SIZE);
 	if (last_extract_size)
-		dpkg->num_extracts++;
+		num_extracts++;
 	else
 		last_extract_size = DPAA2_FLOW_MAX_KEY_SIZE;
 
-	for (index = 0; index < dpkg->num_extracts; index++) {
-		dpkg->extracts[index].type = DPKG_EXTRACT_FROM_DATA;
-		if (index == dpkg->num_extracts - 1)
-			dpkg->extracts[index].extract.from_data.size =
-				last_extract_size;
+	for (index = 0; index < num_extracts; index++) {
+		if (index == num_extracts - 1)
+			item_size = last_extract_size;
 		else
-			dpkg->extracts[index].extract.from_data.size =
-				DPAA2_FLOW_MAX_KEY_SIZE;
-		dpkg->extracts[index].extract.from_data.offset =
-			DPAA2_FLOW_MAX_KEY_SIZE * index;
+			item_size = DPAA2_FLOW_MAX_KEY_SIZE;
+		field = offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+		field |= item_size;
+
+		pos = dpaa2_flow_key_profile_advance(NET_PROT_PAYLOAD,
+				field, item_size, priv, dist_type,
+				tc_id, NULL);
+		if (pos < 0)
+			return pos;
+
+		dpkg->extracts[pos].type = DPKG_EXTRACT_FROM_DATA;
+		dpkg->extracts[pos].extract.from_data.size = item_size;
+		dpkg->extracts[pos].extract.from_data.offset = offset;
+
+		if (index == 0) {
+			key_profile->raw_extract_pos = pos;
+			key_profile->raw_extract_off =
+				key_profile->key_offset[pos];
+			key_profile->raw_region.raw_start = offset;
+		}
+		key_profile->raw_extract_num++;
+		key_profile->raw_region.raw_size +=
+			key_profile->key_size[pos];
+
+		offset += item_size;
+		dpkg->num_extracts++;
 	}
 
-	key_info->key_max_size = size;
 	return 0;
 }
 
+static int
+dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
+	int offset, int size, enum dpaa2_flow_dist_type dist_type,
+	int tc_id, int *recfg)
+{
+	struct dpaa2_key_profile *key_profile;
+	struct dpaa2_raw_region *raw_region;
+	int end = offset + size, ret = 0, extract_extended, sz_extend;
+	int start_cmp, end_cmp, new_size, index, pos, end_pos;
+	int last_extract_size, item_size, num_extracts, bk_num = 0;
+	struct dpkg_extract extract_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint8_t key_offset_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint8_t key_size_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	struct key_prot_field prot_field_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	struct dpaa2_raw_region raw_hole;
+	struct dpkg_profile_cfg *dpkg;
+	enum net_prot prot;
+	uint32_t field;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+		dpkg = &priv->extract.qos_key_extract.dpkg;
+	} else {
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+		dpkg = &priv->extract.tc_key_extract[tc_id].dpkg;
+	}
+
+	raw_region = &key_profile->raw_region;
+	if (!raw_region->raw_size) {
+		/* New RAW region*/
+		ret = dpaa2_flow_extract_new_raw(priv, offset, size,
+			dist_type, tc_id);
+		if (!ret && recfg)
+			(*recfg) |= dist_type;
+
+		return ret;
+	}
+	start_cmp = raw_region->raw_start;
+	end_cmp = raw_region->raw_start + raw_region->raw_size;
+
+	if (offset >= start_cmp && end <= end_cmp)
+		return 0;
+
+	sz_extend = 0;
+	new_size = raw_region->raw_size;
+	if (offset < start_cmp) {
+		sz_extend += start_cmp - offset;
+		new_size += (start_cmp - offset);
+	}
+	if (end > end_cmp) {
+		sz_extend += end - end_cmp;
+		new_size += (end - end_cmp);
+	}
+
+	last_extract_size = (new_size % DPAA2_FLOW_MAX_KEY_SIZE);
+	num_extracts = (new_size / DPAA2_FLOW_MAX_KEY_SIZE);
+	if (last_extract_size)
+		num_extracts++;
+	else
+		last_extract_size = DPAA2_FLOW_MAX_KEY_SIZE;
+
+	if ((key_profile->num + num_extracts -
+		key_profile->raw_extract_num) >=
+		DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("%s Failed to expand raw extracts",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (offset < start_cmp) {
+		raw_hole.raw_start = key_profile->raw_extract_off;
+		raw_hole.raw_size = start_cmp - offset;
+		raw_region->raw_start = offset;
+		raw_region->raw_size += start_cmp - offset;
+
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size);
+			if (ret)
+				return ret;
+		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE) {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size, tc_id);
+			if (ret)
+				return ret;
+		}
+	}
+
+	if (end > end_cmp) {
+		raw_hole.raw_start =
+			key_profile->raw_extract_off +
+			raw_region->raw_size;
+		raw_hole.raw_size = end - end_cmp;
+		raw_region->raw_size += end - end_cmp;
+
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size);
+			if (ret)
+				return ret;
+		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE) {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size, tc_id);
+			if (ret)
+				return ret;
+		}
+	}
+
+	end_pos = key_profile->raw_extract_pos +
+		key_profile->raw_extract_num;
+	if (key_profile->num > end_pos) {
+		bk_num = key_profile->num - end_pos;
+		memcpy(extract_bk, &dpkg->extracts[end_pos],
+			bk_num * sizeof(struct dpkg_extract));
+		memcpy(key_offset_bk, &key_profile->key_offset[end_pos],
+			bk_num * sizeof(uint8_t));
+		memcpy(key_size_bk, &key_profile->key_size[end_pos],
+			bk_num * sizeof(uint8_t));
+		memcpy(prot_field_bk, &key_profile->prot_field[end_pos],
+			bk_num * sizeof(struct key_prot_field));
+
+		for (index = 0; index < bk_num; index++) {
+			key_offset_bk[index] += sz_extend;
+			prot = prot_field_bk[index].prot;
+			field = prot_field_bk[index].key_field;
+			if (dpaa2_flow_l4_src_port_extract(prot,
+				field)) {
+				key_profile->l4_src_port_present = 1;
+				key_profile->l4_src_port_pos = end_pos + index;
+				key_profile->l4_src_port_offset =
+					key_offset_bk[index];
+			} else if (dpaa2_flow_l4_dst_port_extract(prot,
+				field)) {
+				key_profile->l4_dst_port_present = 1;
+				key_profile->l4_dst_port_pos = end_pos + index;
+				key_profile->l4_dst_port_offset =
+					key_offset_bk[index];
+			}
+		}
+	}
+
+	pos = key_profile->raw_extract_pos;
+
+	for (index = 0; index < num_extracts; index++) {
+		if (index == num_extracts - 1)
+			item_size = last_extract_size;
+		else
+			item_size = DPAA2_FLOW_MAX_KEY_SIZE;
+		field = offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+		field |= item_size;
+
+		if (pos > 0) {
+			key_profile->key_offset[pos] =
+				key_profile->key_offset[pos - 1] +
+				key_profile->key_size[pos - 1];
+		} else {
+			key_profile->key_offset[pos] = 0;
+		}
+		key_profile->key_size[pos] = item_size;
+		key_profile->prot_field[pos].prot = NET_PROT_PAYLOAD;
+		key_profile->prot_field[pos].key_field = field;
+
+		dpkg->extracts[pos].type = DPKG_EXTRACT_FROM_DATA;
+		dpkg->extracts[pos].extract.from_data.size = item_size;
+		dpkg->extracts[pos].extract.from_data.offset = offset;
+		offset += item_size;
+		pos++;
+	}
+
+	if (bk_num) {
+		memcpy(&dpkg->extracts[pos], extract_bk,
+			bk_num * sizeof(struct dpkg_extract));
+		memcpy(&key_profile->key_offset[end_pos],
+			key_offset_bk, bk_num * sizeof(uint8_t));
+		memcpy(&key_profile->key_size[end_pos],
+			key_size_bk, bk_num * sizeof(uint8_t));
+		memcpy(&key_profile->prot_field[end_pos],
+			prot_field_bk, bk_num * sizeof(struct key_prot_field));
+	}
+
+	extract_extended = num_extracts - key_profile->raw_extract_num;
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		key_profile->ip_addr_extract_pos += extract_extended;
+		key_profile->ip_addr_extract_off += sz_extend;
+	}
+	key_profile->raw_extract_num = num_extracts;
+	key_profile->num += extract_extended;
+	key_profile->key_max_size += sz_extend;
+
+	dpkg->num_extracts += extract_extended;
+	if (!ret && recfg)
+		(*recfg) |= dist_type;
+
+	return ret;
+}
+
 static inline int
 dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 	enum net_prot prot, uint32_t key_field)
@@ -847,7 +1077,6 @@ dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
 	int i;
 
 	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
-
 	if (i >= 0)
 		return key_profile->key_offset[i];
 	else
@@ -996,13 +1225,37 @@ dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 }
 
 static inline int
-dpaa2_flow_rule_data_set_raw(struct dpni_rule_cfg *rule,
-			     const void *key, const void *mask, int size)
+dpaa2_flow_raw_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	uint32_t extract_offset, int size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int offset = 0;
+	int extract_size = size > DPAA2_FLOW_MAX_KEY_SIZE ?
+		DPAA2_FLOW_MAX_KEY_SIZE : size;
+	int offset, field;
+
+	field = extract_offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+	field |= extract_size;
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			NET_PROT_PAYLOAD, field);
+	if (offset < 0) {
+		DPAA2_PMD_ERR("offset(%d)/size(%d) raw extract failed",
+			extract_offset, size);
+		return -EINVAL;
+	}
 
-	memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
-	memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, size);
+		memcpy((flow->qos_mask_addr + offset), mask, size);
+		flow->qos_rule_size = offset + size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, size);
+		memcpy((flow->fs_mask_addr + offset), mask, size);
+		flow->fs_rule_size = offset + size;
+	}
 
 	return 0;
 }
@@ -2237,22 +2490,36 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_raw *spec = pattern->spec;
 	const struct rte_flow_item_raw *mask = pattern->mask;
-	int prev_key_size =
-		priv->extract.qos_key_extract.key_profile.key_max_size;
 	int local_cfg = 0, ret;
 	uint32_t group;
+	struct dpaa2_key_extract *qos_key_extract;
+	struct dpaa2_key_extract *tc_key_extract;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
 		DPAA2_PMD_ERR("spec or mask not present.");
 		return -EINVAL;
 	}
-	/* Only supports non-relative with offset 0 */
-	if (spec->relative || spec->offset != 0 ||
-	    spec->search || spec->limit) {
-		DPAA2_PMD_ERR("relative and non zero offset not supported.");
+
+	if (spec->relative) {
+		/* TBD: relative offset support.
+		 * To support an offset relative to a previous L3 protocol item,
+		 * the extracts would need to be expanded to identify whether
+		 * the frame is VLAN or non-VLAN.
+		 *
+		 * To support an offset relative to a previous L4 protocol item,
+		 * the extracts would need to be expanded to identify whether the
+		 * frame is VLAN/IPv4, VLAN/IPv6, non-VLAN/IPv4 or non-VLAN/IPv6.
+		 */
+		DPAA2_PMD_ERR("relative not supported.");
+		return -EINVAL;
+	}
+
+	if (spec->search) {
+		DPAA2_PMD_ERR("search not supported.");
 		return -EINVAL;
 	}
+
 	/* Spec len and mask len should be same */
 	if (spec->length != mask->length) {
 		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
@@ -2264,36 +2531,44 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (prev_key_size <= spec->length) {
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
-						 spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FLOW_QOS_TYPE;
+	qos_key_extract = &priv->extract.qos_key_extract;
+	tc_key_extract = &priv->extract.tc_key_extract[group];
 
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.tc_key_extract[group],
-					spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FLOW_FS_TYPE;
+	ret = dpaa2_flow_extract_add_raw(priv,
+			spec->offset, spec->length,
+			DPAA2_FLOW_QOS_TYPE, 0, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS Extract RAW add failed.");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
-					   mask->pattern, spec->length);
+	ret = dpaa2_flow_extract_add_raw(priv,
+			spec->offset, spec->length,
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("FS[%d] Extract RAW add failed.",
+			group);
+		return -EINVAL;
+	}
+
+	ret = dpaa2_flow_raw_rule_data_set(flow,
+			&qos_key_extract->key_profile,
+			spec->offset, spec->length,
+			spec->pattern, mask->pattern,
+			DPAA2_FLOW_QOS_TYPE);
 	if (ret) {
 		DPAA2_PMD_ERR("QoS RAW rule data set failed");
-		return -1;
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
-					   mask->pattern, spec->length);
+	ret = dpaa2_flow_raw_rule_data_set(flow,
+			&tc_key_extract->key_profile,
+			spec->offset, spec->length,
+			spec->pattern, mask->pattern,
+			DPAA2_FLOW_FS_TYPE);
 	if (ret) {
 		DPAA2_PMD_ERR("FS RAW rule data set failed");
-		return -1;
+		return -EINVAL;
 	}
 
 	(*device_configured) |= local_cfg;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 27/43] net/dpaa2: frame attribute flags parser
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (25 preceding siblings ...)
  2024-09-18  7:50   ` [v2 26/43] net/dpaa2: enhancement of raw flow extract vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 28/43] net/dpaa2: add VXLAN distribution support vanshika.shukla
                     ` (16 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

FAF (frame attribute flags) parser-result extracts are now used to
identify the protocol type, instead of extracting the type field of
the previous protocol.
The FAF extract starts at offset 2 of the parser results so that it
also covers the user-defined (FAFE) flags, which are used for soft
parser protocol distribution.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
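Note: as a worked example of the mapping used here, FAF_UDP_FRAM is
bit 70 + 16 = 86 of the combined FAFE+FAF field, so faf_byte is
86 / 8 = 10 and the extract reads one byte at parser result offset
DPAA2_FAFE_PSR_OFFSET + 10 = 12; the rule key and mask then set bit
(1 << (7 - 86 % 8)), i.e. 0x02, in that byte. With this in place a
protocol can be matched without a spec, for example (queue value is
illustrative only):

flow create 0 ingress pattern udp / end actions queue index 2 / end
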
 drivers/net/dpaa2/dpaa2_flow.c | 475 +++++++++++++++++++--------------
 1 file changed, 273 insertions(+), 202 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index fe3c9f6d7d..d7b53a1916 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -22,13 +22,6 @@
 #include <dpaa2_ethdev.h>
 #include <dpaa2_pmd_logs.h>
 
-/* Workaround to discriminate the UDP/TCP/SCTP
- * with next protocol of l3.
- * MC/WRIOP are not able to identify
- * the l4 protocol with l4 ports.
- */
-static int mc_l4_port_identification;
-
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
@@ -260,6 +253,10 @@ dpaa2_flow_qos_extracts_log(const struct dpaa2_dev_priv *priv)
 			sprintf(string, "raw offset/len: %d/%d",
 				extract->extract.from_data.offset,
 				extract->extract.from_data.size);
+		} else if (type == DPKG_EXTRACT_FROM_PARSE) {
+			sprintf(string, "parse offset/len: %d/%d",
+				extract->extract.from_parse.offset,
+				extract->extract.from_parse.size);
 		}
 		DPAA2_FLOW_DUMP("%s", string);
 		if ((idx + 1) < dpkg->num_extracts)
@@ -298,6 +295,10 @@ dpaa2_flow_fs_extracts_log(const struct dpaa2_dev_priv *priv,
 			sprintf(string, "raw offset/len: %d/%d",
 				extract->extract.from_data.offset,
 				extract->extract.from_data.size);
+		} else if (type == DPKG_EXTRACT_FROM_PARSE) {
+			sprintf(string, "parse offset/len: %d/%d",
+				extract->extract.from_parse.offset,
+				extract->extract.from_parse.size);
 		}
 		DPAA2_FLOW_DUMP("%s", string);
 		if ((idx + 1) < dpkg->num_extracts)
@@ -631,6 +632,66 @@ dpaa2_flow_fs_rule_insert_hole(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_faf_advance(struct dpaa2_dev_priv *priv,
+	int faf_byte, enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off++;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, 1);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, 1, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = 1;
+	key_profile->prot_field[pos].type = DPAA2_FAF_KEY;
+	key_profile->prot_field[pos].key_field = faf_byte;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	key_profile->key_max_size++;
+
+	return pos;
+}
+
 /* Move IPv4/IPv6 addresses to fill new extract previous IP address.
  * Current MC/WRIOP only support generic IP extract but IP address
  * is not fixed, so we have to put them at end of extracts, otherwise,
@@ -692,6 +753,7 @@ dpaa2_flow_key_profile_advance(enum net_prot prot,
 	}
 
 	key_profile->key_size[pos] = field_size;
+	key_profile->prot_field[pos].type = DPAA2_NET_PROT_KEY;
 	key_profile->prot_field[pos].prot = prot;
 	key_profile->prot_field[pos].key_field = field;
 	key_profile->num++;
@@ -715,6 +777,55 @@ dpaa2_flow_key_profile_advance(enum net_prot prot,
 	return pos;
 }
 
+static int
+dpaa2_flow_faf_add_hdr(int faf_byte,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i, offset;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	pos = dpaa2_flow_faf_advance(priv,
+			faf_byte, dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last pos; IP address extracts must follow. */
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
+	}
+
+	offset = DPAA2_FAFE_PSR_OFFSET + faf_byte;
+
+	extracts[pos].type = DPKG_EXTRACT_FROM_PARSE;
+	extracts[pos].extract.from_parse.offset = offset;
+	extracts[pos].extract.from_parse.size = 1;
+
+	dpkg->num_extracts++;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_extract_add_hdr(enum net_prot prot,
 	uint32_t field, uint8_t field_size,
@@ -1001,6 +1112,7 @@ dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
 			key_profile->key_offset[pos] = 0;
 		}
 		key_profile->key_size[pos] = item_size;
+		key_profile->prot_field[pos].type = DPAA2_NET_PROT_KEY;
 		key_profile->prot_field[pos].prot = NET_PROT_PAYLOAD;
 		key_profile->prot_field[pos].key_field = field;
 
@@ -1040,7 +1152,7 @@ dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
 
 static inline int
 dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
-	enum net_prot prot, uint32_t key_field)
+	enum key_prot_type type, enum net_prot prot, uint32_t key_field)
 {
 	int pos;
 	struct key_prot_field *prot_field;
@@ -1053,16 +1165,23 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 
 	prot_field = key_profile->prot_field;
 	for (pos = 0; pos < key_profile->num; pos++) {
-		if (prot_field[pos].prot == prot &&
-			prot_field[pos].key_field == key_field) {
+		if (type == DPAA2_NET_PROT_KEY &&
+			prot_field[pos].prot == prot &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
+			return pos;
+		else if (type == DPAA2_FAF_KEY &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
 			return pos;
-		}
 	}
 
-	if (dpaa2_flow_l4_src_port_extract(prot, key_field)) {
+	if (type == DPAA2_NET_PROT_KEY &&
+		dpaa2_flow_l4_src_port_extract(prot, key_field)) {
 		if (key_profile->l4_src_port_present)
 			return key_profile->l4_src_port_pos;
-	} else if (dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
+	} else if (type == DPAA2_NET_PROT_KEY &&
+		dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
 		if (key_profile->l4_dst_port_present)
 			return key_profile->l4_dst_port_pos;
 	}
@@ -1072,80 +1191,53 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 
 static inline int
 dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
-	enum net_prot prot, uint32_t key_field)
+	enum key_prot_type type, enum net_prot prot, uint32_t key_field)
 {
 	int i;
 
-	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
+	i = dpaa2_flow_extract_search(key_profile, type, prot, key_field);
 	if (i >= 0)
 		return key_profile->key_offset[i];
 	else
 		return i;
 }
 
-struct prev_proto_field_id {
-	enum net_prot prot;
-	union {
-		rte_be16_t eth_type;
-		uint8_t ip_proto;
-	};
-};
-
 static int
-dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
+dpaa2_flow_faf_add_rule(struct dpaa2_dev_priv *priv,
 	struct dpaa2_dev_flow *flow,
-	const struct prev_proto_field_id *prev_proto,
+	enum dpaa2_rx_faf_offset faf_bit_off,
 	int group,
 	enum dpaa2_flow_dist_type dist_type)
 {
 	int offset;
 	uint8_t *key_addr;
 	uint8_t *mask_addr;
-	uint32_t field = 0;
-	rte_be16_t eth_type;
-	uint8_t ip_proto;
 	struct dpaa2_key_extract *key_extract;
 	struct dpaa2_key_profile *key_profile;
+	uint8_t faf_byte = faf_bit_off / 8;
+	uint8_t faf_bit_in_byte = faf_bit_off % 8;
 
-	if (prev_proto->prot == NET_PROT_ETH) {
-		field = NH_FLD_ETH_TYPE;
-	} else if (prev_proto->prot == NET_PROT_IP) {
-		field = NH_FLD_IP_PROTO;
-	} else {
-		DPAA2_PMD_ERR("Prev proto(%d) not support!",
-			prev_proto->prot);
-		return -EINVAL;
-	}
+	faf_bit_in_byte = 7 - faf_bit_in_byte;
 
 	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
 		key_extract = &priv->extract.qos_key_extract;
 		key_profile = &key_extract->key_profile;
 
 		offset = dpaa2_flow_extract_key_offset(key_profile,
-				prev_proto->prot, field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (offset < 0) {
 			DPAA2_PMD_ERR("%s QoS key extract failed", __func__);
 			return -EINVAL;
 		}
 		key_addr = flow->qos_key_addr + offset;
 		mask_addr = flow->qos_mask_addr + offset;
-		if (prev_proto->prot == NET_PROT_ETH) {
-			eth_type = prev_proto->eth_type;
-			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
-			eth_type = 0xffff;
-			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
-			flow->qos_rule_size += sizeof(rte_be16_t);
-		} else if (prev_proto->prot == NET_PROT_IP) {
-			ip_proto = prev_proto->ip_proto;
-			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
-			ip_proto = 0xff;
-			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
-			flow->qos_rule_size += sizeof(uint8_t);
-		} else {
-			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
-				prev_proto->prot);
-			return -EINVAL;
-		}
+
+		if (!(*key_addr) &&
+			key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size++;
+
+		*key_addr |= (1 << faf_bit_in_byte);
+		*mask_addr |= (1 << faf_bit_in_byte);
 	}
 
 	if (dist_type & DPAA2_FLOW_FS_TYPE) {
@@ -1153,7 +1245,7 @@ dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
 		key_profile = &key_extract->key_profile;
 
 		offset = dpaa2_flow_extract_key_offset(key_profile,
-				prev_proto->prot, field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (offset < 0) {
 			DPAA2_PMD_ERR("%s TC[%d] key extract failed",
 				__func__, group);
@@ -1162,23 +1254,12 @@ dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
 		key_addr = flow->fs_key_addr + offset;
 		mask_addr = flow->fs_mask_addr + offset;
 
-		if (prev_proto->prot == NET_PROT_ETH) {
-			eth_type = prev_proto->eth_type;
-			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
-			eth_type = 0xffff;
-			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
-			flow->fs_rule_size += sizeof(rte_be16_t);
-		} else if (prev_proto->prot == NET_PROT_IP) {
-			ip_proto = prev_proto->ip_proto;
-			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
-			ip_proto = 0xff;
-			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
-			flow->fs_rule_size += sizeof(uint8_t);
-		} else {
-			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
-				prev_proto->prot);
-			return -EINVAL;
-		}
+		if (!(*key_addr) &&
+			key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size++;
+
+		*key_addr |= (1 << faf_bit_in_byte);
+		*mask_addr |= (1 << faf_bit_in_byte);
 	}
 
 	return 0;
@@ -1200,7 +1281,7 @@ dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 	}
 
 	offset = dpaa2_flow_extract_key_offset(key_profile,
-			prot, field);
+			DPAA2_NET_PROT_KEY, prot, field);
 	if (offset < 0) {
 		DPAA2_PMD_ERR("P(%d)/F(%d) does not exist!",
 			prot, field);
@@ -1238,7 +1319,7 @@ dpaa2_flow_raw_rule_data_set(struct dpaa2_dev_flow *flow,
 	field = extract_offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
 	field |= extract_size;
 	offset = dpaa2_flow_extract_key_offset(key_profile,
-			NET_PROT_PAYLOAD, field);
+			DPAA2_NET_PROT_KEY, NET_PROT_PAYLOAD, field);
 	if (offset < 0) {
 		DPAA2_PMD_ERR("offset(%d)/size(%d) raw extract failed",
 			extract_offset, size);
@@ -1321,60 +1402,39 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 }
 
 static int
-dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
+dpaa2_flow_identify_by_faf(struct dpaa2_dev_priv *priv,
 	struct dpaa2_dev_flow *flow,
-	const struct prev_proto_field_id *prev_prot,
+	enum dpaa2_rx_faf_offset faf_off,
 	enum dpaa2_flow_dist_type dist_type,
 	int group, int *recfg)
 {
-	int ret, index, local_cfg = 0, size = 0;
+	int ret, index, local_cfg = 0;
 	struct dpaa2_key_extract *extract;
 	struct dpaa2_key_profile *key_profile;
-	enum net_prot prot = prev_prot->prot;
-	uint32_t key_field = 0;
-
-	if (prot == NET_PROT_ETH) {
-		key_field = NH_FLD_ETH_TYPE;
-		size = sizeof(rte_be16_t);
-	} else if (prot == NET_PROT_IP) {
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else if (prot == NET_PROT_IPV4) {
-		prot = NET_PROT_IP;
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else if (prot == NET_PROT_IPV6) {
-		prot = NET_PROT_IP;
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else {
-		DPAA2_PMD_ERR("Invalid Prev prot(%d)", prot);
-		return -EINVAL;
-	}
+	uint8_t faf_byte = faf_off / 8;
 
 	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
 		extract = &priv->extract.qos_key_extract;
 		key_profile = &extract->key_profile;
 
 		index = dpaa2_flow_extract_search(key_profile,
-				prot, key_field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add_hdr(prot,
-					key_field, size, priv,
-					DPAA2_FLOW_QOS_TYPE, group,
+			ret = dpaa2_flow_faf_add_hdr(faf_byte,
+					priv, DPAA2_FLOW_QOS_TYPE, group,
 					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("QOS prev extract add failed");
+				DPAA2_PMD_ERR("QOS faf extract add failed");
 
 				return -EINVAL;
 			}
 			local_cfg |= DPAA2_FLOW_QOS_TYPE;
 		}
 
-		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+		ret = dpaa2_flow_faf_add_rule(priv, flow, faf_off, group,
 				DPAA2_FLOW_QOS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("QoS prev rule set failed");
+			DPAA2_PMD_ERR("QoS faf rule set failed");
 			return -EINVAL;
 		}
 	}
@@ -1384,14 +1444,13 @@ dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
 		key_profile = &extract->key_profile;
 
 		index = dpaa2_flow_extract_search(key_profile,
-				prot, key_field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add_hdr(prot,
-					key_field, size, priv,
-					DPAA2_FLOW_FS_TYPE, group,
+			ret = dpaa2_flow_faf_add_hdr(faf_byte,
+					priv, DPAA2_FLOW_FS_TYPE, group,
 					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS[%d] prev extract add failed",
+				DPAA2_PMD_ERR("FS[%d] faf extract add failed",
 					group);
 
 				return -EINVAL;
@@ -1399,17 +1458,17 @@ dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
 			local_cfg |= DPAA2_FLOW_FS_TYPE;
 		}
 
-		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+		ret = dpaa2_flow_faf_add_rule(priv, flow, faf_off, group,
 				DPAA2_FLOW_FS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS[%d] prev rule set failed",
+			DPAA2_PMD_ERR("FS[%d] faf rule set failed",
 				group);
 			return -EINVAL;
 		}
 	}
 
 	if (recfg)
-		*recfg = local_cfg;
+		*recfg |= local_cfg;
 
 	return 0;
 }
@@ -1436,7 +1495,7 @@ dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
 	key_profile = &key_extract->key_profile;
 
 	index = dpaa2_flow_extract_search(key_profile,
-			prot, field);
+			DPAA2_NET_PROT_KEY, prot, field);
 	if (index < 0) {
 		ret = dpaa2_flow_extract_add_hdr(prot,
 				field, size, priv,
@@ -1575,6 +1634,7 @@ dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
 		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
 	}
 	key_profile->num++;
+	key_profile->prot_field[num].type = DPAA2_NET_PROT_KEY;
 
 	dpkg->extracts[num].extract.from_hdr.prot = prot;
 	dpkg->extracts[num].extract.from_hdr.field = field;
@@ -1685,15 +1745,28 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 	spec = pattern->spec;
 	mask = pattern->mask ?
 			pattern->mask : &dpaa2_flow_item_eth_mask;
-	if (!spec) {
-		DPAA2_PMD_WARN("No pattern spec for Eth flow");
-		return -EINVAL;
-	}
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ETH_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ETH_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
 		RTE_FLOW_ITEM_TYPE_ETH)) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
@@ -1782,15 +1855,18 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_ETH;
-		prev_proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
-		ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_proto,
-				DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-				group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
 		if (ret)
 			return ret;
+
 		(*device_configured) |= local_cfg;
 		return 0;
 	}
@@ -1837,7 +1913,6 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	const void *key, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int size;
-	struct prev_proto_field_id prev_prot;
 
 	group = attr->group;
 
@@ -1850,19 +1925,21 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	prev_prot.prot = NET_PROT_ETH;
-	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
+					 DPAA2_FLOW_QOS_TYPE, group,
+					 &local_cfg);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE, group,
-			&local_cfg);
-	if (ret) {
-		DPAA2_PMD_ERR("IPv4 identification failed!");
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
+					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret)
 		return ret;
-	}
 
-	if (!spec_ipv4)
+	if (!spec_ipv4) {
+		(*device_configured) |= local_cfg;
 		return 0;
+	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
 				       RTE_FLOW_ITEM_TYPE_IPV4)) {
@@ -1954,7 +2031,6 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
 	int size;
-	struct prev_proto_field_id prev_prot;
 
 	group = attr->group;
 
@@ -1966,19 +2042,21 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	prev_prot.prot = NET_PROT_ETH;
-	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
+					 DPAA2_FLOW_QOS_TYPE, group,
+					 &local_cfg);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
-	if (ret) {
-		DPAA2_PMD_ERR("IPv6 identification failed!");
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
+					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret)
 		return ret;
-	}
 
-	if (!spec_ipv6)
+	if (!spec_ipv6) {
+		(*device_configured) |= local_cfg;
 		return 0;
+	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
 				       RTE_FLOW_ITEM_TYPE_IPV6)) {
@@ -2082,18 +2160,15 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		/* Next proto of Generical IP is actually used
-		 * for ICMP identification.
-		 * Example: flow create 0 ingress pattern icmp
-		 */
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ICMP_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_ICMP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ICMP_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
 		if (ret)
 			return ret;
 
@@ -2170,22 +2245,21 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_UDP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_UDP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_UDP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2257,22 +2331,21 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_TCP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_TCP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_TCP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2344,22 +2417,21 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_SCTP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_SCTP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_SCTP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2432,21 +2504,20 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GRE_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_GRE;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GRE_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
 		if (ret)
 			return ret;
 
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 28/43] net/dpaa2: add VXLAN distribution support
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (26 preceding siblings ...)
  2024-09-18  7:50   ` [v2 27/43] net/dpaa2: frame attribute flags parser vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 29/43] net/dpaa2: protocol inside tunnel distribution vanshika.shukla
                     ` (15 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Extract fields from the VXLAN header for distribution.
The VXLAN header is stored by the soft parser code in the
soft parser context, located at offset 43 of the parser results:

<assign-variable name="$softparsectx[0:3]" value="vxlan.vnid"/>

The VXLAN protocol itself is identified by the VXLAN bit of the
frame attribute flags. Parser-result extracts are added to support
this functionality.

Example:
flow create 0 ingress pattern vxlan / end actions pf / queue index 4 / end

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
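Note: an additional illustrative variant of the example above,
matching a specific VNI (the VNI and queue values are arbitrary):

flow create 0 ingress pattern vxlan vni is 42 / end actions queue index 4 / end

Per the offsets added earlier in this series, the 3-byte VNI appears
to be read from parser result bytes 43..45 (DPAA2_VXLAN_VNI_OFFSET),
where the soft parser stores it.
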
 drivers/net/dpaa2/dpaa2_ethdev.h |   6 +-
 drivers/net/dpaa2/dpaa2_flow.c   | 313 +++++++++++++++++++++++++++++++
 2 files changed, 318 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 8f548467a4..aeddcfdfa9 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -282,8 +282,12 @@ enum ip_addr_extract_type {
 };
 
 enum key_prot_type {
+	/* HW extracts from standard protocol fields*/
 	DPAA2_NET_PROT_KEY,
-	DPAA2_FAF_KEY
+	/* HW extracts from FAF of PR*/
+	DPAA2_FAF_KEY,
+	/* HW extracts from PR other than FAF*/
+	DPAA2_PR_KEY
 };
 
 struct key_prot_field {
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index d7b53a1916..7bec13d4eb 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -38,6 +38,8 @@ enum dpaa2_flow_dist_type {
 #define DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT	16
 #define DPAA2_FLOW_MAX_KEY_SIZE			16
 
+#define VXLAN_HF_VNI 0x08
+
 struct dpaa2_dev_flow {
 	LIST_ENTRY(dpaa2_dev_flow) next;
 	struct dpni_rule_cfg qos_rule;
@@ -144,6 +146,11 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
+
+static const struct rte_flow_item_vxlan dpaa2_flow_item_vxlan_mask = {
+	.flags = 0xff,
+	.vni = "\xff\xff\xff",
+};
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -692,6 +699,68 @@ dpaa2_flow_faf_advance(struct dpaa2_dev_priv *priv,
 	return pos;
 }
 
+static int
+dpaa2_flow_pr_advance(struct dpaa2_dev_priv *priv,
+	uint32_t pr_offset, uint32_t pr_size,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off += pr_size;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, pr_size);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, pr_size, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = pr_size;
+	key_profile->prot_field[pos].type = DPAA2_PR_KEY;
+	key_profile->prot_field[pos].key_field =
+		(pr_offset << 16) | pr_size;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	key_profile->key_max_size += pr_size;
+
+	return pos;
+}
+
 /* Move IPv4/IPv6 addresses to fill new extract previous IP address.
  * Current MC/WRIOP only support generic IP extract but IP address
  * is not fixed, so we have to put them at end of extracts, otherwise,
@@ -826,6 +895,59 @@ dpaa2_flow_faf_add_hdr(int faf_byte,
 	return 0;
 }
 
+static int
+dpaa2_flow_pr_add_hdr(uint32_t pr_offset,
+	uint32_t pr_size, struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if ((pr_offset + pr_size) > DPAA2_FAPR_SIZE) {
+		DPAA2_PMD_ERR("PR extracts(%d:%d) overflow",
+			pr_offset, pr_size);
+		return -EINVAL;
+	}
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	pos = dpaa2_flow_pr_advance(priv,
+			pr_offset, pr_size, dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last pos; IP address extracts must follow. */
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
+	}
+
+	extracts[pos].type = DPKG_EXTRACT_FROM_PARSE;
+	extracts[pos].extract.from_parse.offset = pr_offset;
+	extracts[pos].extract.from_parse.size = pr_size;
+
+	dpkg->num_extracts++;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_extract_add_hdr(enum net_prot prot,
 	uint32_t field, uint8_t field_size,
@@ -1174,6 +1296,10 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 			prot_field[pos].key_field == key_field &&
 			prot_field[pos].type == type)
 			return pos;
+		else if (type == DPAA2_PR_KEY &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
+			return pos;
 	}
 
 	if (type == DPAA2_NET_PROT_KEY &&
@@ -1265,6 +1391,41 @@ dpaa2_flow_faf_add_rule(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static inline int
+dpaa2_flow_pr_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	uint32_t pr_offset, uint32_t pr_size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int offset;
+	uint32_t pr_field = pr_offset << 16 | pr_size;
+
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			DPAA2_PR_KEY, NET_PROT_NONE, pr_field);
+	if (offset < 0) {
+		DPAA2_PMD_ERR("PR off(%d)/size(%d) does not exist!",
+			pr_offset, pr_size);
+		return -EINVAL;
+	}
+
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, pr_size);
+		memcpy((flow->qos_mask_addr + offset), mask, pr_size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size = offset + pr_size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, pr_size);
+		memcpy((flow->fs_mask_addr + offset), mask, pr_size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size = offset + pr_size;
+	}
+
+	return 0;
+}
+
 static inline int
 dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 	struct dpaa2_key_profile *key_profile,
@@ -1386,6 +1547,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_gre_mask;
 		size = sizeof(struct rte_flow_item_gre);
 		break;
+	case RTE_FLOW_ITEM_TYPE_VXLAN:
+		mask_support = (const char *)&dpaa2_flow_item_vxlan_mask;
+		size = sizeof(struct rte_flow_item_vxlan);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -1473,6 +1638,55 @@ dpaa2_flow_identify_by_faf(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_add_pr_extract_rule(struct dpaa2_dev_flow *flow,
+	uint32_t pr_offset, uint32_t pr_size,
+	const void *key, const void *mask,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int index, ret, local_cfg = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
+	uint32_t pr_field = pr_offset << 16 | pr_size;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	key_profile = &key_extract->key_profile;
+
+	index = dpaa2_flow_extract_search(key_profile,
+			DPAA2_PR_KEY, NET_PROT_NONE, pr_field);
+	if (index < 0) {
+		ret = dpaa2_flow_pr_add_hdr(pr_offset,
+				pr_size, priv,
+				dist_type, tc_id, NULL);
+		if (ret) {
+			DPAA2_PMD_ERR("PR add off(%d)/size(%d) failed",
+				pr_offset, pr_size);
+
+			return ret;
+		}
+		local_cfg |= dist_type;
+	}
+
+	ret = dpaa2_flow_pr_rule_data_set(flow, key_profile,
+			pr_offset, pr_size, key, mask, dist_type);
+	if (ret) {
+		DPAA2_PMD_ERR("PR off(%d)/size(%d) rule data set failed",
+			pr_offset, pr_size);
+
+		return ret;
+	}
+
+	if (recfg)
+		*recfg |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
 	enum net_prot prot, uint32_t field,
@@ -2549,6 +2763,90 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_vxlan *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_vxlan_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_VXLAN_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_VXLAN_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VXLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of VXLAN not supported.");
+
+		return -1;
+	}
+
+	if (mask->flags) {
+		if (spec->flags != VXLAN_HF_VNI) {
+			DPAA2_PMD_ERR("vxlan flag(0x%02x) must be 0x%02x.",
+				spec->flags, VXLAN_HF_VNI);
+			return -EINVAL;
+		}
+		if (mask->flags != 0xff) {
+			DPAA2_PMD_ERR("Extraction of vxlan flag not supported.");
+			return -EINVAL;
+		}
+	}
+
+	if (mask->vni[0] || mask->vni[1] || mask->vni[2]) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_VNI_OFFSET,
+			sizeof(mask->vni), spec->vni,
+			mask->vni,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_VNI_OFFSET,
+			sizeof(mask->vni), spec->vni,
+			mask->vni,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -2764,6 +3062,9 @@ dpaa2_flow_verify_action(struct dpaa2_dev_priv *priv,
 				}
 			}
 
+			break;
+		case RTE_FLOW_ACTION_TYPE_PF:
+			/* Skip this action; it is required for VXLAN. */
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
@@ -3114,6 +3415,15 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				return ret;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+			ret = dpaa2_configure_flow_vxlan(flow,
+					dev, attr, &pattern[i], actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("VXLAN flow config failed!");
+				return ret;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow,
 					dev, attr, &pattern[i],
@@ -3226,6 +3536,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			if (ret)
 				return ret;
 
+			break;
+		case RTE_FLOW_ACTION_TYPE_PF:
+			/* Skip this action; it is required for VXLAN. */
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 29/43] net/dpaa2: protocol inside tunnel distribution
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (27 preceding siblings ...)
  2024-09-18  7:50   ` [v2 28/43] net/dpaa2: add VXLAN distribution support vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 30/43] net/dpaa2: eCPRI support by parser result vanshika.shukla
                     ` (14 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Distribute flows by the protocols inside a tunnel.
The tunnel flow items applied by the application are ordered from
outer to inner, and the inner items start after the tunnel item
(VXLAN, GRE, etc.).

For example:
flow create 0 ingress pattern ipv4 / vxlan / ipv6 / end
	actions pf / queue index 2 / end

The items following the tunnel item are therefore tagged as "inner".
The inner items are extracted from the parser results, which are set
by the soft parser.
So far only the VXLAN tunnel is supported. Limited by the soft parser
area, only the Ethernet and VLAN headers inside the tunnel can be used
for flow distribution. IPv4, IPv6, UDP and TCP inside the tunnel can
be detected via user-defined FAF bits set by the soft parser and used
for flow distribution.
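
For instance, distribution on the inner Ethernet destination address
could be requested as below (a hypothetical rule for illustration
only, assuming testpmd flow syntax; the MAC address and queue index
are placeholders):

flow create 0 ingress pattern ipv4 / vxlan / eth dst is 02:00:00:00:00:01 / end
	actions pf / queue index 1 / end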

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 587 +++++++++++++++++++++++++++++----
 1 file changed, 519 insertions(+), 68 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 7bec13d4eb..e4d7117192 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -58,6 +58,11 @@ struct dpaa2_dev_flow {
 	struct dpni_fs_action_cfg fs_action_cfg;
 };
 
+struct rte_dpaa2_flow_item {
+	struct rte_flow_item generic_item;
+	int in_tunnel;
+};
+
 static const
 enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_END,
@@ -1939,10 +1944,203 @@ dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+dpaa2_configure_flow_tunnel_eth(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_eth *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+			pattern->mask : &dpaa2_flow_item_eth_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec)
+		return 0;
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH)) {
+		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
+
+		return -EINVAL;
+	}
+
+	if (memcmp((const char *)&mask->src,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		/*SRC[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR0_OFFSET,
+			1, &spec->src.addr_bytes[0],
+			&mask->src.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[1:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR1_OFFSET,
+			2, &spec->src.addr_bytes[1],
+			&mask->src.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[3:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR3_OFFSET,
+			1, &spec->src.addr_bytes[3],
+			&mask->src.addr_bytes[3],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[4:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR4_OFFSET,
+			2, &spec->src.addr_bytes[4],
+			&mask->src.addr_bytes[4],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		/*SRC[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR0_OFFSET,
+			1, &spec->src.addr_bytes[0],
+			&mask->src.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[1:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR1_OFFSET,
+			2, &spec->src.addr_bytes[1],
+			&mask->src.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[3:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR3_OFFSET,
+			1, &spec->src.addr_bytes[3],
+			&mask->src.addr_bytes[3],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[4:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR4_OFFSET,
+			2, &spec->src.addr_bytes[4],
+			&mask->src.addr_bytes[4],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->dst,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		/*DST[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR0_OFFSET,
+			1, &spec->dst.addr_bytes[0],
+			&mask->dst.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[1:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR1_OFFSET,
+			1, &spec->dst.addr_bytes[1],
+			&mask->dst.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[2:3]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR2_OFFSET,
+			3, &spec->dst.addr_bytes[2],
+			&mask->dst.addr_bytes[2],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[5:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR5_OFFSET,
+			1, &spec->dst.addr_bytes[5],
+			&mask->dst.addr_bytes[5],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		/*DST[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR0_OFFSET,
+			1, &spec->dst.addr_bytes[0],
+			&mask->dst.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[1:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR1_OFFSET,
+			1, &spec->dst.addr_bytes[1],
+			&mask->dst.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[2:3]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR2_OFFSET,
+			3, &spec->dst.addr_bytes[2],
+			&mask->dst.addr_bytes[2],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[5:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR5_OFFSET,
+			1, &spec->dst.addr_bytes[5],
+			&mask->dst.addr_bytes[5],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->type,
+		zero_cmp, sizeof(rte_be16_t))) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TYPE_OFFSET,
+			sizeof(rte_be16_t), &spec->type, &mask->type,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TYPE_OFFSET,
+			sizeof(rte_be16_t), &spec->type, &mask->type,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -1952,6 +2150,13 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 	const struct rte_flow_item_eth *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	if (dpaa2_pattern->in_tunnel) {
+		return dpaa2_configure_flow_tunnel_eth(flow,
+				dev, attr, pattern, device_configured);
+	}
 
 	group = attr->group;
 
@@ -2045,10 +2250,81 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
+dpaa2_configure_flow_tunnel_vlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_vlan *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_vlan_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAFE_VXLAN_IN_VLAN_FRAM,
+				DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAFE_VXLAN_IN_VLAN_FRAM,
+				DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of vlan not supported.");
+
+		return -EINVAL;
+	}
+
+	if (!mask->tci)
+		return 0;
+
+	ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TCI_OFFSET,
+			sizeof(rte_be16_t), &spec->tci, &mask->tci,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TCI_OFFSET,
+			sizeof(rte_be16_t), &spec->tci, &mask->tci,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2057,6 +2333,13 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_vlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	if (dpaa2_pattern->in_tunnel) {
+		return dpaa2_configure_flow_tunnel_vlan(flow,
+				dev, attr, pattern, device_configured);
+	}
 
 	group = attr->group;
 
@@ -2116,7 +2399,7 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 static int
 dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
+			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
 			  const struct rte_flow_action actions[] __rte_unused,
 			  struct rte_flow_error *error __rte_unused,
 			  int *device_configured)
@@ -2127,6 +2410,7 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	const void *key, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int size;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2135,6 +2419,26 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	mask_ipv4 = pattern->mask ?
 		    pattern->mask : &dpaa2_flow_item_ipv4_mask;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec_ipv4) {
+			DPAA2_PMD_ERR("Tunnel-IPv4 distribution not supported");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV4_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV4_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
@@ -2233,7 +2537,7 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 static int
 dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
+			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
 			  const struct rte_flow_action actions[] __rte_unused,
 			  struct rte_flow_error *error __rte_unused,
 			  int *device_configured)
@@ -2245,6 +2549,7 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
 	int size;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2256,6 +2561,26 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec_ipv6) {
+			DPAA2_PMD_ERR("Tunnel-IPv6 distribution not supported");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV6_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV6_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
 					 DPAA2_FLOW_QOS_TYPE, group,
 					 &local_cfg);
@@ -2352,7 +2677,7 @@ static int
 dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2361,6 +2686,7 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_icmp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2373,6 +2699,11 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ICMP distribution not supported");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_ICMP_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2438,7 +2769,7 @@ static int
 dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2447,6 +2778,7 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_udp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2459,6 +2791,26 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec) {
+			DPAA2_PMD_ERR("Tunnel-UDP distribution not supported");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_UDP_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_UDP_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_UDP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2524,7 +2876,7 @@ static int
 dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2533,6 +2885,7 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_tcp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2545,6 +2898,26 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec) {
+			DPAA2_PMD_ERR("Tunnel-TCP distribution not supported");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_TCP_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_TCP_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_TCP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2610,7 +2983,7 @@ static int
 dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2619,6 +2992,7 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_sctp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2631,6 +3005,11 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-SCTP distribution not supported");
+		return -ENOTSUP;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_SCTP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2696,7 +3075,7 @@ static int
 dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2705,6 +3084,7 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_gre *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2717,6 +3097,11 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-GRE distribution not supported");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_GRE_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2767,7 +3152,7 @@ static int
 dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2776,6 +3161,7 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_vxlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2788,6 +3174,11 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-VXLAN distribution not supported");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_VXLAN_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2851,18 +3242,19 @@ static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const struct rte_flow_item_raw *spec = pattern->spec;
-	const struct rte_flow_item_raw *mask = pattern->mask;
 	int local_cfg = 0, ret;
 	uint32_t group;
 	struct dpaa2_key_extract *qos_key_extract;
 	struct dpaa2_key_extract *tc_key_extract;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
+	const struct rte_flow_item_raw *spec = pattern->spec;
+	const struct rte_flow_item_raw *mask = pattern->mask;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
@@ -3306,6 +3698,45 @@ dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_item_convert(const struct rte_flow_item pattern[],
+			struct rte_dpaa2_flow_item **dpaa2_pattern)
+{
+	struct rte_dpaa2_flow_item *new_pattern;
+	int num = 0, tunnel_start = 0;
+
+	while (1) {
+		num++;
+		if (pattern[num].type == RTE_FLOW_ITEM_TYPE_END)
+			break;
+	}
+
+	new_pattern = rte_malloc(NULL, sizeof(struct rte_dpaa2_flow_item) *
+				 (num + 1), RTE_CACHE_LINE_SIZE);
+	if (!new_pattern) {
+		DPAA2_PMD_ERR("Failed to alloc %d flow items", num);
+		return -ENOMEM;
+	}
+
+	num = 0;
+	while (pattern[num].type != RTE_FLOW_ITEM_TYPE_END) {
+		memcpy(&new_pattern[num].generic_item, &pattern[num],
+		       sizeof(struct rte_flow_item));
+		new_pattern[num].in_tunnel = 0;
+
+		if (pattern[num].type == RTE_FLOW_ITEM_TYPE_VXLAN)
+			tunnel_start = 1;
+		else if (tunnel_start)
+			new_pattern[num].in_tunnel = 1;
+		num++;
+	}
+
+	new_pattern[num].generic_item.type = RTE_FLOW_ITEM_TYPE_END;
+	*dpaa2_pattern = new_pattern;
+
+	return 0;
+}
+
 static int
 dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3322,6 +3753,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	uint16_t dist_size, key_size;
 	struct dpaa2_key_extract *qos_key_extract;
 	struct dpaa2_key_extract *tc_key_extract;
+	struct rte_dpaa2_flow_item *dpaa2_pattern = NULL;
 
 	ret = dpaa2_flow_verify_attr(priv, attr);
 	if (ret)
@@ -3331,107 +3763,121 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	if (ret)
 		return ret;
 
+	ret = dpaa2_flow_item_convert(pattern, &dpaa2_pattern);
+	if (ret)
+		return ret;
+
 	/* Parse pattern list to get the matching parameters */
 	while (!end_of_list) {
 		switch (pattern[i].type) {
 		case RTE_FLOW_ITEM_TYPE_ETH:
-			ret = dpaa2_configure_flow_eth(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_eth(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ETH flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_VLAN:
-			ret = dpaa2_configure_flow_vlan(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_vlan(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("vLan flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
-			ret = dpaa2_configure_flow_ipv4(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_ipv4(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV4 flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
-			ret = dpaa2_configure_flow_ipv6(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_ipv6(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV6 flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_ICMP:
-			ret = dpaa2_configure_flow_icmp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_icmp(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ICMP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_UDP:
-			ret = dpaa2_configure_flow_udp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_udp(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("UDP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
-			ret = dpaa2_configure_flow_tcp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_tcp(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("TCP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_SCTP:
-			ret = dpaa2_configure_flow_sctp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_sctp(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("SCTP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
-			ret = dpaa2_configure_flow_gre(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_gre(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("GRE flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = dpaa2_configure_flow_vxlan(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_vxlan(flow, dev, attr,
+							 &dpaa2_pattern[i],
+							 actions, error,
+							 &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("VXLAN flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
-			ret = dpaa2_configure_flow_raw(flow,
-					dev, attr, &pattern[i],
-					actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_raw(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("RAW flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_END:
@@ -3463,7 +3909,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			ret = dpaa2_configure_flow_fs_action(priv, flow,
 							     &actions[j]);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			/* Configure FS table first*/
 			dist_size = priv->nb_rx_queues / priv->num_rx_tc;
@@ -3473,20 +3919,20 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 								   dist_size,
 								   false);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			/* Configure QoS table then.*/
 			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
 				ret = dpaa2_configure_qos_table(priv, false);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (priv->num_rx_tc > 1) {
 				ret = dpaa2_flow_add_qos_rule(priv, flow);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (flow->tc_index >= priv->fs_entries) {
@@ -3497,7 +3943,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 
 			ret = dpaa2_flow_add_fs_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
@@ -3509,7 +3955,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			if (ret < 0) {
 				DPAA2_PMD_ERR("TC[%d] distset RSS failed",
 					      flow->tc_id);
-				return ret;
+				goto end_flow_set;
 			}
 
 			dist_size = rss_conf->queue_num;
@@ -3519,22 +3965,22 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 								   dist_size,
 								   true);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
 				ret = dpaa2_configure_qos_table(priv, true);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			ret = dpaa2_flow_add_qos_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			ret = dpaa2_flow_add_fs_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			break;
 		case RTE_FLOW_ACTION_TYPE_PF:
@@ -3551,6 +3997,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 		j++;
 	}
 
+end_flow_set:
 	if (!ret) {
 		/* New rules are inserted. */
 		if (!curr) {
@@ -3561,6 +4008,10 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			LIST_INSERT_AFTER(curr, flow, next);
 		}
 	}
+
+	if (dpaa2_pattern)
+		rte_free(dpaa2_pattern);
+
 	return ret;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 30/43] net/dpaa2: eCPRI support by parser result
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (28 preceding siblings ...)
  2024-09-18  7:50   ` [v2 29/43] net/dpaa2: protocol inside tunnel distribution vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 31/43] net/dpaa2: add GTP flow support vanshika.shukla
                     ` (13 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

The soft parser extracts the ECPRI header and message into specified
areas of the parser result.
Flows are then classified according to the ECPRI extracts taken from
the parser result.
This implementation supports ECPRI over Ethernet/VLAN/UDP and various
type/message combinations.
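
For instance, an IQ-data flow keyed on the physical channel ID could
be created as below (a hypothetical rule for illustration only,
assuming testpmd flow syntax; the pc_id and queue values are
placeholders):

flow create 0 ingress pattern ecpri common type is iq_data pc_id is 0x10 / end
	actions queue index 2 / end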

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h |  18 ++
 drivers/net/dpaa2/dpaa2_flow.c   | 348 ++++++++++++++++++++++++++++++-
 2 files changed, 365 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index aeddcfdfa9..eaa653d266 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -179,6 +179,8 @@ enum dpaa2_rx_faf_offset {
 	FAFE_VXLAN_IN_IPV6_FRAM = 2,
 	FAFE_VXLAN_IN_UDP_FRAM = 3,
 	FAFE_VXLAN_IN_TCP_FRAM = 4,
+
+	FAFE_ECPRI_FRAM = 7,
 	/* Set by SP end*/
 
 	FAF_GTP_PRIMED_FRAM = 1 + DPAA2_FAFE_PSR_SIZE * 8,
@@ -207,6 +209,17 @@ enum dpaa2_rx_faf_offset {
 	FAF_ESP_FRAM = 89 + DPAA2_FAFE_PSR_SIZE * 8,
 };
 
+enum dpaa2_ecpri_fafe_type {
+	ECPRI_FAFE_TYPE_0 = (8 - FAFE_ECPRI_FRAM),
+	ECPRI_FAFE_TYPE_1 = (8 - FAFE_ECPRI_FRAM) | (1 << 1),
+	ECPRI_FAFE_TYPE_2 = (8 - FAFE_ECPRI_FRAM) | (2 << 1),
+	ECPRI_FAFE_TYPE_3 = (8 - FAFE_ECPRI_FRAM) | (3 << 1),
+	ECPRI_FAFE_TYPE_4 = (8 - FAFE_ECPRI_FRAM) | (4 << 1),
+	ECPRI_FAFE_TYPE_5 = (8 - FAFE_ECPRI_FRAM) | (5 << 1),
+	ECPRI_FAFE_TYPE_6 = (8 - FAFE_ECPRI_FRAM) | (6 << 1),
+	ECPRI_FAFE_TYPE_7 = (8 - FAFE_ECPRI_FRAM) | (7 << 1)
+};
+
 #define DPAA2_PR_ETH_OFF_OFFSET 19
 #define DPAA2_PR_TCI_OFF_OFFSET 21
 #define DPAA2_PR_LAST_ETYPE_OFFSET 23
@@ -236,6 +249,11 @@ enum dpaa2_rx_faf_offset {
 #define DPAA2_VXLAN_IN_TYPE_OFFSET 46
 /* Set by SP for vxlan distribution end*/
 
+/* ECPRI shares the SP context with VXLAN */
+#define DPAA2_ECPRI_MSG_OFFSET DPAA2_VXLAN_VNI_OFFSET
+
+#define DPAA2_ECPRI_MAX_EXTRACT_NB 8
+
 struct ipv4_sd_addr_extract_rule {
 	uint32_t ipv4_src;
 	uint32_t ipv4_dst;
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index e4d7117192..e4fffdbf33 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -156,6 +156,13 @@ static const struct rte_flow_item_vxlan dpaa2_flow_item_vxlan_mask = {
 	.flags = 0xff,
 	.vni = "\xff\xff\xff",
 };
+
+static const struct rte_flow_item_ecpri dpaa2_flow_item_ecpri_mask = {
+		DPAA2_PMD_WARN("Extract field(s) of ethernet not supported.");
+	.hdr.dummy[0] = RTE_BE32(0xffffffff),
+	.hdr.dummy[1] = RTE_BE32(0xffffffff),
+	.hdr.dummy[2] = RTE_BE32(0xffffffff),
+};
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -1556,6 +1563,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_vxlan_mask;
 		size = sizeof(struct rte_flow_item_vxlan);
 		break;
+	case RTE_FLOW_ITEM_TYPE_ECPRI:
+		mask_support = (const char *)&dpaa2_flow_item_ecpri_mask;
+		size = sizeof(struct rte_flow_item_ecpri);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -3238,6 +3249,330 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ecpri *spec, *mask;
+	struct rte_flow_item_ecpri local_mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+	uint8_t extract_nb = 0, i;
+	uint64_t rule_data[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint64_t mask_data[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint8_t extract_size[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint8_t extract_off[DPAA2_ECPRI_MAX_EXTRACT_NB];
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	if (pattern->mask) {
+		memcpy(&local_mask, pattern->mask,
+			sizeof(struct rte_flow_item_ecpri));
+		local_mask.hdr.common.u32 =
+			rte_be_to_cpu_32(local_mask.hdr.common.u32);
+		mask = &local_mask;
+	} else {
+		mask = &dpaa2_flow_item_ecpri_mask;
+	}
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ECPRI distribution not supported");
+		return -ENOTSUP;
+	}
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAFE_ECPRI_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAFE_ECPRI_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ECPRI)) {
+		DPAA2_PMD_WARN("Extract field(s) of ECPRI not supported.");
+
+		return -1;
+	}
+
+	if (mask->hdr.common.type != 0xff) {
+		DPAA2_PMD_WARN("ECPRI header type mask must be 0xff.");
+
+		return -1;
+	}
+
+	if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_IQ_DATA) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_0;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type0.pc_id) {
+			rule_data[extract_nb] = spec->hdr.type0.pc_id;
+			mask_data[extract_nb] = mask->hdr.type0.pc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_iq_data, pc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type0.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type0.seq_id;
+			mask_data[extract_nb] = mask->hdr.type0.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_iq_data, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_BIT_SEQ) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_1;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type1.pc_id) {
+			rule_data[extract_nb] = spec->hdr.type1.pc_id;
+			mask_data[extract_nb] = mask->hdr.type1.pc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_bit_seq, pc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type1.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type1.seq_id;
+			mask_data[extract_nb] = mask->hdr.type1.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_bit_seq, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RTC_CTRL) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_2;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type2.rtc_id) {
+			rule_data[extract_nb] = spec->hdr.type2.rtc_id;
+			mask_data[extract_nb] = mask->hdr.type2.rtc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_rtc_ctrl, rtc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type2.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type2.seq_id;
+			mask_data[extract_nb] = mask->hdr.type2.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_rtc_ctrl, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_GEN_DATA) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_3;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type3.pc_id || mask->hdr.type3.seq_id)
+			DPAA2_PMD_WARN("Extract of type3 msg not supported.");
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RM_ACC) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_4;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type4.rma_id) {
+			rule_data[extract_nb] = spec->hdr.type4.rma_id;
+			mask_data[extract_nb] = mask->hdr.type4.rma_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET + 0;
+				/* The compiler does not allow taking the
+				 * offset of a bit-field, so
+				 * offsetof(struct rte_ecpri_msg_rm_access,
+				 * rma_id) cannot be used here.
+				 */
+			extract_nb++;
+		}
+		if (mask->hdr.type4.ele_id) {
+			rule_data[extract_nb] = spec->hdr.type4.ele_id;
+			mask_data[extract_nb] = mask->hdr.type4.ele_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET + 2;
+				/* The compiler does not allow taking the
+				 * offset of a bit-field, so
+				 * offsetof(struct rte_ecpri_msg_rm_access,
+				 * ele_id) cannot be used here.
+				 */
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_DLY_MSR) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_5;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type5.msr_id) {
+			rule_data[extract_nb] = spec->hdr.type5.msr_id;
+			mask_data[extract_nb] = mask->hdr.type5.msr_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_delay_measure,
+					msr_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type5.act_type) {
+			rule_data[extract_nb] = spec->hdr.type5.act_type;
+			mask_data[extract_nb] = mask->hdr.type5.act_type;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_delay_measure,
+					act_type);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RMT_RST) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_6;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type6.rst_id) {
+			rule_data[extract_nb] = spec->hdr.type6.rst_id;
+			mask_data[extract_nb] = mask->hdr.type6.rst_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_remote_reset,
+					rst_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type6.rst_op) {
+			rule_data[extract_nb] = spec->hdr.type6.rst_op;
+			mask_data[extract_nb] = mask->hdr.type6.rst_op;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_remote_reset,
+					rst_op);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_EVT_IND) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_7;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type7.evt_id) {
+			rule_data[extract_nb] = spec->hdr.type7.evt_id;
+			mask_data[extract_nb] = mask->hdr.type7.evt_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					evt_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.evt_type) {
+			rule_data[extract_nb] = spec->hdr.type7.evt_type;
+			mask_data[extract_nb] = mask->hdr.type7.evt_type;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					evt_type);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.seq) {
+			rule_data[extract_nb] = spec->hdr.type7.seq;
+			mask_data[extract_nb] = mask->hdr.type7.seq;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					seq);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.number) {
+			rule_data[extract_nb] = spec->hdr.type7.number;
+			mask_data[extract_nb] = mask->hdr.type7.number;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					number);
+			extract_nb++;
+		}
+	} else {
+		DPAA2_PMD_ERR("Invalid ecpri header type(%d)",
+				spec->hdr.common.type);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < extract_nb; i++) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			extract_off[i],
+			extract_size[i], &rule_data[i], &mask_data[i],
+			priv, group,
+			device_configured,
+			DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			extract_off[i],
+			extract_size[i], &rule_data[i], &mask_data[i],
+			priv, group,
+			device_configured,
+			DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3870,6 +4205,16 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_ECPRI:
+			ret = dpaa2_configure_flow_ecpri(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("ECPRI flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow, dev, attr,
 						       &dpaa2_pattern[i],
@@ -3884,7 +4229,8 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			end_of_list = 1;
 			break; /*End of List*/
 		default:
-			DPAA2_PMD_ERR("Invalid action type");
+			DPAA2_PMD_ERR("Invalid flow item[%d] type(%d)",
+				i, pattern[i].type);
 			ret = -ENOTSUP;
 			break;
 		}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 31/43] net/dpaa2: add GTP flow support
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (29 preceding siblings ...)
  2024-09-18  7:50   ` [v2 30/43] net/dpaa2: eCPRI support by parser result vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 32/43] net/dpaa2: check if Soft parser is loaded vanshika.shukla
                     ` (12 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Configure GTP flows to support RSS and FS.
The FAF bits in the parser result are checked to identify GTP frames.
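
For instance, steering by the GTP TEID could be requested as below (a
hypothetical rule for illustration only, assuming testpmd flow syntax;
the TEID and queue index are placeholders):

flow create 0 ingress pattern gtp teid is 0x1234 / end
	actions queue index 2 / end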

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 170 ++++++++++++++++++++++++++-------
 1 file changed, 137 insertions(+), 33 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index e4fffdbf33..02938ad27b 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -75,6 +75,7 @@ enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_TCP,
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
+	RTE_FLOW_ITEM_TYPE_GTP
 };
 
 static const
@@ -163,6 +164,11 @@ static const struct rte_flow_item_ecpri dpaa2_flow_item_ecpri_mask = {
 	.hdr.dummy[1] = RTE_BE32(0xffffffff),
 	.hdr.dummy[2] = RTE_BE32(0xffffffff),
 };
+
+static const struct rte_flow_item_gtp dpaa2_flow_item_gtp_mask = {
+	.teid = RTE_BE32(0xffffffff),
+};
+
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -238,6 +244,12 @@ dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 			strcat(string, ".type");
 		else
 			strcat(string, ".unknown field");
+	} else if (prot == NET_PROT_GTP) {
+		strcpy(string, "gtp");
+		if (field == NH_FLD_GTP_TEID)
+			strcat(string, ".teid");
+		else
+			strcat(string, ".unknown field");
 	} else {
 		strcpy(string, "unknown protocol");
 	}
@@ -1567,6 +1579,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_ecpri_mask;
 		size = sizeof(struct rte_flow_item_ecpri);
 		break;
+	case RTE_FLOW_ITEM_TYPE_GTP:
+		mask_support = (const char *)&dpaa2_flow_item_gtp_mask;
+		size = sizeof(struct rte_flow_item_gtp);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -3573,6 +3589,84 @@ dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_gtp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_gtp *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_gtp_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-GTP distribution not supported");
+		return -ENOTSUP;
+	}
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GTP_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GTP_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GTP)) {
+		DPAA2_PMD_WARN("Extract field(s) of GTP not supported.");
+
+		return -1;
+	}
+
+	if (!mask->teid)
+		return 0;
+
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GTP,
+			NH_FLD_GTP_TEID, &spec->teid,
+			&mask->teid, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GTP,
+			NH_FLD_GTP_TEID, &spec->teid,
+			&mask->teid, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -4107,9 +4201,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 		switch (pattern[i].type) {
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			ret = dpaa2_configure_flow_eth(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ETH flow config failed!");
 				goto end_flow_set;
@@ -4117,9 +4211,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_VLAN:
 			ret = dpaa2_configure_flow_vlan(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("vLan flow config failed!");
 				goto end_flow_set;
@@ -4127,9 +4221,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			ret = dpaa2_configure_flow_ipv4(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV4 flow config failed!");
 				goto end_flow_set;
@@ -4137,9 +4231,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
 			ret = dpaa2_configure_flow_ipv6(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV6 flow config failed!");
 				goto end_flow_set;
@@ -4147,9 +4241,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_ICMP:
 			ret = dpaa2_configure_flow_icmp(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ICMP flow config failed!");
 				goto end_flow_set;
@@ -4157,9 +4251,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			ret = dpaa2_configure_flow_udp(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("UDP flow config failed!");
 				goto end_flow_set;
@@ -4167,9 +4261,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			ret = dpaa2_configure_flow_tcp(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("TCP flow config failed!");
 				goto end_flow_set;
@@ -4177,9 +4271,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_SCTP:
 			ret = dpaa2_configure_flow_sctp(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("SCTP flow config failed!");
 				goto end_flow_set;
@@ -4187,9 +4281,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
 			ret = dpaa2_configure_flow_gre(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("GRE flow config failed!");
 				goto end_flow_set;
@@ -4197,9 +4291,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			ret = dpaa2_configure_flow_vxlan(flow, dev, attr,
-							 &dpaa2_pattern[i],
-							 actions, error,
-							 &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("VXLAN flow config failed!");
 				goto end_flow_set;
@@ -4215,11 +4309,21 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_GTP:
+			ret = dpaa2_configure_flow_gtp(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("GTP flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("RAW flow config failed!");
 				goto end_flow_set;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 32/43] net/dpaa2: check if Soft parser is loaded
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (30 preceding siblings ...)
  2024-09-18  7:50   ` [v2 31/43] net/dpaa2: add GTP flow support vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 33/43] net/dpaa2: soft parser flow verification vanshika.shukla
                     ` (11 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang, Vanshika Shukla

From: Jun Yang <jun.yang@nxp.com>

Access the soft parser (SP) instruction area in the WRIOP CCSR
space to check whether an SP image is loaded.
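
A minimal, self-contained sketch (not part of the patch; the helper
name is illustrative) of the /dev/mem mapping pattern that
dpaa2_soft_parser_loaded() below relies on. Note that mmap() reports
failure via MAP_FAILED rather than NULL:

	#include <errno.h>
	#include <fcntl.h>
	#include <stdint.h>
	#include <sys/mman.h>
	#include <unistd.h>

	/* Returns 1 if any of the first bytes of the mapped area are
	 * non-zero (an SP image is present), 0 if all are zero, or a
	 * negative errno. The real code below reads the sp_ins[] field
	 * at its offset inside the parser CCSR block. */
	static int sp_area_nonzero(uint64_t phys_base, size_t len)
	{
		int fd, loaded = 0;
		size_t i;
		uint8_t *ccsr;

		fd = open("/dev/mem", O_RDWR | O_SYNC);
		if (fd < 0)
			return -errno;
		ccsr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, (off_t)phys_base);
		if (ccsr == MAP_FAILED) {
			/* mmap() signals failure with MAP_FAILED, not NULL */
			close(fd);
			return -ENOBUFS;
		}
		for (i = 0; i < 16 && !loaded; i++)
			loaded = !!ccsr[i];
		munmap(ccsr, len);
		close(fd);
		return loaded;
	}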

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c |  4 ++
 drivers/net/dpaa2/dpaa2_ethdev.h |  2 +
 drivers/net/dpaa2/dpaa2_flow.c   | 88 ++++++++++++++++++++++++++++++++
 3 files changed, 94 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 000d7da85c..21955ad903 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2858,6 +2858,10 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 			return ret;
 		}
 	}
+
+	ret = dpaa2_soft_parser_loaded();
+	if (ret > 0)
+		DPAA2_PMD_INFO("soft parser is loaded");
 	DPAA2_PMD_INFO("%s: netdev created, connected to %s",
 		eth_dev->data->name, dpaa2_dev->ep_name);
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index eaa653d266..db918725a7 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -479,6 +479,8 @@ int dpaa2_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
 
 int dpaa2_dev_recycle_config(struct rte_eth_dev *eth_dev);
 int dpaa2_dev_recycle_deconfig(struct rte_eth_dev *eth_dev);
+int dpaa2_soft_parser_loaded(void);
+
 int dpaa2_dev_recycle_qp_setup(struct rte_dpaa2_device *dpaa2_dev,
 	uint16_t qidx, uint64_t cntx,
 	eth_rx_burst_t tx_lpbk, eth_tx_burst_t rx_lpbk,
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 02938ad27b..a376acffcf 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -9,6 +9,7 @@
 #include <string.h>
 #include <unistd.h>
 #include <stdarg.h>
+#include <sys/mman.h>
 
 #include <rte_ethdev.h>
 #include <rte_log.h>
@@ -24,6 +25,7 @@
 
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
+static int dpaa2_sp_loaded = -1;
 
 enum dpaa2_flow_entry_size {
 	DPAA2_FLOW_ENTRY_MIN_SIZE = (DPNI_MAX_KEY_SIZE / 2),
@@ -401,6 +403,92 @@ dpaa2_flow_fs_entry_log(const char *log_info,
 	DPAA2_FLOW_DUMP("\r\n");
 }
 
+/** For LX2160A, LS2088A and LS1088A*/
+#define WRIOP_CCSR_BASE 0x8b80000
+#define WRIOP_CCSR_CTLU_OFFSET 0
+#define WRIOP_CCSR_CTLU_PARSER_OFFSET 0
+#define WRIOP_CCSR_CTLU_PARSER_INGRESS_OFFSET 0
+
+#define WRIOP_INGRESS_PARSER_PHY \
+	(WRIOP_CCSR_BASE + WRIOP_CCSR_CTLU_OFFSET + \
+	WRIOP_CCSR_CTLU_PARSER_OFFSET + \
+	WRIOP_CCSR_CTLU_PARSER_INGRESS_OFFSET)
+
+struct dpaa2_parser_ccsr {
+	uint32_t psr_cfg;
+	uint32_t psr_idle;
+	uint32_t psr_pclm;
+	uint8_t psr_ver_min;
+	uint8_t psr_ver_maj;
+	uint8_t psr_id1_l;
+	uint8_t psr_id1_h;
+	uint32_t psr_rev2;
+	uint8_t rsv[0x2c];
+	uint8_t sp_ins[4032];
+};
+
+int
+dpaa2_soft_parser_loaded(void)
+{
+	int fd, i, ret = 0;
+	struct dpaa2_parser_ccsr *parser_ccsr = NULL;
+
+	dpaa2_flow_control_log = getenv("DPAA2_FLOW_CONTROL_LOG");
+
+	if (dpaa2_sp_loaded >= 0)
+		return dpaa2_sp_loaded;
+
+	fd = open("/dev/mem", O_RDWR | O_SYNC);
+	if (fd < 0) {
+		DPAA2_PMD_ERR("open \"/dev/mem\" ERROR(%d)", fd);
+		ret = fd;
+		goto exit;
+	}
+
+	parser_ccsr = mmap(NULL, sizeof(struct dpaa2_parser_ccsr),
+		PROT_READ | PROT_WRITE, MAP_SHARED, fd,
+		WRIOP_INGRESS_PARSER_PHY);
+	if (!parser_ccsr) {
+		DPAA2_PMD_ERR("Map 0x%" PRIx64 "(size=0x%x) failed",
+			(uint64_t)WRIOP_INGRESS_PARSER_PHY,
+			(uint32_t)sizeof(struct dpaa2_parser_ccsr));
+		ret = -ENOBUFS;
+		goto exit;
+	}
+
+	DPAA2_PMD_INFO("Parser ID:0x%02x%02x, Rev:major(%02x), minor(%02x)",
+		parser_ccsr->psr_id1_h, parser_ccsr->psr_id1_l,
+		parser_ccsr->psr_ver_maj, parser_ccsr->psr_ver_min);
+
+	if (dpaa2_flow_control_log) {
+		for (i = 0; i < 64; i++) {
+			DPAA2_FLOW_DUMP("%02x ",
+				parser_ccsr->sp_ins[i]);
+			if (!((i + 1) % 16))
+				DPAA2_FLOW_DUMP("\r\n");
+		}
+	}
+
+	for (i = 0; i < 16; i++) {
+		if (parser_ccsr->sp_ins[i]) {
+			dpaa2_sp_loaded = 1;
+			break;
+		}
+	}
+	if (dpaa2_sp_loaded < 0)
+		dpaa2_sp_loaded = 0;
+
+	ret = dpaa2_sp_loaded;
+
+exit:
+	if (parser_ccsr)
+		munmap(parser_ccsr, sizeof(struct dpaa2_parser_ccsr));
+	if (fd >= 0)
+		close(fd);
+
+	return ret;
+}
+
 static int
 dpaa2_flow_ip_address_extract(enum net_prot prot,
 	uint32_t field)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 33/43] net/dpaa2: soft parser flow verification
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (31 preceding siblings ...)
  2024-09-18  7:50   ` [v2 32/43] net/dpaa2: check if Soft parser is loaded vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 34/43] net/dpaa2: add flow support for IPsec AH and ESP vanshika.shukla
                     ` (10 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Add the flow item types supported by the soft parser to the
verification list; they are accepted only when a soft parser
image is detected.
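
Condensed from the diff below into a single predicate (the helper
name is hypothetical): an item type is accepted if the hardware
parser supports it, or, when a soft parser image is loaded, if the
soft parser supports it.

	static int
	dpaa2_pattern_type_ok(enum rte_flow_item_type type, int sp_loaded)
	{
		unsigned int i;

		for (i = 0; i < RTE_DIM(dpaa2_hp_supported_pattern_type); i++)
			if (dpaa2_hp_supported_pattern_type[i] == type)
				return 1;
		if (sp_loaded > 0) {
			for (i = 0; i < RTE_DIM(dpaa2_sp_supported_pattern_type); i++)
				if (dpaa2_sp_supported_pattern_type[i] == type)
					return 1;
		}
		return 0;
	}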

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 86 ++++++++++++++++++++--------------
 1 file changed, 52 insertions(+), 34 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index a376acffcf..72075473fc 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -66,7 +66,7 @@ struct rte_dpaa2_flow_item {
 };
 
 static const
-enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
+enum rte_flow_item_type dpaa2_hp_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_END,
 	RTE_FLOW_ITEM_TYPE_ETH,
 	RTE_FLOW_ITEM_TYPE_VLAN,
@@ -77,7 +77,14 @@ enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_TCP,
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
-	RTE_FLOW_ITEM_TYPE_GTP
+	RTE_FLOW_ITEM_TYPE_GTP,
+	RTE_FLOW_ITEM_TYPE_RAW
+};
+
+static const
+enum rte_flow_item_type dpaa2_sp_supported_pattern_type[] = {
+	RTE_FLOW_ITEM_TYPE_VXLAN,
+	RTE_FLOW_ITEM_TYPE_ECPRI
 };
 
 static const
@@ -4560,20 +4567,21 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
 	int ret = 0;
 
 	if (unlikely(attr->group >= dpni_attr->num_rx_tcs)) {
-		DPAA2_PMD_ERR("Priority group is out of range\n");
+		DPAA2_PMD_ERR("Group/TC(%d) is out of range(%d)",
+			attr->group, dpni_attr->num_rx_tcs);
 		ret = -ENOTSUP;
 	}
 	if (unlikely(attr->priority >= dpni_attr->fs_entries)) {
-		DPAA2_PMD_ERR("Priority within the group is out of range\n");
+		DPAA2_PMD_ERR("Priority(%d) within group is out of range(%d)",
+			attr->priority, dpni_attr->fs_entries);
 		ret = -ENOTSUP;
 	}
 	if (unlikely(attr->egress)) {
-		DPAA2_PMD_ERR(
-			"Flow configuration is not supported on egress side\n");
+		DPAA2_PMD_ERR("Egress flow configuration is not supported");
 		ret = -ENOTSUP;
 	}
 	if (unlikely(!attr->ingress)) {
-		DPAA2_PMD_ERR("Ingress flag must be configured\n");
+		DPAA2_PMD_ERR("Ingress flag must be configured");
 		ret = -EINVAL;
 	}
 	return ret;
@@ -4584,27 +4592,41 @@ dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[])
 {
 	unsigned int i, j, is_found = 0;
 	int ret = 0;
+	const enum rte_flow_item_type *hp_supported;
+	const enum rte_flow_item_type *sp_supported;
+	uint64_t hp_supported_num, sp_supported_num;
+
+	hp_supported = dpaa2_hp_supported_pattern_type;
+	hp_supported_num = RTE_DIM(dpaa2_hp_supported_pattern_type);
+
+	sp_supported = dpaa2_sp_supported_pattern_type;
+	sp_supported_num = RTE_DIM(dpaa2_sp_supported_pattern_type);
 
 	for (j = 0; pattern[j].type != RTE_FLOW_ITEM_TYPE_END; j++) {
-		for (i = 0; i < RTE_DIM(dpaa2_supported_pattern_type); i++) {
-			if (dpaa2_supported_pattern_type[i]
-					== pattern[j].type) {
+		is_found = 0;
+		for (i = 0; i < hp_supported_num; i++) {
+			if (hp_supported[i] == pattern[j].type) {
 				is_found = 1;
 				break;
 			}
 		}
+		if (is_found)
+			continue;
+		if (dpaa2_sp_loaded > 0) {
+			for (i = 0; i < sp_supported_num; i++) {
+				if (sp_supported[i] == pattern[j].type) {
+					is_found = 1;
+					break;
+				}
+			}
+		}
 		if (!is_found) {
+			DPAA2_PMD_WARN("Flow type(%d) not supported",
+				pattern[j].type);
 			ret = -ENOTSUP;
 			break;
 		}
 	}
-	/* Lets verify other combinations of given pattern rules */
-	for (j = 0; pattern[j].type != RTE_FLOW_ITEM_TYPE_END; j++) {
-		if (!pattern[j].spec) {
-			ret = -EINVAL;
-			break;
-		}
-	}
 
 	return ret;
 }
@@ -4651,43 +4673,39 @@ dpaa2_flow_validate(struct rte_eth_dev *dev,
 	memset(&dpni_attr, 0, sizeof(struct dpni_attr));
 	ret = dpni_get_attributes(dpni, CMD_PRI_LOW, token, &dpni_attr);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Failure to get dpni@%p attribute, err code  %d\n",
-			dpni, ret);
+		DPAA2_PMD_ERR("Get dpni@%d attribute failed(%d)",
+			priv->hw_id, ret);
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ATTR,
-			   flow_attr, "invalid");
+			RTE_FLOW_ERROR_TYPE_ATTR,
+			flow_attr, "invalid");
 		return ret;
 	}
 
 	/* Verify input attributes */
 	ret = dpaa2_dev_verify_attr(&dpni_attr, flow_attr);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid attributes are given\n");
+		DPAA2_PMD_ERR("Invalid attributes are given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ATTR,
-			   flow_attr, "invalid");
+			RTE_FLOW_ERROR_TYPE_ATTR,
+			flow_attr, "invalid");
 		goto not_valid_params;
 	}
 	/* Verify input pattern list */
 	ret = dpaa2_dev_verify_patterns(pattern);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid pattern list is given\n");
+		DPAA2_PMD_ERR("Invalid pattern list is given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ITEM,
-			   pattern, "invalid");
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			pattern, "invalid");
 		goto not_valid_params;
 	}
 	/* Verify input action list */
 	ret = dpaa2_dev_verify_actions(actions);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid action list is given\n");
+		DPAA2_PMD_ERR("Invalid action list is given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ACTION,
-			   actions, "invalid");
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			actions, "invalid");
 		goto not_valid_params;
 	}
 not_valid_params:
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 34/43] net/dpaa2: add flow support for IPsec AH and ESP
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (32 preceding siblings ...)
  2024-09-18  7:50   ` [v2 33/43] net/dpaa2: soft parser flow verification vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 35/43] net/dpaa2: fix memory corruption in TM vanshika.shukla
                     ` (9 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support AH/ESP flows matching on the SPI field.
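
As a usage illustration (port, queue and SPI values are arbitrary),
such rules can be created from testpmd once this patch is applied:

  testpmd> flow create 0 ingress pattern eth / ipv4 / esp spi is 1000 / end actions queue index 2 / end
  testpmd> flow create 0 ingress pattern eth / ipv4 / ah spi is 1000 / end actions queue index 3 / end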

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 528 ++++++++++++++++++++++++---------
 1 file changed, 385 insertions(+), 143 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 72075473fc..3afe331023 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -78,6 +78,8 @@ enum rte_flow_item_type dpaa2_hp_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
 	RTE_FLOW_ITEM_TYPE_GTP,
+	RTE_FLOW_ITEM_TYPE_ESP,
+	RTE_FLOW_ITEM_TYPE_AH,
 	RTE_FLOW_ITEM_TYPE_RAW
 };
 
@@ -158,6 +160,17 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 	},
 };
 
+static const struct rte_flow_item_esp dpaa2_flow_item_esp_mask = {
+	.hdr = {
+		.spi = RTE_BE32(0xffffffff),
+		.seq = RTE_BE32(0xffffffff),
+	},
+};
+
+static const struct rte_flow_item_ah dpaa2_flow_item_ah_mask = {
+	.spi = RTE_BE32(0xffffffff),
+};
+
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
@@ -259,8 +272,16 @@ dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 			strcat(string, ".teid");
 		else
 			strcat(string, ".unknown field");
+	} else if (prot == NET_PROT_IPSEC_ESP) {
+		strcpy(string, "esp");
+		if (field == NH_FLD_IPSEC_ESP_SPI)
+			strcat(string, ".spi");
+		else if (field == NH_FLD_IPSEC_ESP_SEQUENCE_NUM)
+			strcat(string, ".seq");
+		else
+			strcat(string, ".unknown field");
 	} else {
-		strcpy(string, "unknown protocol");
+		sprintf(string, "unknown protocol(%d)", prot);
 	}
 }
 
@@ -1658,6 +1679,14 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_tcp_mask;
 		size = sizeof(struct rte_flow_item_tcp);
 		break;
+	case RTE_FLOW_ITEM_TYPE_ESP:
+		mask_support = (const char *)&dpaa2_flow_item_esp_mask;
+		size = sizeof(struct rte_flow_item_esp);
+		break;
+	case RTE_FLOW_ITEM_TYPE_AH:
+		mask_support = (const char *)&dpaa2_flow_item_ah_mask;
+		size = sizeof(struct rte_flow_item_ah);
+		break;
 	case RTE_FLOW_ITEM_TYPE_SCTP:
 		mask_support = (const char *)&dpaa2_flow_item_sctp_mask;
 		size = sizeof(struct rte_flow_item_sctp);
@@ -1688,7 +1717,7 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask[i] = (mask[i] | mask_src[i]);
 
 	if (memcmp(mask, mask_support, size))
-		return -1;
+		return -ENOTSUP;
 
 	return 0;
 }
@@ -2092,11 +2121,12 @@ dpaa2_configure_flow_tunnel_eth(struct dpaa2_dev_flow *flow,
 	if (!spec)
 		return 0;
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp((const char *)&mask->src,
@@ -2308,11 +2338,12 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp((const char *)&mask->src,
@@ -2413,11 +2444,12 @@ dpaa2_configure_flow_tunnel_vlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (!mask->tci)
@@ -2475,14 +2507,14 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
@@ -2490,27 +2522,28 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-				       RTE_FLOW_ITEM_TYPE_VLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+			RTE_FLOW_ITEM_TYPE_VLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (!mask->tci)
 		return 0;
 
 	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
-					      NH_FLD_VLAN_TCI, &spec->tci,
-					      &mask->tci, sizeof(rte_be16_t),
-					      priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+			NH_FLD_VLAN_TCI, &spec->tci,
+			&mask->tci, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
-					      NH_FLD_VLAN_TCI, &spec->tci,
-					      &mask->tci, sizeof(rte_be16_t),
-					      priv, group, &local_cfg,
-					      DPAA2_FLOW_FS_TYPE);
+			NH_FLD_VLAN_TCI, &spec->tci,
+			&mask->tci, sizeof(rte_be16_t),
+			priv, group, &local_cfg,
+			DPAA2_FLOW_FS_TYPE);
 	if (ret)
 		return ret;
 
@@ -2519,12 +2552,13 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	int ret, local_cfg = 0;
 	uint32_t group;
@@ -2548,16 +2582,16 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV4_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV4_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV4_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV4_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
@@ -2566,13 +2600,13 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_index = attr->priority;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
-					 DPAA2_FLOW_QOS_TYPE, group,
-					 &local_cfg);
+			DPAA2_FLOW_QOS_TYPE, group,
+			&local_cfg);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
-					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
 	if (ret)
 		return ret;
 
@@ -2581,10 +2615,11 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
-				       RTE_FLOW_ITEM_TYPE_IPV4)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
+			RTE_FLOW_ITEM_TYPE_IPV4);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask_ipv4->hdr.src_addr) {
@@ -2593,18 +2628,18 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(rte_be32_t);
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV4_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV4_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2615,17 +2650,17 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(rte_be32_t);
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV4_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV4_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2636,18 +2671,18 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(uint8_t);
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2657,12 +2692,13 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 }
 
 static int
-dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	int ret, local_cfg = 0;
 	uint32_t group;
@@ -2690,27 +2726,27 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV6_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV6_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV6_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV6_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
-					 DPAA2_FLOW_QOS_TYPE, group,
-					 &local_cfg);
+			DPAA2_FLOW_QOS_TYPE, group,
+			&local_cfg);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
-					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
 	if (ret)
 		return ret;
 
@@ -2719,10 +2755,11 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
-				       RTE_FLOW_ITEM_TYPE_IPV6)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
+			RTE_FLOW_ITEM_TYPE_IPV6);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp(mask_ipv6->hdr.src_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
@@ -2731,18 +2768,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = NH_FLD_IPV6_ADDR_SIZE;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV6_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV6_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2753,18 +2790,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = NH_FLD_IPV6_ADDR_SIZE;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV6_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV6_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2775,18 +2812,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(uint8_t);
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2843,11 +2880,12 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ICMP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ICMP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ICMP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.icmp_type) {
@@ -2920,16 +2958,16 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_UDP_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_UDP_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_UDP_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_UDP_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
@@ -2950,11 +2988,12 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_UDP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_UDP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of UDP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3027,9 +3066,9 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_TCP_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_TCP_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
@@ -3057,11 +3096,12 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_TCP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_TCP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of TCP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3101,6 +3141,183 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_esp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_esp *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_esp_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ESP distribution not support");
+		return -ENOTSUP;
+	}
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_ESP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_ESP_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	if (!spec) {
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ESP);
+	if (ret) {
+		DPAA2_PMD_WARN("Extract field(s) of ESP not support.");
+
+		return ret;
+	}
+
+	if (mask->hdr.spi) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SPI, &spec->hdr.spi,
+			&mask->hdr.spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SPI, &spec->hdr.spi,
+			&mask->hdr.spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask->hdr.seq) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SEQUENCE_NUM, &spec->hdr.seq,
+			&mask->hdr.seq, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SEQUENCE_NUM, &spec->hdr.seq,
+			&mask->hdr.seq, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_ah(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ah *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_ah_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-AH distribution not support");
+		return -ENOTSUP;
+	}
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_AH_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_AH_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	if (!spec) {
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_AH);
+	if (ret) {
+		DPAA2_PMD_WARN("Extract field(s) of AH not support.");
+
+		return ret;
+	}
+
+	if (mask->spi) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_AH,
+			NH_FLD_IPSEC_AH_SPI, &spec->spi,
+			&mask->spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_AH,
+			NH_FLD_IPSEC_AH_SPI, &spec->spi,
+			&mask->spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask->seq_num) {
+		DPAA2_PMD_ERR("AH seq distribution not support");
+		return -ENOTSUP;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3149,11 +3366,12 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_SCTP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_SCTP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of SCTP not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3241,11 +3459,12 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_GRE)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GRE);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of GRE not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (!mask->protocol)
@@ -3318,11 +3537,12 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VXLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VXLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of VXLAN not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->flags) {
@@ -3422,17 +3642,18 @@ dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ECPRI)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ECPRI);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ECPRI not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->hdr.common.type != 0xff) {
 		DPAA2_PMD_WARN("ECPRI header type not specified.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_IQ_DATA) {
@@ -3733,11 +3954,12 @@ dpaa2_configure_flow_gtp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_GTP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GTP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of GTP not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (!mask->teid)
@@ -4374,6 +4596,26 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_ESP:
+			ret = dpaa2_configure_flow_esp(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("ESP flow config failed!");
+				goto end_flow_set;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_AH:
+			ret = dpaa2_configure_flow_ah(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("AH flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
 			ret = dpaa2_configure_flow_gre(flow, dev, attr,
 					&dpaa2_pattern[i],
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 35/43] net/dpaa2: fix memory corruption in TM
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (33 preceding siblings ...)
  2024-09-18  7:50   ` [v2 34/43] net/dpaa2: add flow support for IPsec AH and ESP vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 36/43] net/dpaa2: support software taildrop vanshika.shukla
                     ` (8 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh; +Cc: stable

From: Gagandeep Singh <g.singh@nxp.com>

The driver was reserving memory in an array for only 8 queues,
but many more queues can be configured.

This patch fixes the memory corruption by defining the queue
array with the correct size.
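
The nature of the overflow, as a minimal sketch (the helper name is
hypothetical; leaf_node->id in the diff below indexes Tx queues and
can exceed DPNI_MAX_TC, i.e. 8, so the old fixed-size
"int conf[DPNI_MAX_TC]" could be written out of bounds):

	#include <string.h>

	static void mark_queue_configured(int nb_tx_queues, int queue_id)
	{
		/* VLA sized by the configured queue count keeps every
		 * index in range */
		int conf[nb_tx_queues];

		memset(conf, 0, sizeof(conf));
		conf[queue_id] = 1;	/* queue_id < nb_tx_queues by construction */
	}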

Fixes: 72100f0dee21 ("net/dpaa2: support level 2 in traffic management")
Cc: g.singh@nxp.com
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/net/dpaa2/dpaa2_tm.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index cb854964b4..83d0d669ce 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -684,6 +684,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 	struct dpaa2_tm_node *leaf_node, *temp_leaf_node, *channel_node;
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
 	int ret, t;
+	bool conf_schedule = false;
 
 	/* Populate TCs */
 	LIST_FOREACH(channel_node, &priv->nodes, next) {
@@ -757,7 +758,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 	}
 
 	LIST_FOREACH(channel_node, &priv->nodes, next) {
-		int wfq_grp = 0, is_wfq_grp = 0, conf[DPNI_MAX_TC];
+		int wfq_grp = 0, is_wfq_grp = 0, conf[priv->nb_tx_queues];
 		struct dpni_tx_priorities_cfg prio_cfg;
 
 		memset(&prio_cfg, 0, sizeof(prio_cfg));
@@ -767,6 +768,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 		if (channel_node->level_id != CHANNEL_LEVEL)
 			continue;
 
+		conf_schedule = false;
 		LIST_FOREACH(leaf_node, &priv->nodes, next) {
 			struct dpaa2_queue *leaf_dpaa2_q;
 			uint8_t leaf_tc_id;
@@ -789,6 +791,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 			if (leaf_node->parent != channel_node)
 				continue;
 
+			conf_schedule = true;
 			leaf_dpaa2_q =  (struct dpaa2_queue *)dev->data->tx_queues[leaf_node->id];
 			leaf_tc_id = leaf_dpaa2_q->tc_index;
 			/* Process sibling leaf nodes */
@@ -829,8 +832,8 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 						goto out;
 					}
 					is_wfq_grp = 1;
-					conf[temp_leaf_node->id] = 1;
 				}
+				conf[temp_leaf_node->id] = 1;
 			}
 			if (is_wfq_grp) {
 				if (wfq_grp == 0) {
@@ -851,6 +854,9 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 			}
 			conf[leaf_node->id] = 1;
 		}
+		if (!conf_schedule)
+			continue;
+
 		if (wfq_grp > 1) {
 			prio_cfg.separate_groups = 1;
 			if (prio_cfg.prio_group_B < prio_cfg.prio_group_A) {
@@ -864,6 +870,16 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 
 		prio_cfg.prio_group_A = 1;
 		prio_cfg.channel_idx = channel_node->channel_id;
+		DPAA2_PMD_DEBUG("########################################\n");
+		DPAA2_PMD_DEBUG("Channel idx = %d\n", prio_cfg.channel_idx);
+		for (t = 0; t < DPNI_MAX_TC; t++)
+			DPAA2_PMD_DEBUG("tc = %d mode = %d, delta = %d\n", t,
+					prio_cfg.tc_sched[t].mode,
+					prio_cfg.tc_sched[t].delta_bandwidth);
+
+		DPAA2_PMD_DEBUG("prioritya = %d, priorityb = %d, separate grps"
+				" = %d\n\n", prio_cfg.prio_group_A,
+				prio_cfg.prio_group_B, prio_cfg.separate_groups);
 		ret = dpni_set_tx_priorities(dpni, 0, priv->token, &prio_cfg);
 		if (ret) {
 			ret = -rte_tm_error_set(error, EINVAL,
@@ -871,15 +887,6 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 					"Scheduling Failed\n");
 			goto out;
 		}
-		DPAA2_PMD_DEBUG("########################################\n");
-		DPAA2_PMD_DEBUG("Channel idx = %d\n", prio_cfg.channel_idx);
-		for (t = 0; t < DPNI_MAX_TC; t++) {
-			DPAA2_PMD_DEBUG("tc = %d mode = %d ", t, prio_cfg.tc_sched[t].mode);
-			DPAA2_PMD_DEBUG("delta = %d\n", prio_cfg.tc_sched[t].delta_bandwidth);
-		}
-		DPAA2_PMD_DEBUG("prioritya = %d\n", prio_cfg.prio_group_A);
-		DPAA2_PMD_DEBUG("priorityb = %d\n", prio_cfg.prio_group_B);
-		DPAA2_PMD_DEBUG("separate grps = %d\n\n", prio_cfg.separate_groups);
 	}
 	return 0;
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 36/43] net/dpaa2: support software taildrop
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (34 preceding siblings ...)
  2024-09-18  7:50   ` [v2 35/43] net/dpaa2: fix memory corruption in TM vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 37/43] net/dpaa2: check IOVA before sending MC command vanshika.shukla
                     ` (7 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

Add software-based tail drop support: when a Tx queue remains
congested past the retry budget, the pending packets are dropped
in software instead of being returned unsent to the application
indefinitely.
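
A simplified sketch of the drop path this patch adds (the helper
name is hypothetical; the real code below also handles mbufs with
external buffers separately):

	#include <rte_mbuf.h>

	/* Once the congestion retry budget is exhausted on a queue with
	 * tm_sw_td set, free the unsent mbufs and report them all as
	 * consumed rather than returning a short count forever. */
	static uint16_t
	sw_tail_drop(struct rte_mbuf **bufs, uint16_t sent, uint16_t total)
	{
		uint16_t i;

		for (i = sent; i < total; i++)
			rte_pktmbuf_free(bufs[i]);
		return total;
	}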

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  2 +-
 drivers/net/dpaa2/dpaa2_rxtx.c          | 24 +++++++++++++++++++++++-
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index c5900bd06a..03b9088cc6 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -179,7 +179,7 @@ struct __rte_cache_aligned dpaa2_queue {
 	struct dpaa2_queue *tx_conf_queue;
 	int32_t eventfd;	/*!< Event Fd of this queue */
 	uint16_t nb_desc;
-	uint16_t resv;
+	uint16_t tm_sw_td;	/*!< TM software taildrop */
 	uint64_t offloads;
 	uint64_t lpbk_cntx;
 };
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 4bb785aa49..065b219ffd 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1297,8 +1297,11 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		while (qbman_result_SCN_state(dpaa2_q->cscn)) {
 			retry_count++;
 			/* Retry for some time before giving up */
-			if (retry_count > CONG_RETRY_COUNT)
+			if (retry_count > CONG_RETRY_COUNT) {
+				if (dpaa2_q->tm_sw_td)
+					goto sw_td;
 				goto skip_tx;
+			}
 		}
 
 		frames_to_send = (nb_pkts > dpaa2_eqcr_size) ?
@@ -1490,6 +1493,25 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
 	}
 
+	return num_tx;
+sw_td:
+	loop = 0;
+	while (loop < num_tx) {
+		if (unlikely(RTE_MBUF_HAS_EXTBUF(*bufs)))
+			rte_pktmbuf_free(*bufs);
+		bufs++;
+		loop++;
+	}
+
+	/* free the pending buffers */
+	while (nb_pkts) {
+		rte_pktmbuf_free(*bufs);
+		bufs++;
+		nb_pkts--;
+		num_tx++;
+	}
+	dpaa2_q->tx_pkts += num_tx;
+
 	return num_tx;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v2 37/43] net/dpaa2: check IOVA before sending MC command
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (35 preceding siblings ...)
  2024-09-18  7:50   ` [v2 36/43] net/dpaa2: support software taildrop vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 38/43] net/dpaa2: improve DPDMUX error behavior settings vanshika.shukla
                     ` (6 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Convert VA to IOVA and check the IOVA before sending a parameter
to the MC. An invalid parameter IOVA sent to the MC hangs the
system, which cannot recover without a power reset.
IOVA is not checked in the data path because:
1) The MC is not involved and errors can be recovered.
2) The IOVA check would cost a little performance.
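
The convert-and-check pattern, condensed from the diff below
(DPAA2_VADDR_TO_IOVA_AND_CHECK is the macro this series uses; it
returns RTE_BAD_IOVA when the buffer has no IOMMU mapping):

	p_params = rte_zmalloc(NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
	if (!p_params)
		return -ENOMEM;
	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
		DIST_PARAM_IOVA_SIZE);
	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
		/* never hand an unmapped buffer to the MC */
		rte_free(p_params);
		return -ENOBUFS;
	}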

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c |  63 +++--
 drivers/net/dpaa2/dpaa2_ethdev.c       | 338 +++++++++++++------------
 drivers/net/dpaa2/dpaa2_ethdev.h       |   3 +
 drivers/net/dpaa2/dpaa2_flow.c         |  67 ++++-
 drivers/net/dpaa2/dpaa2_sparser.c      |  27 +-
 drivers/net/dpaa2/dpaa2_tm.c           |  43 ++--
 6 files changed, 321 insertions(+), 220 deletions(-)

diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 4d33b51fea..20b37a97bb 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -30,8 +30,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 
 int
 rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
-			      uint16_t offset,
-			      uint8_t size)
+	uint16_t offset, uint8_t size)
 {
 	struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
@@ -52,8 +51,8 @@ rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	p_params = rte_zmalloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_zmalloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -73,17 +72,23 @@ rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
 	}
 
 	memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
-	tc_cfg.key_cfg_iova = (size_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.dist_size = eth_dev->data->nb_rx_queues;
 	tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
 
 	ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, priv->token, tc_index,
-				  &tc_cfg);
+			&tc_cfg);
 	rte_free(p_params);
 	if (ret) {
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("Set RX TC dist failed(err=%d)", ret);
 		return ret;
 	}
 
@@ -115,8 +120,8 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 	if (tc_dist_queues > priv->dist_queues)
 		tc_dist_queues = priv->dist_queues;
 
-	p_params = rte_malloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_malloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -133,7 +138,15 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 		return ret;
 	}
 
-	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.dist_size = tc_dist_queues;
 	tc_cfg.enable = true;
 	tc_cfg.tc = tc_index;
@@ -148,17 +161,15 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW, priv->token, &tc_cfg);
 	rte_free(p_params);
 	if (ret) {
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("RX Hash dist for failed(err=%d)", ret);
 		return ret;
 	}
 
 	return 0;
 }
 
-int dpaa2_remove_flow_dist(
-	struct rte_eth_dev *eth_dev,
+int
+dpaa2_remove_flow_dist(struct rte_eth_dev *eth_dev,
 	uint8_t tc_index)
 {
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
@@ -168,8 +179,8 @@ int dpaa2_remove_flow_dist(
 	void *p_params;
 	int ret;
 
-	p_params = rte_malloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_malloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -177,7 +188,15 @@ int dpaa2_remove_flow_dist(
 
 	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
 	tc_cfg.dist_size = 0;
-	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.enable = true;
 	tc_cfg.tc = tc_index;
 
@@ -194,9 +213,7 @@ int dpaa2_remove_flow_dist(
 			&tc_cfg);
 	rte_free(p_params);
 	if (ret)
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("RX hash dist failed(err=%d)", ret);
 	return ret;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 21955ad903..9f859aef66 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -123,9 +123,9 @@ dpaa2_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
-		return -1;
+		return -EINVAL;
 	}
 
 	if (on)
@@ -174,8 +174,8 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 static int
 dpaa2_vlan_tpid_set(struct rte_eth_dev *dev,
-		      enum rte_vlan_type vlan_type __rte_unused,
-		      uint16_t tpid)
+	enum rte_vlan_type vlan_type __rte_unused,
+	uint16_t tpid)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct fsl_mc_io *dpni = dev->process_private;
@@ -212,8 +212,7 @@ dpaa2_vlan_tpid_set(struct rte_eth_dev *dev,
 
 static int
 dpaa2_fw_version_get(struct rte_eth_dev *dev,
-		     char *fw_version,
-		     size_t fw_size)
+	char *fw_version, size_t fw_size)
 {
 	int ret;
 	struct fsl_mc_io *dpni = dev->process_private;
@@ -245,7 +244,8 @@ dpaa2_fw_version_get(struct rte_eth_dev *dev,
 }
 
 static int
-dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+dpaa2_dev_info_get(struct rte_eth_dev *dev,
+	struct rte_eth_dev_info *dev_info)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
@@ -291,8 +291,8 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 static int
 dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
-			__rte_unused uint16_t queue_id,
-			struct rte_eth_burst_mode *mode)
+	__rte_unused uint16_t queue_id,
+	struct rte_eth_burst_mode *mode)
 {
 	struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
 	int ret = -EINVAL;
@@ -368,7 +368,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	uint8_t num_rxqueue_per_tc;
 	struct dpaa2_queue *mc_q, *mcq;
 	uint32_t tot_queues;
-	int i;
+	int i, ret;
 	struct dpaa2_queue *dpaa2_q;
 
 	PMD_INIT_FUNC_TRACE();
@@ -382,7 +382,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 			  RTE_CACHE_LINE_SIZE);
 	if (!mc_q) {
 		DPAA2_PMD_ERR("Memory allocation failed for rx/tx queues");
-		return -1;
+		return -ENOBUFS;
 	}
 
 	for (i = 0; i < priv->nb_rx_queues; i++) {
@@ -404,8 +404,10 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	if (dpaa2_enable_err_queue) {
 		priv->rx_err_vq = rte_zmalloc("dpni_rx_err",
 			sizeof(struct dpaa2_queue), 0);
-		if (!priv->rx_err_vq)
+		if (!priv->rx_err_vq) {
+			ret = -ENOBUFS;
 			goto fail;
+		}
 
 		dpaa2_q = (struct dpaa2_queue *)priv->rx_err_vq;
 		dpaa2_q->q_storage = rte_malloc("err_dq_storage",
@@ -424,13 +426,15 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 
 	for (i = 0; i < priv->nb_tx_queues; i++) {
 		mc_q->eth_data = dev->data;
-		mc_q->flow_id = 0xffff;
+		mc_q->flow_id = DPAA2_INVALID_FLOW_ID;
 		priv->tx_vq[i] = mc_q++;
 		dpaa2_q = (struct dpaa2_queue *)priv->tx_vq[i];
 		dpaa2_q->cscn = rte_malloc(NULL,
 					   sizeof(struct qbman_result), 16);
-		if (!dpaa2_q->cscn)
+		if (!dpaa2_q->cscn) {
+			ret = -ENOBUFS;
 			goto fail_tx;
+		}
 	}
 
 	if (priv->flags & DPAA2_TX_CONF_ENABLE) {
@@ -498,7 +502,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	}
 
 	rte_free(mc_q);
-	return -1;
+	return ret;
 }
 
 static void
@@ -718,14 +722,14 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
  */
 static int
 dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
-			 uint16_t rx_queue_id,
-			 uint16_t nb_rx_desc,
-			 unsigned int socket_id __rte_unused,
-			 const struct rte_eth_rxconf *rx_conf,
-			 struct rte_mempool *mb_pool)
+	uint16_t rx_queue_id,
+	uint16_t nb_rx_desc,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_rxconf *rx_conf,
+	struct rte_mempool *mb_pool)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct dpaa2_queue *dpaa2_q;
 	struct dpni_queue cfg;
 	uint8_t options = 0;
@@ -747,8 +751,8 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* Rx deferred start is not supported */
 	if (rx_conf->rx_deferred_start) {
-		DPAA2_PMD_ERR("%p:Rx deferred start not supported",
-				(void *)dev);
+		DPAA2_PMD_ERR("%s:Rx deferred start not supported",
+			dev->data->name);
 		return -EINVAL;
 	}
 
@@ -764,7 +768,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		if (ret)
 			return ret;
 	}
-	dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+	dpaa2_q = priv->rx_vq[rx_queue_id];
 	dpaa2_q->mb_pool = mb_pool; /**< mbuf pool to populate RX ring. */
 	dpaa2_q->bp_array = rte_dpaa2_bpid_info;
 	dpaa2_q->nb_desc = UINT16_MAX;
@@ -790,7 +794,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		cfg.cgid = i;
 		dpaa2_q->cgid = cfg.cgid;
 	} else {
-		dpaa2_q->cgid = 0xff;
+		dpaa2_q->cgid = DPAA2_INVALID_CGID;
 	}
 
 	/*if ls2088 or rev2 device, enable the stashing */
@@ -811,10 +815,10 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			cfg.flc.value |= 0x14;
 	}
 	ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_RX,
-			     dpaa2_q->tc_index, flow_id, options, &cfg);
+			dpaa2_q->tc_index, flow_id, options, &cfg);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in setting the rx flow: = %d", ret);
-		return -1;
+		return ret;
 	}
 
 	if (!(priv->flags & DPAA2_RX_TAILDROP_OFF)) {
@@ -827,7 +831,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		 * There is no HW restriction, but number of CGRs are limited,
 		 * hence this restriction is placed.
 		 */
-		if (dpaa2_q->cgid != 0xff) {
+		if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 			/*enabling per rx queue congestion control */
 			taildrop.threshold = nb_rx_desc;
 			taildrop.units = DPNI_CONGESTION_UNIT_FRAMES;
@@ -853,15 +857,15 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		}
 		if (ret) {
 			DPAA2_PMD_ERR("Error in setting taildrop. err=(%d)",
-				      ret);
-			return -1;
+				ret);
+			return ret;
 		}
 	} else { /* Disable tail Drop */
 		struct dpni_taildrop taildrop = {0};
 		DPAA2_PMD_INFO("Tail drop is disabled on queue");
 
 		taildrop.enable = 0;
-		if (dpaa2_q->cgid != 0xff) {
+		if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
 					DPNI_CP_CONGESTION_GROUP, DPNI_QUEUE_RX,
 					dpaa2_q->tc_index,
@@ -873,8 +877,8 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		}
 		if (ret) {
 			DPAA2_PMD_ERR("Error in setting taildrop. err=(%d)",
-				      ret);
-			return -1;
+				ret);
+			return ret;
 		}
 	}
 
@@ -884,16 +888,14 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 static int
 dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
-			 uint16_t tx_queue_id,
-			 uint16_t nb_tx_desc,
-			 unsigned int socket_id __rte_unused,
-			 const struct rte_eth_txconf *tx_conf)
+	uint16_t tx_queue_id,
+	uint16_t nb_tx_desc,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_txconf *tx_conf)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct dpaa2_queue *dpaa2_q = (struct dpaa2_queue *)
-		priv->tx_vq[tx_queue_id];
-	struct dpaa2_queue *dpaa2_tx_conf_q = (struct dpaa2_queue *)
-		priv->tx_conf_vq[tx_queue_id];
+	struct dpaa2_queue *dpaa2_q = priv->tx_vq[tx_queue_id];
+	struct dpaa2_queue *dpaa2_tx_conf_q = priv->tx_conf_vq[tx_queue_id];
 	struct fsl_mc_io *dpni = dev->process_private;
 	struct dpni_queue tx_conf_cfg;
 	struct dpni_queue tx_flow_cfg;
@@ -903,13 +905,14 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	struct dpni_queue_id qid;
 	uint32_t tc_id;
 	int ret;
+	uint64_t iova;
 
 	PMD_INIT_FUNC_TRACE();
 
 	/* Tx deferred start is not supported */
 	if (tx_conf->tx_deferred_start) {
-		DPAA2_PMD_ERR("%p:Tx deferred start not supported",
-				(void *)dev);
+		DPAA2_PMD_ERR("%s:Tx deferred start not supported",
+			dev->data->name);
 		return -EINVAL;
 	}
 
@@ -917,7 +920,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	dpaa2_q->offloads = tx_conf->offloads;
 
 	/* Return if queue already configured */
-	if (dpaa2_q->flow_id != 0xffff) {
+	if (dpaa2_q->flow_id != DPAA2_INVALID_FLOW_ID) {
 		dev->data->tx_queues[tx_queue_id] = dpaa2_q;
 		return 0;
 	}
@@ -959,7 +962,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		DPAA2_PMD_ERR("Error in setting the tx flow: "
 			"tc_id=%d, flow=%d err=%d",
 			tc_id, flow_id, ret);
-			return -1;
+			return ret;
 	}
 
 	dpaa2_q->flow_id = flow_id;
@@ -967,11 +970,11 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	dpaa2_q->tc_index = tc_id;
 
 	ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX, ((channel_id << 8) | dpaa2_q->tc_index),
-			     dpaa2_q->flow_id, &tx_flow_cfg, &qid);
+			DPNI_QUEUE_TX, ((channel_id << 8) | dpaa2_q->tc_index),
+			dpaa2_q->flow_id, &tx_flow_cfg, &qid);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-		return -1;
+		return ret;
 	}
 	dpaa2_q->fqid = qid.fqid;
 
@@ -987,8 +990,17 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		 */
 		cong_notif_cfg.threshold_exit = (nb_tx_desc * 9) / 10;
 		cong_notif_cfg.message_ctx = 0;
-		cong_notif_cfg.message_iova =
-				(size_t)DPAA2_VADDR_TO_IOVA(dpaa2_q->cscn);
+
+		iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(dpaa2_q->cscn,
+			sizeof(struct qbman_result));
+		if (iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("No IOMMU map for cscn(%p)(size=%x)",
+				dpaa2_q->cscn, (uint32_t)sizeof(struct qbman_result));
+
+			return -ENOBUFS;
+		}
+
+		cong_notif_cfg.message_iova = iova;
 		cong_notif_cfg.dest_cfg.dest_type = DPNI_DEST_NONE;
 		cong_notif_cfg.notification_mode =
 					 DPNI_CONG_OPT_WRITE_MEM_ON_ENTER |
@@ -996,16 +1008,13 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 					 DPNI_CONG_OPT_COHERENT_WRITE;
 		cong_notif_cfg.cg_point = DPNI_CP_QUEUE;
 
-		ret = dpni_set_congestion_notification(dpni, CMD_PRI_LOW,
-						       priv->token,
-						       DPNI_QUEUE_TX,
-						       ((channel_id << 8) | tc_id),
-						       &cong_notif_cfg);
+		ret = dpni_set_congestion_notification(dpni,
+				CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
+				((channel_id << 8) | tc_id), &cong_notif_cfg);
 		if (ret) {
-			DPAA2_PMD_ERR(
-			   "Error in setting tx congestion notification: "
-			   "err=%d", ret);
-			return -ret;
+			DPAA2_PMD_ERR("Set TX congestion notification err=%d",
+			   ret);
+			return ret;
 		}
 	}
 	dpaa2_q->cb_eqresp_free = dpaa2_dev_free_eqresp_buf;
@@ -1016,22 +1025,24 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		options = options | DPNI_QUEUE_OPT_USER_CTX;
 		tx_conf_cfg.user_context = (size_t)(dpaa2_q);
 		ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX_CONFIRM, ((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
-			     dpaa2_tx_conf_q->flow_id, options, &tx_conf_cfg);
+				DPNI_QUEUE_TX_CONFIRM,
+				((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
+				dpaa2_tx_conf_q->flow_id,
+				options, &tx_conf_cfg);
 		if (ret) {
-			DPAA2_PMD_ERR("Error in setting the tx conf flow: "
-			      "tc_index=%d, flow=%d err=%d",
-			      dpaa2_tx_conf_q->tc_index,
-			      dpaa2_tx_conf_q->flow_id, ret);
-			return -1;
+			DPAA2_PMD_ERR("Set TC[%d].TX[%d] conf flow err=%d",
+				dpaa2_tx_conf_q->tc_index,
+				dpaa2_tx_conf_q->flow_id, ret);
+			return ret;
 		}
 
 		ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX_CONFIRM, ((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
-			     dpaa2_tx_conf_q->flow_id, &tx_conf_cfg, &qid);
+				DPNI_QUEUE_TX_CONFIRM,
+				((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
+				dpaa2_tx_conf_q->flow_id, &tx_conf_cfg, &qid);
 		if (ret) {
 			DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-			return -1;
+			return ret;
 		}
 		dpaa2_tx_conf_q->fqid = qid.fqid;
 	}
@@ -1043,8 +1054,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct dpaa2_queue *dpaa2_q = dev->data->rx_queues[rx_queue_id];
 	struct dpaa2_dev_priv *priv = dpaa2_q->eth_data->dev_private;
-	struct fsl_mc_io *dpni =
-		(struct fsl_mc_io *)priv->eth_dev->process_private;
+	struct fsl_mc_io *dpni = priv->eth_dev->process_private;
 	uint8_t options = 0;
 	int ret;
 	struct dpni_queue cfg;
@@ -1054,7 +1064,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	total_nb_rx_desc -= dpaa2_q->nb_desc;
 
-	if (dpaa2_q->cgid != 0xff) {
+	if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 		options = DPNI_QUEUE_OPT_CLEAR_CGID;
 		cfg.cgid = dpaa2_q->cgid;
 
@@ -1066,7 +1076,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 			DPAA2_PMD_ERR("Unable to clear CGR from q=%u err=%d",
 					dpaa2_q->fqid, ret);
 		priv->cgid_in_use[dpaa2_q->cgid] = 0;
-		dpaa2_q->cgid = 0xff;
+		dpaa2_q->cgid = DPAA2_INVALID_CGID;
 	}
 }
 
@@ -1230,10 +1240,10 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
 	dpaa2_dev_set_link_up(dev);
 
 	for (i = 0; i < data->nb_rx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)data->rx_queues[i];
+		dpaa2_q = data->rx_queues[i];
 		ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-				     DPNI_QUEUE_RX, dpaa2_q->tc_index,
-				       dpaa2_q->flow_id, &cfg, &qid);
+				DPNI_QUEUE_RX, dpaa2_q->tc_index,
+				dpaa2_q->flow_id, &cfg, &qid);
 		if (ret) {
 			DPAA2_PMD_ERR("Error in getting flow information: "
 				      "err=%d", ret);
@@ -1250,7 +1260,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
 						ret);
 			return ret;
 		}
-		dpaa2_q = (struct dpaa2_queue *)priv->rx_err_vq;
+		dpaa2_q = priv->rx_err_vq;
 		dpaa2_q->fqid = qid.fqid;
 		dpaa2_q->eth_data = dev->data;
 
@@ -1315,7 +1325,7 @@ static int
 dpaa2_dev_stop(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int ret;
 	struct rte_eth_link link;
 	struct rte_device *rdev = dev->device;
@@ -1368,7 +1378,7 @@ static int
 dpaa2_dev_close(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int i, ret;
 	struct rte_eth_link link;
 
@@ -1379,7 +1389,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 
 	if (!dpni) {
 		DPAA2_PMD_WARN("Already closed or not started");
-		return -1;
+		return -EINVAL;
 	}
 
 	dpaa2_tm_deinit(dev);
@@ -1388,7 +1398,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	ret = dpni_reset(dpni, CMD_PRI_LOW, priv->token);
 	if (ret) {
 		DPAA2_PMD_ERR("Failure cleaning dpni device: err=%d", ret);
-		return -1;
+		return ret;
 	}
 
 	memset(&link, 0, sizeof(link));
@@ -1400,7 +1410,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	ret = dpni_close(dpni, CMD_PRI_LOW, priv->token);
 	if (ret) {
 		DPAA2_PMD_ERR("Failure closing dpni device with err code %d",
-			      ret);
+			ret);
 	}
 
 	/* Free the allocated memory for ethernet private data and dpni*/
@@ -1409,18 +1419,17 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	rte_free(dpni);
 
 	for (i = 0; i < MAX_TCS; i++)
-		rte_free((void *)(size_t)priv->extract.tc_extract_param[i]);
+		rte_free(priv->extract.tc_extract_param[i]);
 
 	if (priv->extract.qos_extract_param)
-		rte_free((void *)(size_t)priv->extract.qos_extract_param);
+		rte_free(priv->extract.qos_extract_param);
 
 	DPAA2_PMD_INFO("%s: netdev deleted", dev->data->name);
 	return 0;
 }
 
 static int
-dpaa2_dev_promiscuous_enable(
-		struct rte_eth_dev *dev)
+dpaa2_dev_promiscuous_enable(struct rte_eth_dev *dev)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
@@ -1480,7 +1489,7 @@ dpaa2_dev_allmulticast_enable(
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1501,7 +1510,7 @@ dpaa2_dev_allmulticast_disable(struct rte_eth_dev *dev)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1526,13 +1535,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
 				+ VLAN_TAG_SIZE;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1544,7 +1553,7 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 					frame_size - RTE_ETHER_CRC_LEN);
 	if (ret) {
 		DPAA2_PMD_ERR("Setting the max frame length failed");
-		return -1;
+		return ret;
 	}
 	dev->data->mtu = mtu;
 	DPAA2_PMD_INFO("MTU configured for the device: %d", mtu);
@@ -1553,36 +1562,35 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 static int
 dpaa2_dev_add_mac_addr(struct rte_eth_dev *dev,
-		       struct rte_ether_addr *addr,
-		       __rte_unused uint32_t index,
-		       __rte_unused uint32_t pool)
+	struct rte_ether_addr *addr,
+	__rte_unused uint32_t index,
+	__rte_unused uint32_t pool)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
-		return -1;
+		return -EINVAL;
 	}
 
 	ret = dpni_add_mac_addr(dpni, CMD_PRI_LOW, priv->token,
 				addr->addr_bytes, 0, 0, 0);
 	if (ret)
-		DPAA2_PMD_ERR(
-			"error: Adding the MAC ADDR failed: err = %d", ret);
-	return 0;
+		DPAA2_PMD_ERR("ERR(%d) Adding the MAC ADDR failed", ret);
+	return ret;
 }
 
 static void
 dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
-			  uint32_t index)
+	uint32_t index)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct rte_eth_dev_data *data = dev->data;
 	struct rte_ether_addr *macaddr;
 
@@ -1590,7 +1598,7 @@ dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
 
 	macaddr = &data->mac_addrs[index];
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return;
 	}
@@ -1604,15 +1612,15 @@ dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
 
 static int
 dpaa2_dev_set_mac_addr(struct rte_eth_dev *dev,
-		       struct rte_ether_addr *addr)
+	struct rte_ether_addr *addr)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1621,19 +1629,18 @@ dpaa2_dev_set_mac_addr(struct rte_eth_dev *dev,
 					priv->token, addr->addr_bytes);
 
 	if (ret)
-		DPAA2_PMD_ERR(
-			"error: Setting the MAC ADDR failed %d", ret);
+		DPAA2_PMD_ERR("ERR(%d) Setting the MAC ADDR failed", ret);
 
 	return ret;
 }
 
-static
-int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
-			 struct rte_eth_stats *stats)
+static int
+dpaa2_dev_stats_get(struct rte_eth_dev *dev,
+	struct rte_eth_stats *stats)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
-	int32_t  retcode;
+	struct fsl_mc_io *dpni = dev->process_private;
+	int32_t retcode;
 	uint8_t page0 = 0, page1 = 1, page2 = 2;
 	union dpni_statistics value;
 	int i;
@@ -1688,8 +1695,8 @@ int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
 	/* Fill in per queue stats */
 	for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) &&
 		(i < priv->nb_rx_queues || i < priv->nb_tx_queues); ++i) {
-		dpaa2_rxq = (struct dpaa2_queue *)priv->rx_vq[i];
-		dpaa2_txq = (struct dpaa2_queue *)priv->tx_vq[i];
+		dpaa2_rxq = priv->rx_vq[i];
+		dpaa2_txq = priv->tx_vq[i];
 		if (dpaa2_rxq)
 			stats->q_ipackets[i] = dpaa2_rxq->rx_pkts;
 		if (dpaa2_txq)
@@ -1708,19 +1715,20 @@ int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
 };
 
 static int
-dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
-		     unsigned int n)
+dpaa2_dev_xstats_get(struct rte_eth_dev *dev,
+	struct rte_eth_xstat *xstats, unsigned int n)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
-	int32_t  retcode;
+	int32_t retcode;
 	union dpni_statistics value[5] = {};
 	unsigned int i = 0, num = RTE_DIM(dpaa2_xstats_strings);
+	uint8_t page_id, stats_id;
 
 	if (n < num)
 		return num;
 
-	if (xstats == NULL)
+	if (!xstats)
 		return 0;
 
 	/* Get Counters from page_0*/
@@ -1755,8 +1763,9 @@ dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 
 	for (i = 0; i < num; i++) {
 		xstats[i].id = i;
-		xstats[i].value = value[dpaa2_xstats_strings[i].page_id].
-			raw.counter[dpaa2_xstats_strings[i].stats_id];
+		page_id = dpaa2_xstats_strings[i].page_id;
+		stats_id = dpaa2_xstats_strings[i].stats_id;
+		xstats[i].value = value[page_id].raw.counter[stats_id];
 	}
 	return i;
 err:
@@ -1766,8 +1775,8 @@ dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 
 static int
 dpaa2_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
-		       struct rte_eth_xstat_name *xstats_names,
-		       unsigned int limit)
+	struct rte_eth_xstat_name *xstats_names,
+	unsigned int limit)
 {
 	unsigned int i, stat_cnt = RTE_DIM(dpaa2_xstats_strings);
 
@@ -1785,16 +1794,16 @@ dpaa2_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 
 static int
 dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
-		       uint64_t *values, unsigned int n)
+	uint64_t *values, unsigned int n)
 {
 	unsigned int i, stat_cnt = RTE_DIM(dpaa2_xstats_strings);
 	uint64_t values_copy[stat_cnt];
+	uint8_t page_id, stats_id;
 
 	if (!ids) {
 		struct dpaa2_dev_priv *priv = dev->data->dev_private;
-		struct fsl_mc_io *dpni =
-			(struct fsl_mc_io *)dev->process_private;
-		int32_t  retcode;
+		struct fsl_mc_io *dpni = dev->process_private;
+		int32_t retcode;
 		union dpni_statistics value[5] = {};
 
 		if (n < stat_cnt)
@@ -1828,8 +1837,9 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 			return 0;
 
 		for (i = 0; i < stat_cnt; i++) {
-			values[i] = value[dpaa2_xstats_strings[i].page_id].
-				raw.counter[dpaa2_xstats_strings[i].stats_id];
+			page_id = dpaa2_xstats_strings[i].page_id;
+			stats_id = dpaa2_xstats_strings[i].stats_id;
+			values[i] = value[page_id].raw.counter[stats_id];
 		}
 		return stat_cnt;
 	}
@@ -1839,7 +1849,7 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 	for (i = 0; i < n; i++) {
 		if (ids[i] >= stat_cnt) {
 			DPAA2_PMD_ERR("xstats id value isn't valid");
-			return -1;
+			return -EINVAL;
 		}
 		values[i] = values_copy[ids[i]];
 	}
@@ -1847,8 +1857,7 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 }
 
 static int
-dpaa2_xstats_get_names_by_id(
-	struct rte_eth_dev *dev,
+dpaa2_xstats_get_names_by_id(struct rte_eth_dev *dev,
 	const uint64_t *ids,
 	struct rte_eth_xstat_name *xstats_names,
 	unsigned int limit)
@@ -1875,14 +1884,14 @@ static int
 dpaa2_dev_stats_reset(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int retcode;
 	int i;
 	struct dpaa2_queue *dpaa2_q;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1893,13 +1902,13 @@ dpaa2_dev_stats_reset(struct rte_eth_dev *dev)
 
 	/* Reset the per queue stats in dpaa2_queue structure */
 	for (i = 0; i < priv->nb_rx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[i];
+		dpaa2_q = priv->rx_vq[i];
 		if (dpaa2_q)
 			dpaa2_q->rx_pkts = 0;
 	}
 
 	for (i = 0; i < priv->nb_tx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)priv->tx_vq[i];
+		dpaa2_q = priv->tx_vq[i];
 		if (dpaa2_q)
 			dpaa2_q->tx_pkts = 0;
 	}
@@ -1918,12 +1927,12 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct rte_eth_link link;
 	struct dpni_link_state state = {0};
 	uint8_t count;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return 0;
 	}
@@ -1933,7 +1942,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 					  &state);
 		if (ret < 0) {
 			DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
-			return -1;
+			return ret;
 		}
 		if (state.up == RTE_ETH_LINK_DOWN &&
 		    wait_to_complete)
@@ -1952,7 +1961,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	ret = rte_eth_linkstatus_set(dev, &link);
-	if (ret == -1)
+	if (ret < 0)
 		DPAA2_PMD_DEBUG("No change in status");
 	else
 		DPAA2_PMD_INFO("Port %d Link is %s\n", dev->data->port_id,
@@ -1975,9 +1984,9 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	struct dpni_link_state state = {0};
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return ret;
 	}
@@ -2037,9 +2046,9 @@ dpaa2_dev_set_link_down(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("Device has not yet been configured");
 		return ret;
 	}
@@ -2091,9 +2100,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL || fc_conf == NULL) {
+	if (!dpni || !fc_conf) {
 		DPAA2_PMD_ERR("device not configured");
 		return ret;
 	}
@@ -2146,9 +2155,9 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return ret;
 	}
@@ -2391,10 +2400,10 @@ dpaa2_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 {
 	struct dpaa2_queue *rxq;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	uint16_t max_frame_length;
 
-	rxq = (struct dpaa2_queue *)dev->data->rx_queues[queue_id];
+	rxq = dev->data->rx_queues[queue_id];
 
 	qinfo->mp = rxq->mb_pool;
 	qinfo->scattered_rx = dev->data->scattered_rx;
@@ -2510,10 +2519,10 @@ static struct eth_dev_ops dpaa2_ethdev_ops = {
  * Returns the table of MAC entries (multiple entries)
  */
 static int
-populate_mac_addr(struct fsl_mc_io *dpni_dev, struct dpaa2_dev_priv *priv,
-		  struct rte_ether_addr *mac_entry)
+populate_mac_addr(struct fsl_mc_io *dpni_dev,
+	struct dpaa2_dev_priv *priv, struct rte_ether_addr *mac_entry)
 {
-	int ret;
+	int ret = 0;
 	struct rte_ether_addr phy_mac, prime_mac;
 
 	memset(&phy_mac, 0, sizeof(struct rte_ether_addr));
@@ -2571,7 +2580,7 @@ populate_mac_addr(struct fsl_mc_io *dpni_dev, struct dpaa2_dev_priv *priv,
 	return 0;
 
 cleanup:
-	return -1;
+	return ret;
 }
 
 static int
@@ -2630,7 +2639,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		return -1;
 	}
 	dpni_dev->regs = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
-	eth_dev->process_private = (void *)dpni_dev;
+	eth_dev->process_private = dpni_dev;
 
 	/* For secondary processes, the primary has done all the work */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
@@ -2659,7 +2668,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 			     "Failure in opening dpni@%d with err code %d",
 			     hw_id, ret);
 		rte_free(dpni_dev);
-		return -1;
+		return ret;
 	}
 
 	if (eth_dev->data->dev_conf.lpbk_mode)
@@ -2810,7 +2819,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	/* Init fields w.r.t. classification */
 	memset(&priv->extract.qos_key_extract, 0,
 		sizeof(struct dpaa2_key_extract));
-	priv->extract.qos_extract_param = rte_malloc(NULL, 256, 64);
+	priv->extract.qos_extract_param = rte_malloc(NULL,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE,
+		RTE_CACHE_LINE_SIZE);
 	if (!priv->extract.qos_extract_param) {
 		DPAA2_PMD_ERR("Memory alloc failed");
 		goto init_err;
@@ -2819,7 +2830,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	for (i = 0; i < MAX_TCS; i++) {
 		memset(&priv->extract.tc_key_extract[i], 0,
 			sizeof(struct dpaa2_key_extract));
-		priv->extract.tc_extract_param[i] = rte_malloc(NULL, 256, 64);
+		priv->extract.tc_extract_param[i] = rte_malloc(NULL,
+			DPAA2_EXTRACT_PARAM_MAX_SIZE,
+			RTE_CACHE_LINE_SIZE);
 		if (!priv->extract.tc_extract_param[i]) {
 			DPAA2_PMD_ERR("Memory alloc failed");
 			goto init_err;
@@ -2979,12 +2992,11 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 
 	if ((DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE) >
 		RTE_PKTMBUF_HEADROOM) {
-		DPAA2_PMD_ERR(
-		"RTE_PKTMBUF_HEADROOM(%d) shall be > DPAA2 Annotation req(%d)",
-		RTE_PKTMBUF_HEADROOM,
-		DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE);
+		DPAA2_PMD_ERR("RTE_PKTMBUF_HEADROOM(%d) < DPAA2 Annotation(%d)",
+			RTE_PKTMBUF_HEADROOM,
+			DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE);
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index db918725a7..a2b9fc5678 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -31,6 +31,9 @@
 #define MAX_DPNI		8
 #define DPAA2_MAX_CHANNELS	16
 
+#define DPAA2_EXTRACT_PARAM_MAX_SIZE 256
+#define DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE 256
+
 #define DPAA2_RX_DEFAULT_NBDESC 512
 
 #define DPAA2_ETH_MAX_LEN (RTE_ETHER_MTU + \
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 3afe331023..54f38e2e25 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -4322,7 +4322,14 @@ dpaa2_configure_fs_rss_table(struct dpaa2_dev_priv *priv,
 
 	tc_extract = &priv->extract.tc_key_extract[tc_id];
 	key_cfg_buf = priv->extract.tc_extract_param[tc_id];
-	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_buf,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_buf);
+
+		return -ENOBUFS;
+	}
 
 	key_max_size = tc_extract->key_profile.key_max_size;
 	entry_size = dpaa2_flow_entry_size(key_max_size);
@@ -4406,7 +4413,14 @@ dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
 
 	qos_extract = &priv->extract.qos_key_extract;
 	key_cfg_buf = priv->extract.qos_extract_param;
-	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_buf,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_buf);
+
+		return -ENOBUFS;
+	}
 
 	key_max_size = qos_extract->key_profile.key_max_size;
 	entry_size = dpaa2_flow_entry_size(key_max_size);
@@ -4963,6 +4977,7 @@ dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	struct dpaa2_dev_flow *flow = NULL;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int ret;
+	uint64_t iova;
 
 	dpaa2_flow_control_log =
 		getenv("DPAA2_FLOW_CONTROL_LOG");
@@ -4986,34 +5001,66 @@ dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	}
 
 	/* Allocate DMA'ble memory to write the qos rules */
-	flow->qos_key_addr = rte_zmalloc(NULL, 256, 64);
+	flow->qos_key_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->qos_key_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->qos_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->qos_key_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->qos_key_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for qos key(%p)",
+			__func__, flow->qos_key_addr);
+		goto mem_failure;
+	}
+	flow->qos_rule.key_iova = iova;
 
-	flow->qos_mask_addr = rte_zmalloc(NULL, 256, 64);
+	flow->qos_mask_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->qos_mask_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->qos_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->qos_mask_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->qos_mask_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for qos mask(%p)",
+			__func__, flow->qos_mask_addr);
+		goto mem_failure;
+	}
+	flow->qos_rule.mask_iova = iova;
 
 	/* Allocate DMA'ble memory to write the FS rules */
-	flow->fs_key_addr = rte_zmalloc(NULL, 256, 64);
+	flow->fs_key_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->fs_key_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->fs_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->fs_key_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->fs_key_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for fs key(%p)",
+			__func__, flow->fs_key_addr);
+		goto mem_failure;
+	}
+	flow->fs_rule.key_iova = iova;
 
-	flow->fs_mask_addr = rte_zmalloc(NULL, 256, 64);
+	flow->fs_mask_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->fs_mask_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->fs_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->fs_mask_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->fs_mask_addr,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for fs mask(%p)",
+			__func__, flow->fs_mask_addr);
+		goto mem_failure;
+	}
+	flow->fs_rule.mask_iova = iova;
 
 	priv->curr = flow;
 
diff --git a/drivers/net/dpaa2/dpaa2_sparser.c b/drivers/net/dpaa2/dpaa2_sparser.c
index 36a14526a5..aa12e49e46 100644
--- a/drivers/net/dpaa2/dpaa2_sparser.c
+++ b/drivers/net/dpaa2/dpaa2_sparser.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2023 NXP
  */
 
 #include <rte_mbuf.h>
@@ -170,16 +170,23 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 	}
 
 	memcpy(addr, sp_param.byte_code, sp_param.size);
-	cfg.ss_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(addr));
+	cfg.ss_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(addr, sp_param.size);
+	if (cfg.ss_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("No IOMMU map for soft sequence(%p), size=%d",
+			addr, sp_param.size);
+		rte_free(addr);
+
+		return -ENOBUFS;
+	}
 
 	ret = dpni_load_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("dpni_load_sw_sequence failed\n");
+		DPAA2_PMD_ERR("dpni_load_sw_sequence failed");
 		rte_free(addr);
 		return ret;
 	}
 
-	priv->ss_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(addr));
+	priv->ss_iova = cfg.ss_iova;
 	priv->ss_offset += sp_param.size;
 	DPAA2_PMD_INFO("Soft parser loaded for dpni@%d", priv->hw_id);
 
@@ -219,7 +226,15 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 		}
 
 		memcpy(param_addr, sp_param.param_array, cfg.param_size);
-		cfg.param_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(param_addr));
+		cfg.param_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(param_addr,
+			cfg.param_size);
+		if (cfg.param_iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("%s: No IOMMU map for %p, size=%d",
+				__func__, param_addr, cfg.param_size);
+			rte_free(param_addr);
+
+			return -ENOBUFS;
+		}
 		priv->ss_param_iova = cfg.param_iova;
 	} else {
 		cfg.param_iova = 0;
@@ -227,7 +242,7 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 
 	ret = dpni_enable_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("dpni_enable_sw_sequence failed for dpni%d\n",
+		DPAA2_PMD_ERR("Soft parser enabled for dpni@%d failed",
 			priv->hw_id);
 		rte_free(param_addr);
 		return ret;
diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index 83d0d669ce..a5b7d39ed4 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2020-2021 NXP
+ * Copyright 2020-2023 NXP
  */
 
 #include <rte_ethdev.h>
@@ -572,41 +572,42 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q;
+	uint64_t iova;
 
 	memset(&tx_flow_cfg, 0, sizeof(struct dpni_queue));
-	dpaa2_q =  (struct dpaa2_queue *)dev->data->tx_queues[node->id];
+	dpaa2_q = (struct dpaa2_queue *)dev->data->tx_queues[node->id];
 	tc_id = node->parent->tc_id;
 	node->parent->tc_id++;
 	flow_id = 0;
 
-	if (dpaa2_q == NULL) {
-		DPAA2_PMD_ERR("Queue is not configured for node = %d", node->id);
-		return -1;
+	if (!dpaa2_q) {
+		DPAA2_PMD_ERR("Queue is not configured for node = %d",
+			node->id);
+		return -ENOMEM;
 	}
 
 	DPAA2_PMD_DEBUG("tc_id = %d, channel = %d\n\n", tc_id,
 			node->parent->channel_id);
 	ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
-			     ((node->parent->channel_id << 8) | tc_id),
-			     flow_id, options, &tx_flow_cfg);
+			((node->parent->channel_id << 8) | tc_id),
+			flow_id, options, &tx_flow_cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("Error in setting the tx flow: "
-		       "channel id  = %d tc_id= %d, param = 0x%x "
-		       "flow=%d err=%d", node->parent->channel_id, tc_id,
-		       ((node->parent->channel_id << 8) | tc_id), flow_id,
-		       ret);
-		return -1;
+		DPAA2_PMD_ERR("Set the TC[%d].ch[%d].TX flow[%d] (err=%d)",
+			tc_id, node->parent->channel_id, flow_id,
+			ret);
+		return ret;
 	}
 
 	dpaa2_q->flow_id = flow_id;
 	dpaa2_q->tc_index = tc_id;
 
 	ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-		DPNI_QUEUE_TX, ((node->parent->channel_id << 8) | dpaa2_q->tc_index),
-		dpaa2_q->flow_id, &tx_flow_cfg, &qid);
+			DPNI_QUEUE_TX,
+			((node->parent->channel_id << 8) | dpaa2_q->tc_index),
+			dpaa2_q->flow_id, &tx_flow_cfg, &qid);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-		return -1;
+		return ret;
 	}
 	dpaa2_q->fqid = qid.fqid;
 
@@ -621,8 +622,13 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 		 */
 		cong_notif_cfg.threshold_exit = (dpaa2_q->nb_desc * 9) / 10;
 		cong_notif_cfg.message_ctx = 0;
-		cong_notif_cfg.message_iova =
-			(size_t)DPAA2_VADDR_TO_IOVA(dpaa2_q->cscn);
+		iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(dpaa2_q->cscn,
+				sizeof(struct qbman_result));
+		if (iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("No IOMMU map for cscn(%p)", dpaa2_q->cscn);
+			return -ENOBUFS;
+		}
+		cong_notif_cfg.message_iova = iova;
 		cong_notif_cfg.dest_cfg.dest_type = DPNI_DEST_NONE;
 		cong_notif_cfg.notification_mode =
 					DPNI_CONG_OPT_WRITE_MEM_ON_ENTER |
@@ -641,6 +647,7 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 			return -ret;
 		}
 	}
+	dpaa2_q->tm_sw_td = true;
 
 	return 0;
 }
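
The IOVA conversions above all follow one checked pattern; a minimal
sketch of the idiom (buffer, size and destination field are
placeholders, the macro names are the ones used in the hunks above):

  uint64_t iova;

  iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(vaddr, size);
  if (iova == RTE_BAD_IOVA) {
          /* No IOMMU mapping exists for this buffer: fail here
           * instead of programming a bad address into hardware.
           */
          DPAA2_PMD_ERR("No IOMMU map for %p", vaddr);
          return -ENOBUFS;
  }
  cfg.message_iova = iova;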
-- 
2.25.1



* [v2 38/43] net/dpaa2: improve DPDMUX error behavior settings
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (36 preceding siblings ...)
  2024-09-18  7:50   ` [v2 37/43] net/dpaa2: check IOVA before sending MC command vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 39/43] net/dpaa2: store drop priority in mbuf vanshika.shukla
                     ` (5 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena

From: Sachin Saxena <sachin.saxena@nxp.com>

This setting is compatible with MC firmware v10.36 or later.
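
In effect the mux error behavior is now configured as below (a sketch
of the call made in dpaa2_create_dpdmux_device(); names as in the
diff):

  struct dpdmux_error_cfg mux_err_cfg;

  memset(&mux_err_cfg, 0, sizeof(mux_err_cfg));
  /* DPDMUX_ERROR_DISC only takes effect with
   * DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE, so report all errors
   * and let traffic continue.
   */
  mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
  mux_err_cfg.error_action = DPDMUX_ERROR_ACTION_CONTINUE;

  ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
                  CMD_PRI_LOW, dpdmux_dev->token,
                  DPAA2_DPDMUX_DPMAC_IDX, &mux_err_cfg);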

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 4390be9789..3c9e155b23 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2021,2023 NXP
  */
 
 #include <sys/queue.h>
@@ -448,13 +448,12 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 		struct dpdmux_error_cfg mux_err_cfg;
 
 		memset(&mux_err_cfg, 0, sizeof(mux_err_cfg));
+		/* Note: Discarded flag(DPDMUX_ERROR_DISC) has effect only when
+		 * ERROR_ACTION is set to DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE.
+		 */
+		mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
 		mux_err_cfg.error_action = DPDMUX_ERROR_ACTION_CONTINUE;
 
-		if (attr.method != DPDMUX_METHOD_C_VLAN_MAC)
-			mux_err_cfg.errors = DPDMUX_ERROR_DISC;
-		else
-			mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
-
 		ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
 				CMD_PRI_LOW,
 				dpdmux_dev->token, DPAA2_DPDMUX_DPMAC_IDX,
-- 
2.25.1



* [v2 39/43] net/dpaa2: store drop priority in mbuf
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (37 preceding siblings ...)
  2024-09-18  7:50   ` [v2 38/43] net/dpaa2: improve DPDMUX error behavior settings vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 40/43] net/dpaa2: add API to get endpoint name vanshika.shukla
                     ` (4 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Apeksha Gupta

From: Apeksha Gupta <apeksha.gupta@nxp.com>

Store the drop priority from the frame descriptor (FD) in the mbuf.
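
With this change an application can read the drop priority straight
from a received mbuf. A minimal sketch (port/queue setup omitted;
port_id is a placeholder):

  struct rte_mbuf *mbuf;
  uint16_t nb;

  nb = rte_eth_rx_burst(port_id, 0, &mbuf, 1);
  if (nb > 0) {
          /* drop priority copied from the FD by eth_fd_to_mbuf() */
          uint8_t drop_prio = mbuf->hash.sched.color;
  }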

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 1 +
 drivers/net/dpaa2/dpaa2_rxtx.c          | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 03b9088cc6..de31dc6be7 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -328,6 +328,7 @@ enum qbman_fd_format {
 #define DPAA2_GET_FD_BPID(fd)	(((fd)->simple.bpid_offset & 0x00003FFF))
 #define DPAA2_GET_FD_IVP(fd)   (((fd)->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	(((fd)->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_GET_FD_DROPP(fd)  (((fd)->simple.ctrl & 0x07000000) >> 24)
 #define DPAA2_GET_FD_FRC(fd)   ((fd)->simple.frc)
 #define DPAA2_GET_FD_FLC(fd) \
 	(((uint64_t)((fd)->simple.flc_hi) << 32) + (fd)->simple.flc_lo)
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 065b219ffd..b9f1f0d05e 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -388,6 +388,7 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 	mbuf->pkt_len = mbuf->data_len;
 	mbuf->port = port_id;
 	mbuf->next = NULL;
+	mbuf->hash.sched.color = DPAA2_GET_FD_DROPP(fd);
 	rte_mbuf_refcnt_set(mbuf, 1);
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
-- 
2.25.1



* [v2 40/43] net/dpaa2: add API to get endpoint name
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (38 preceding siblings ...)
  2024-09-18  7:50   ` [v2 39/43] net/dpaa2: store drop priority in mbuf vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 41/43] net/dpaa2: support VLAN traffic splitting vanshika.shukla
                     ` (3 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Add an API to get the endpoint name of a DPAA2 port, exported in
rte_pmd_dpaa2.h.
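
A minimal usage sketch (eth_id is a placeholder port index; the API
returns NULL for non-DPAA2 ports):

  const char *ep_name = rte_pmd_dpaa2_ep_name(eth_id);

  if (ep_name)
          printf("port %u endpoint: %s\n", eth_id, ep_name);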

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 24 ++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h  |  4 ++++
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  3 +++
 drivers/net/dpaa2/version.map     |  1 +
 4 files changed, 32 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 9f859aef66..4119949c77 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2900,6 +2900,30 @@ rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+const char *
+rte_pmd_dpaa2_ep_name(uint32_t eth_id)
+{
+	struct rte_eth_dev *dev;
+	struct dpaa2_dev_priv *priv;
+
+	if (eth_id >= RTE_MAX_ETHPORTS)
+		return NULL;
+
+	if (!rte_pmd_dpaa2_dev_is_dpaa2(eth_id))
+		return NULL;
+
+	dev = &rte_eth_devices[eth_id];
+	if (!dev->data)
+		return NULL;
+
+	if (!dev->data->dev_private)
+		return NULL;
+
+	priv = dev->data->dev_private;
+
+	return priv->ep_name;
+}
+
 #if defined(RTE_LIBRTE_IEEE1588)
 int
 rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index a2b9fc5678..fd6bad7f74 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -385,6 +385,10 @@ struct dpaa2_dev_priv {
 	uint8_t max_cgs;
 	uint8_t cgid_in_use[MAX_RX_QUEUES];
 
+	enum rte_dpaa2_dev_type ep_dev_type;   /**< Endpoint Device Type */
+	uint16_t ep_object_id;                 /**< Endpoint DPAA2 Object ID */
+	char ep_name[RTE_DEV_NAME_MAX_LEN];
+
 	struct extract_s extract;
 
 	uint16_t ss_offset;
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index fc52a9218e..f93af1c65f 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -130,6 +130,9 @@ rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
 __rte_experimental
 int
 rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id);
+__rte_experimental
+const char *
+rte_pmd_dpaa2_ep_name(uint32_t eth_id);
 
 #if defined(RTE_LIBRTE_IEEE1588)
 __rte_experimental
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 233c6e6b2c..35815f7777 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -18,6 +18,7 @@ EXPERIMENTAL {
 	rte_pmd_dpaa2_get_tlu_hash;
 	# added in 24.11
 	rte_pmd_dpaa2_dev_is_dpaa2;
+	rte_pmd_dpaa2_ep_name;
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
 	rte_pmd_dpaa2_mux_dump_counter;
-- 
2.25.1



* [v2 41/43] net/dpaa2: support VLAN traffic splitting
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (39 preceding siblings ...)
  2024-09-18  7:50   ` [v2 40/43] net/dpaa2: add API to get endpoint name vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 42/43] net/dpaa2: add support for C-VLAN and MAC vanshika.shukla
                     ` (2 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for adding rules in DPDMUX
to split VLAN traffic based on VLAN IDs.
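
For example, steering VLAN ID 100 to interface 1 could look as below
(a sketch; dpdmux_id and the interface id are placeholders, and it
assumes the VF action carries the target interface id, as in the
existing mux flow code):

  struct rte_flow_item_vlan vlan_spec = {
          .hdr.vlan_tci = RTE_BE16(100),
  };
  struct rte_flow_item_vlan vlan_mask = {
          .hdr.vlan_tci = RTE_BE16(0x0fff),
  };
  struct rte_flow_item item = {
          .type = RTE_FLOW_ITEM_TYPE_VLAN,
          .spec = &vlan_spec,
          .mask = &vlan_mask,
  };
  struct rte_flow_item *pattern[] = { &item };
  struct rte_flow_action_vf vf_conf = { .id = 1 };
  struct rte_flow_action action = {
          .type = RTE_FLOW_ACTION_TYPE_VF,
          .conf = &vf_conf,
  };
  struct rte_flow_action *actions[] = { &action };
  struct rte_flow *flow;

  flow = rte_pmd_dpaa2_mux_flow_create(dpdmux_id, pattern, actions);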

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 3c9e155b23..c35baf4cde 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -118,6 +118,26 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	}
 	break;
 
+	case RTE_FLOW_ITEM_TYPE_VLAN:
+	{
+		const struct rte_flow_item_vlan *spec;
+
+		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
+		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_VLAN;
+		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_VLAN_TCI;
+		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FROM_FIELD;
+		kg_cfg.extracts[0].extract.from_hdr.offset = 1;
+		kg_cfg.extracts[0].extract.from_hdr.size = 1;
+		kg_cfg.num_extracts = 1;
+
+		spec = (const struct rte_flow_item_vlan *)pattern[0]->spec;
+		memcpy((void *)key_iova, (const void *)(&spec->hdr.vlan_tci),
+			sizeof(uint16_t));
+		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
+		key_size = sizeof(uint16_t);
+	}
+	break;
+
 	case RTE_FLOW_ITEM_TYPE_UDP:
 	{
 		const struct rte_flow_item_udp *spec;
-- 
2.25.1



* [v2 42/43] net/dpaa2: add support for C-VLAN and MAC
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (40 preceding siblings ...)
  2024-09-18  7:50   ` [v2 41/43] net/dpaa2: support VLAN traffic splitting vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-09-18  7:50   ` [v2 43/43] net/dpaa2: dpdmux single flow/multiple rules support vanshika.shukla
  2024-10-10  2:54   ` [v2 00/43] DPAA2 specific patches Stephen Hemminger
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for the DPDMUX_METHOD_C_VLAN_MAC method,
which classifies DPDMUX traffic based on C-VLAN and MAC address.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     |  2 +-
 drivers/net/dpaa2/mc/fsl_dpdmux.h | 16 ++++++++++++++++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index c35baf4cde..5c37701939 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021,2023 NXP
+ * Copyright 2018-2024 NXP
  */
 
 #include <sys/queue.h>
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 97b09e59f9..70b81f3b3b 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -593,6 +593,22 @@ int dpdmux_dump_table(struct fsl_mc_io *mc_io,
  */
 #define DPDMUX__ERROR_L4CE			0x00000001
 
+#define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
+				 DPDMUX__ERROR_L4CV | \
+				 DPDMUX__ERROR_L3CE | \
+				 DPDMUX__ERROR_L3CV | \
+				 DPDMUX_ERROR_BLE | \
+				 DPDMUX_ERROR_PHE | \
+				 DPDMUX_ERROR_ISP | \
+				 DPDMUX_ERROR_PTE | \
+				 DPDMUX_ERROR_FPE | \
+				 DPDMUX_ERROR_FLE | \
+				 DPDMUX_ERROR_PIEE | \
+				 DPDMUX_ERROR_TIDE | \
+				 DPDMUX_ERROR_MNLE | \
+				 DPDMUX_ERROR_EOFHE | \
+				 DPDMUX_ERROR_KSE)
+
 #define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
 				 DPDMUX__ERROR_L4CV | \
 				 DPDMUX__ERROR_L3CE | \
-- 
2.25.1



* [v2 43/43] net/dpaa2: dpdmux single flow/multiple rules support
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (41 preceding siblings ...)
  2024-09-18  7:50   ` [v2 42/43] net/dpaa2: add support for C-VLAN and MAC vanshika.shukla
@ 2024-09-18  7:50   ` vanshika.shukla
  2024-10-10  2:54   ` [v2 00/43] DPAA2 specific patches Stephen Hemminger
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-09-18  7:50 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support multiple extractions, and use the hardware description
instead of hard-coded values.
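
With the reworked API the pattern is a flat array terminated by
RTE_FLOW_ITEM_TYPE_END, so one rule can combine several extractions.
A sketch (specs, masks and actions set up as in the single-item
case):

  struct rte_flow_item pattern[] = {
          {
                  .type = RTE_FLOW_ITEM_TYPE_VLAN,
                  .spec = &vlan_spec,
                  .mask = &vlan_mask,
          },
          {
                  .type = RTE_FLOW_ITEM_TYPE_UDP,
                  .spec = &udp_spec,
                  .mask = &udp_mask,
          },
          { .type = RTE_FLOW_ITEM_TYPE_END },
  };

  ret = rte_pmd_dpaa2_mux_flow_create(dpdmux_id, pattern, actions);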

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h     |   1 +
 drivers/net/dpaa2/dpaa2_flow.c       |  22 --
 drivers/net/dpaa2/dpaa2_mux.c        | 395 ++++++++++++++++-----------
 drivers/net/dpaa2/dpaa2_parse_dump.h |   2 +
 drivers/net/dpaa2/rte_pmd_dpaa2.h    |   8 +-
 5 files changed, 247 insertions(+), 181 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index fd6bad7f74..fd3119247a 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -198,6 +198,7 @@ enum dpaa2_rx_faf_offset {
 	FAF_IPV4_FRAM = 34 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IPV6_FRAM = 42 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IP_FRAM = 48 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IP_FRAG_FRAM = 50 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_ICMP_FRAM = 57 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IGMP_FRAM = 58 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_GRE_FRAM = 65 + DPAA2_FAFE_PSR_SIZE * 8,
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 54f38e2e25..9dd9163880 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -98,13 +98,6 @@ enum rte_flow_action_type dpaa2_supported_action_type[] = {
 	RTE_FLOW_ACTION_TYPE_RSS
 };
 
-static const
-enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
-	RTE_FLOW_ACTION_TYPE_QUEUE,
-	RTE_FLOW_ACTION_TYPE_PORT_ID,
-	RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
-};
-
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
 	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
@@ -4083,21 +4076,6 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
-static inline int
-dpaa2_fs_action_supported(enum rte_flow_action_type action)
-{
-	int i;
-	int action_num = sizeof(dpaa2_supported_fs_action_type) /
-		sizeof(enum rte_flow_action_type);
-
-	for (i = 0; i < action_num; i++) {
-		if (action == dpaa2_supported_fs_action_type[i])
-			return true;
-	}
-
-	return false;
-}
-
 static inline int
 dpaa2_flow_verify_attr(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr)
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 5c37701939..79a1c7f981 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -32,8 +32,9 @@ struct dpaa2_dpdmux_dev {
 	uint8_t num_ifs;   /* Number of interfaces in DPDMUX */
 };
 
-struct rte_flow {
-	struct dpdmux_rule_cfg rule;
+#define DPAA2_MUX_FLOW_MAX_RULE_NUM 8
+struct dpaa2_mux_flow {
+	struct dpdmux_rule_cfg rule[DPAA2_MUX_FLOW_MAX_RULE_NUM];
 };
 
 TAILQ_HEAD(dpdmux_dev_list, dpaa2_dpdmux_dev);
@@ -53,204 +54,287 @@ static struct dpaa2_dpdmux_dev *get_dpdmux_from_id(uint32_t dpdmux_id)
 	return dpdmux_dev;
 }
 
-struct rte_flow *
+int
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
-			      struct rte_flow_item *pattern[],
-			      struct rte_flow_action *actions[])
+	struct rte_flow_item pattern[],
+	struct rte_flow_action actions[])
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev;
+	static struct dpkg_profile_cfg s_kg_cfg;
 	struct dpkg_profile_cfg kg_cfg;
 	const struct rte_flow_action_vf *vf_conf;
 	struct dpdmux_cls_action dpdmux_action;
-	struct rte_flow *flow = NULL;
-	void *key_iova, *mask_iova, *key_cfg_iova = NULL;
+	uint8_t *key_va = NULL, *mask_va = NULL;
+	void *key_cfg_va = NULL;
+	uint64_t key_iova, mask_iova, key_cfg_iova;
 	uint8_t key_size = 0;
-	int ret;
-	static int i;
+	int ret = 0, loop = 0;
+	static int s_i;
+	struct dpkg_extract *extract;
+	struct dpdmux_rule_cfg rule;
 
-	if (!pattern || !actions || !pattern[0] || !actions[0])
-		return NULL;
+	memset(&kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
 
 	/* Find the DPDMUX from dpdmux_id in our list */
 	dpdmux_dev = get_dpdmux_from_id(dpdmux_id);
 	if (!dpdmux_dev) {
 		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
-		return NULL;
+		ret = -ENODEV;
+		goto creation_error;
 	}
 
-	key_cfg_iova = rte_zmalloc(NULL, DIST_PARAM_IOVA_SIZE,
-				   RTE_CACHE_LINE_SIZE);
-	if (!key_cfg_iova) {
-		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
-		return NULL;
+	key_cfg_va = rte_zmalloc(NULL, DIST_PARAM_IOVA_SIZE,
+				RTE_CACHE_LINE_SIZE);
+	if (!key_cfg_va) {
+		DPAA2_PMD_ERR("Unable to allocate key configure buffer");
+		ret = -ENOMEM;
+		goto creation_error;
+	}
+
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_va,
+		DIST_PARAM_IOVA_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_va);
+		ret = -ENOBUFS;
+		goto creation_error;
 	}
-	flow = rte_zmalloc(NULL, sizeof(struct rte_flow) +
-			   (2 * DIST_PARAM_IOVA_SIZE), RTE_CACHE_LINE_SIZE);
-	if (!flow) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+
+	key_va = rte_zmalloc(NULL, (2 * DIST_PARAM_IOVA_SIZE),
+		RTE_CACHE_LINE_SIZE);
+	if (!key_va) {
+		DPAA2_PMD_ERR("Unable to allocate flow dist parameter");
+		ret = -ENOMEM;
 		goto creation_error;
 	}
-	key_iova = (void *)((size_t)flow + sizeof(struct rte_flow));
-	mask_iova = (void *)((size_t)key_iova + DIST_PARAM_IOVA_SIZE);
+
+	key_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_va,
+		(2 * DIST_PARAM_IOVA_SIZE));
+	if (key_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU mapping for address(%p)",
+			__func__, key_va);
+		ret = -ENOBUFS;
+		goto creation_error;
+	}
+
+	mask_va = key_va + DIST_PARAM_IOVA_SIZE;
+	mask_iova = key_iova + DIST_PARAM_IOVA_SIZE;
 
 	/* Currently taking only IP protocol as an extract type.
-	 * This can be extended to other fields using pattern->type.
+	 * This can be extended to other fields using pattern->type.
 	 */
 	memset(&kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
 
-	switch (pattern[0]->type) {
-	case RTE_FLOW_ITEM_TYPE_IPV4:
-	{
-		const struct rte_flow_item_ipv4 *spec;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_IP;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_IP_PROTO;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_ipv4 *)pattern[0]->spec;
-		memcpy(key_iova, (const void *)(&spec->hdr.next_proto_id),
-			sizeof(uint8_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint8_t));
-		key_size = sizeof(uint8_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_VLAN:
-	{
-		const struct rte_flow_item_vlan *spec;
-
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_VLAN;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_VLAN_TCI;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FROM_FIELD;
-		kg_cfg.extracts[0].extract.from_hdr.offset = 1;
-		kg_cfg.extracts[0].extract.from_hdr.size = 1;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_vlan *)pattern[0]->spec;
-		memcpy((void *)key_iova, (const void *)(&spec->hdr.vlan_tci),
-			sizeof(uint16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_UDP:
-	{
-		const struct rte_flow_item_udp *spec;
-		uint16_t udp_dst_port;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_UDP;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_udp *)pattern[0]->spec;
-		udp_dst_port = rte_constant_bswap16(spec->hdr.dst_port);
-		memcpy((void *)key_iova, (const void *)&udp_dst_port,
-							sizeof(rte_be16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_ETH:
-	{
-		const struct rte_flow_item_eth *spec;
-		uint16_t eth_type;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_ETH;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_ETH_TYPE;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_eth *)pattern[0]->spec;
-		eth_type = rte_constant_bswap16(spec->hdr.ether_type);
-		memcpy((void *)key_iova, (const void *)&eth_type,
-							sizeof(rte_be16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_RAW:
-	{
-		const struct rte_flow_item_raw *spec;
-
-		spec = (const struct rte_flow_item_raw *)pattern[0]->spec;
-		kg_cfg.extracts[0].extract.from_data.offset = spec->offset;
-		kg_cfg.extracts[0].extract.from_data.size = spec->length;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_DATA;
-		kg_cfg.num_extracts = 1;
-		memcpy((void *)key_iova, (const void *)spec->pattern,
-							spec->length);
-		memcpy(mask_iova, pattern[0]->mask, spec->length);
-
-		key_size = spec->length;
-	}
-	break;
+	while (pattern[loop].type != RTE_FLOW_ITEM_TYPE_END) {
+		if (kg_cfg.num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+			DPAA2_PMD_ERR("Too many extracts(%d)",
+				kg_cfg.num_extracts);
+			ret = -ENOTSUP;
+			goto creation_error;
+		}
+		switch (pattern[loop].type) {
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+		{
+			const struct rte_flow_item_ipv4 *spec;
+			const struct rte_flow_item_ipv4 *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_IP;
+			extract->extract.from_hdr.field = NH_FLD_IP_PROTO;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->hdr.next_proto_id, sizeof(uint8_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->hdr.next_proto_id,
+					sizeof(uint8_t));
+			} else {
+				mask_va[key_size] = 0xff;
+			}
+			key_size += sizeof(uint8_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+		{
+			const struct rte_flow_item_vlan *spec;
+			const struct rte_flow_item_vlan *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_VLAN;
+			extract->extract.from_hdr.field = NH_FLD_VLAN_TCI;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->tci, sizeof(uint16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->tci, sizeof(uint16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(uint16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_UDP:
+		{
+			const struct rte_flow_item_udp *spec;
+			const struct rte_flow_item_udp *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_UDP;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+			extract->extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->hdr.dst_port, sizeof(rte_be16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->hdr.dst_port,
+					sizeof(rte_be16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(rte_be16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_ETH:
+		{
+			const struct rte_flow_item_eth *spec;
+			const struct rte_flow_item_eth *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_ETH;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+			extract->extract.from_hdr.field = NH_FLD_ETH_TYPE;
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->type, sizeof(rte_be16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->type, sizeof(rte_be16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(rte_be16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_RAW:
+		{
+			const struct rte_flow_item_raw *spec;
+			const struct rte_flow_item_raw *mask;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_DATA;
+			extract->extract.from_data.offset = spec->offset;
+			extract->extract.from_data.size = spec->length;
+			kg_cfg.num_extracts++;
+
+			rte_memcpy(&key_va[key_size],
+				spec->pattern, spec->length);
+			if (mask && mask->pattern) {
+				rte_memcpy(&mask_va[key_size],
+					mask->pattern, spec->length);
+			} else {
+				memset(&mask_va[key_size], 0xff, spec->length);
+			}
+
+			key_size += spec->length;
+		}
+		break;
 
-	default:
-		DPAA2_PMD_ERR("Not supported pattern type: %d",
-				pattern[0]->type);
-		goto creation_error;
+		default:
+			DPAA2_PMD_ERR("Not supported pattern[%d] type: %d",
+				loop, pattern[loop].type);
+			ret = -ENOTSUP;
+			goto creation_error;
+		}
+		loop++;
 	}
 
-	ret = dpkg_prepare_key_cfg(&kg_cfg, key_cfg_iova);
+	ret = dpkg_prepare_key_cfg(&kg_cfg, key_cfg_va);
 	if (ret) {
 		DPAA2_PMD_ERR("dpkg_prepare_key_cfg failed: err(%d)", ret);
 		goto creation_error;
 	}
 
-	/* Multiple rules with same DPKG extracts (kg_cfg.extracts) like same
-	 * offset and length values in raw is supported right now. Different
-	 * values of kg_cfg may not work.
-	 */
-	if (i == 0) {
-		ret = dpdmux_set_custom_key(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-					    dpdmux_dev->token,
-				(uint64_t)(DPAA2_VADDR_TO_IOVA(key_cfg_iova)));
+	if (!s_i) {
+		ret = dpdmux_set_custom_key(&dpdmux_dev->dpdmux,
+				CMD_PRI_LOW, dpdmux_dev->token, key_cfg_iova);
 		if (ret) {
 			DPAA2_PMD_ERR("dpdmux_set_custom_key failed: err(%d)",
-					ret);
+				ret);
+			goto creation_error;
+		}
+		rte_memcpy(&s_kg_cfg, &kg_cfg, sizeof(struct dpkg_profile_cfg));
+	} else {
+		if (memcmp(&s_kg_cfg, &kg_cfg,
+			sizeof(struct dpkg_profile_cfg))) {
+			DPAA2_PMD_ERR("%s: Single flow support only.",
+				__func__);
+			ret = -ENOTSUP;
 			goto creation_error;
 		}
 	}
-	/* As now our key extract parameters are set, let us configure
-	 * the rule.
-	 */
-	flow->rule.key_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(key_iova));
-	flow->rule.mask_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(mask_iova));
-	flow->rule.key_size = key_size;
-	flow->rule.entry_index = i++;
 
-	vf_conf = (const struct rte_flow_action_vf *)(actions[0]->conf);
+	vf_conf = actions[0].conf;
 	if (vf_conf->id == 0 || vf_conf->id > dpdmux_dev->num_ifs) {
-		DPAA2_PMD_ERR("Invalid destination id\n");
+		DPAA2_PMD_ERR("Invalid destination id(%d)", vf_conf->id);
+		ret = -EINVAL;
 		goto creation_error;
 	}
 	dpdmux_action.dest_if = vf_conf->id;
 
-	ret = dpdmux_add_custom_cls_entry(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-					  dpdmux_dev->token, &flow->rule,
-					  &dpdmux_action);
+	rule.key_iova = key_iova;
+	rule.mask_iova = mask_iova;
+	rule.key_size = key_size;
+	rule.entry_index = s_i;
+	s_i++;
+
+	/* As now our key extract parameters are set, let us configure
+	 * the rule.
+	 */
+	ret = dpdmux_add_custom_cls_entry(&dpdmux_dev->dpdmux,
+			CMD_PRI_LOW, dpdmux_dev->token,
+			&rule, &dpdmux_action);
 	if (ret) {
-		DPAA2_PMD_ERR("dpdmux_add_custom_cls_entry failed: err(%d)",
-			      ret);
+		DPAA2_PMD_ERR("Add classification entry failed: err(%d)", ret);
 		goto creation_error;
 	}
 
-	return flow;
-
 creation_error:
-	rte_free((void *)key_cfg_iova);
-	rte_free((void *)flow);
-	return NULL;
+	if (key_cfg_va)
+		rte_free(key_cfg_va);
+	if (key_va)
+		rte_free(key_va);
+
+	return ret;
 }
 
 int
@@ -407,10 +491,11 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	PMD_INIT_FUNC_TRACE();
 
 	/* Allocate DPAA2 dpdmux handle */
-	dpdmux_dev = rte_malloc(NULL, sizeof(struct dpaa2_dpdmux_dev), 0);
+	dpdmux_dev = rte_zmalloc(NULL,
+		sizeof(struct dpaa2_dpdmux_dev), RTE_CACHE_LINE_SIZE);
 	if (!dpdmux_dev) {
 		DPAA2_PMD_ERR("Memory allocation failed for DPDMUX Device");
-		return -1;
+		return -ENOMEM;
 	}
 
 	/* Open the dpdmux object */
diff --git a/drivers/net/dpaa2/dpaa2_parse_dump.h b/drivers/net/dpaa2/dpaa2_parse_dump.h
index f1cdc003de..78fd3b768c 100644
--- a/drivers/net/dpaa2/dpaa2_parse_dump.h
+++ b/drivers/net/dpaa2/dpaa2_parse_dump.h
@@ -105,6 +105,8 @@ dpaa2_print_faf(struct dpaa2_fapr_array *fapr)
 			faf_bits[i].name = "IPv4 1 Present";
 		else if (i == FAF_IPV6_FRAM)
 			faf_bits[i].name = "IPv6 1 Present";
+		else if (i == FAF_IP_FRAG_FRAM)
+			faf_bits[i].name = "IP fragment Present";
 		else if (i == FAF_UDP_FRAM)
 			faf_bits[i].name = "UDP Present";
 		else if (i == FAF_TCP_FRAM)
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index f93af1c65f..237c3cd6e7 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -26,12 +26,12 @@
  *    Associated actions.
  *
  * @return
- *    A valid handle in case of success, NULL otherwise.
+ *    0 in case of success, a negative value otherwise.
  */
-struct rte_flow *
+int
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
-			      struct rte_flow_item *pattern[],
-			      struct rte_flow_action *actions[]);
+	struct rte_flow_item pattern[],
+	struct rte_flow_action actions[]);
 int
 rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
 	uint16_t entry_index);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* Re: [v2 00/43] DPAA2 specific patches
  2024-09-18  7:50 ` [v2 00/43] DPAA2 specific patches vanshika.shukla
                     ` (42 preceding siblings ...)
  2024-09-18  7:50   ` [v2 43/43] net/dpaa2: dpdmux single flow/multiple rules support vanshika.shukla
@ 2024-10-10  2:54   ` Stephen Hemminger
  43 siblings, 0 replies; 229+ messages in thread
From: Stephen Hemminger @ 2024-10-10  2:54 UTC (permalink / raw)
  To: vanshika.shukla; +Cc: dev

On Wed, 18 Sep 2024 13:20:13 +0530
vanshika.shukla@nxp.com wrote:

> From: Vanshika Shukla <vanshika.shukla@nxp.com>
> 
> This series includes:
> -> Fixes and enhancements for NXP DPAA2 drivers.
> -> Upgrade with MC version 10.37
> -> Enhancements in DPDMUX code
> -> Fixes for coverity issues reported  
> 
> V2 changes:
> Fixed the broken compilation for clang in:
>         "net/dpaa2: dpdmux single flow/multiple rules support" patch.
> Fixed checkpatch warnings in the below patches:
>         "net/dpaa2: protocol inside tunnel distribution"
>         "net/dpaa2: add VXLAN distribution support"
>         "bus/fslmc: dynamic IOVA mode configuration"
>         "bus/fslmc: enhance MC VFIO multiprocess support"
> 
> Apeksha Gupta (2):
>   net/dpaa2: add proper MTU debugging print
>   net/dpaa2: store drop priority in mbuf
> 
> Brick Yang (1):
>   net/dpaa2: update DPNI link status method
> 
> Gagandeep Singh (3):
>   bus/fslmc: upgrade with MC version 10.37
>   net/dpaa2: fix memory corruption in TM
>   net/dpaa2: support software taildrop
> 
> Hemant Agrawal (2):
>   net/dpaa2: add support to dump dpdmux counters
>   bus/fslmc: change dpcon close as internal symbol
> 
> Jun Yang (23):
>   net/dpaa2: enhance Tx scatter-gather mempool
>   net/dpaa2: add new PMD API to check dpaa platform version
>   bus/fslmc: improve BMAN buffer acquire
>   bus/fslmc: get MC VFIO group FD directly
>   bus/fslmc: enhance MC VFIO multiprocess support
>   bus/fslmc: dynamic IOVA mode configuration
>   bus/fslmc: remove VFIO IRQ mapping
>   bus/fslmc: create dpaa2 device with it's object
>   bus/fslmc: introduce VFIO DMA mapping API for fslmc
>   net/dpaa2: flow API refactor
>   net/dpaa2: dump Rx parser result
>   net/dpaa2: enhancement of raw flow extract
>   net/dpaa2: frame attribute flags parser
>   net/dpaa2: add VXLAN distribution support
>   net/dpaa2: protocol inside tunnel distribution
>   net/dpaa2: eCPRI support by parser result
>   net/dpaa2: add GTP flow support
>   net/dpaa2: check if Soft parser is loaded
>   net/dpaa2: soft parser flow verification
>   net/dpaa2: add flow support for IPsec AH and ESP
>   net/dpaa2: check IOVA before sending MC command
>   net/dpaa2: add API to get endpoint name
>   net/dpaa2: dpdmux single flow/multiple rules support
> 
> Rohit Raj (7):
>   bus/fslmc: add close API to close DPAA2 device
>   net/dpaa2: support link state for eth interfaces
>   bus/fslmc: free VFIO group FD in case of add group failure
>   bus/fslmc: fix coverity issue
>   bus/fslmc: fix invalid error FD code
>   bus/fslmc: change qbman eq desc from d to desc
>   net/dpaa2: change miss flow ID macro name
> 
> Sachin Saxena (1):
>   net/dpaa2: improve DPDMUX error behavior settings
> 
> Vanshika Shukla (4):
>   net/dpaa2: support PTP packet one-step timestamp
>   net/dpaa2: dpdmux: add support for CVLAN
>   net/dpaa2: support VLAN traffic splitting
>   net/dpaa2: add support for C-VLAN and MAC
> 
>  doc/guides/platform/dpaa2.rst                 |    4 +-
>  drivers/bus/fslmc/bus_fslmc_driver.h          |   72 +-
>  drivers/bus/fslmc/fslmc_bus.c                 |   62 +-
>  drivers/bus/fslmc/fslmc_logs.h                |    5 +-
>  drivers/bus/fslmc/fslmc_vfio.c                | 1628 +++-
>  drivers/bus/fslmc/fslmc_vfio.h                |   39 +-
>  drivers/bus/fslmc/mc/dpio.c                   |   94 +-
>  drivers/bus/fslmc/mc/fsl_dpcon.h              |    6 +-
>  drivers/bus/fslmc/mc/fsl_dpio.h               |   21 +-
>  drivers/bus/fslmc/mc/fsl_dpio_cmd.h           |   13 +-
>  drivers/bus/fslmc/mc/fsl_dpmng.h              |    4 +-
>  drivers/bus/fslmc/mc/fsl_dprc_cmd.h           |    8 +-
>  drivers/bus/fslmc/meson.build                 |    3 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |   38 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |   38 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |   50 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_dpio.h      |    3 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_dprc.c      |    8 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |  114 +-
>  .../bus/fslmc/qbman/include/fsl_qbman_debug.h |   12 +-
>  drivers/bus/fslmc/qbman/qbman_debug.c         |   49 +-
>  drivers/bus/fslmc/qbman/qbman_portal.c        |   30 +-
>  drivers/bus/fslmc/version.map                 |   16 +-
>  drivers/crypto/dpaa2_sec/mc/dpseci.c          |   91 +-
>  drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h      |   47 +-
>  drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h  |   19 +-
>  drivers/dma/dpaa2/dpaa2_qdma.c                |    1 +
>  drivers/event/dpaa2/dpaa2_hw_dpcon.c          |   38 +-
>  drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |    2 +-
>  drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |   63 +-
>  drivers/net/dpaa2/dpaa2_ethdev.c              |  597 +-
>  drivers/net/dpaa2/dpaa2_ethdev.h              |  225 +-
>  drivers/net/dpaa2/dpaa2_flow.c                | 7070 ++++++++++-------
>  drivers/net/dpaa2/dpaa2_mux.c                 |  543 +-
>  drivers/net/dpaa2/dpaa2_parse_dump.h          |  250 +
>  drivers/net/dpaa2/dpaa2_ptp.c                 |    8 +-
>  drivers/net/dpaa2/dpaa2_rxtx.c                |   32 +-
>  drivers/net/dpaa2/dpaa2_sparser.c             |   27 +-
>  drivers/net/dpaa2/dpaa2_tm.c                  |   72 +-
>  drivers/net/dpaa2/mc/dpdmux.c                 |  205 +-
>  drivers/net/dpaa2/mc/dpkg.c                   |   12 +-
>  drivers/net/dpaa2/mc/dpni.c                   |  383 +-
>  drivers/net/dpaa2/mc/fsl_dpdmux.h             |   99 +-
>  drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h         |   83 +-
>  drivers/net/dpaa2/mc/fsl_dpkg.h               |    7 +-
>  drivers/net/dpaa2/mc/fsl_dpni.h               |  176 +-
>  drivers/net/dpaa2/mc/fsl_dpni_cmd.h           |  125 +-
>  drivers/net/dpaa2/rte_pmd_dpaa2.h             |   51 +-
>  drivers/net/dpaa2/version.map                 |    6 +
>  49 files changed, 8289 insertions(+), 4260 deletions(-)
>  create mode 100644 drivers/net/dpaa2/dpaa2_parse_dump.h
> 

You need to make a v3 of this patchset.
Recent logging changes (around newlines) cause the patches to no longer apply cleanly.

^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 00/43] DPAA2 specific patches
  2024-09-18  7:50   ` [v2 01/43] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
@ 2024-10-14 12:00     ` vanshika.shukla
  2024-10-14 12:00       ` [v3 01/43] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
                         ` (43 more replies)
  0 siblings, 44 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This series includes:
-> Fixes and enhancements for NXP DPAA2 drivers.
-> Upgrade with MC version 10.37
-> Enhancements in DPDMUX code
-> Fixes for coverity issues reported

V2 changes:
Fixed the broken compilation for clang in:
        "net/dpaa2: dpdmux single flow/multiple rules support" patch.
Fixed checkpatch warnings in the below patches:
	"net/dpaa2: protocol inside tunnel distribution"
	"net/dpaa2: add VXLAN distribution support"
	"bus/fslmc: dynamic IOVA mode configuration"
        "bus/fslmc: enhance MC VFIO multiprocess support"
V3 changes:
Rebased to the latest commit.

Apeksha Gupta (2):
  net/dpaa2: add proper MTU debugging print
  net/dpaa2: store drop priority in mbuf

Brick Yang (1):
  net/dpaa2: update DPNI link status method

Gagandeep Singh (3):
  bus/fslmc: upgrade with MC version 10.37
  net/dpaa2: fix memory corruption in TM
  net/dpaa2: support software taildrop

Hemant Agrawal (2):
  net/dpaa2: add support to dump dpdmux counters
  bus/fslmc: change dpcon close as internal symbol

Jun Yang (23):
  net/dpaa2: enhance Tx scatter-gather mempool
  net/dpaa2: add new PMD API to check dpaa platform version
  bus/fslmc: improve BMAN buffer acquire
  bus/fslmc: get MC VFIO group FD directly
  bus/fslmc: enhance MC VFIO multiprocess support
  bus/fslmc: dynamic IOVA mode configuration
  bus/fslmc: remove VFIO IRQ mapping
  bus/fslmc: create dpaa2 device with it's object
  bus/fslmc: introduce VFIO DMA mapping API for fslmc
  net/dpaa2: flow API refactor
  net/dpaa2: dump Rx parser result
  net/dpaa2: enhancement of raw flow extract
  net/dpaa2: frame attribute flags parser
  net/dpaa2: add VXLAN distribution support
  net/dpaa2: protocol inside tunnel distribution
  net/dpaa2: eCPRI support by parser result
  net/dpaa2: add GTP flow support
  net/dpaa2: check if Soft parser is loaded
  net/dpaa2: soft parser flow verification
  net/dpaa2: add flow support for IPsec AH and ESP
  net/dpaa2: check IOVA before sending MC command
  net/dpaa2: add API to get endpoint name
  net/dpaa2: dpdmux single flow/multiple rules support

Rohit Raj (7):
  bus/fslmc: add close API to close DPAA2 device
  net/dpaa2: support link state for eth interfaces
  bus/fslmc: free VFIO group FD in case of add group failure
  bus/fslmc: fix coverity issue
  bus/fslmc: fix invalid error FD code
  bus/fslmc: change qbman eq desc from d to desc
  net/dpaa2: change miss flow ID macro name

Sachin Saxena (1):
  net/dpaa2: improve DPDMUX error behavior settings

Vanshika Shukla (4):
  net/dpaa2: support PTP packet one-step timestamp
  net/dpaa2: dpdmux: add support for CVLAN
  net/dpaa2: support VLAN traffic splitting
  net/dpaa2: add support for C-VLAN and MAC

 doc/guides/platform/dpaa2.rst                 |    4 +-
 drivers/bus/fslmc/bus_fslmc_driver.h          |   72 +-
 drivers/bus/fslmc/fslmc_bus.c                 |   62 +-
 drivers/bus/fslmc/fslmc_vfio.c                | 1628 +++-
 drivers/bus/fslmc/fslmc_vfio.h                |   39 +-
 drivers/bus/fslmc/mc/dpio.c                   |   94 +-
 drivers/bus/fslmc/mc/fsl_dpcon.h              |    6 +-
 drivers/bus/fslmc/mc/fsl_dpio.h               |   21 +-
 drivers/bus/fslmc/mc/fsl_dpio_cmd.h           |   13 +-
 drivers/bus/fslmc/mc/fsl_dpmng.h              |    4 +-
 drivers/bus/fslmc/mc/fsl_dprc_cmd.h           |    8 +-
 drivers/bus/fslmc/meson.build                 |    3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |   38 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |   38 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |   50 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h      |    3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dprc.c      |    8 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |  114 +-
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |   12 +-
 drivers/bus/fslmc/qbman/qbman_debug.c         |   49 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        |   30 +-
 drivers/bus/fslmc/version.map                 |   16 +-
 drivers/crypto/dpaa2_sec/mc/dpseci.c          |   91 +-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h      |   47 +-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h  |   19 +-
 drivers/dma/dpaa2/dpaa2_qdma.c                |    1 +
 drivers/event/dpaa2/dpaa2_hw_dpcon.c          |   38 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |    2 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |   63 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  597 +-
 drivers/net/dpaa2/dpaa2_ethdev.h              |  225 +-
 drivers/net/dpaa2/dpaa2_flow.c                | 7068 ++++++++++-------
 drivers/net/dpaa2/dpaa2_mux.c                 |  543 +-
 drivers/net/dpaa2/dpaa2_parse_dump.h          |  250 +
 drivers/net/dpaa2/dpaa2_ptp.c                 |    8 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |   32 +-
 drivers/net/dpaa2/dpaa2_sparser.c             |   25 +-
 drivers/net/dpaa2/dpaa2_tm.c                  |   72 +-
 drivers/net/dpaa2/mc/dpdmux.c                 |  205 +-
 drivers/net/dpaa2/mc/dpkg.c                   |   12 +-
 drivers/net/dpaa2/mc/dpni.c                   |  383 +-
 drivers/net/dpaa2/mc/fsl_dpdmux.h             |   99 +-
 drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h         |   83 +-
 drivers/net/dpaa2/mc/fsl_dpkg.h               |    7 +-
 drivers/net/dpaa2/mc/fsl_dpni.h               |  176 +-
 drivers/net/dpaa2/mc/fsl_dpni_cmd.h           |  125 +-
 drivers/net/dpaa2/rte_pmd_dpaa2.h             |   51 +-
 drivers/net/dpaa2/version.map                 |    6 +
 48 files changed, 8284 insertions(+), 4256 deletions(-)
 create mode 100644 drivers/net/dpaa2/dpaa2_parse_dump.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 01/43] net/dpaa2: enhance Tx scatter-gather mempool
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-14 12:00       ` [v3 02/43] net/dpaa2: support PTP packet one-step timestamp vanshika.shukla
                         ` (42 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Create the TX SG pool only in the primary process and look up
this pool in secondary processes.
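
As an aside, a minimal sketch of the pattern this patch applies
(create in the primary process, look up by name in secondary
processes; the helper name and parameters here are hypothetical, the
real pool settings are in the diff below):

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static struct rte_mempool *
shared_pool_get(const char *name, unsigned int n, uint16_t data_room)
{
	/* Only the primary process creates the pool; secondary
	 * processes find it by name in shared memory.
	 */
	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
		return rte_pktmbuf_pool_create(name, n, 0, 0, data_room,
					       rte_socket_id());
	return rte_mempool_lookup(name);
}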

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 46 +++++++++++++++++++++++---------
 1 file changed, 33 insertions(+), 13 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 7b3e587a8d..4b93606de1 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2870,6 +2870,35 @@ int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+static int dpaa2_tx_sg_pool_init(void)
+{
+	char name[RTE_MEMZONE_NAMESIZE];
+
+	if (dpaa2_tx_sg_pool)
+		return 0;
+
+	sprintf(name, "dpaa2_mbuf_tx_sg_pool");
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		dpaa2_tx_sg_pool = rte_pktmbuf_pool_create(name,
+			DPAA2_POOL_SIZE,
+			DPAA2_POOL_CACHE_SIZE, 0,
+			DPAA2_MAX_SGS * sizeof(struct qbman_sge),
+			rte_socket_id());
+		if (!dpaa2_tx_sg_pool) {
+			DPAA2_PMD_ERR("SG pool creation failed");
+			return -ENOMEM;
+		}
+	} else {
+		dpaa2_tx_sg_pool = rte_mempool_lookup(name);
+		if (!dpaa2_tx_sg_pool) {
+			DPAA2_PMD_ERR("SG pool lookup failed");
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
 static int
 rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 		struct rte_dpaa2_device *dpaa2_dev)
@@ -2924,19 +2953,10 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 
 	/* Invoke PMD device initialization function */
 	diag = dpaa2_dev_init(eth_dev);
-	if (diag == 0) {
-		if (!dpaa2_tx_sg_pool) {
-			dpaa2_tx_sg_pool =
-				rte_pktmbuf_pool_create("dpaa2_mbuf_tx_sg_pool",
-				DPAA2_POOL_SIZE,
-				DPAA2_POOL_CACHE_SIZE, 0,
-				DPAA2_MAX_SGS * sizeof(struct qbman_sge),
-				rte_socket_id());
-			if (dpaa2_tx_sg_pool == NULL) {
-				DPAA2_PMD_ERR("SG pool creation failed");
-				return -ENOMEM;
-			}
-		}
+	if (!diag) {
+		diag = dpaa2_tx_sg_pool_init();
+		if (diag)
+			return diag;
 		rte_eth_dev_probing_finish(eth_dev);
 		dpaa2_valid_dev++;
 		return 0;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 02/43] net/dpaa2: support PTP packet one-step timestamp
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
  2024-10-14 12:00       ` [v3 01/43] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-14 12:00       ` [v3 03/43] net/dpaa2: add proper MTU debugging print vanshika.shukla
                         ` (41 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds PTP one-step timestamping support.
The dpni_set_single_step_cfg() MC API is used with the provided
offset to insert the correction time into the frame.
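
As a usage sketch, assuming port_id refers to a DPAA2 port and
RTE_LIBRTE_IEEE1588 is enabled (the offset below is an illustrative
value for a UDP/IPv4-encapsulated PTP frame, not taken from this
patch):

	/* ETH(14) + IPv4(20) + UDP(8) + 8 bytes into the PTP header */
	uint16_t offset = RTE_ETHER_HDR_LEN + 20 + 8 + 8;

	if (!rte_pmd_dpaa2_set_one_step_ts(port_id, offset, 0))
		printf("correction offset: %d\n",
		       rte_pmd_dpaa2_get_one_step_ts(port_id, false));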

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 61 +++++++++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h  |  3 ++
 drivers/net/dpaa2/rte_pmd_dpaa2.h | 10 +++++
 drivers/net/dpaa2/version.map     |  3 ++
 4 files changed, 77 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 4b93606de1..051ebd9d8e 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -548,6 +548,9 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	int tx_l4_csum_offload = false;
 	int ret, tc_index;
 	uint32_t max_rx_pktlen;
+#if defined(RTE_LIBRTE_IEEE1588)
+	uint16_t ptp_correction_offset;
+#endif
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -632,6 +635,11 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		dpaa2_enable_ts[dev->data->port_id] = true;
 	}
 
+#if defined(RTE_LIBRTE_IEEE1588)
+	/* By default setting ptp correction offset for Ethernet SYNC packets */
+	ptp_correction_offset = RTE_ETHER_HDR_LEN + 8;
+	rte_pmd_dpaa2_set_one_step_ts(dev->data->port_id, ptp_correction_offset, 0);
+#endif
 	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		tx_l3_csum_offload = true;
 
@@ -2870,6 +2878,59 @@ int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+#if defined(RTE_LIBRTE_IEEE1588)
+int
+rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpni = priv->eth_dev->process_private;
+	struct dpni_single_step_cfg ptp_cfg;
+	int err;
+
+	if (!mc_query)
+		return priv->ptp_correction_offset;
+
+	err = dpni_get_single_step_cfg(dpni, CMD_PRI_LOW, priv->token, &ptp_cfg);
+	if (err) {
+		DPAA2_PMD_ERR("Failed to retrieve onestep configuration");
+		return err;
+	}
+
+	if (!ptp_cfg.ptp_onestep_reg_base) {
+		DPAA2_PMD_ERR("1588 onestep reg not available");
+		return -1;
+	}
+
+	priv->ptp_correction_offset = ptp_cfg.offset;
+
+	return priv->ptp_correction_offset;
+}
+
+int
+rte_pmd_dpaa2_set_one_step_ts(uint16_t port_id, uint16_t offset, uint8_t ch_update)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpni = dev->process_private;
+	struct dpni_single_step_cfg cfg;
+	int err;
+
+	cfg.en = 1;
+	cfg.ch_update = ch_update;
+	cfg.offset = offset;
+	cfg.peer_delay = 0;
+
+	err = dpni_set_single_step_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
+	if (err)
+		return err;
+
+	priv->ptp_correction_offset = offset;
+
+	return 0;
+}
+#endif
+
 static int dpaa2_tx_sg_pool_init(void)
 {
 	char name[RTE_MEMZONE_NAMESIZE];
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 9feb631d5f..6625afaba3 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -230,6 +230,9 @@ struct dpaa2_dev_priv {
 	rte_spinlock_t lpbk_qp_lock;
 
 	uint8_t channel_inuse;
+	/* Stores correction offset for one step timestamping */
+	uint16_t ptp_correction_offset;
+
 	LIST_HEAD(, rte_flow) flows; /**< Configured flow rule handles. */
 	LIST_HEAD(nodes, dpaa2_tm_node) nodes;
 	LIST_HEAD(shaper_profiles, dpaa2_tm_shaper_profile) shaper_profiles;
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index a1152eb717..aea9bae905 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -102,4 +102,14 @@ rte_pmd_dpaa2_thread_init(void);
 __rte_experimental
 uint32_t
 rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
+
+#if defined(RTE_LIBRTE_IEEE1588)
+__rte_experimental
+int
+rte_pmd_dpaa2_set_one_step_ts(uint16_t port_id, uint16_t offset, uint8_t ch_update);
+
+__rte_experimental
+int
+rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query);
+#endif
 #endif /* _RTE_PMD_DPAA2_H */
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index ba756d26bd..2d95303e27 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -16,6 +16,9 @@ EXPERIMENTAL {
 	rte_pmd_dpaa2_thread_init;
 	# added in 21.11
 	rte_pmd_dpaa2_get_tlu_hash;
+	# added in 24.11
+	rte_pmd_dpaa2_set_one_step_ts;
+	rte_pmd_dpaa2_get_one_step_ts;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 03/43] net/dpaa2: add proper MTU debugging print
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
  2024-10-14 12:00       ` [v3 01/43] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
  2024-10-14 12:00       ` [v3 02/43] net/dpaa2: support PTP packet one-step timestamp vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-14 12:00       ` [v3 04/43] net/dpaa2: add support to dump dpdmux counters vanshika.shukla
                         ` (40 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Apeksha Gupta, Jun Yang

From: Apeksha Gupta <apeksha.gupta@nxp.com>

This patch adds proper debug info to check the max-pkt-len and
the configured parameters.

It also stores the configured MTU in the device data.

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 051ebd9d8e..ab64df6a59 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -579,9 +579,11 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 			DPAA2_PMD_ERR("Unable to set mtu. check config");
 			return ret;
 		}
-		DPAA2_PMD_INFO("MTU configured for the device: %d",
+		DPAA2_PMD_DEBUG("MTU configured for the device: %d",
 				dev->data->mtu);
 	} else {
+		DPAA2_PMD_ERR("Configured mtu %d and calculated max-pkt-len is %d which should be <= %d",
+			eth_conf->rxmode.mtu, max_rx_pktlen, DPAA2_MAX_RX_PKT_LEN);
 		return -1;
 	}
 
@@ -1537,6 +1539,7 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 		DPAA2_PMD_ERR("Setting the max frame length failed");
 		return -1;
 	}
+	dev->data->mtu = mtu;
 	DPAA2_PMD_INFO("MTU configured for the device: %d", mtu);
 	return 0;
 }
@@ -2839,6 +2842,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		DPAA2_PMD_ERR("Unable to set mtu. check config");
 		goto init_err;
 	}
+	eth_dev->data->mtu = RTE_ETHER_MTU;
 
 	/*TODO To enable soft parser support DPAA2 driver needs to integrate
 	 * with external entity to receive byte code for software sequence
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 04/43] net/dpaa2: add support to dump dpdmux counters
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (2 preceding siblings ...)
  2024-10-14 12:00       ` [v3 03/43] net/dpaa2: add proper MTU debugging print vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-14 12:00       ` [v3 05/43] bus/fslmc: change dpcon close as internal symbol vanshika.shukla
                         ` (39 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Hemant Agrawal <hemant.agrawal@nxp.com>

This patch adds support to dump the dpdmux counters, which are
required to identify the reasons for packet drops in dpdmux.
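
A minimal usage sketch of the new API (the dpdmux object ID and the
interface count below are illustrative):

	/* Dump the counters of both interfaces of dpdmux.0 to stdout. */
	rte_pmd_dpaa2_mux_dump_counter(stdout, 0, 2);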

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     | 84 +++++++++++++++++++++++++++++++
 drivers/net/dpaa2/rte_pmd_dpaa2.h | 18 +++++++
 drivers/net/dpaa2/version.map     |  1 +
 3 files changed, 103 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 7dd5a60966..b2ec5337b1 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -259,6 +259,90 @@ rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 	return ret;
 }
 
+/* dump the status of the dpaa2_mux counters on the console */
+void
+rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if)
+{
+	struct dpaa2_dpdmux_dev *dpdmux;
+	uint64_t counter;
+	int ret;
+	int if_id;
+
+	/* Find the DPDMUX from dpdmux_id in our list */
+	dpdmux = get_dpdmux_from_id(dpdmux_id);
+	if (!dpdmux) {
+		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
+		return;
+	}
+
+	for (if_id = 0; if_id < num_if; if_id++) {
+		fprintf(f, "dpdmux.%d\n", if_id);
+
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FRAME, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BYTE, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FLTR_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FLTR_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FRAME_DISCARD,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FRAME_DISCARD %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_MCAST_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_MCAST_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_MCAST_BYTE,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_MCAST_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BCAST_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BCAST_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BCAST_BYTES,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BCAST_BYTES %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_FRAME, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_BYTE, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_FRAME_DISCARD,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_FRAME_DISCARD %" PRIu64 "\n",
+				counter);
+	}
+}
+
 static int
 dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 			   struct vfio_device_info *obj_info __rte_unused,
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index aea9bae905..fd9acd841b 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -33,6 +33,24 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 			      struct rte_flow_item *pattern[],
 			      struct rte_flow_action *actions[]);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Dump demultiplexer Ethernet traffic counters
+ *
+ * @param f
+ *    output stream
+ * @param dpdmux_id
+ *    ID of the DPDMUX MC object.
+ * @param num_if
+ *    number of interfaces in the dpdmux object
+ *
+ */
+__rte_experimental
+void
+rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 2d95303e27..7323fc8869 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	# added in 24.11
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
+	rte_pmd_dpaa2_mux_dump_counter;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 05/43] bus/fslmc: change dpcon close as internal symbol
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (3 preceding siblings ...)
  2024-10-14 12:00       ` [v3 04/43] net/dpaa2: add support to dump dpdmux counters vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-14 12:00       ` [v3 06/43] bus/fslmc: add close API to close DPAA2 device vanshika.shukla
                         ` (38 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena

From: Hemant Agrawal <hemant.agrawal@nxp.com>

This patch marks the dpcon_close API as an internal symbol and
adds it to the version map file.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/mc/fsl_dpcon.h | 3 ++-
 drivers/bus/fslmc/version.map    | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index db72477c8a..34b30d15c2 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2021 NXP
  *
  */
 #ifndef __FSL_DPCON_H
@@ -28,6 +28,7 @@ int dpcon_open(struct fsl_mc_io *mc_io,
 	       int dpcon_id,
 	       uint16_t *token);
 
+__rte_internal
 int dpcon_close(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index e19b8d1f6b..01e28c6625 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -36,6 +36,7 @@ INTERNAL {
 	dpci_set_rx_queue;
 	dpcon_get_attributes;
 	dpcon_open;
+	dpcon_close;
 	dpdmai_close;
 	dpdmai_disable;
 	dpdmai_enable;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 06/43] bus/fslmc: add close API to close DPAA2 device
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (4 preceding siblings ...)
  2024-10-14 12:00       ` [v3 05/43] bus/fslmc: change dpcon close as internal symbol vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-14 12:00       ` [v3 07/43] net/dpaa2: dpdmux: add support for CVLAN vanshika.shukla
                         ` (37 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Add the rte_fslmc_close API to close all DPAA2 devices when the
DPDK application shuts down.
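
The close path is registered as the fslmc bus .cleanup callback, so an
application does not call it directly; assuming an EAL version that
runs bus cleanup on teardown, the usual shutdown call is enough:

	/* EAL walks the registered buses on teardown and invokes their
	 * .cleanup callbacks; for fslmc this ends up in rte_fslmc_close().
	 */
	rte_eal_cleanup();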

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     |  3 +
 drivers/bus/fslmc/fslmc_bus.c            | 13 ++++
 drivers/bus/fslmc/fslmc_vfio.c           | 87 ++++++++++++++++++++++++
 drivers/bus/fslmc/fslmc_vfio.h           |  3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c | 31 ++++++++-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c | 32 ++++++++-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 34 +++++++++
 drivers/event/dpaa2/dpaa2_hw_dpcon.c     | 32 ++++++++-
 drivers/net/dpaa2/dpaa2_mux.c            | 18 ++++-
 drivers/net/dpaa2/rte_pmd_dpaa2.h        |  5 +-
 10 files changed, 252 insertions(+), 6 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index 3095458133..a3428fe28b 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -98,6 +98,8 @@ typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
 				      struct vfio_device_info *obj_info,
 				      int object_id);
 
+typedef void (*rte_dpaa2_obj_close_t)(int object_id);
+
 /**
  * A structure describing a DPAA2 object.
  */
@@ -106,6 +108,7 @@ struct rte_dpaa2_object {
 	const char *name;                   /**< Name of Object. */
 	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
 	rte_dpaa2_obj_create_t create;
+	rte_dpaa2_obj_close_t close;
 };
 
 /**
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 097d6dca08..97473c278f 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -384,6 +384,18 @@ rte_fslmc_match(struct rte_dpaa2_driver *dpaa2_drv,
 	return 1;
 }
 
+static int
+rte_fslmc_close(void)
+{
+	int ret = 0;
+
+	ret = fslmc_vfio_close_group();
+	if (ret)
+		DPAA2_BUS_ERR("Unable to close devices %d", ret);
+
+	return 0;
+}
+
 static int
 rte_fslmc_probe(void)
 {
@@ -664,6 +676,7 @@ struct rte_fslmc_bus rte_fslmc_bus = {
 	.bus = {
 		.scan = rte_fslmc_scan,
 		.probe = rte_fslmc_probe,
+		.cleanup = rte_fslmc_close,
 		.parse = rte_fslmc_parse,
 		.find_device = rte_fslmc_find_device,
 		.get_iommu_class = rte_dpaa2_get_iommu_class,
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 6981679a2d..ecca593c34 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -702,6 +702,54 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 	return -1;
 }
 
+static void
+fslmc_close_iodevices(struct rte_dpaa2_device *dev)
+{
+	struct rte_dpaa2_object *object = NULL;
+	struct rte_dpaa2_driver *drv;
+	int ret, probe_all;
+
+	switch (dev->dev_type) {
+	case DPAA2_IO:
+	case DPAA2_CON:
+	case DPAA2_CI:
+	case DPAA2_BPOOL:
+	case DPAA2_MUX:
+		TAILQ_FOREACH(object, &dpaa2_obj_list, next) {
+			if (dev->dev_type == object->dev_type)
+				object->close(dev->object_id);
+			else
+				continue;
+		}
+		break;
+	case DPAA2_ETH:
+	case DPAA2_CRYPTO:
+	case DPAA2_QDMA:
+		probe_all = rte_fslmc_bus.bus.conf.scan_mode !=
+			    RTE_BUS_SCAN_ALLOWLIST;
+		TAILQ_FOREACH(drv, &rte_fslmc_bus.driver_list, next) {
+			if (drv->drv_type != dev->dev_type)
+				continue;
+			if (rte_dev_is_probed(&dev->device))
+				continue;
+			if (probe_all ||
+			    (dev->device.devargs &&
+			     dev->device.devargs->policy ==
+			     RTE_DEV_ALLOWED)) {
+				ret = drv->remove(dev);
+				if (ret)
+					DPAA2_BUS_ERR("Unable to remove");
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	DPAA2_BUS_LOG(DEBUG, "Device (%s) Closed",
+		      dev->device.name);
+}
+
 /*
  * fslmc_process_iodevices for processing only IO (ETH, CRYPTO, and possibly
  * EVENT) devices.
@@ -807,6 +855,45 @@ fslmc_process_mcp(struct rte_dpaa2_device *dev)
 	return ret;
 }
 
+int
+fslmc_vfio_close_group(void)
+{
+	struct rte_dpaa2_device *dev, *dev_temp;
+
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+		if (dev->device.devargs &&
+		    dev->device.devargs->policy == RTE_DEV_BLOCKED) {
+			DPAA2_BUS_LOG(DEBUG, "%s Blocklisted, skipping",
+				      dev->device.name);
+			TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+			continue;
+		}
+		switch (dev->dev_type) {
+		case DPAA2_ETH:
+		case DPAA2_CRYPTO:
+		case DPAA2_QDMA:
+		case DPAA2_IO:
+			fslmc_close_iodevices(dev);
+			break;
+		case DPAA2_CON:
+		case DPAA2_CI:
+		case DPAA2_BPOOL:
+		case DPAA2_MUX:
+			if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+				continue;
+
+			fslmc_close_iodevices(dev);
+			break;
+		case DPAA2_DPRTC:
+		default:
+			DPAA2_BUS_DEBUG("Device cannot be closed: Not supported (%s)",
+					dev->device.name);
+		}
+	}
+
+	return 0;
+}
+
 int
 fslmc_vfio_process_group(void)
 {
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 133606a9fd..b6677bdd18 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2019 NXP
+ *   Copyright 2016,2019-2020 NXP
  *
  */
 
@@ -55,6 +55,7 @@ int rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 
 int fslmc_vfio_setup_group(void);
 int fslmc_vfio_process_group(void);
+int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
 int fslmc_get_container_group(int *gropuid);
 int rte_fslmc_vfio_dmamap(void);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index d7f6e45b7d..bc36607e64 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016 NXP
+ *   Copyright 2016,2020 NXP
  *
  */
 
@@ -33,6 +33,19 @@ TAILQ_HEAD(dpbp_dev_list, dpaa2_dpbp_dev);
 static struct dpbp_dev_list dpbp_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpbp_dev_list); /*!< DPBP device list */
 
+static struct dpaa2_dpbp_dev *get_dpbp_from_id(uint32_t dpbp_id)
+{
+	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
+
+	/* Get DPBP dev handle from list using index */
+	TAILQ_FOREACH(dpbp_dev, &dpbp_dev_list, next) {
+		if (dpbp_dev->dpbp_id == dpbp_id)
+			break;
+	}
+
+	return dpbp_dev;
+}
+
 static int
 dpaa2_create_dpbp_device(int vdev_fd __rte_unused,
 			 struct vfio_device_info *obj_info __rte_unused,
@@ -116,9 +129,25 @@ int dpaa2_dpbp_supported(void)
 	return 0;
 }
 
+static void
+dpaa2_close_dpbp_device(int object_id)
+{
+	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
+
+	dpbp_dev = get_dpbp_from_id((uint32_t)object_id);
+
+	if (dpbp_dev) {
+		dpaa2_free_dpbp_dev(dpbp_dev);
+		dpbp_close(&dpbp_dev->dpbp, CMD_PRI_LOW, dpbp_dev->token);
+		TAILQ_REMOVE(&dpbp_dev_list, dpbp_dev, next);
+		rte_free(dpbp_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpbp_obj = {
 	.dev_type = DPAA2_BPOOL,
 	.create = dpaa2_create_dpbp_device,
+	.close = dpaa2_close_dpbp_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpbp, rte_dpaa2_dpbp_obj);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index 7e858a113f..99f2147ccb 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017 NXP
+ *   Copyright 2017,2020 NXP
  *
  */
 
@@ -30,6 +30,19 @@ TAILQ_HEAD(dpci_dev_list, dpaa2_dpci_dev);
 static struct dpci_dev_list dpci_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpci_dev_list); /*!< DPCI device list */
 
+static struct dpaa2_dpci_dev *get_dpci_from_id(uint32_t dpci_id)
+{
+	struct dpaa2_dpci_dev *dpci_dev = NULL;
+
+	/* Get DPCI dev handle from list using index */
+	TAILQ_FOREACH(dpci_dev, &dpci_dev_list, next) {
+		if (dpci_dev->dpci_id == dpci_id)
+			break;
+	}
+
+	return dpci_dev;
+}
+
 static int
 rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
 			     struct vfio_device_info *obj_info __rte_unused,
@@ -179,9 +192,26 @@ void rte_dpaa2_free_dpci_dev(struct dpaa2_dpci_dev *dpci)
 	}
 }
 
+
+static void
+rte_dpaa2_close_dpci_device(int object_id)
+{
+	struct dpaa2_dpci_dev *dpci_dev = NULL;
+
+	dpci_dev = get_dpci_from_id((uint32_t)object_id);
+
+	if (dpci_dev) {
+		rte_dpaa2_free_dpci_dev(dpci_dev);
+		dpci_close(&dpci_dev->dpci, CMD_PRI_LOW, dpci_dev->token);
+		TAILQ_REMOVE(&dpci_dev_list, dpci_dev, next);
+		rte_free(dpci_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpci_obj = {
 	.dev_type = DPAA2_CI,
 	.create = rte_dpaa2_create_dpci_device,
+	.close = rte_dpaa2_close_dpci_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpci, rte_dpaa2_dpci_obj);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 4aec7b2cd8..8265fee497 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -86,6 +86,19 @@ static int dpaa2_cluster_sz = 2;
  * Cluster 4 (ID = x07) : CPU14, CPU15;
  */
 
+static struct dpaa2_dpio_dev *get_dpio_dev_from_id(int32_t dpio_id)
+{
+	struct dpaa2_dpio_dev *dpio_dev = NULL;
+
+	/* Get DPIO dev handle from list using index */
+	TAILQ_FOREACH(dpio_dev, &dpio_dev_list, next) {
+		if (dpio_dev->hw_id == dpio_id)
+			break;
+	}
+
+	return dpio_dev;
+}
+
 static int
 dpaa2_get_core_id(void)
 {
@@ -358,6 +371,26 @@ static void dpaa2_portal_finish(void *arg)
 	pthread_setspecific(dpaa2_portal_key, NULL);
 }
 
+static void
+dpaa2_close_dpio_device(int object_id)
+{
+	struct dpaa2_dpio_dev *dpio_dev = NULL;
+
+	dpio_dev = get_dpio_dev_from_id((int32_t)object_id);
+
+	if (dpio_dev) {
+		if (dpio_dev->dpio) {
+			dpio_disable(dpio_dev->dpio, CMD_PRI_LOW,
+				     dpio_dev->token);
+			dpio_close(dpio_dev->dpio, CMD_PRI_LOW,
+				   dpio_dev->token);
+			rte_free(dpio_dev->dpio);
+		}
+		TAILQ_REMOVE(&dpio_dev_list, dpio_dev, next);
+		rte_free(dpio_dev);
+	}
+}
+
 static int
 dpaa2_create_dpio_device(int vdev_fd,
 			 struct vfio_device_info *obj_info,
@@ -635,6 +668,7 @@ dpaa2_free_eq_descriptors(void)
 static struct rte_dpaa2_object rte_dpaa2_dpio_obj = {
 	.dev_type = DPAA2_IO,
 	.create = dpaa2_create_dpio_device,
+	.close = dpaa2_close_dpio_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpio, rte_dpaa2_dpio_obj);
diff --git a/drivers/event/dpaa2/dpaa2_hw_dpcon.c b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
index a68d3ac154..64b0136e24 100644
--- a/drivers/event/dpaa2/dpaa2_hw_dpcon.c
+++ b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017 NXP
+ *   Copyright 2017,2020 NXP
  *
  */
 
@@ -30,6 +30,19 @@ TAILQ_HEAD(dpcon_dev_list, dpaa2_dpcon_dev);
 static struct dpcon_dev_list dpcon_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpcon_dev_list); /*!< DPCON device list */
 
+static struct dpaa2_dpcon_dev *get_dpcon_from_id(uint32_t dpcon_id)
+{
+	struct dpaa2_dpcon_dev *dpcon_dev = NULL;
+
+	/* Get DPCONC dev handle from list using index */
+	TAILQ_FOREACH(dpcon_dev, &dpcon_dev_list, next) {
+		if (dpcon_dev->dpcon_id == dpcon_id)
+			break;
+	}
+
+	return dpcon_dev;
+}
+
 static int
 rte_dpaa2_create_dpcon_device(int dev_fd __rte_unused,
 			      struct vfio_device_info *obj_info __rte_unused,
@@ -105,9 +118,26 @@ void rte_dpaa2_free_dpcon_dev(struct dpaa2_dpcon_dev *dpcon)
 	}
 }
 
+
+static void
+rte_dpaa2_close_dpcon_device(int object_id)
+{
+	struct dpaa2_dpcon_dev *dpcon_dev = NULL;
+
+	dpcon_dev = get_dpcon_from_id((uint32_t)object_id);
+
+	if (dpcon_dev) {
+		rte_dpaa2_free_dpcon_dev(dpcon_dev);
+		dpcon_close(&dpcon_dev->dpcon, CMD_PRI_LOW, dpcon_dev->token);
+		TAILQ_REMOVE(&dpcon_dev_list, dpcon_dev, next);
+		rte_free(dpcon_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpcon_obj = {
 	.dev_type = DPAA2_CON,
 	.create = rte_dpaa2_create_dpcon_device,
+	.close = rte_dpaa2_close_dpcon_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpcon, rte_dpaa2_dpcon_obj);
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index b2ec5337b1..489beb6f27 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -44,7 +44,7 @@ static struct dpaa2_dpdmux_dev *get_dpdmux_from_id(uint32_t dpdmux_id)
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev = NULL;
 
-	/* Get DPBP dev handle from list using index */
+	/* Get DPDMUX dev handle from list using index */
 	TAILQ_FOREACH(dpdmux_dev, &dpdmux_dev_list, next) {
 		if (dpdmux_dev->dpdmux_id == dpdmux_id)
 			break;
@@ -442,9 +442,25 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	return -1;
 }
 
+static void
+dpaa2_close_dpdmux_device(int object_id)
+{
+	struct dpaa2_dpdmux_dev *dpdmux_dev;
+
+	dpdmux_dev = get_dpdmux_from_id((uint32_t)object_id);
+
+	if (dpdmux_dev) {
+		dpdmux_close(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+			     dpdmux_dev->token);
+		TAILQ_REMOVE(&dpdmux_dev_list, dpdmux_dev, next);
+		rte_free(dpdmux_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpdmux_obj = {
 	.dev_type = DPAA2_MUX,
 	.create = dpaa2_create_dpdmux_device,
+	.close = dpaa2_close_dpdmux_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpdmux, rte_dpaa2_dpdmux_obj);
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index fd9acd841b..80e5e3298b 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2024 NXP
  */
 
 #ifndef _RTE_PMD_DPAA2_H
@@ -32,6 +32,9 @@ struct rte_flow *
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 			      struct rte_flow_item *pattern[],
 			      struct rte_flow_action *actions[]);
+int
+rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
+	uint16_t entry_index);
 
 /**
  * @warning
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 07/43] net/dpaa2: dpdmux: add support for CVLAN
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (5 preceding siblings ...)
  2024-10-14 12:00       ` [v3 06/43] bus/fslmc: add close API to close DPAA2 device vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-14 12:00       ` [v3 08/43] bus/fslmc: upgrade with MC version 10.37 vanshika.shukla
                         ` (36 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for the DPDMUX_METHOD_C_VLAN_MAC method,
which demultiplexes traffic based on the C-VLAN and MAC address.
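
For illustration, a minimal sketch of steering one C-VLAN/MAC pair to
a downstream interface with the helper added in this patch (all values
are made up):

	uint8_t mac[6] = { 0x00, 0x04, 0x9f, 0x01, 0x02, 0x03 };

	/* Forward frames matching this MAC plus VLAN 100 to interface 1
	 * of dpdmux.0.
	 */
	if (rte_pmd_dpaa2_mux_flow_l2(0, mac, 100, 1))
		printf("failed to add L2 rule\n");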

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     | 59 +++++++++++++++++++++++++------
 drivers/net/dpaa2/mc/fsl_dpdmux.h | 18 +++++++++-
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  3 ++
 3 files changed, 68 insertions(+), 12 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 489beb6f27..3693f4b62e 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -233,6 +233,35 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	return NULL;
 }
 
+int
+rte_pmd_dpaa2_mux_flow_l2(uint32_t dpdmux_id,
+	uint8_t mac_addr[6], uint16_t vlan_id, int dest_if)
+{
+	struct dpaa2_dpdmux_dev *dpdmux_dev;
+	struct dpdmux_l2_rule rule;
+	int ret, i;
+
+	/* Find the DPDMUX from dpdmux_id in our list */
+	dpdmux_dev = get_dpdmux_from_id(dpdmux_id);
+	if (!dpdmux_dev) {
+		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
+		return -ENODEV;
+	}
+
+	for (i = 0; i < 6; i++)
+		rule.mac_addr[i] = mac_addr[i];
+	rule.vlan_id = vlan_id;
+
+	ret = dpdmux_if_add_l2_rule(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+			dpdmux_dev->token, dest_if, &rule);
+	if (ret) {
+		DPAA2_PMD_ERR("dpdmux_if_add_l2_rule failed:err(%d)", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
 int
 rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 {
@@ -353,6 +382,7 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	int ret;
 	uint16_t maj_ver;
 	uint16_t min_ver;
+	uint8_t skip_reset_flags;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -379,12 +409,18 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 		goto init_err;
 	}
 
-	ret = dpdmux_if_set_default(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-				    dpdmux_dev->token, attr.default_if);
-	if (ret) {
-		DPAA2_PMD_ERR("setting default interface failed in %s",
-			      __func__);
-		goto init_err;
+	if (attr.method != DPDMUX_METHOD_C_VLAN_MAC) {
+		ret = dpdmux_if_set_default(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+				dpdmux_dev->token, attr.default_if);
+		if (ret) {
+			DPAA2_PMD_ERR("setting default interface failed in %s",
+				      __func__);
+			goto init_err;
+		}
+		skip_reset_flags = DPDMUX_SKIP_DEFAULT_INTERFACE
+			| DPDMUX_SKIP_UNICAST_RULES | DPDMUX_SKIP_MULTICAST_RULES;
+	} else {
+		skip_reset_flags = DPDMUX_SKIP_DEFAULT_INTERFACE;
 	}
 
 	ret = dpdmux_get_api_version(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
@@ -400,10 +436,7 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	 */
 	if (maj_ver >= 6 && min_ver >= 6) {
 		ret = dpdmux_set_resetable(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-				dpdmux_dev->token,
-				DPDMUX_SKIP_DEFAULT_INTERFACE |
-				DPDMUX_SKIP_UNICAST_RULES |
-				DPDMUX_SKIP_MULTICAST_RULES);
+				dpdmux_dev->token, skip_reset_flags);
 		if (ret) {
 			DPAA2_PMD_ERR("setting default interface failed in %s",
 				      __func__);
@@ -416,7 +449,11 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 
 		memset(&mux_err_cfg, 0, sizeof(mux_err_cfg));
 		mux_err_cfg.error_action = DPDMUX_ERROR_ACTION_CONTINUE;
-		mux_err_cfg.errors = DPDMUX_ERROR_DISC;
+
+		if (attr.method != DPDMUX_METHOD_C_VLAN_MAC)
+			mux_err_cfg.errors = DPDMUX_ERROR_DISC;
+		else
+			mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
 
 		ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
 				CMD_PRI_LOW,
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 4600ea94d4..9bbac44219 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2022 NXP
  *
  */
 #ifndef __FSL_DPDMUX_H
@@ -549,6 +549,22 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
  */
 #define DPDMUX__ERROR_L4CE			0x00000001
 
+#define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
+				 DPDMUX__ERROR_L4CV | \
+				 DPDMUX__ERROR_L3CE | \
+				 DPDMUX__ERROR_L3CV | \
+				 DPDMUX_ERROR_BLE | \
+				 DPDMUX_ERROR_PHE | \
+				 DPDMUX_ERROR_ISP | \
+				 DPDMUX_ERROR_PTE | \
+				 DPDMUX_ERROR_FPE | \
+				 DPDMUX_ERROR_FLE | \
+				 DPDMUX_ERROR_PIEE | \
+				 DPDMUX_ERROR_TIDE | \
+				 DPDMUX_ERROR_MNLE | \
+				 DPDMUX_ERROR_EOFHE | \
+				 DPDMUX_ERROR_KSE)
+
 enum dpdmux_error_action {
 	DPDMUX_ERROR_ACTION_DISCARD = 0,
 	DPDMUX_ERROR_ACTION_CONTINUE = 1
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index 80e5e3298b..bebebcacdc 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -35,6 +35,9 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 int
 rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
 	uint16_t entry_index);
+int
+rte_pmd_dpaa2_mux_flow_l2(uint32_t dpdmux_id,
+	uint8_t mac_addr[6], uint16_t vlan_id, int dest_if);
 
 /**
  * @warning
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
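
A minimal sketch of the new L2 rule API added by this patch (the dpdmux id,
destination interface, MAC address and VLAN below are illustrative values,
not part of the patch):

#include <rte_pmd_dpaa2.h>

static int steer_cvlan_mac(void)
{
	uint8_t mac[6] = { 0x00, 0x04, 0x9f, 0x01, 0x02, 0x03 }; /* example */
	uint16_t vlan_id = 100;  /* C-VLAN id to match */
	int dest_if = 1;         /* DPDMUX interface receiving the traffic */

	/* Installs a C-VLAN + MAC rule on a DPDMUX created with
	 * DPDMUX_METHOD_C_VLAN_MAC; returns 0 or a negative error code. */
	return rte_pmd_dpaa2_mux_flow_l2(0 /* dpdmux_id */, mac,
					 vlan_id, dest_if);
}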

* [v3 08/43] bus/fslmc: upgrade with MC version 10.37
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (6 preceding siblings ...)
  2024-10-14 12:00       ` [v3 07/43] net/dpaa2: dpdmux: add support for CVLAN vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-14 12:00       ` [v3 09/43] net/dpaa2: support link state for eth interfaces vanshika.shukla
                         ` (35 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh; +Cc: Apeksha Gupta

From: Gagandeep Singh <g.singh@nxp.com>

This patch upgrades the MC version compatibility to 10.37.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 doc/guides/platform/dpaa2.rst                 |   4 +-
 drivers/bus/fslmc/mc/dpio.c                   |  94 ++++-
 drivers/bus/fslmc/mc/fsl_dpcon.h              |   5 +-
 drivers/bus/fslmc/mc/fsl_dpio.h               |  21 +-
 drivers/bus/fslmc/mc/fsl_dpio_cmd.h           |  13 +-
 drivers/bus/fslmc/mc/fsl_dpmng.h              |   4 +-
 drivers/bus/fslmc/mc/fsl_dprc_cmd.h           |   8 +-
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |  12 +-
 drivers/bus/fslmc/version.map                 |   7 +
 drivers/crypto/dpaa2_sec/mc/dpseci.c          |  91 ++++-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h      |  47 ++-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h  |  19 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  36 +-
 drivers/net/dpaa2/mc/dpdmux.c                 | 205 +++++++++-
 drivers/net/dpaa2/mc/dpkg.c                   |  12 +-
 drivers/net/dpaa2/mc/dpni.c                   | 383 +++++++++++++++++-
 drivers/net/dpaa2/mc/fsl_dpdmux.h             |  67 ++-
 drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h         |  83 +++-
 drivers/net/dpaa2/mc/fsl_dpkg.h               |   7 +-
 drivers/net/dpaa2/mc/fsl_dpni.h               | 176 +++++---
 drivers/net/dpaa2/mc/fsl_dpni_cmd.h           | 125 ++++--
 21 files changed, 1267 insertions(+), 152 deletions(-)

diff --git a/doc/guides/platform/dpaa2.rst b/doc/guides/platform/dpaa2.rst
index 2b0d93a976..c9ec21334f 100644
--- a/doc/guides/platform/dpaa2.rst
+++ b/doc/guides/platform/dpaa2.rst
@@ -105,8 +105,8 @@ separately:
 
 Currently supported by DPDK:
 
-- NXP SDK **LSDK 19.09++**.
-- MC Firmware version **10.18.0** and higher.
+- NXP SDK **LSDK 21.08++**.
+- MC Firmware version **10.37.0** and higher.
 - Supported architectures:  **arm64 LE**.
 
 - Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
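
Since the series raises the minimum MC firmware to 10.37, an application can
verify the running firmware at startup. A hedged sketch (mc_get_version() and
struct mc_version come from fsl_dpmng.h; the portal pointer is assumed to be
already initialized):

#include <errno.h>
#include <fsl_mc_cmd.h>
#include <fsl_dpmng.h>

static int check_mc_firmware(struct fsl_mc_io *mc_io)
{
	struct mc_version ver;
	int ret;

	ret = mc_get_version(mc_io, CMD_PRI_LOW, &ver);
	if (ret)
		return ret;
	/* Compiled-against version is MC_VER_MAJOR.MC_VER_MINOR (10.37) */
	if (ver.major != MC_VER_MAJOR || ver.minor < MC_VER_MINOR)
		return -ENOTSUP;
	return 0;
}
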
diff --git a/drivers/bus/fslmc/mc/dpio.c b/drivers/bus/fslmc/mc/dpio.c
index a3382ed142..97c08fa713 100644
--- a/drivers/bus/fslmc/mc/dpio.c
+++ b/drivers/bus/fslmc/mc/dpio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -376,6 +376,98 @@ int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpio_set_stashing_destination_by_core_id() - Set the stashing destination
+ * using the core id.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @core_id:	Core id stashing destination
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
+					uint32_t cmd_flags,
+					uint16_t token,
+					uint8_t core_id)
+{
+	struct dpio_stashing_dest_by_core_id *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_STASHING_DEST_BY_CORE_ID,
+										cmd_flags,
+										token);
+	cmd_params = (struct dpio_stashing_dest_by_core_id  *)cmd.params;
+	cmd_params->core_id = core_id;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpio_set_stashing_destination_source() - Set the stashing destination source.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @ss:		Stashing destination source (0 manual/1 automatic)
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ss)
+{
+	struct dpio_stashing_dest_source *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_STASHING_DEST_SOURCE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpio_stashing_dest_source *)cmd.params;
+	cmd_params->ss = ss;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpio_get_stashing_destination_source() - Get the stashing destination source.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @ss:		Returns the stashing destination source (0 manual/1 automatic)
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t *ss)
+{
+	struct dpio_stashing_dest_source *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_STASHING_DEST_SOURCE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpio_stashing_dest_source *)cmd.params;
+	*ss = rsp_params->ss;
+
+	return 0;
+}
+
 /**
  * dpio_add_static_dequeue_channel() - Add a static dequeue channel.
  * @mc_io:		Pointer to MC portal's I/O object
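
A minimal sketch of driving the new stashing controls added above (token is an
open DPIO token; choosing manual source selection here is illustrative):

#include <fsl_mc_cmd.h>
#include <fsl_dpio.h>

static int set_stashing(struct fsl_mc_io *mc_io, uint16_t token,
			uint8_t core_id)
{
	int ret;

	/* 0 = manual selection, 1 = automatic (per the API comments above) */
	ret = dpio_set_stashing_destination_source(mc_io, CMD_PRI_LOW,
						   token, 0);
	if (ret)
		return ret;

	/* Resolve the stashing destination from a core id instead of a
	 * raw sdest value. */
	return dpio_set_stashing_destination_by_core_id(mc_io, CMD_PRI_LOW,
							token, core_id);
}
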
diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index 34b30d15c2..e3a626077e 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2021, 2024 NXP
  *
  */
 #ifndef __FSL_DPCON_H
@@ -52,10 +52,12 @@ int dpcon_destroy(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint32_t obj_id);
 
+__rte_internal
 int dpcon_enable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
 
+__rte_internal
 int dpcon_disable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token);
@@ -65,6 +67,7 @@ int dpcon_is_enabled(struct fsl_mc_io *mc_io,
 		     uint16_t token,
 		     int *en);
 
+__rte_internal
 int dpcon_reset(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
diff --git a/drivers/bus/fslmc/mc/fsl_dpio.h b/drivers/bus/fslmc/mc/fsl_dpio.h
index c2db76bdf8..eddce58a5f 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPIO_H
@@ -87,11 +87,30 @@ int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint16_t token,
 				  uint8_t sdest);
 
+__rte_internal
 int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
 				  uint8_t *sdest);
 
+__rte_internal
+int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t core_id);
+
+__rte_internal
+int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ss);
+
+__rte_internal
+int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t *ss);
+
 __rte_internal
 int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				    uint32_t cmd_flags,
diff --git a/drivers/bus/fslmc/mc/fsl_dpio_cmd.h b/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
index 45ed01f809..360c68eaa5 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2019 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPIO_CMD_H
@@ -40,6 +40,9 @@
 #define DPIO_CMDID_GET_STASHING_DEST			DPIO_CMD(0x121)
 #define DPIO_CMDID_ADD_STATIC_DEQUEUE_CHANNEL		DPIO_CMD(0x122)
 #define DPIO_CMDID_REMOVE_STATIC_DEQUEUE_CHANNEL	DPIO_CMD(0x123)
+#define DPIO_CMDID_SET_STASHING_DEST_SOURCE		DPIO_CMD(0x124)
+#define DPIO_CMDID_GET_STASHING_DEST_SOURCE		DPIO_CMD(0x125)
+#define DPIO_CMDID_SET_STASHING_DEST_BY_CORE_ID		DPIO_CMD(0x126)
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPIO_MASK(field)        \
@@ -98,6 +101,14 @@ struct dpio_stashing_dest {
 	uint8_t sdest;
 };
 
+struct dpio_stashing_dest_source {
+	uint8_t ss;
+};
+
+struct dpio_stashing_dest_by_core_id {
+	uint8_t core_id;
+};
+
 struct dpio_cmd_static_dequeue_channel {
 	uint32_t dpcon_id;
 };
diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
index c6ea220df7..dfa51b3a86 100644
--- a/drivers/bus/fslmc/mc/fsl_dpmng.h
+++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2017-2022 NXP
+ * Copyright 2017-2023 NXP
  *
  */
 #ifndef __FSL_DPMNG_H
@@ -20,7 +20,7 @@ struct fsl_mc_io;
  * Management Complex firmware version information
  */
 #define MC_VER_MAJOR 10
-#define MC_VER_MINOR 32
+#define MC_VER_MINOR 37
 
 /**
  * struct mc_version
diff --git a/drivers/bus/fslmc/mc/fsl_dprc_cmd.h b/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
index 6efa5634d2..d5ba35b5f0 100644
--- a/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 
@@ -10,13 +10,17 @@
 
 /* Minimal supported DPRC Version */
 #define DPRC_VER_MAJOR			6
-#define DPRC_VER_MINOR			6
+#define DPRC_VER_MINOR			7
 
 /* Command versioning */
 #define DPRC_CMD_BASE_VERSION			1
+#define DPRC_CMD_VERSION_2			2
+#define DPRC_CMD_VERSION_3			3
 #define DPRC_CMD_ID_OFFSET			4
 
 #define DPRC_CMD(id)	((id << DPRC_CMD_ID_OFFSET) | DPRC_CMD_BASE_VERSION)
+#define DPRC_CMD_V2(id)	(((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_VERSION_2)
+#define DPRC_CMD_V3(id)	(((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_VERSION_3)
 
 /* Command IDs */
 #define DPRC_CMDID_CLOSE                        DPRC_CMD(0x800)
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 18b6a3c2e4..297d4ed4fc 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2023 NXP
  */
 #ifndef _FSL_QBMAN_DEBUG_H
 #define _FSL_QBMAN_DEBUG_H
@@ -105,16 +105,6 @@ uint32_t qbman_fq_attr_get_vfqid(struct qbman_fq_query_rslt *r);
 uint32_t qbman_fq_attr_get_erfqid(struct qbman_fq_query_rslt *r);
 uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r);
 
-/* FQ query command for non-programmable fields*/
-enum qbman_fq_schedstate_e {
-	qbman_fq_schedstate_oos = 0,
-	qbman_fq_schedstate_retired,
-	qbman_fq_schedstate_tentatively_scheduled,
-	qbman_fq_schedstate_truly_scheduled,
-	qbman_fq_schedstate_parked,
-	qbman_fq_schedstate_held_active,
-};
-
 struct qbman_fq_query_np_rslt {
 uint8_t verb;
 	uint8_t rslt;
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index 01e28c6625..df1143733d 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -37,6 +37,9 @@ INTERNAL {
 	dpcon_get_attributes;
 	dpcon_open;
 	dpcon_close;
+	dpcon_reset;
+	dpcon_enable;
+	dpcon_disable;
 	dpdmai_close;
 	dpdmai_disable;
 	dpdmai_enable;
@@ -53,7 +56,11 @@ INTERNAL {
 	dpio_open;
 	dpio_remove_static_dequeue_channel;
 	dpio_reset;
+	dpio_get_stashing_destination;
+	dpio_get_stashing_destination_source;
 	dpio_set_stashing_destination;
+	dpio_set_stashing_destination_by_core_id;
+	dpio_set_stashing_destination_source;
 	mc_get_soc_version;
 	mc_get_version;
 	mc_send_command;
diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c b/drivers/crypto/dpaa2_sec/mc/dpseci.c
index 87e0defdc6..773b4648e0 100644
--- a/drivers/crypto/dpaa2_sec/mc/dpseci.c
+++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -763,3 +763,92 @@ int dpseci_get_congestion_notification(
 
 	return 0;
 }
+
+
+/**
+ * dpseci_get_rx_queue_status() - Get queue status attributes
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue_index:	Select the queue_index
+ * @attr:	Returned queue status attributes
+ *
+ * Return:	'0' on success, error code otherwise
+ */
+int dpseci_get_rx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr)
+{
+	struct dpseci_rsp_get_queue_status *rsp_params;
+	struct dpseci_cmd_get_queue_status *cmd_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_RX_QUEUE_STATUS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpseci_cmd_get_queue_status *)cmd.params;
+	cmd_params->queue_index = cpu_to_le32(queue_index);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpseci_rsp_get_queue_status *)cmd.params;
+	attr->fqid = le32_to_cpu(rsp_params->fqid);
+	attr->schedstate = (enum qbman_fq_schedstate_e)(le16_to_cpu(rsp_params->schedstate));
+	attr->state_flags = le16_to_cpu(rsp_params->state_flags);
+	attr->frame_count = le32_to_cpu(rsp_params->frame_count);
+	attr->byte_count = le32_to_cpu(rsp_params->byte_count);
+
+	return 0;
+}
+
+/**
+ * dpseci_get_tx_queue_status() - Get queue status attributes
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue_index:	Select the queue_index
+ * @attr:	Returned queue status attributes
+ *
+ * Return:	'0' on success, error code otherwise
+ */
+int dpseci_get_tx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr)
+{
+	struct dpseci_rsp_get_queue_status *rsp_params;
+	struct dpseci_cmd_get_queue_status *cmd_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_TX_QUEUE_STATUS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpseci_cmd_get_queue_status *)cmd.params;
+	cmd_params->queue_index = cpu_to_le32(queue_index);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpseci_rsp_get_queue_status *)cmd.params;
+	attr->fqid = le32_to_cpu(rsp_params->fqid);
+	attr->schedstate = (enum qbman_fq_schedstate_e)(le16_to_cpu(rsp_params->schedstate));
+	attr->state_flags = le16_to_cpu(rsp_params->state_flags);
+	attr->frame_count = le32_to_cpu(rsp_params->frame_count);
+	attr->byte_count = le32_to_cpu(rsp_params->byte_count);
+
+	return 0;
+}
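
A sketch of querying the new DPSECI queue status (token is an open DPSECI
token; the printout is illustrative):

#include <stdio.h>
#include <fsl_mc_cmd.h>
#include <fsl_dpseci.h>

static void dump_rx_queue(struct fsl_mc_io *mc_io, uint16_t token,
			  uint32_t queue_index)
{
	struct dpseci_queue_status st;

	if (dpseci_get_rx_queue_status(mc_io, CMD_PRI_LOW, token,
				       queue_index, &st))
		return;

	printf("fqid=%u state=%d frames=%u bytes=%u\n",
	       st.fqid, (int)st.schedstate, st.frame_count, st.byte_count);
}
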
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
index c295c04f24..e371abdd64 100644
--- a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPSECI_H
@@ -429,4 +429,49 @@ int dpseci_get_congestion_notification(
 			uint16_t token,
 			struct dpseci_congestion_notification_cfg *cfg);
 
+/* Available FQ's scheduling states */
+enum qbman_fq_schedstate_e {
+	qbman_fq_schedstate_oos = 0,
+	qbman_fq_schedstate_retired,
+	qbman_fq_schedstate_tentatively_scheduled,
+	qbman_fq_schedstate_truly_scheduled,
+	qbman_fq_schedstate_parked,
+	qbman_fq_schedstate_held_active,
+};
+
+/* FQ's force eligible pending bit */
+#define DPSECI_FQ_STATE_FORCE_ELIGIBLE			0x00000001
+/* FQ's XON/XOFF state, 0: XON, 1: XOFF */
+#define DPSECI_FQ_STATE_XOFF					0x00000002
+/* FQ's retirement pending bit */
+#define DPSECI_FQ_STATE_RETIREMENT_PENDING		0x00000004
+/* FQ's overflow error bit */
+#define DPSECI_FQ_STATE_OVERFLOW_ERROR			0x00000008
+
+struct dpseci_queue_status {
+	uint32_t fqid;
+	/* FQ's scheduling states
+	 * (available scheduling states are defined in qbman_fq_schedstate_e)
+	 */
+	enum qbman_fq_schedstate_e schedstate;
+	/* FQ's state flags (available flags are defined above) */
+	uint16_t state_flags;
+	/* FQ's frame count */
+	uint32_t frame_count;
+	/* FQ's byte count */
+	uint32_t byte_count;
+};
+
+int dpseci_get_rx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr);
+
+int dpseci_get_tx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr);
+
 #endif /* __FSL_DPSECI_H */
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
index af3518a0f3..065464b701 100644
--- a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPSECI_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPSECI Version */
 #define DPSECI_VER_MAJOR		5
-#define DPSECI_VER_MINOR		3
+#define DPSECI_VER_MINOR		4
 
 /* Command versioning */
 #define DPSECI_CMD_BASE_VERSION		1
@@ -46,6 +46,9 @@
 #define DPSECI_CMDID_GET_OPR		DPSECI_CMD_V1(0x19B)
 #define DPSECI_CMDID_SET_CONGESTION_NOTIFICATION	DPSECI_CMD_V1(0x170)
 #define DPSECI_CMDID_GET_CONGESTION_NOTIFICATION	DPSECI_CMD_V1(0x171)
+#define DPSECI_CMDID_GET_RX_QUEUE_STATUS	DPSECI_CMD_V1(0x172)
+#define DPSECI_CMDID_GET_TX_QUEUE_STATUS	DPSECI_CMD_V1(0x173)
+
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPSECI_MASK(field)        \
@@ -251,5 +254,17 @@ struct dpseci_cmd_set_congestion_notification {
 	uint32_t threshold_exit;
 };
 
+struct dpseci_cmd_get_queue_status {
+	uint32_t queue_index;
+};
+
+struct dpseci_rsp_get_queue_status {
+	uint32_t fqid;
+	uint16_t schedstate;
+	uint16_t state_flags;
+	uint32_t frame_count;
+	uint32_t byte_count;
+};
+
 #pragma pack(pop)
 #endif /* _FSL_DPSECI_CMD_H */
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index ab64df6a59..439b8f97a4 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -899,6 +899,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	struct dpni_queue tx_conf_cfg;
 	struct dpni_queue tx_flow_cfg;
 	uint8_t options = 0, flow_id;
+	uint8_t ceetm_ch_idx;
 	uint16_t channel_id;
 	struct dpni_queue_id qid;
 	uint32_t tc_id;
@@ -925,20 +926,27 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	memset(&tx_conf_cfg, 0, sizeof(struct dpni_queue));
 	memset(&tx_flow_cfg, 0, sizeof(struct dpni_queue));
 
-	if (tx_queue_id == 0) {
-		/*Set tx-conf and error configuration*/
-		if (priv->flags & DPAA2_TX_CONF_ENABLE)
-			ret = dpni_set_tx_confirmation_mode(dpni, CMD_PRI_LOW,
-							    priv->token,
-							    DPNI_CONF_AFFINE);
-		else
-			ret = dpni_set_tx_confirmation_mode(dpni, CMD_PRI_LOW,
-							    priv->token,
-							    DPNI_CONF_DISABLE);
-		if (ret) {
-			DPAA2_PMD_ERR("Error in set tx conf mode settings: "
-				      "err=%d", ret);
-			return -1;
+	if (!tx_queue_id) {
+		for (ceetm_ch_idx = 0;
+			ceetm_ch_idx <= (priv->num_channels - 1);
+			ceetm_ch_idx++) {
+			/*Set tx-conf and error configuration*/
+			if (priv->flags & DPAA2_TX_CONF_ENABLE) {
+				ret = dpni_set_tx_confirmation_mode(dpni,
+						CMD_PRI_LOW, priv->token,
+						ceetm_ch_idx,
+						DPNI_CONF_AFFINE);
+			} else {
+				ret = dpni_set_tx_confirmation_mode(dpni,
+						CMD_PRI_LOW, priv->token,
+						ceetm_ch_idx,
+						DPNI_CONF_DISABLE);
+			}
+			if (ret) {
+				DPAA2_PMD_ERR("Error(%d) in tx conf setting",
+					ret);
+				return ret;
+			}
 		}
 	}
 
diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
index 1bb153cad7..f4feef3840 100644
--- a/drivers/net/dpaa2/mc/dpdmux.c
+++ b/drivers/net/dpaa2/mc/dpdmux.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -287,15 +287,19 @@ int dpdmux_reset(struct fsl_mc_io *mc_io,
  * @token:	Token of DPDMUX object
  * @skip_reset_flags:	By default all are 0.
  *			By setting 1 will deactivate the reset.
- *	The flags are:
- *			DPDMUX_SKIP_DEFAULT_INTERFACE  0x01
- *			DPDMUX_SKIP_UNICAST_RULES      0x02
- *			DPDMUX_SKIP_MULTICAST_RULES    0x04
+ * The flags are:
+ *			DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE  0x01
+ *			DPDMUX_SKIP_UNICAST_RULES             0x02
+ *			DPDMUX_SKIP_MULTICAST_RULES           0x04
+ *			DPDMUX_SKIP_RESET_DEFAULT_INTERFACE   0x08
  *
  * For example, by default, through DPDMUX_RESET the default
  * interface will be restored with the one from create.
- * By setting DPDMUX_SKIP_DEFAULT_INTERFACE flag,
- * through DPDMUX_RESET the default interface will not be modified.
+ * By setting DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE flag,
+ * through DPDMUX_RESET the default interface will not be modified after reset.
+ * By setting DPDMUX_SKIP_RESET_DEFAULT_INTERFACE flag,
+ * through DPDMUX_RESET the default interface will not be reset
+ * and will continue to be functional during reset procedure.
  *
  * Return:	'0' on Success; Error code otherwise.
  */
@@ -327,10 +331,11 @@ int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
  * @token:	Token of DPDMUX object
  * @skip_reset_flags:	Get the reset flags.
  *
- *	The flags are:
- *			DPDMUX_SKIP_DEFAULT_INTERFACE  0x01
- *			DPDMUX_SKIP_UNICAST_RULES      0x02
- *			DPDMUX_SKIP_MULTICAST_RULES    0x04
+ * The flags are:
+ *			DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE  0x01
+ *			DPDMUX_SKIP_UNICAST_RULES             0x02
+ *			DPDMUX_SKIP_MULTICAST_RULES           0x04
+ *			DPDMUX_SKIP_RESET_DEFAULT_INTERFACE   0x08
  *
  * Return:	'0' on Success; Error code otherwise.
  */
@@ -1064,6 +1069,127 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpdmux_if_set_taildrop() - enable taildrop for egress interface queues.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id:	Interface Identifier
+ * @cfg: Taildrop configuration
+ */
+int dpdmux_if_set_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg)
+{
+	struct mc_command cmd = { 0 };
+	struct dpdmux_cmd_set_taildrop *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_SET_TAILDROP,
+			cmd_flags,
+			token);
+	cmd_params = (struct dpdmux_cmd_set_taildrop *)cmd.params;
+	cmd_params->if_id		= cpu_to_le16(if_id);
+	cmd_params->units		= cfg->units;
+	cmd_params->threshold	= cpu_to_le32(cfg->threshold);
+	dpdmux_set_field(cmd_params->oal_en, ENABLE, (!!cfg->enable));
+
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpdmux_if_get_taildrop() - get current taildrop configuration.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id:	Interface Identifier
+ * @cfg: Taildrop configuration
+ */
+int dpdmux_if_get_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg)
+{
+	struct mc_command cmd = {0};
+	struct dpdmux_cmd_get_taildrop *cmd_params;
+	struct dpdmux_rsp_get_taildrop *rsp_params;
+	int err = 0;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_GET_TAILDROP,
+			cmd_flags,
+			token);
+	cmd_params = (struct dpdmux_cmd_get_taildrop *)cmd.params;
+	cmd_params->if_id	= cpu_to_le16(if_id);
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpdmux_rsp_get_taildrop *)cmd.params;
+	cfg->threshold = le32_to_cpu(rsp_params->threshold);
+	cfg->units = rsp_params->units;
+	cfg->enable = dpdmux_get_field(rsp_params->oal_en, ENABLE);
+
+	return err;
+}
+
+/**
+ * dpdmux_dump_table() - Dump the content of table_type table into memory.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPSW object
+ * @table_type: The type of the table to dump
+ *	- DPDMUX_DMAT_TABLE
+ *	- DPDMUX_MISS_TABLE
+ *	- DPDMUX_PRUNE_TABLE
+ * @table_index: The index of the table to dump in case of more than one table
+ *	if table_type == DPDMUX_DMAT_TABLE
+ *		- DPDMUX_HMAP_UNICAST
+ *		- DPDMUX_HMAP_MULTICAST
+ *	else 0
+ * @iova_addr: The snapshot will be stored in this variable as a header of struct dump_table_header
+ *             followed by an array of struct dump_table_entry
+ * @iova_size: Memory size allocated for iova_addr
+ * @num_entries: Number of entries written in iova_addr
+ *
+ * Return: Completion status. '0' on Success; Error code otherwise.
+ *
+ * The memory allocated at iova_addr must be zeroed before command execution.
+ * If the table content exceeds the memory size provided the dump will be truncated.
+ */
+int dpdmux_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	struct dpdmux_cmd_dump_table *cmd_params;
+	struct dpdmux_rsp_dump_table *rsp_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_DUMP_TABLE, cmd_flags, token);
+	cmd_params = (struct dpdmux_cmd_dump_table *)cmd.params;
+	cmd_params->table_type = cpu_to_le16(table_type);
+	cmd_params->table_index = cpu_to_le16(table_index);
+	cmd_params->iova_addr = cpu_to_le64(iova_addr);
+	cmd_params->iova_size = cpu_to_le32(iova_size);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpdmux_rsp_dump_table *)cmd.params;
+	*num_entries = le16_to_cpu(rsp_params->num_entries);
+
+	return 0;
+}
+
+
 /**
  * dpdmux_if_set_errors_behavior() - Set errors behavior
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
@@ -1100,3 +1226,60 @@ int dpdmux_if_set_errors_behavior(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 	/* send command to mc*/
 	return mc_send_command(mc_io, &cmd);
 }
+
+/* Sets up a Soft Parser Profile on this DPDMUX
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @sp_profile: Soft Parser Profile name (must be a valid name for a defined profile)
+ *			Maximum allowed length for this string is 8 characters.
+ *			If this parameter is an empty string (all zeros),
+ *			then the Default SP Profile is set on this dpdmux
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ */
+int dpdmux_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type)
+{
+	struct dpdmux_cmd_set_sp_profile *cmd_params;
+	struct mc_command cmd = { 0 };
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SET_SP_PROFILE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpdmux_cmd_set_sp_profile *)cmd.params;
+	for (i = 0; i < MAX_SP_PROFILE_ID_SIZE && sp_profile[i]; i++)
+		cmd_params->sp_profile[i] = sp_profile[i];
+	cmd_params->type = type;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/* Enable/Disable Soft Parser on this DPDMUX interface
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id: interface id
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ * @en: 1 to enable or 0 to disable
+ */
+int dpdmux_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t if_id, uint8_t type, uint8_t en)
+{
+	struct dpdmux_cmd_sp_enable *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SP_ENABLE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpdmux_cmd_sp_enable *)cmd.params;
+	cmd_params->if_id = if_id;
+	cmd_params->type = type;
+	cmd_params->en = en;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
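
For reference, a sketch of enabling taildrop on a DPDMUX interface with the
new helpers (assuming an open DPDMUX token; threshold and units are
illustrative):

#include <fsl_mc_cmd.h>
#include <fsl_dpdmux.h>

static int enable_if_taildrop(struct fsl_mc_io *mc_io, uint16_t token,
			      uint16_t if_id)
{
	struct dpdmux_taildrop_cfg cfg = {
		.enable = 1,
		.units = DPDMUX_TAILDROP_DROP_UNIT_FRAMES,
		.threshold = 512, /* drop beyond 512 queued frames */
	};

	return dpdmux_if_set_taildrop(mc_io, CMD_PRI_LOW, token,
				      if_id, &cfg);
}
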
diff --git a/drivers/net/dpaa2/mc/dpkg.c b/drivers/net/dpaa2/mc/dpkg.c
index 4789976b7d..5db3d092c1 100644
--- a/drivers/net/dpaa2/mc/dpkg.c
+++ b/drivers/net/dpaa2/mc/dpkg.c
@@ -1,16 +1,18 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2021, 2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
 #include <fsl_mc_cmd.h>
 #include <fsl_dpkg.h>
+#include <string.h>
 
 /**
  * dpkg_prepare_key_cfg() - function prepare extract parameters
  * @cfg: defining a full Key Generation profile (rule)
- * @key_cfg_buf: Zeroed 256 bytes of memory before mapping it to DMA
+ * @key_cfg_buf: Zeroed memory whose size is the size of
+ *		struct dpni_ext_set_rx_tc_dist, before mapping it to DMA
  *
  * This function has to be called before the following functions:
  *	- dpni_set_rx_tc_dist()
@@ -18,7 +20,8 @@
  *	- dpkg_prepare_key_cfg()
  */
 int
-dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg, uint8_t *key_cfg_buf)
+dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
+	void *key_cfg_buf)
 {
 	int i, j;
 	struct dpni_ext_set_rx_tc_dist *dpni_ext;
@@ -27,11 +30,12 @@ dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg, uint8_t *key_cfg_buf)
 	if (cfg->num_extracts > DPKG_MAX_NUM_OF_EXTRACTS)
 		return -EINVAL;
 
-	dpni_ext = (struct dpni_ext_set_rx_tc_dist *)key_cfg_buf;
+	dpni_ext = key_cfg_buf;
 	dpni_ext->num_extracts = cfg->num_extracts;
 
 	for (i = 0; i < cfg->num_extracts; i++) {
 		extr = &dpni_ext->extracts[i];
+		memset(extr, 0, sizeof(struct dpni_dist_extract));
 
 		switch (cfg->extracts[i].type) {
 		case DPKG_EXTRACT_FROM_HDR:
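
A sketch of the updated dpkg_prepare_key_cfg() contract: the buffer is now
typed void * and must be zeroed and at least
sizeof(struct dpni_ext_set_rx_tc_dist) bytes. The allocation scheme below is
illustrative:

#include <fsl_dpkg.h>
#include <rte_malloc.h>

static void *prepare_key_buf(const struct dpkg_profile_cfg *kg_cfg)
{
	/* rte_zmalloc returns zeroed memory, as the API requires */
	void *buf = rte_zmalloc(NULL,
				sizeof(struct dpni_ext_set_rx_tc_dist), 256);

	if (!buf)
		return NULL;
	if (dpkg_prepare_key_cfg(kg_cfg, buf)) {
		rte_free(buf);
		return NULL;
	}
	return buf; /* map to DMA/IOVA before handing it to the MC */
}
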
diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c
index 4d97b98939..558f08dc69 100644
--- a/drivers/net/dpaa2/mc/dpni.c
+++ b/drivers/net/dpaa2/mc/dpni.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2022 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -852,6 +852,92 @@ int dpni_get_qdid(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpni_get_qdid_ex() - Extension for the function to get the Queuing Destination ID (QDID)
+ *			that should be used for enqueue operations.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @qtype:	Type of queue to receive QDID for
+ * @qdid:	Array of virtual QDID value that should be used as an argument
+ *			in all enqueue operations.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * This function must be used when dpni is created using multiple Tx channels to return one
+ * qdid for each channel.
+ */
+int dpni_get_qdid_ex(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  enum dpni_queue_type qtype,
+		  uint16_t *qdid)
+{
+	struct mc_command cmd = { 0 };
+	struct dpni_cmd_get_qdid *cmd_params;
+	struct dpni_rsp_get_qdid_ex *rsp_params;
+	int i;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QDID_EX,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_qdid *)cmd.params;
+	cmd_params->qtype = qtype;
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_qdid_ex *)cmd.params;
+	for (i = 0; i < DPNI_MAX_CHANNELS; i++)
+		qdid[i] = le16_to_cpu(rsp_params->qdid[i]);
+
+	return 0;
+}
+
+/**
+ * dpni_get_sp_info() - Get the AIOP storage profile IDs associated
+ *			with the DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @sp_info:	Returned AIOP storage-profile information
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * @warning	Only relevant for DPNI that belongs to AIOP container.
+ */
+int dpni_get_sp_info(struct fsl_mc_io *mc_io,
+		     uint32_t cmd_flags,
+		     uint16_t token,
+		     struct dpni_sp_info *sp_info)
+{
+	struct dpni_rsp_get_sp_info *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err, i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SP_INFO,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_sp_info *)cmd.params;
+	for (i = 0; i < DPNI_MAX_SP; i++)
+		sp_info->spids[i] = le16_to_cpu(rsp_params->spids[i]);
+
+	return 0;
+}
+
 /**
  * dpni_get_tx_data_offset() - Get the Tx data offset (from start of buffer)
  * @mc_io:	Pointer to MC portal's I/O object
@@ -1684,6 +1770,7 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
  * @mc_io:	Pointer to MC portal's I/O object
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
  * @mode:	Tx confirmation mode
  *
  * This function is useful only when 'DPNI_OPT_TX_CONF_DISABLED' is not
@@ -1701,6 +1788,7 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
 int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode mode)
 {
 	struct dpni_tx_confirmation_mode *cmd_params;
@@ -1711,6 +1799,7 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 					  cmd_flags,
 					  token);
 	cmd_params = (struct dpni_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
 	cmd_params->confirmation_mode = mode;
 
 	/* send command to mc*/
@@ -1722,6 +1811,7 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
  * @mc_io:	Pointer to MC portal's I/O object
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
  * @mode:	Tx confirmation mode
  *
  * Return:  '0' on Success; Error code otherwise.
@@ -1729,8 +1819,10 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode *mode)
 {
+	struct dpni_tx_confirmation_mode *cmd_params;
 	struct dpni_tx_confirmation_mode *rsp_params;
 	struct mc_command cmd = { 0 };
 	int err;
@@ -1738,6 +1830,8 @@ int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_CONFIRMATION_MODE,
 					cmd_flags,
 					token);
+	cmd_params = (struct dpni_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
 
 	err = mc_send_command(mc_io, &cmd);
 	if (err)
@@ -1749,6 +1843,78 @@ int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpni_set_queue_tx_confirmation_mode() - Set Tx confirmation mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
+ * @index:	queue index
+ * @mode:	Tx confirmation mode
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode mode)
+{
+	struct dpni_queue_tx_confirmation_mode *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_QUEUE_TX_CONFIRMATION_MODE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
+	cmd_params->index = index;
+	cmd_params->confirmation_mode = mode;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_queue_tx_confirmation_mode() - Get Tx confirmation mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
+ * @index:	queue index
+ * @mode:	Tx confirmation mode
+ *
+ * Return:  '0' on Success; Error code otherwise.
+ */
+int dpni_get_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode *mode)
+{
+	struct dpni_queue_tx_confirmation_mode *cmd_params;
+	struct dpni_queue_tx_confirmation_mode *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QUEUE_TX_CONFIRMATION_MODE,
+					cmd_flags,
+					token);
+	cmd_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
+	cmd_params->index = index;
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	*mode =  rsp_params->confirmation_mode;
+
+	return 0;
+}
+
 /**
  * dpni_set_qos_table() - Set QoS mapping table
  * @mc_io:	Pointer to MC portal's I/O object
@@ -2291,8 +2457,7 @@ int dpni_set_congestion_notification(struct fsl_mc_io *mc_io,
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
  * @qtype:	Type of queue - Rx, Tx and Tx confirm types are supported
- * @param:	Traffic class and channel. Bits[0-7] contain traaffic class,
- *		byte[8-15] contains channel id
+ * @tc_id:	Traffic class selection (0-7)
  * @cfg:	congestion notification configuration
  *
  * Return:	'0' on Success; error code otherwise.
@@ -3114,8 +3279,216 @@ int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 
 	cmd_params = (struct dpni_cmd_set_port_cfg *)cmd.params;
 	cmd_params->flags = cpu_to_le32(flags);
-	dpni_set_field(cmd_params->bit_params,	PORT_LOOPBACK_EN,
-			!!port_cfg->loopback_en);
+	dpni_set_field(cmd_params->bit_params, PORT_LOOPBACK_EN, !!port_cfg->loopback_en);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_single_step_cfg() - return current configuration for single step PTP
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @ptp_cfg: ptp single step configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ */
+int dpni_get_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_single_step_cfg *ptp_cfg)
+{
+	struct dpni_rsp_single_step_cfg *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SINGLE_STEP_CFG,
+						cmd_flags,
+						token);
+	/* send command to mc*/
+	err =  mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* read command response */
+	rsp_params = (struct dpni_rsp_single_step_cfg *)cmd.params;
+	ptp_cfg->offset = le16_to_cpu(rsp_params->offset);
+	ptp_cfg->en = dpni_get_field(rsp_params->flags, PTP_ENABLE);
+	ptp_cfg->ch_update = dpni_get_field(rsp_params->flags, PTP_CH_UPDATE);
+	ptp_cfg->peer_delay = le32_to_cpu(rsp_params->peer_delay);
+	ptp_cfg->ptp_onestep_reg_base =
+				  le32_to_cpu(rsp_params->ptp_onestep_reg_base);
+
+	return err;
+}
+
+/**
+ * dpni_get_port_cfg() - return configuration from physical port. The command has effect only if
+ *			dpni is connected to a mac object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @port_cfg: Configuration data
+ * The command can be called only when the dpni is connected to a dpmac object.
+ * If the dpni is unconnected or the endpoint is not a dpmac, an error is returned.
+ */
+int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_port_cfg *port_cfg)
+{
+	struct dpni_rsp_get_port_cfg *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_PORT_CFG,
+			cmd_flags, token);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* read command response */
+	rsp_params = (struct dpni_rsp_get_port_cfg *)cmd.params;
+	port_cfg->loopback_en = dpni_get_field(rsp_params->bit_params, PORT_LOOPBACK_EN);
+
+	return 0;
+}
+
+/**
+ * dpni_set_single_step_cfg() - enable/disable and configure single step PTP
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @ptp_cfg: ptp single step configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * The function has effect only when the dpni object is connected to a dpmac object.
+ * If the dpni is not connected to a dpmac, the configuration is stored internally
+ * and applied when the connection is made.
+ */
+int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_single_step_cfg *ptp_cfg)
+{
+	struct dpni_cmd_single_step_cfg *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_SINGLE_STEP_CFG,
+						cmd_flags,
+						token);
+	cmd_params = (struct dpni_cmd_single_step_cfg *)cmd.params;
+	cmd_params->offset = cpu_to_le16(ptp_cfg->offset);
+	cmd_params->peer_delay = cpu_to_le32(ptp_cfg->peer_delay);
+	dpni_set_field(cmd_params->flags, PTP_ENABLE, !!ptp_cfg->en);
+	dpni_set_field(cmd_params->flags, PTP_CH_UPDATE, !!ptp_cfg->ch_update);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_dump_table() - Dump the content of table_type table into memory.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPSW object
+ * @table_type: The type of the table to dump
+ * @table_index: The index of the table to dump in case of more than one table
+ * @iova_addr: The snapshot will be stored in this variable as a header of struct dump_table_header
+ *             followed by an array of struct dump_table_entry
+ * @iova_size: Memory size allocated for iova_addr
+ * @num_entries: Number of entries written in iova_addr
+ *
+ * Return: Completion status. '0' on Success; Error code otherwise.
+ *
+ * The memory allocated at iova_addr must be zeroed before command execution.
+ * If the table content exceeds the memory size provided the dump will be truncated.
+ */
+int dpni_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	struct dpni_cmd_dump_table *cmd_params;
+	struct dpni_rsp_dump_table *rsp_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_DUMP_TABLE, cmd_flags, token);
+	cmd_params = (struct dpni_cmd_dump_table *)cmd.params;
+	cmd_params->table_type = cpu_to_le16(table_type);
+	cmd_params->table_index = cpu_to_le16(table_index);
+	cmd_params->iova_addr = cpu_to_le64(iova_addr);
+	cmd_params->iova_size = cpu_to_le32(iova_size);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpni_rsp_dump_table *)cmd.params;
+	*num_entries = le16_to_cpu(rsp_params->num_entries);
+
+	return 0;
+}
+
+/* Sets up a Soft Parser Profile on this DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @sp_profile: Soft Parser Profile name (must be a valid name for a defined profile)
+ *			Maximum allowed length for this string is 8 characters.
+ *			If this parameter is an empty string (all zeros),
+ *			then the Default SP Profile is set on this dpni
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ */
+int dpni_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type)
+{
+	struct dpni_cmd_set_sp_profile *cmd_params;
+	struct mc_command cmd = { 0 };
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_SP_PROFILE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpni_cmd_set_sp_profile *)cmd.params;
+	for (i = 0; i < MAX_SP_PROFILE_ID_SIZE && sp_profile[i]; i++)
+		cmd_params->sp_profile[i] = sp_profile[i];
+	cmd_params->type = type;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/* Enable/Disable Soft Parser on this DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ * @en: 1 to enable or 0 to disable
+ */
+int dpni_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t type, uint8_t en)
+{
+	struct dpni_cmd_sp_enable *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SP_ENABLE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpni_cmd_sp_enable *)cmd.params;
+	cmd_params->type = type;
+	cmd_params->en = en;
 
 	/* send command to MC */
 	return mc_send_command(mc_io, &cmd);
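
A sketch of fetching per-channel QDIDs with the new extended call
(DPNI_MAX_CHANNELS bounds the returned array; token is an open DPNI token):

#include <fsl_mc_cmd.h>
#include <fsl_dpni.h>

static int get_tx_qdids(struct fsl_mc_io *mc_io, uint16_t token,
			uint16_t qdid[DPNI_MAX_CHANNELS])
{
	/* One virtual QDID is returned per Tx channel when the DPNI was
	 * created with multiple Tx channels. */
	return dpni_get_qdid_ex(mc_io, CMD_PRI_LOW, token,
				DPNI_QUEUE_TX, qdid);
}
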
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 9bbac44219..97b09e59f9 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2022 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #ifndef __FSL_DPDMUX_H
@@ -154,6 +154,10 @@ int dpdmux_reset(struct fsl_mc_io *mc_io,
  *Setting 1 DPDMUX_RESET will not reset multicast rules
  */
 #define DPDMUX_SKIP_MULTICAST_RULES	0x04
+/**
+ *Setting 1 DPDMUX_RESET will not reset default interface
+ */
+#define DPDMUX_SKIP_RESET_DEFAULT_INTERFACE	0x08
 
 int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
@@ -464,10 +468,50 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
 			   uint16_t *major_ver,
 			   uint16_t *minor_ver);
 
+enum dpdmux_congestion_unit {
+	DPDMUX_TAIDLROP_DROP_UNIT_BYTE = 0,
+	DPDMUX_TAILDROP_DROP_UNIT_FRAMES,
+	DPDMUX_TAILDROP_DROP_UNIT_BUFFERS
+};
+
 /**
- * Discard bit. This bit must be used together with other bits in
- * DPDMUX_ERROR_ACTION_CONTINUE to disable discarding of frames containing
- * errors
+ * struct dpdmux_taildrop_cfg - interface taildrop configuration
+ * @enable - enable (1) or disable (0) taildrop
+ * @units - taildrop units
+ * @threshold - taildrop threshold
+ */
+struct dpdmux_taildrop_cfg {
+	char enable;
+	enum dpdmux_congestion_unit units;
+	uint32_t threshold;
+};
+
+int dpdmux_if_set_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg);
+
+int dpdmux_if_get_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg);
+
+#define DPDMUX_MAX_KEY_SIZE 56
+
+enum dpdmux_table_type {
+	DPDMUX_DMAT_TABLE = 1,
+	DPDMUX_MISS_TABLE = 2,
+	DPDMUX_PRUNE_TABLE = 3,
+};
+
+int dpdmux_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries);
+
+/**
+ * Discard bit. This bit must be used together with other bits in DPDMUX_ERROR_ACTION_CONTINUE
+ * to disable discarding of frames containing errors
  */
 #define DPDMUX_ERROR_DISC		0x80000000
 /**
@@ -583,4 +627,19 @@ struct dpdmux_error_cfg {
 int dpdmux_if_set_errors_behavior(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 		uint16_t token, uint16_t if_id, struct dpdmux_error_cfg *cfg);
 
+/**
+ * SP Profile on Ingress DPDMUX
+ */
+#define DPDMUX_SP_PROFILE_INGRESS 0x1
+/**
+ * SP Profile on Egress DPDMUX
+ */
+#define DPDMUX_SP_PROFILE_EGRESS	0x2
+
+int dpdmux_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type);
+
+int dpdmux_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t if_id, uint8_t type, uint8_t en);
+
 #endif /* __FSL_DPDMUX_H */
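
A sketch of dumping the DPDMUX demux table into IOVA-contiguous memory. Per
the API comment earlier in this patch, the memory must be zeroed before the
call and the table index selects the unicast or multicast sub-table of the
DMAT; the buffer size below is illustrative:

#include <errno.h>
#include <stdio.h>
#include <fsl_mc_cmd.h>
#include <fsl_dpdmux.h>
#include <rte_malloc.h>

static int dump_dmat_table(struct fsl_mc_io *mc_io, uint16_t token)
{
	uint32_t size = 4096; /* dump is truncated if the table is larger */
	uint16_t num_entries = 0;
	void *buf = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE); /* zeroed */
	int ret;

	if (!buf)
		return -ENOMEM;
	ret = dpdmux_dump_table(mc_io, CMD_PRI_LOW, token,
				DPDMUX_DMAT_TABLE, 0 /* unicast sub-table */,
				rte_malloc_virt2iova(buf), size,
				&num_entries);
	if (!ret)
		printf("DMAT entries: %u\n", num_entries);
	rte_free(buf);
	return ret;
}
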
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
index bf6b8a20d1..a94f1bf91a 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #ifndef _FSL_DPDMUX_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPDMUX Version */
 #define DPDMUX_VER_MAJOR		6
-#define DPDMUX_VER_MINOR		9
+#define DPDMUX_VER_MINOR		10
 
 #define DPDMUX_CMD_BASE_VERSION		1
 #define DPDMUX_CMD_VERSION_2		2
@@ -63,8 +63,17 @@
 
 #define DPDMUX_CMDID_SET_RESETABLE		DPDMUX_CMD(0x0ba)
 #define DPDMUX_CMDID_GET_RESETABLE		DPDMUX_CMD(0x0bb)
+
+#define DPDMUX_CMDID_IF_SET_TAILDROP		DPDMUX_CMD(0x0bc)
+#define DPDMUX_CMDID_IF_GET_TAILDROP		DPDMUX_CMD(0x0bd)
+
+#define DPDMUX_CMDID_DUMP_TABLE           DPDMUX_CMD(0x0be)
+
 #define DPDMUX_CMDID_SET_ERRORS_BEHAVIOR	DPDMUX_CMD(0x0bf)
 
+#define DPDMUX_CMDID_SET_SP_PROFILE			DPDMUX_CMD(0x0c0)
+#define DPDMUX_CMDID_SP_ENABLE				DPDMUX_CMD(0x0c1)
+
 #define DPDMUX_MASK(field)        \
 	GENMASK(DPDMUX_##field##_SHIFT + DPDMUX_##field##_SIZE - 1, \
 		DPDMUX_##field##_SHIFT)
@@ -241,7 +250,7 @@ struct dpdmux_cmd_remove_custom_cls_entry {
 };
 
 #define DPDMUX_SKIP_RESET_FLAGS_SHIFT    0
-#define DPDMUX_SKIP_RESET_FLAGS_SIZE     3
+#define DPDMUX_SKIP_RESET_FLAGS_SIZE     4
 
 struct dpdmux_cmd_set_skip_reset_flags {
 	uint8_t skip_reset_flags;
@@ -251,6 +260,61 @@ struct dpdmux_rsp_get_skip_reset_flags {
 	uint8_t skip_reset_flags;
 };
 
+struct dpdmux_cmd_set_taildrop {
+	uint32_t	pad1;
+	uint16_t	if_id;
+	uint16_t	pad2;
+	uint16_t	oal_en;
+	uint8_t		units;
+	uint8_t		pad3;
+	uint32_t	threshold;
+};
+
+struct dpdmux_cmd_get_taildrop {
+	uint32_t	pad1;
+	uint16_t	if_id;
+};
+
+struct dpdmux_rsp_get_taildrop {
+	uint16_t	pad1;
+	uint16_t	pad2;
+	uint16_t	if_id;
+	uint16_t	pad3;
+	uint16_t	oal_en;
+	uint8_t		units;
+	uint8_t		pad4;
+	uint32_t	threshold;
+};
+
+struct dpdmux_cmd_dump_table {
+	uint16_t table_type;
+	uint16_t table_index;
+	uint32_t pad0;
+	uint64_t iova_addr;
+	uint32_t iova_size;
+};
+
+struct dpdmux_rsp_dump_table {
+	uint16_t num_entries;
+};
+
+struct dpdmux_dump_table_header {
+	uint16_t table_type;
+	uint16_t table_num_entries;
+	uint16_t table_max_entries;
+	uint8_t default_action;
+	uint8_t match_type;
+	uint8_t reserved[24];
+};
+
+struct dpdmux_dump_table_entry {
+	uint8_t key[DPDMUX_MAX_KEY_SIZE];
+	uint8_t mask[DPDMUX_MAX_KEY_SIZE];
+	uint8_t key_action;
+	uint16_t result[3];
+	uint8_t reserved[21];
+};
+
 #define DPDMUX_ERROR_ACTION_SHIFT		0
 #define DPDMUX_ERROR_ACTION_SIZE		4
 
@@ -260,5 +324,18 @@ struct dpdmux_cmd_set_errors_behavior {
 	uint16_t if_id;
 };
 
+#define MAX_SP_PROFILE_ID_SIZE	8
+
+struct dpdmux_cmd_set_sp_profile {
+	uint8_t sp_profile[MAX_SP_PROFILE_ID_SIZE];
+	uint8_t type;
+};
+
+struct dpdmux_cmd_sp_enable {
+	uint16_t if_id;
+	uint8_t type;
+	uint8_t en;
+};
+
 #pragma pack(pop)
 #endif /* _FSL_DPDMUX_CMD_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dpkg.h b/drivers/net/dpaa2/mc/fsl_dpkg.h
index 70f2339ea5..834c765513 100644
--- a/drivers/net/dpaa2/mc/fsl_dpkg.h
+++ b/drivers/net/dpaa2/mc/fsl_dpkg.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  * Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPKG_H_
@@ -180,7 +180,8 @@ struct dpni_ext_set_rx_tc_dist {
 	struct dpni_dist_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
 };
 
-int dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
-			 uint8_t *key_cfg_buf);
+int
+dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
+	void *key_cfg_buf);
 
 #endif /* __FSL_DPKG_H_ */
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index ce84f4265e..3a5fcfa8a5 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPNI_H
@@ -116,6 +116,11 @@ struct fsl_mc_io;
  * Flow steering table is shared between all traffic classes
  */
 #define DPNI_OPT_SHARED_FS				0x001000
+/**
+ * FQ frame data, context and annotation stashing disable.
+ * Stashing is enabled by default.
+ */
+#define DPNI_OPT_STASHING_DIS			0x002000
 /**
  * Software sequence maximum layout size
  */
@@ -147,6 +152,7 @@ int dpni_close(struct fsl_mc_io *mc_io,
  *		DPNI_OPT_HAS_KEY_MASKING
  *		DPNI_OPT_NO_FS
  *		DPNI_OPT_SINGLE_SENDER
+ *		DPNI_OPT_STASHING_DIS
  * @fs_entries: Number of entries in the flow steering table.
  *		This table is used to select the ingress queue for
  *		ingress traffic, targeting a GPP core or another.
@@ -335,6 +341,7 @@ int dpni_clear_irq_status(struct fsl_mc_io *mc_io,
  *		DPNI_OPT_SHARED_CONGESTION
  *		DPNI_OPT_HAS_KEY_MASKING
  *		DPNI_OPT_NO_FS
+ *		DPNI_OPT_STASHING_DIS
  * @num_queues: Number of Tx and Rx queues used for traffic distribution.
  * @num_rx_tcs: Number of RX traffic classes (TCs), reserved for the DPNI.
  * @num_tx_tcs: Number of TX traffic classes (TCs), reserved for the DPNI.
@@ -394,7 +401,7 @@ int dpni_get_attributes(struct fsl_mc_io *mc_io,
  * error queue. To be used in dpni_set_errors_behavior() only if error_action
  * parameter is set to DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE.
  */
-#define DPNI_ERROR_DISC		0x80000000
+#define DPNI_ERROR_DISC			0x80000000
 
 /**
  * Extract out of frame header error
@@ -576,6 +583,8 @@ enum dpni_offload {
 	DPNI_OFF_TX_L3_CSUM,
 	DPNI_OFF_TX_L4_CSUM,
 	DPNI_FLCTYPE_HASH,
+	DPNI_HEADER_STASHING,
+	DPNI_PAYLOAD_STASHING,
 };
 
 int dpni_set_offload(struct fsl_mc_io *mc_io,
@@ -596,6 +605,26 @@ int dpni_get_qdid(struct fsl_mc_io *mc_io,
 		  enum dpni_queue_type qtype,
 		  uint16_t *qdid);
 
+int dpni_get_qdid_ex(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  enum dpni_queue_type qtype,
+		  uint16_t *qdid);
+
+/**
+ * struct dpni_sp_info - Structure representing DPNI storage-profile information
+ * (relevant only for DPNI owned by AIOP)
+ * @spids: array of storage-profiles
+ */
+struct dpni_sp_info {
+	uint16_t spids[DPNI_MAX_SP];
+};
+
+int dpni_get_sp_info(struct fsl_mc_io *mc_io,
+		     uint32_t cmd_flags,
+		     uint16_t token,
+		     struct dpni_sp_info *sp_info);
+
 int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
 			    uint32_t cmd_flags,
 			    uint16_t token,
@@ -1443,11 +1472,25 @@ enum dpni_confirmation_mode {
 int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode mode);
 
 int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
+				  enum dpni_confirmation_mode *mode);
+
+int dpni_set_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode mode);
+
+int dpni_get_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
 				  enum dpni_confirmation_mode *mode);
 
 /**
@@ -1841,6 +1884,60 @@ void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
 				     const uint8_t *sw_sequence_layout_buf);
 
 /**
+ * When used as queue_idx in function dpni_set_rx_dist_default_queue, this value
+ * signals the dpni to drop all unclassified frames
+ */
+#define DPNI_FS_MISS_DROP		((uint16_t)-1)
+
+/**
+ * struct dpni_rx_dist_cfg - distribution configuration
+ * @dist_size:	distribution size; supported values: 1,2,3,4,6,7,8,
+ *		12,14,16,24,28,32,48,56,64,96,112,128,192,224,256,384,448,
+ *		512,768,896,1024
+ * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
+ *		the extractions to be used for the distribution key by calling
+ *		dpkg_prepare_key_cfg(); relevant only when enable != 0, otherwise it can be '0'
+ * @enable: enable/disable the distribution.
+ * @tc: TC id for which distribution is set
+ * @fs_miss_flow_id: when packet misses all rules from flow steering table and hash is
+ *		disabled it will be put into this queue id; use DPNI_FS_MISS_DROP to drop
+ *		frames. The value of this field is used only when flow steering distribution
+ *		is enabled and hash distribution is disabled
+ */
+struct dpni_rx_dist_cfg {
+	uint16_t dist_size;
+	uint64_t key_cfg_iova;
+	uint8_t enable;
+	uint8_t tc;
+	uint16_t fs_miss_flow_id;
+};
+
+int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		const struct dpni_rx_dist_cfg *cfg);
+
+int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		const struct dpni_rx_dist_cfg *cfg);
+
+int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t tpid);
+
+int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t tpid);
+
+/**
+ * struct dpni_custom_tpid_cfg - custom TPID configuration. Contains custom TPID values
+ *		used in current dpni object to detect 802.1q frames.
+ *	@tpid1: first tag. Not used if zero.
+ *	@tpid2: second tag. Not used if zero.
+ */
+struct dpni_custom_tpid_cfg {
+	uint16_t tpid1;
+	uint16_t tpid2;
+};
+
+int dpni_get_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_custom_tpid_cfg *tpid);
+/**
  * struct dpni_ptp_cfg - configure single step PTP (IEEE 1588)
  *	@en: enable single step PTP. When enabled the PTPv1 functionality will
  *		not work. If the field is zero, offset and ch_update parameters
@@ -1858,6 +1955,7 @@ struct dpni_single_step_cfg {
 	uint8_t ch_update;
 	uint16_t offset;
 	uint32_t peer_delay;
+	uint32_t ptp_onestep_reg_base;
 };
 
 int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
@@ -1885,61 +1983,35 @@ int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 		uint16_t token, struct dpni_port_cfg *port_cfg);
 
-/**
- * When used for queue_idx in function dpni_set_rx_dist_default_queue will
- * signal to dpni to drop all unclassified frames
- */
-#define DPNI_FS_MISS_DROP		((uint16_t)-1)
-
-/**
- * struct dpni_rx_dist_cfg - distribution configuration
- * @dist_size:	distribution size; supported values: 1,2,3,4,6,7,8,
- *		12,14,16,24,28,32,48,56,64,96,112,128,192,224,256,384,448,
- *		512,768,896,1024
- * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
- *		the extractions to be used for the distribution key by calling
- *		dpkg_prepare_key_cfg() relevant only when enable!=0 otherwise
- *		it can be '0'
- * @enable: enable/disable the distribution.
- * @tc: TC id for which distribution is set
- * @fs_miss_flow_id: when packet misses all rules from flow steering table and
- *		hash is disabled it will be put into this queue id; use
- *		DPNI_FS_MISS_DROP to drop frames. The value of this field is
- *		used only when flow steering distribution is enabled and hash
- *		distribution is disabled
- */
-struct dpni_rx_dist_cfg {
-	uint16_t dist_size;
-	uint64_t key_cfg_iova;
-	uint8_t enable;
-	uint8_t tc;
-	uint16_t fs_miss_flow_id;
+enum dpni_table_type {
+	DPNI_FS_TABLE = 1,
+	DPNI_MAC_TABLE = 2,
+	DPNI_QOS_TABLE = 3,
+	DPNI_VLAN_TABLE = 4,
 };
 
-int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, const struct dpni_rx_dist_cfg *cfg);
-
-int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, const struct dpni_rx_dist_cfg *cfg);
-
-int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, uint16_t tpid);
-
-int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, uint16_t tpid);
+int dpni_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries);
 
 /**
- * struct dpni_custom_tpid_cfg - custom TPID configuration. Contains custom TPID
- *	values used in current dpni object to detect 802.1q frames.
- *	@tpid1: first tag. Not used if zero.
- *	@tpid2: second tag. Not used if zero.
+ * SP Profile on Ingress DPNI
  */
-struct dpni_custom_tpid_cfg {
-	uint16_t tpid1;
-	uint16_t tpid2;
-};
+#define DPNI_SP_PROFILE_INGRESS 0x1
+/**
+ * SP Profile on Egress DPNI
+ */
+#define DPNI_SP_PROFILE_EGRESS	0x2
+
+int dpni_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type);
 
-int dpni_get_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, struct dpni_custom_tpid_cfg *tpid);
+int dpni_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t type, uint8_t en);
 
 #endif /* __FSL_DPNI_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
index 781f936add..1152182e34 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2022 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPNI_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPNI Version */
 #define DPNI_VER_MAJOR				8
-#define DPNI_VER_MINOR				2
+#define DPNI_VER_MINOR				4
 
 #define DPNI_CMD_BASE_VERSION			1
 #define DPNI_CMD_VERSION_2			2
@@ -108,8 +108,8 @@
 #define DPNI_CMDID_GET_EARLY_DROP		DPNI_CMD_V3(0x26A)
 #define DPNI_CMDID_GET_OFFLOAD			DPNI_CMD_V2(0x26B)
 #define DPNI_CMDID_SET_OFFLOAD			DPNI_CMD_V2(0x26C)
-#define DPNI_CMDID_SET_TX_CONFIRMATION_MODE	DPNI_CMD(0x266)
-#define DPNI_CMDID_GET_TX_CONFIRMATION_MODE	DPNI_CMD(0x26D)
+#define DPNI_CMDID_SET_TX_CONFIRMATION_MODE	DPNI_CMD_V2(0x266)
+#define DPNI_CMDID_GET_TX_CONFIRMATION_MODE	DPNI_CMD_V2(0x26D)
 #define DPNI_CMDID_SET_OPR			DPNI_CMD_V2(0x26e)
 #define DPNI_CMDID_GET_OPR			DPNI_CMD_V2(0x26f)
 #define DPNI_CMDID_LOAD_SW_SEQUENCE		DPNI_CMD(0x270)
@@ -121,7 +121,16 @@
 #define DPNI_CMDID_REMOVE_CUSTOM_TPID		DPNI_CMD(0x276)
 #define DPNI_CMDID_GET_CUSTOM_TPID		DPNI_CMD(0x277)
 #define DPNI_CMDID_GET_LINK_CFG			DPNI_CMD(0x278)
+#define DPNI_CMDID_SET_SINGLE_STEP_CFG			DPNI_CMD(0x279)
+#define DPNI_CMDID_GET_SINGLE_STEP_CFG		DPNI_CMD_V2(0x27a)
 #define DPNI_CMDID_SET_PORT_CFG			DPNI_CMD(0x27B)
+#define DPNI_CMDID_GET_PORT_CFG			DPNI_CMD(0x27C)
+#define DPNI_CMDID_DUMP_TABLE           DPNI_CMD(0x27D)
+#define DPNI_CMDID_SET_SP_PROFILE		DPNI_CMD(0x27E)
+#define DPNI_CMDID_GET_QDID_EX			DPNI_CMD(0x27F)
+#define DPNI_CMDID_SP_ENABLE		    DPNI_CMD(0x280)
+#define DPNI_CMDID_SET_QUEUE_TX_CONFIRMATION_MODE	DPNI_CMD(0x281)
+#define DPNI_CMDID_GET_QUEUE_TX_CONFIRMATION_MODE	DPNI_CMD(0x282)
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPNI_MASK(field)	\
@@ -329,6 +338,10 @@ struct dpni_rsp_get_qdid {
 	uint16_t qdid;
 };
 
+struct dpni_rsp_get_qdid_ex {
+	uint16_t qdid[16];
+};
+
 struct dpni_rsp_get_sp_info {
 	uint16_t spids[2];
 };
@@ -748,7 +761,16 @@ struct dpni_cmd_set_taildrop {
 };
 
 struct dpni_tx_confirmation_mode {
-	uint32_t pad;
+	uint8_t ceetm_ch_idx;
+	uint8_t pad1;
+	uint16_t pad2;
+	uint8_t confirmation_mode;
+};
+
+struct dpni_queue_tx_confirmation_mode {
+	uint8_t ceetm_ch_idx;
+	uint8_t index;
+	uint16_t pad;
 	uint8_t confirmation_mode;
 };
 
@@ -894,6 +916,42 @@ struct dpni_sw_sequence_layout_entry {
 	uint16_t pad;
 };
 
+#define DPNI_RX_FS_DIST_ENABLE_SHIFT	0
+#define DPNI_RX_FS_DIST_ENABLE_SIZE		1
+struct dpni_cmd_set_rx_fs_dist {
+	uint16_t	dist_size;
+	uint8_t		enable;
+	uint8_t		tc;
+	uint16_t	miss_flow_id;
+	uint16_t	pad1;
+	uint64_t	key_cfg_iova;
+};
+
+#define DPNI_RX_HASH_DIST_ENABLE_SHIFT	0
+#define DPNI_RX_HASH_DIST_ENABLE_SIZE		1
+struct dpni_cmd_set_rx_hash_dist {
+	uint16_t	dist_size;
+	uint8_t		enable;
+	uint8_t		tc_id;
+	uint32_t	pad;
+	uint64_t	key_cfg_iova;
+};
+
+struct dpni_cmd_add_custom_tpid {
+	uint16_t	pad;
+	uint16_t	tpid;
+};
+
+struct dpni_cmd_remove_custom_tpid {
+	uint16_t	pad;
+	uint16_t	tpid;
+};
+
+struct dpni_rsp_get_custom_tpid {
+	uint16_t	tpid1;
+	uint16_t	tpid2;
+};
+
 #define DPNI_PTP_ENABLE_SHIFT			0
 #define DPNI_PTP_ENABLE_SIZE			1
 #define DPNI_PTP_CH_UPDATE_SHIFT		1
@@ -925,40 +983,45 @@ struct dpni_rsp_get_port_cfg {
 	uint32_t	bit_params;
 };
 
-#define DPNI_RX_FS_DIST_ENABLE_SHIFT	0
-#define DPNI_RX_FS_DIST_ENABLE_SIZE		1
-struct dpni_cmd_set_rx_fs_dist {
-	uint16_t	dist_size;
-	uint8_t		enable;
-	uint8_t		tc;
-	uint16_t	miss_flow_id;
-	uint16_t	pad1;
-	uint64_t	key_cfg_iova;
+struct dpni_cmd_dump_table {
+	uint16_t table_type;
+	uint16_t table_index;
+	uint32_t pad0;
+	uint64_t iova_addr;
+	uint32_t iova_size;
 };
 
-#define DPNI_RX_HASH_DIST_ENABLE_SHIFT	0
-#define DPNI_RX_HASH_DIST_ENABLE_SIZE		1
-struct dpni_cmd_set_rx_hash_dist {
-	uint16_t	dist_size;
-	uint8_t		enable;
-	uint8_t		tc_id;
-	uint32_t	pad;
-	uint64_t	key_cfg_iova;
+struct dpni_rsp_dump_table {
+	uint16_t num_entries;
 };
 
-struct dpni_cmd_add_custom_tpid {
-	uint16_t	pad;
-	uint16_t	tpid;
+struct dump_table_header {
+	uint16_t table_type;
+	uint16_t table_num_entries;
+	uint16_t table_max_entries;
+	uint8_t default_action;
+	uint8_t match_type;
+	uint8_t reserved[24];
 };
 
-struct dpni_cmd_remove_custom_tpid {
-	uint16_t	pad;
-	uint16_t	tpid;
+struct dump_table_entry {
+	uint8_t key[DPNI_MAX_KEY_SIZE];
+	uint8_t mask[DPNI_MAX_KEY_SIZE];
+	uint8_t key_action;
+	uint16_t result[3];
+	uint8_t reserved[21];
 };
 
-struct dpni_rsp_get_custom_tpid {
-	uint16_t	tpid1;
-	uint16_t	tpid2;
+#define MAX_SP_PROFILE_ID_SIZE	8
+
+struct dpni_cmd_set_sp_profile {
+	uint8_t sp_profile[MAX_SP_PROFILE_ID_SIZE];
+	uint8_t type;
+};
+
+struct dpni_cmd_sp_enable {
+	uint8_t type;
+	uint8_t en;
 };
 
 #pragma pack(pop)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 09/43] net/dpaa2: support link state for eth interfaces
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (7 preceding siblings ...)
  2024-10-14 12:00       ` [v3 08/43] bus/fslmc: upgrade with MC version 10.37 vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-14 12:00       ` [v3 10/43] net/dpaa2: update DPNI link status method vanshika.shukla
                         ` (34 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

This patch adds support to update the duplex value along with the
link status and link speed after setting the link UP.
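
For illustration, a minimal sketch (assuming port_id 0; uses only the
generic ethdev API, nothing added by this patch) of how an application
can observe the duplex value now reported:

	#include <stdio.h>
	#include <rte_ethdev.h>

	struct rte_eth_link link;

	if (rte_eth_dev_set_link_up(0) == 0 &&
	    rte_eth_link_get_nowait(0, &link) == 0) {
		printf("link %s, %u Mbps, %s duplex\n",
		       link.link_status ? "up" : "down",
		       link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
				"full" : "half");
	}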

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 439b8f97a4..b120e2c815 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1988,7 +1988,7 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	if (ret) {
 		/* Unable to obtain dpni status; Not continuing */
 		DPAA2_PMD_ERR("Interface Link UP failed (%d)", ret);
-		return -EINVAL;
+		return ret;
 	}
 
 	/* Enable link if not already enabled */
@@ -1996,13 +1996,13 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 		ret = dpni_enable(dpni, CMD_PRI_LOW, priv->token);
 		if (ret) {
 			DPAA2_PMD_ERR("Interface Link UP failed (%d)", ret);
-			return -EINVAL;
+			return ret;
 		}
 	}
 	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
 	if (ret < 0) {
 		DPAA2_PMD_DEBUG("Unable to get link state (%d)", ret);
-		return -1;
+		return ret;
 	}
 
 	/* changing tx burst function to start enqueues */
@@ -2010,10 +2010,15 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	dev->data->dev_link.link_status = state.up;
 	dev->data->dev_link.link_speed = state.rate;
 
+	if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
+		dev->data->dev_link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	else
+		dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+
 	if (state.up)
-		DPAA2_PMD_INFO("Port %d Link is Up", dev->data->port_id);
+		DPAA2_PMD_DEBUG("Port %d Link is Up", dev->data->port_id);
 	else
-		DPAA2_PMD_INFO("Port %d Link is Down", dev->data->port_id);
+		DPAA2_PMD_DEBUG("Port %d Link is Down", dev->data->port_id);
 	return ret;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 10/43] net/dpaa2: update DPNI link status method
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (8 preceding siblings ...)
  2024-10-14 12:00       ` [v3 09/43] net/dpaa2: support link state for eth interfaces vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-14 12:00       ` [v3 11/43] net/dpaa2: add new PMD API to check dpaa platform version vanshika.shukla
                         ` (33 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Brick Yang, Rohit Raj

From: Brick Yang <brick.yang@nxp.com>

If the SFP module is not connected to the port and flow control is
configured using the flow control API, the link will show DOWN even
after connecting the SFP module and fiber cable.

This issue cannot be reproduced if only the SFP module is connected
and the fiber cable is disconnected before configuring flow control,
even though the link is down in this case too.

This patch improves the behavior by getting configuration values from
the dpni_get_link_cfg API, which provides the static configuration
data, instead of the dpni_get_link_state API.
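
As a hedged usage sketch (port_id 0 assumed; not part of this patch),
the application-level sequence that exercises the fixed path through
the generic ethdev flow control API:

	#include <rte_ethdev.h>

	struct rte_eth_fc_conf fc_conf;

	/* Served by dpaa2_flow_ctrl_get(), now backed by dpni_get_link_cfg */
	if (rte_eth_dev_flow_ctrl_get(0, &fc_conf) == 0) {
		fc_conf.mode = RTE_ETH_FC_FULL;	/* pause frames on RX and TX */
		/* May legitimately run while the SFP/fiber is unplugged */
		rte_eth_dev_flow_ctrl_set(0, &fc_conf);
	}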

Signed-off-by: Brick Yang <brick.yang@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index b120e2c815..0adebc0bf1 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2087,7 +2087,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	int ret = -EINVAL;
 	struct dpaa2_dev_priv *priv;
 	struct fsl_mc_io *dpni;
-	struct dpni_link_state state = {0};
+	struct dpni_link_cfg cfg = {0};
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -2099,14 +2099,14 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
+	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("error: dpni_get_link_state %d", ret);
+		DPAA2_PMD_ERR("error: dpni_get_link_cfg %d", ret);
 		return ret;
 	}
 
 	memset(fc_conf, 0, sizeof(struct rte_eth_fc_conf));
-	if (state.options & DPNI_LINK_OPT_PAUSE) {
+	if (cfg.options & DPNI_LINK_OPT_PAUSE) {
 		/* DPNI_LINK_OPT_PAUSE set
 		 *  if ASYM_PAUSE not set,
 		 *	RX Side flow control (handle received Pause frame)
@@ -2115,7 +2115,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	RX Side flow control (handle received Pause frame)
 		 *	No TX side flow control (send Pause frame disabled)
 		 */
-		if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
+		if (!(cfg.options & DPNI_LINK_OPT_ASYM_PAUSE))
 			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
 			fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
@@ -2127,7 +2127,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *  if ASYM_PAUSE not set,
 		 *	Flow control disabled
 		 */
-		if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
+		if (cfg.options & DPNI_LINK_OPT_ASYM_PAUSE)
 			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
 			fc_conf->mode = RTE_ETH_FC_NONE;
@@ -2142,7 +2142,6 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	int ret = -EINVAL;
 	struct dpaa2_dev_priv *priv;
 	struct fsl_mc_io *dpni;
-	struct dpni_link_state state = {0};
 	struct dpni_link_cfg cfg = {0};
 
 	PMD_INIT_FUNC_TRACE();
@@ -2155,23 +2154,19 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	/* It is necessary to obtain the current state before setting fc_conf
+	/* It is necessary to obtain the current cfg before setting fc_conf
 	 * as MC would return error in case rate, autoneg or duplex values are
 	 * different.
 	 */
-	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
+	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("Unable to get link state (err=%d)", ret);
+		DPAA2_PMD_ERR("Unable to get link cfg (err=%d)", ret);
 		return -1;
 	}
 
 	/* Disable link before setting configuration */
 	dpaa2_dev_set_link_down(dev);
 
-	/* Based on fc_conf, update cfg */
-	cfg.rate = state.rate;
-	cfg.options = state.options;
-
 	/* update cfg with fc_conf */
 	switch (fc_conf->mode) {
 	case RTE_ETH_FC_FULL:
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 11/43] net/dpaa2: add new PMD API to check dpaa platform version
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (9 preceding siblings ...)
  2024-10-14 12:00       ` [v3 10/43] net/dpaa2: update DPNI link status method vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-14 12:00       ` [v3 12/43] bus/fslmc: improve BMAN buffer acquire vanshika.shukla
                         ` (32 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

This patch adds support to check the DPAA platform type from
applications.
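
A minimal usage sketch of the new API (RTE_ETH_FOREACH_DEV is the
standard ethdev port iterator; nothing else is assumed):

	#include <stdio.h>
	#include <rte_ethdev.h>
	#include <rte_pmd_dpaa2.h>

	uint16_t port_id;

	RTE_ETH_FOREACH_DEV(port_id) {
		if (rte_pmd_dpaa2_dev_is_dpaa2(port_id))
			printf("port %u is a DPAA2 port\n", port_id);
	}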

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 16 +++++++++++++---
 drivers/net/dpaa2/dpaa2_flow.c    |  5 ++---
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  4 ++++
 drivers/net/dpaa2/version.map     |  1 +
 4 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 0adebc0bf1..bd6a578e30 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2161,7 +2161,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
 		DPAA2_PMD_ERR("Unable to get link cfg (err=%d)", ret);
-		return -1;
+		return ret;
 	}
 
 	/* Disable link before setting configuration */
@@ -2203,7 +2203,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	default:
 		DPAA2_PMD_ERR("Incorrect Flow control flag (%d)",
 			      fc_conf->mode);
-		return -1;
+		return -EINVAL;
 	}
 
 	ret = dpni_set_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
@@ -2885,8 +2885,18 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
-int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
+int
+rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 {
+	struct rte_eth_dev *dev;
+
+	if (eth_id >= RTE_MAX_ETHPORTS)
+		return false;
+
+	dev = &rte_eth_devices[eth_id];
+	if (!dev->device)
+		return false;
+
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 62e350d736..48e6eedfbc 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -3300,14 +3300,13 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 	if (idx >= 0) {
 		if (!rte_eth_dev_is_valid_port(idx))
 			return NULL;
+		if (!rte_pmd_dpaa2_dev_is_dpaa2(idx))
+			return NULL;
 		dest_dev = &rte_eth_devices[idx];
 	} else {
 		dest_dev = priv->eth_dev;
 	}
 
-	if (!dpaa2_dev_is_dpaa2(dest_dev))
-		return NULL;
-
 	return dest_dev;
 }
 
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index bebebcacdc..fc52a9218e 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -127,6 +127,10 @@ __rte_experimental
 uint32_t
 rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
 
+__rte_experimental
+int
+rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id);
+
 #if defined(RTE_LIBRTE_IEEE1588)
 __rte_experimental
 int
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 7323fc8869..233c6e6b2c 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -17,6 +17,7 @@ EXPERIMENTAL {
 	# added in 21.11
 	rte_pmd_dpaa2_get_tlu_hash;
 	# added in 24.11
+	rte_pmd_dpaa2_dev_is_dpaa2;
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
 	rte_pmd_dpaa2_mux_dump_counter;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 12/43] bus/fslmc: improve BMAN buffer acquire
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (10 preceding siblings ...)
  2024-10-14 12:00       ` [v3 11/43] net/dpaa2: add new PMD API to check dpaa platform version vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-14 12:00       ` [v3 13/43] bus/fslmc: get MC VFIO group FD directly vanshika.shukla
                         ` (31 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Ignore the reserved bits of the BMan acquire response buffer count.
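
For context, a small sketch of the masking applied below: the valid
buffer count lives in the low three bits of the response 'num' field,
so the reserved upper bits must be cleared before use (0xf9 is a
made-up example value):

	#define BMAN_VALID_RSLT_NUM_MASK 0x7

	uint8_t raw = 0xf9;	/* r->num with reserved bits set */
	int num = raw & BMAN_VALID_RSLT_NUM_MASK;	/* num == 1 */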

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 1f24cdce7e..3fdca9761d 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2020,2023-2024 NXP
  *
  */
 
@@ -42,6 +42,8 @@
 /* opaque token for static dequeues */
 #define QMAN_SDQCR_TOKEN    0xbb
 
+#define BMAN_VALID_RSLT_NUM_MASK 0x7
+
 enum qbman_sdqcr_dct {
 	qbman_sdqcr_dct_null = 0,
 	qbman_sdqcr_dct_prio_ics,
@@ -2628,7 +2630,7 @@ struct qbman_acquire_rslt {
 	uint16_t reserved;
 	uint8_t num;
 	uint8_t reserved2[3];
-	uint64_t buf[7];
+	uint64_t buf[BMAN_VALID_RSLT_NUM_MASK];
 };
 
 static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
@@ -2636,8 +2638,9 @@ static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
 {
 	struct qbman_acquire_desc *p;
 	struct qbman_acquire_rslt *r;
+	int num;
 
-	if (!num_buffers || (num_buffers > 7))
+	if (!num_buffers || (num_buffers > BMAN_VALID_RSLT_NUM_MASK))
 		return -EINVAL;
 
 	/* Start the management command */
@@ -2668,12 +2671,13 @@ static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
 		return -EIO;
 	}
 
-	QBMAN_BUG_ON(r->num > num_buffers);
+	num = r->num & BMAN_VALID_RSLT_NUM_MASK;
+	QBMAN_BUG_ON(num > num_buffers);
 
 	/* Copy the acquired buffers to the caller's array */
-	u64_from_le32_copy(buffers, &r->buf[0], r->num);
+	u64_from_le32_copy(buffers, &r->buf[0], num);
 
-	return (int)r->num;
+	return num;
 }
 
 static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
@@ -2681,8 +2685,9 @@ static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
 {
 	struct qbman_acquire_desc *p;
 	struct qbman_acquire_rslt *r;
+	int num;
 
-	if (!num_buffers || (num_buffers > 7))
+	if (!num_buffers || (num_buffers > BMAN_VALID_RSLT_NUM_MASK))
 		return -EINVAL;
 
 	/* Start the management command */
@@ -2713,12 +2718,13 @@ static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
 		return -EIO;
 	}
 
-	QBMAN_BUG_ON(r->num > num_buffers);
+	num = r->num & BMAN_VALID_RSLT_NUM_MASK;
+	QBMAN_BUG_ON(num > num_buffers);
 
 	/* Copy the acquired buffers to the caller's array */
-	u64_from_le32_copy(buffers, &r->buf[0], r->num);
+	u64_from_le32_copy(buffers, &r->buf[0], num);
 
-	return (int)r->num;
+	return num;
 }
 
 int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 13/43] bus/fslmc: get MC VFIO group FD directly
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (11 preceding siblings ...)
  2024-10-14 12:00       ` [v3 12/43] bus/fslmc: improve BMAN buffer acquire vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-15  2:27         ` Stephen Hemminger
  2024-10-14 12:00       ` [v3 14/43] bus/fslmc: enhance MC VFIO multiprocess support vanshika.shukla
                         ` (30 subsequent siblings)
  43 siblings, 1 reply; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Get the VFIO group fd directly from the file system instead of from
the RTE API, to avoid conflicting with PCIe VFIO.
FSL MC VFIO should have its own logic which does NOT depend on
RTE VFIO.
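
For reference, a hedged sketch of the direct open path used by the
primary process below (the "/dev/vfio/%d" format matches VFIO_GROUP_FMT
from the EAL linux headers):

	#include <fcntl.h>
	#include <limits.h>
	#include <stdio.h>

	static int open_group_fd(int iommu_group_num)
	{
		char path[PATH_MAX];

		snprintf(path, sizeof(path), "/dev/vfio/%d", iommu_group_num);
		return open(path, O_RDWR);	/* negative fd on failure */
	}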

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 88 ++++++++++++++++++++++++++--------
 drivers/bus/fslmc/meson.build  |  3 +-
 2 files changed, 71 insertions(+), 20 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index ecca593c34..acf0ba6fb7 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2021 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -30,6 +30,7 @@
 #include <rte_kvargs.h>
 #include <dev_driver.h>
 #include <rte_eal_memconfig.h>
+#include <eal_vfio.h>
 
 #include "private.h"
 #include "fslmc_vfio.h"
@@ -440,6 +441,59 @@ int rte_fslmc_vfio_dmamap(void)
 	return 0;
 }
 
+static int
+fslmc_vfio_open_group_fd(int iommu_group_num)
+{
+	int vfio_group_fd;
+	char filename[PATH_MAX];
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
+
+	/* if primary, try to open the group */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		/* try regular group format */
+		snprintf(filename, sizeof(filename),
+			VFIO_GROUP_FMT, iommu_group_num);
+		vfio_group_fd = open(filename, O_RDWR);
+		if (vfio_group_fd <= 0) {
+			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
+				filename, vfio_group_fd);
+		}
+
+		return vfio_group_fd;
+	}
+	/* if we're in a secondary process, request group fd from the primary
+	 * process via mp channel.
+	 */
+	p->req = SOCKET_REQ_GROUP;
+	p->group_num = iommu_group_num;
+	strcpy(mp_req.name, EAL_VFIO_MP);
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_group_fd = -1;
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+	    mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		p = (struct vfio_mp_param *)mp_rep->param;
+		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+			vfio_group_fd = mp_rep->fds[0];
+		} else if (p->result == SOCKET_NO_FD) {
+			DPAA2_BUS_ERR("Bad VFIO group fd");
+			vfio_group_fd = 0;
+		}
+	}
+
+	free(mp_reply.msgs);
+	if (vfio_group_fd < 0) {
+		DPAA2_BUS_ERR("Cannot request group fd(%d)",
+			vfio_group_fd);
+	}
+	return vfio_group_fd;
+}
+
 static int
 fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		int *vfio_dev_fd, struct vfio_device_info *device_info)
@@ -455,7 +509,7 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		return -1;
 
 	/* get the actual group fd */
-	vfio_group_fd = rte_vfio_get_group_fd(iommu_group_no);
+	vfio_group_fd = vfio_group.fd;
 	if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT)
 		return -1;
 
@@ -891,6 +945,11 @@ fslmc_vfio_close_group(void)
 		}
 	}
 
+	if (vfio_group.fd > 0) {
+		close(vfio_group.fd);
+		vfio_group.fd = 0;
+	}
+
 	return 0;
 }
 
@@ -1081,7 +1140,6 @@ fslmc_vfio_setup_group(void)
 {
 	int groupid;
 	int ret;
-	int vfio_container_fd;
 	struct vfio_group_status status = { .argsz = sizeof(status) };
 
 	/* if already done once */
@@ -1100,16 +1158,9 @@ fslmc_vfio_setup_group(void)
 		return 0;
 	}
 
-	ret = rte_vfio_container_create();
-	if (ret < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return ret;
-	}
-	vfio_container_fd = ret;
-
 	/* Get the actual group fd */
-	ret = rte_vfio_container_group_bind(vfio_container_fd, groupid);
-	if (ret < 0)
+	ret = fslmc_vfio_open_group_fd(groupid);
+	if (ret <= 0)
 		return ret;
 	vfio_group.fd = ret;
 
@@ -1118,14 +1169,14 @@ fslmc_vfio_setup_group(void)
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO error getting group status");
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return ret;
 	}
 
 	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
 		DPAA2_BUS_ERR("VFIO group not viable");
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return -EPERM;
 	}
 	/* Since Group is VIABLE, Store the groupid */
@@ -1136,11 +1187,10 @@ fslmc_vfio_setup_group(void)
 		/* Now connect this IOMMU group to given container */
 		ret = vfio_connect_container();
 		if (ret) {
-			DPAA2_BUS_ERR(
-				"Error connecting container with groupid %d",
-				groupid);
+			DPAA2_BUS_ERR("vfio group(%d) connect failed(%d)",
+				groupid, ret);
 			close(vfio_group.fd);
-			rte_vfio_clear_group(vfio_group.fd);
+			vfio_group.fd = 0;
 			return ret;
 		}
 	}
@@ -1151,7 +1201,7 @@ fslmc_vfio_setup_group(void)
 		DPAA2_BUS_ERR("Error getting device %s fd from group %d",
 			      fslmc_container, vfio_group.groupid);
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return ret;
 	}
 	container_device_fd = ret;
diff --git a/drivers/bus/fslmc/meson.build b/drivers/bus/fslmc/meson.build
index 162ca286fe..70098ad778 100644
--- a/drivers/bus/fslmc/meson.build
+++ b/drivers/bus/fslmc/meson.build
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2018,2021 NXP
+# Copyright 2018-2023 NXP
 
 if not is_linux
     build = false
@@ -27,3 +27,4 @@ sources = files(
 )
 
 includes += include_directories('mc', 'qbman/include', 'portal')
+includes += include_directories('../../../lib/eal/linux')
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 14/43] bus/fslmc: enhance MC VFIO multiprocess support
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (12 preceding siblings ...)
  2024-10-14 12:00       ` [v3 13/43] bus/fslmc: get MC VFIO group FD directly vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-15  2:29         ` Stephen Hemminger
  2024-10-14 12:00       ` [v3 15/43] bus/fslmc: free VFIO group FD in case of add group failure vanshika.shukla
                         ` (29 subsequent siblings)
  43 siblings, 1 reply; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

MC VFIO is not registered with RTE VFIO. The primary process registers
an MC VFIO mp action for secondary processes to request; VFIO and
container handles are provided via CMSG. The primary process is
responsible for connecting the MC VFIO group to the container.

In addition, the MC VFIO code is refactored according to the
container/group logic. In general, a VFIO container can support
multiple groups per process. For now only a single MC group (dprc.x)
per process is supported, but logic is added to support connecting
multiple MC groups to a container.
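
A condensed sketch of the mp-channel handshake added below (all names
are taken from the patch; error handling is trimmed):

	/* Secondary process: request the container fd from the primary. */
	struct rte_mp_msg mp_req;
	struct rte_mp_reply mp_reply = {0};
	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
	struct vfio_mp_param *p = (void *)mp_req.param;

	p->req = SOCKET_REQ_CONTAINER;
	strcpy(mp_req.name, FSLMC_VFIO_MP);
	mp_req.len_param = sizeof(*p);
	mp_req.num_fds = 0;

	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
	    mp_reply.nb_received == 1) {
		/* fd arrives via SCM_RIGHTS in mp_reply.msgs[0].fds[0] */
	}

	/* Primary process: serve such requests; registered once at init. */
	rte_mp_action_register(FSLMC_VFIO_MP, fslmc_vfio_mp_primary);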

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/fslmc_bus.c  |  14 +-
 drivers/bus/fslmc/fslmc_vfio.c | 996 ++++++++++++++++++++++-----------
 drivers/bus/fslmc/fslmc_vfio.h |  35 +-
 drivers/bus/fslmc/version.map  |   1 +
 4 files changed, 694 insertions(+), 352 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 97473c278f..a966df1598 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -318,6 +318,7 @@ rte_fslmc_scan(void)
 	struct dirent *entry;
 	static int process_once;
 	int groupid;
+	char *group_name;
 
 	if (process_once) {
 		DPAA2_BUS_DEBUG("Fslmc bus already scanned. Not rescanning");
@@ -325,12 +326,19 @@ rte_fslmc_scan(void)
 	}
 	process_once = 1;
 
-	ret = fslmc_get_container_group(&groupid);
+	/* Now we only support a single group per process. */
+	group_name = getenv("DPRC");
+	if (!group_name) {
+		DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
+		return -EINVAL;
+	}
+
+	ret = fslmc_get_container_group(group_name, &groupid);
 	if (ret != 0)
 		goto scan_fail;
 
 	/* Scan devices on the group */
-	sprintf(fslmc_dirpath, "%s/%s", SYSFS_FSL_MC_DEVICES, fslmc_container);
+	sprintf(fslmc_dirpath, "%s/%s", SYSFS_FSL_MC_DEVICES, group_name);
 	dir = opendir(fslmc_dirpath);
 	if (!dir) {
 		DPAA2_BUS_ERR("Unable to open VFIO group directory");
@@ -338,7 +346,7 @@ rte_fslmc_scan(void)
 	}
 
 	/* Scan the DPRC container object */
-	ret = scan_one_fslmc_device(fslmc_container);
+	ret = scan_one_fslmc_device(group_name);
 	if (ret != 0) {
 		/* Error in parsing directory - exit gracefully */
 		goto scan_fail_cleanup;
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index acf0ba6fb7..c4be89e3d5 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -42,12 +42,14 @@
 
 #define FSLMC_CONTAINER_MAX_LEN 8 /**< Of the format dprc.XX */
 
-/* Number of VFIO containers & groups with in */
-static struct fslmc_vfio_group vfio_group;
-static struct fslmc_vfio_container vfio_container;
-static int container_device_fd;
-char *fslmc_container;
-static int fslmc_iommu_type;
+#define FSLMC_VFIO_MP "fslmc_vfio_mp_sync"
+
+/* A container is composed of multiple groups; however,
+ * each process currently supports only a single group within the container.
+ */
+static struct fslmc_vfio_container s_vfio_container;
+/* Currently we only support a single group per process. */
+const char *fslmc_group; /* dprc.x */
 static uint32_t *msi_intr_vaddr;
 void *(*rte_mcp_ptr_list);
 
@@ -72,108 +74,547 @@ rte_fslmc_object_register(struct rte_dpaa2_object *object)
 	TAILQ_INSERT_TAIL(&dpaa2_obj_list, object, next);
 }
 
-int
-fslmc_get_container_group(int *groupid)
+static const char *
+fslmc_vfio_get_group_name(void)
 {
-	int ret;
-	char *container;
+	return fslmc_group;
+}
+
+static void
+fslmc_vfio_set_group_name(const char *group_name)
+{
+	fslmc_group = group_name;
+}
+
+static int
+fslmc_vfio_add_group(int vfio_group_fd,
+	int iommu_group_num, const char *group_name)
+{
+	struct fslmc_vfio_group *group;
+
+	group = rte_zmalloc(NULL, sizeof(struct fslmc_vfio_group), 0);
+	if (!group)
+		return -ENOMEM;
+	group->fd = vfio_group_fd;
+	group->groupid = iommu_group_num;
+	strcpy(group->group_name, group_name);
+	if (rte_vfio_noiommu_is_enabled() > 0)
+		group->iommu_type = RTE_VFIO_NOIOMMU;
+	else
+		group->iommu_type = VFIO_TYPE1_IOMMU;
+	LIST_INSERT_HEAD(&s_vfio_container.groups, group, next);
+
+	return 0;
+}
+
+static int
+fslmc_vfio_clear_group(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+	struct fslmc_vfio_device *dev;
+	int clear = 0;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			LIST_FOREACH(dev, &group->vfio_devices, next)
+				LIST_REMOVE(dev, next);
+
+			close(vfio_group_fd);
+			LIST_REMOVE(group, next);
+			rte_free(group);
+			clear = 1;
 
-	if (!fslmc_container) {
-		container = getenv("DPRC");
-		if (container == NULL) {
-			DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
-			return -EINVAL;
+			break;
 		}
+	}
 
-		if (strlen(container) >= FSLMC_CONTAINER_MAX_LEN) {
-			DPAA2_BUS_ERR("Invalid container name: %s", container);
-			return -1;
+	if (LIST_EMPTY(&s_vfio_container.groups)) {
+		if (s_vfio_container.fd > 0)
+			close(s_vfio_container.fd);
+
+		s_vfio_container.fd = -1;
+	}
+	if (clear)
+		return 0;
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_connect_container(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			group->connected = 1;
+
+			return 0;
+		}
+	}
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_container_connected(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			if (group->connected)
+				return 1;
+		}
+	}
+	return 0;
+}
+
+static int
+fslmc_vfio_iommu_type(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd)
+			return group->iommu_type;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_fd_by_name(const char *group_name)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (!strcmp(group->group_name, group_name))
+			return group->fd;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_fd_by_id(int group_id)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->groupid == group_id)
+			return group->fd;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_add_dev(int vfio_group_fd,
+	int dev_fd, const char *name)
+{
+	struct fslmc_vfio_group *group;
+	struct fslmc_vfio_device *dev;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			dev = rte_zmalloc(NULL,
+				sizeof(struct fslmc_vfio_device), 0);
+			dev->fd = dev_fd;
+			strcpy(dev->dev_name, name);
+			LIST_INSERT_HEAD(&group->vfio_devices, dev, next);
+			return 0;
 		}
+	}
+	return -ENODEV;
+}
 
-		fslmc_container = strdup(container);
-		if (!fslmc_container) {
-			DPAA2_BUS_ERR("Mem alloc failure; Container name");
-			return -ENOMEM;
+static int
+fslmc_vfio_group_remove_dev(int vfio_group_fd,
+	const char *name)
+{
+	struct fslmc_vfio_group *group = NULL;
+	struct fslmc_vfio_device *dev;
+	int removed = 0;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd)
+			break;
+	}
+
+	if (group) {
+		LIST_FOREACH(dev, &group->vfio_devices, next) {
+			if (!strcmp(dev->dev_name, name)) {
+				LIST_REMOVE(dev, next);
+				removed = 1;
+				break;
+			}
 		}
 	}
 
-	fslmc_iommu_type = (rte_vfio_noiommu_is_enabled() == 1) ?
-		RTE_VFIO_NOIOMMU : VFIO_TYPE1_IOMMU;
+	if (removed)
+		return 0;
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_container_fd(void)
+{
+	return s_vfio_container.fd;
+}
+
+static int
+fslmc_get_group_id(const char *group_name,
+	int *groupid)
+{
+	int ret;
 
 	/* get group number */
 	ret = rte_vfio_get_group_num(SYSFS_FSL_MC_DEVICES,
-				     fslmc_container, groupid);
+			group_name, groupid);
 	if (ret <= 0) {
-		DPAA2_BUS_ERR("Unable to find %s IOMMU group", fslmc_container);
-		return -1;
+		DPAA2_BUS_ERR("Unable to find %s IOMMU group", group_name);
+		if (ret < 0)
+			return ret;
+
+		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("Container: %s has VFIO iommu group id = %d",
-			fslmc_container, *groupid);
+	DPAA2_BUS_DEBUG("GROUP(%s) has VFIO iommu group id = %d",
+		group_name, *groupid);
 
 	return 0;
 }
 
 static int
-vfio_connect_container(void)
+fslmc_vfio_open_group_fd(const char *group_name)
 {
-	int fd, ret;
+	int vfio_group_fd;
+	char filename[PATH_MAX];
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
+	int iommu_group_num, ret;
 
-	if (vfio_container.used) {
-		DPAA2_BUS_DEBUG("No container available");
-		return -1;
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd > 0)
+		return vfio_group_fd;
+
+	ret = fslmc_get_group_id(group_name, &iommu_group_num);
+	if (ret)
+		return ret;
+	/* if primary, try to open the group */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		/* try regular group format */
+		snprintf(filename, sizeof(filename),
+			VFIO_GROUP_FMT, iommu_group_num);
+		vfio_group_fd = open(filename, O_RDWR);
+
+		goto add_vfio_group;
+	}
+	/* if we're in a secondary process, request group fd from the primary
+	 * process via mp channel.
+	 */
+	p->req = SOCKET_REQ_GROUP;
+	p->group_num = iommu_group_num;
+	strcpy(mp_req.name, FSLMC_VFIO_MP);
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_group_fd = -1;
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+	    mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		p = (struct vfio_mp_param *)mp_rep->param;
+		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+			vfio_group_fd = mp_rep->fds[0];
+		} else if (p->result == SOCKET_NO_FD) {
+			DPAA2_BUS_ERR("Bad VFIO group fd");
+			vfio_group_fd = 0;
+		}
 	}
 
-	/* Try connecting to vfio container if already created */
-	if (!ioctl(vfio_group.fd, VFIO_GROUP_SET_CONTAINER,
-		&vfio_container.fd)) {
-		DPAA2_BUS_DEBUG(
-		    "Container pre-exists with FD[0x%x] for this group",
-		    vfio_container.fd);
-		vfio_group.container = &vfio_container;
+	free(mp_reply.msgs);
+
+add_vfio_group:
+	if (vfio_group_fd <= 0) {
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
+				filename, vfio_group_fd);
+		} else {
+			DPAA2_BUS_ERR("Cannot request group fd(%d)",
+				vfio_group_fd);
+		}
+	} else {
+		ret = fslmc_vfio_add_group(vfio_group_fd, iommu_group_num,
+			group_name);
+		if (ret)
+			return ret;
+	}
+
+	return vfio_group_fd;
+}
+
+static int
+fslmc_vfio_check_extensions(int vfio_container_fd)
+{
+	int ret;
+	uint32_t idx, n_extensions = 0;
+	static const int type_id[] = {RTE_VFIO_TYPE1, RTE_VFIO_SPAPR,
+		RTE_VFIO_NOIOMMU};
+	static const char * const type_id_nm[] = {"Type 1",
+		"sPAPR", "No-IOMMU"};
+
+	for (idx = 0; idx < RTE_DIM(type_id); idx++) {
+		ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION,
+			type_id[idx]);
+		if (ret < 0) {
+			DPAA2_BUS_ERR("Could not get IOMMU type, error %i (%s)",
+				errno, strerror(errno));
+			close(vfio_container_fd);
+			return -errno;
+		} else if (ret == 1) {
+			/* we found a supported extension */
+			n_extensions++;
+		}
+		DPAA2_BUS_DEBUG("IOMMU type %d (%s) is %s",
+			type_id[idx], type_id_nm[idx],
+			ret ? "supported" : "not supported");
+	}
+
+	/* if we didn't find any supported IOMMU types, fail */
+	if (!n_extensions) {
+		close(vfio_container_fd);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static int
+fslmc_vfio_open_container_fd(void)
+{
+	int ret, vfio_container_fd;
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (void *)mp_req.param;
+
+	if (fslmc_vfio_container_fd() > 0)
+		return fslmc_vfio_container_fd();
+
+	/* if we're in a primary process, try to open the container */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR);
+		if (vfio_container_fd < 0) {
+			DPAA2_BUS_ERR("Cannot open VFIO container(%s), err(%d)",
+				VFIO_CONTAINER_PATH, vfio_container_fd);
+			ret = vfio_container_fd;
+			goto err_exit;
+		}
+
+		/* check VFIO API version */
+		ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION);
+		if (ret < 0) {
+			DPAA2_BUS_ERR("Could not get VFIO API version(%d)",
+				ret);
+		} else if (ret != VFIO_API_VERSION) {
+			DPAA2_BUS_ERR("Unsupported VFIO API version(%d)",
+				ret);
+			ret = -ENOTSUP;
+		}
+		if (ret < 0) {
+			close(vfio_container_fd);
+			goto err_exit;
+		}
+
+		ret = fslmc_vfio_check_extensions(vfio_container_fd);
+		if (ret) {
+			DPAA2_BUS_ERR("No supported IOMMU extensions found(%d)",
+				ret);
+			close(vfio_container_fd);
+			goto err_exit;
+		}
+
+		goto success_exit;
+	}
+	/*
+	 * if we're in a secondary process, request container fd from the
+	 * primary process via mp channel
+	 */
+	p->req = SOCKET_REQ_CONTAINER;
+	strcpy(mp_req.name, FSLMC_VFIO_MP);
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_container_fd = -1;
+	ret = rte_mp_request_sync(&mp_req, &mp_reply, &ts);
+	if (ret)
+		goto err_exit;
+
+	if (mp_reply.nb_received != 1) {
+		ret = -EIO;
+		goto err_exit;
+	}
+
+	mp_rep = &mp_reply.msgs[0];
+	p = (void *)mp_rep->param;
+	if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+		vfio_container_fd = mp_rep->fds[0];
+		free(mp_reply.msgs);
+	}
+
+success_exit:
+	s_vfio_container.fd = vfio_container_fd;
+
+	return vfio_container_fd;
+
+err_exit:
+	if (mp_reply.msgs)
+		free(mp_reply.msgs);
+	DPAA2_BUS_ERR("Cannot request container fd err(%d)", ret);
+	return ret;
+}
+
+int
+fslmc_get_container_group(const char *group_name,
+	int *groupid)
+{
+	int ret;
+
+	if (!group_name) {
+		DPAA2_BUS_ERR("No group name provided!");
+
+		return -EINVAL;
+	}
+	ret = fslmc_get_group_id(group_name, groupid);
+	if (ret)
+		return ret;
+
+	fslmc_vfio_set_group_name(group_name);
+
+	return 0;
+}
+
+static int
+fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
+	const void *peer)
+{
+	int fd = -1;
+	int ret;
+	struct rte_mp_msg reply;
+	struct vfio_mp_param *r = (void *)reply.param;
+	const struct vfio_mp_param *m = (const void *)msg->param;
+
+	if (msg->len_param != sizeof(*m)) {
+		DPAA2_BUS_ERR("fslmc vfio received invalid message!");
+		return -EINVAL;
+	}
+
+	memset(&reply, 0, sizeof(reply));
+
+	switch (m->req) {
+	case SOCKET_REQ_GROUP:
+		r->req = SOCKET_REQ_GROUP;
+		r->group_num = m->group_num;
+		fd = fslmc_vfio_group_fd_by_id(m->group_num);
+		if (fd < 0) {
+			r->result = SOCKET_ERR;
+		} else if (!fd) {
+			/* if group exists but isn't bound to VFIO driver */
+			r->result = SOCKET_NO_FD;
+		} else {
+			/* if group exists and is bound to VFIO driver */
+			r->result = SOCKET_OK;
+			reply.num_fds = 1;
+			reply.fds[0] = fd;
+		}
+		break;
+	case SOCKET_REQ_CONTAINER:
+		r->req = SOCKET_REQ_CONTAINER;
+		fd = fslmc_vfio_container_fd();
+		if (fd <= 0) {
+			r->result = SOCKET_ERR;
+		} else {
+			r->result = SOCKET_OK;
+			reply.num_fds = 1;
+			reply.fds[0] = fd;
+		}
+		break;
+	default:
+		DPAA2_BUS_ERR("fslmc vfio received invalid message(%08x)",
+			m->req);
+		return -ENOTSUP;
+	}
+
+	strcpy(reply.name, FSLMC_VFIO_MP);
+	reply.len_param = sizeof(*r);
+	ret = rte_mp_reply(&reply, peer);
+
+	return ret;
+}
+
+static int
+fslmc_vfio_mp_sync_setup(void)
+{
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		ret = rte_mp_action_register(FSLMC_VFIO_MP,
+			fslmc_vfio_mp_primary);
+		if (ret && rte_errno != ENOTSUP)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int
+vfio_connect_container(int vfio_container_fd,
+	int vfio_group_fd)
+{
+	int ret;
+	int iommu_type;
+
+	if (fslmc_vfio_container_connected(vfio_group_fd)) {
+		DPAA2_BUS_WARN("VFIO FD(%d) has connected to container",
+			vfio_group_fd);
 		return 0;
 	}
 
-	/* Opens main vfio file descriptor which represents the "container" */
-	fd = rte_vfio_get_container_fd();
-	if (fd < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return -errno;
+	iommu_type = fslmc_vfio_iommu_type(vfio_group_fd);
+	if (iommu_type < 0) {
+		DPAA2_BUS_ERR("Failed to get iommu type(%d)",
+			iommu_type);
+
+		return iommu_type;
 	}
 
 	/* Check whether support for SMMU type IOMMU present or not */
-	if (ioctl(fd, VFIO_CHECK_EXTENSION, fslmc_iommu_type)) {
+	if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type)) {
 		/* Connect group to container */
-		ret = ioctl(vfio_group.fd, VFIO_GROUP_SET_CONTAINER, &fd);
+		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
+			&vfio_container_fd);
 		if (ret) {
 			DPAA2_BUS_ERR("Failed to setup group container");
-			close(fd);
 			return -errno;
 		}
 
-		ret = ioctl(fd, VFIO_SET_IOMMU, fslmc_iommu_type);
+		ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
 		if (ret) {
 			DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-			close(fd);
 			return -errno;
 		}
 	} else {
 		DPAA2_BUS_ERR("No supported IOMMU available");
-		close(fd);
 		return -EINVAL;
 	}
 
-	vfio_container.used = 1;
-	vfio_container.fd = fd;
-	vfio_container.group = &vfio_group;
-	vfio_group.container = &vfio_container;
-
-	return 0;
+	return fslmc_vfio_connect_container(vfio_group_fd);
 }
 
-static int vfio_map_irq_region(struct fslmc_vfio_group *group)
+static int vfio_map_irq_region(void)
 {
-	int ret;
+	int ret, fd;
 	unsigned long *vaddr = NULL;
 	struct vfio_iommu_type1_dma_map map = {
 		.argsz = sizeof(map),
@@ -182,9 +623,23 @@ static int vfio_map_irq_region(struct fslmc_vfio_group *group)
 		.iova = 0x6030000,
 		.size = 0x1000,
 	};
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (!fslmc_vfio_container_connected(fd)) {
+		DPAA2_BUS_ERR("Container is not connected");
+		return -EIO;
+	}
 
 	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
-		PROT_READ, MAP_SHARED, container_device_fd, 0x6030000);
+		PROT_READ, MAP_SHARED, fd, 0x6030000);
 	if (vaddr == MAP_FAILED) {
 		DPAA2_BUS_INFO("Unable to map region (errno = %d)", errno);
 		return -errno;
@@ -192,8 +647,8 @@ static int vfio_map_irq_region(struct fslmc_vfio_group *group)
 
 	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
 	map.vaddr = (unsigned long)vaddr;
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA, &map);
-	if (ret == 0)
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA, &map);
+	if (!ret)
 		return 0;
 
 	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
@@ -204,8 +659,8 @@ static int fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
 static int fslmc_unmap_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
 
 static void
-fslmc_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
-		void *arg __rte_unused)
+fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
+	size_t len, void *arg __rte_unused)
 {
 	struct rte_memseg_list *msl;
 	struct rte_memseg *ms;
@@ -262,44 +717,54 @@ fslmc_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
 }
 
 static int
-fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr __rte_unused, size_t len)
+fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr,
+	size_t len)
 {
-	struct fslmc_vfio_group *group;
 	struct vfio_iommu_type1_dma_map dma_map = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
 		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
 	};
-	int ret;
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
+	int ret, fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
 		return 0;
 	}
 
 	dma_map.size = len;
 	dma_map.vaddr = vaddr;
-
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
 	dma_map.iova = iovaddr;
-#else
-	dma_map.iova = dma_map.vaddr;
+
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+	if (vaddr != iovaddr) {
+		DPAA2_BUS_WARN("vaddr(0x%lx) != iovaddr(0x%lx)",
+			vaddr, iovaddr);
+	}
 #endif
 
 	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-
-	if (!group->container) {
+	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected ");
-		return -1;
+		return -EIO;
 	}
 
 	DPAA2_BUS_DEBUG("--> Map address: 0x%"PRIx64", size: %"PRIu64"",
 			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA, &dma_map);
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA,
+		&dma_map);
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO_IOMMU_MAP_DMA API(errno = %d)",
 				errno);
-		return -1;
+		return ret;
 	}
 
 	return 0;
@@ -308,14 +773,22 @@ fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr __rte_unused, size_t len)
 static int
 fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 {
-	struct fslmc_vfio_group *group;
 	struct vfio_iommu_type1_dma_unmap dma_unmap = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_unmap),
 		.flags = 0,
 	};
-	int ret;
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
+	int ret, fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
 		return 0;
 	}
@@ -324,16 +797,15 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 	dma_unmap.iova = vaddr;
 
 	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-
-	if (!group->container) {
+	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected ");
-		return -1;
+		return -EIO;
 	}
 
 	DPAA2_BUS_DEBUG("--> Unmap address: 0x%"PRIx64", size: %"PRIu64"",
 			(uint64_t)dma_unmap.iova, (uint64_t)dma_unmap.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap);
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_UNMAP_DMA,
+		&dma_unmap);
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO_IOMMU_UNMAP_DMA API(errno = %d)",
 				errno);
@@ -367,41 +839,13 @@ fslmc_dmamap_seg(const struct rte_memseg_list *msl __rte_unused,
 int
 rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
 {
-	int ret;
-	struct fslmc_vfio_group *group;
-	struct vfio_iommu_type1_dma_map dma_map = {
-		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
-		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
-	};
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
-		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
-		return 0;
-	}
-
-	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-	if (!group->container) {
-		DPAA2_BUS_ERR("Container is not connected");
-		return -1;
-	}
-
-	dma_map.size = size;
-	dma_map.vaddr = vaddr;
-	dma_map.iova = iova;
-
-	DPAA2_BUS_DEBUG("VFIOdmamap 0x%"PRIx64":0x%"PRIx64",size 0x%"PRIx64,
-			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.iova,
-			(uint64_t)dma_map.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA,
-		    &dma_map);
-	if (ret) {
-		DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)",
-			errno);
-		return ret;
-	}
+	return fslmc_map_dma(vaddr, iova, size);
+}
 
-	return 0;
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
+{
+	return fslmc_unmap_dma(iova, 0, size);
 }
 
 int rte_fslmc_vfio_dmamap(void)
@@ -431,7 +875,7 @@ int rte_fslmc_vfio_dmamap(void)
 	 * the interrupt region to SMMU. This should be removed once the
 	 * support is added in the Kernel.
 	 */
-	vfio_map_irq_region(&vfio_group);
+	vfio_map_irq_region();
 
 	/* Existing segments have been mapped and memory callback for hotplug
 	 * has been installed.
@@ -442,149 +886,19 @@ int rte_fslmc_vfio_dmamap(void)
 }
 
 static int
-fslmc_vfio_open_group_fd(int iommu_group_num)
-{
-	int vfio_group_fd;
-	char filename[PATH_MAX];
-	struct rte_mp_msg mp_req, *mp_rep;
-	struct rte_mp_reply mp_reply = {0};
-	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
-	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
-
-	/* if primary, try to open the group */
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		/* try regular group format */
-		snprintf(filename, sizeof(filename),
-			VFIO_GROUP_FMT, iommu_group_num);
-		vfio_group_fd = open(filename, O_RDWR);
-		if (vfio_group_fd <= 0) {
-			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
-				filename, vfio_group_fd);
-		}
-
-		return vfio_group_fd;
-	}
-	/* if we're in a secondary process, request group fd from the primary
-	 * process via mp channel.
-	 */
-	p->req = SOCKET_REQ_GROUP;
-	p->group_num = iommu_group_num;
-	strcpy(mp_req.name, EAL_VFIO_MP);
-	mp_req.len_param = sizeof(*p);
-	mp_req.num_fds = 0;
-
-	vfio_group_fd = -1;
-	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
-	    mp_reply.nb_received == 1) {
-		mp_rep = &mp_reply.msgs[0];
-		p = (struct vfio_mp_param *)mp_rep->param;
-		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
-			vfio_group_fd = mp_rep->fds[0];
-		} else if (p->result == SOCKET_NO_FD) {
-			DPAA2_BUS_ERR("Bad VFIO group fd");
-			vfio_group_fd = 0;
-		}
-	}
-
-	free(mp_reply.msgs);
-	if (vfio_group_fd < 0) {
-		DPAA2_BUS_ERR("Cannot request group fd(%d)",
-			vfio_group_fd);
-	}
-	return vfio_group_fd;
-}
-
-static int
-fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
-		int *vfio_dev_fd, struct vfio_device_info *device_info)
+fslmc_vfio_setup_device(const char *dev_addr,
+	int *vfio_dev_fd, struct vfio_device_info *device_info)
 {
 	struct vfio_group_status group_status = {
 			.argsz = sizeof(group_status)
 	};
-	int vfio_group_fd, vfio_container_fd, iommu_group_no, ret;
+	int vfio_group_fd, ret;
+	const char *group_name = fslmc_vfio_get_group_name();
 
-	/* get group number */
-	ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_no);
-	if (ret < 0)
-		return -1;
-
-	/* get the actual group fd */
-	vfio_group_fd = vfio_group.fd;
-	if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT)
-		return -1;
-
-	/*
-	 * if vfio_group_fd == -ENOENT, that means the device
-	 * isn't managed by VFIO
-	 */
-	if (vfio_group_fd == -ENOENT) {
-		DPAA2_BUS_WARN(" %s not managed by VFIO driver, skipping",
-				dev_addr);
-		return 1;
-	}
-
-	/* Opens main vfio file descriptor which represents the "container" */
-	vfio_container_fd = rte_vfio_get_container_fd();
-	if (vfio_container_fd < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return -errno;
-	}
-
-	/* check if the group is viable */
-	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status);
-	if (ret) {
-		DPAA2_BUS_ERR("  %s cannot get group status, "
-				"error %i (%s)", dev_addr,
-				errno, strerror(errno));
-		close(vfio_group_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
-	} else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
-		DPAA2_BUS_ERR("  %s VFIO group is not viable!", dev_addr);
-		close(vfio_group_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
-	}
-	/* At this point, we know that this group is viable (meaning,
-	 * all devices are either bound to VFIO or not bound to anything)
-	 */
-
-	/* check if group does not have a container yet */
-	if (!(group_status.flags & VFIO_GROUP_FLAGS_CONTAINER_SET)) {
-
-		/* add group to a container */
-		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
-				&vfio_container_fd);
-		if (ret) {
-			DPAA2_BUS_ERR("  %s cannot add VFIO group to container, "
-					"error %i (%s)", dev_addr,
-					errno, strerror(errno));
-			close(vfio_group_fd);
-			close(vfio_container_fd);
-			rte_vfio_clear_group(vfio_group_fd);
-			return -1;
-		}
-
-		/*
-		 * set an IOMMU type for container
-		 *
-		 */
-		if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION,
-			  fslmc_iommu_type)) {
-			ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU,
-				    fslmc_iommu_type);
-			if (ret) {
-				DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-				close(vfio_group_fd);
-				close(vfio_container_fd);
-				return -errno;
-			}
-		} else {
-			DPAA2_BUS_ERR("No supported IOMMU available");
-			close(vfio_group_fd);
-			close(vfio_container_fd);
-			return -EINVAL;
-		}
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (!fslmc_vfio_container_connected(vfio_group_fd)) {
+		DPAA2_BUS_ERR("Container is not connected");
+		return -EIO;
 	}
 
 	/* get a file descriptor for the device */
@@ -594,26 +908,21 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		 * the VFIO group or the container not having IOMMU configured.
 		 */
 
-		DPAA2_BUS_WARN("Getting a vfio_dev_fd for %s failed", dev_addr);
-		close(vfio_group_fd);
-		close(vfio_container_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
+		DPAA2_BUS_ERR("Getting a vfio_dev_fd for %s from %s failed",
+			dev_addr, group_name);
+		return -EIO;
 	}
 
 	/* test and setup the device */
 	ret = ioctl(*vfio_dev_fd, VFIO_DEVICE_GET_INFO, device_info);
 	if (ret) {
-		DPAA2_BUS_ERR("  %s cannot get device info, error %i (%s)",
-				dev_addr, errno, strerror(errno));
-		close(*vfio_dev_fd);
-		close(vfio_group_fd);
-		close(vfio_container_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
+		DPAA2_BUS_ERR("%s cannot get device info err(%d)(%s)",
+			dev_addr, errno, strerror(errno));
+		return ret;
 	}
 
-	return 0;
+	return fslmc_vfio_group_add_dev(vfio_group_fd, *vfio_dev_fd,
+			dev_addr);
 }
 
 static intptr_t vfio_map_mcp_obj(const char *mcp_obj)
@@ -625,8 +934,7 @@ static intptr_t vfio_map_mcp_obj(const char *mcp_obj)
 	struct vfio_device_info d_info = { .argsz = sizeof(d_info) };
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info) };
 
-	fslmc_vfio_setup_device(SYSFS_FSL_MC_DEVICES, mcp_obj,
-			&mc_fd, &d_info);
+	fslmc_vfio_setup_device(mcp_obj, &mc_fd, &d_info);
 
 	/* getting device region info*/
 	ret = ioctl(mc_fd, VFIO_DEVICE_GET_REGION_INFO, &reg_info);
@@ -757,7 +1065,8 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 }
 
 static void
-fslmc_close_iodevices(struct rte_dpaa2_device *dev)
+fslmc_close_iodevices(struct rte_dpaa2_device *dev,
+	int vfio_fd)
 {
 	struct rte_dpaa2_object *object = NULL;
 	struct rte_dpaa2_driver *drv;
@@ -800,6 +1109,11 @@ fslmc_close_iodevices(struct rte_dpaa2_device *dev)
 		break;
 	}
 
+	ret = fslmc_vfio_group_remove_dev(vfio_fd, dev->device.name);
+	if (ret) {
+		DPAA2_BUS_ERR("Failed to remove %s from vfio",
+			dev->device.name);
+	}
 	DPAA2_BUS_LOG(DEBUG, "Device (%s) Closed",
 		      dev->device.name);
 }
@@ -811,17 +1125,21 @@ fslmc_close_iodevices(struct rte_dpaa2_device *dev)
 static int
 fslmc_process_iodevices(struct rte_dpaa2_device *dev)
 {
-	int dev_fd;
+	int dev_fd, ret;
 	struct vfio_device_info device_info = { .argsz = sizeof(device_info) };
 	struct rte_dpaa2_object *object = NULL;
 
-	fslmc_vfio_setup_device(SYSFS_FSL_MC_DEVICES, dev->device.name,
-			&dev_fd, &device_info);
+	ret = fslmc_vfio_setup_device(dev->device.name, &dev_fd,
+			&device_info);
+	if (ret)
+		return ret;
 
 	switch (dev->dev_type) {
 	case DPAA2_ETH:
-		rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
-					  device_info.num_irqs);
+		ret = rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
+				device_info.num_irqs);
+		if (ret)
+			return ret;
 		break;
 	case DPAA2_CON:
 	case DPAA2_IO:
@@ -913,6 +1231,10 @@ int
 fslmc_vfio_close_group(void)
 {
 	struct rte_dpaa2_device *dev, *dev_temp;
+	int vfio_group_fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
 		if (dev->device.devargs &&
@@ -927,7 +1249,7 @@ fslmc_vfio_close_group(void)
 		case DPAA2_CRYPTO:
 		case DPAA2_QDMA:
 		case DPAA2_IO:
-			fslmc_close_iodevices(dev);
+			fslmc_close_iodevices(dev, vfio_group_fd);
 			break;
 		case DPAA2_CON:
 		case DPAA2_CI:
@@ -936,7 +1258,7 @@ fslmc_vfio_close_group(void)
 			if (rte_eal_process_type() == RTE_PROC_SECONDARY)
 				continue;
 
-			fslmc_close_iodevices(dev);
+			fslmc_close_iodevices(dev, vfio_group_fd);
 			break;
 		case DPAA2_DPRTC:
 		default:
@@ -945,10 +1267,7 @@ fslmc_vfio_close_group(void)
 		}
 	}
 
-	if (vfio_group.fd > 0) {
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
-	}
+	fslmc_vfio_clear_group(vfio_group_fd);
 
 	return 0;
 }
@@ -1138,75 +1457,84 @@ fslmc_vfio_process_group(void)
 int
 fslmc_vfio_setup_group(void)
 {
-	int groupid;
-	int ret;
+	int vfio_group_fd, vfio_container_fd, ret;
 	struct vfio_group_status status = { .argsz = sizeof(status) };
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	/* MC VFIO setup entry */
+	vfio_container_fd = fslmc_vfio_container_fd();
+	if (vfio_container_fd <= 0) {
+		vfio_container_fd = fslmc_vfio_open_container_fd();
+		if (vfio_container_fd <= 0) {
+			DPAA2_BUS_ERR("Failed to create MC VFIO container");
+			return -rte_errno;
+		}
+	}
 
-	/* if already done once */
-	if (container_device_fd)
-		return 0;
-
-	ret = fslmc_get_container_group(&groupid);
-	if (ret)
-		return ret;
-
-	/* In case this group was already opened, continue without any
-	 * processing.
-	 */
-	if (vfio_group.groupid == groupid) {
-		DPAA2_BUS_ERR("groupid already exists %d", groupid);
-		return 0;
+	if (!group_name) {
+		DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
+		return -EINVAL;
 	}
 
-	/* Get the actual group fd */
-	ret = fslmc_vfio_open_group_fd(groupid);
-	if (ret <= 0)
-		return ret;
-	vfio_group.fd = ret;
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd <= 0) {
+		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
+		if (vfio_group_fd <= 0) {
+			DPAA2_BUS_ERR("Failed to create MC VFIO group");
+			return -rte_errno;
+		}
+	}
 
 	/* Check group viability */
-	ret = ioctl(vfio_group.fd, VFIO_GROUP_GET_STATUS, &status);
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &status);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO error getting group status");
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		DPAA2_BUS_ERR("VFIO(%s:fd=%d) error getting group status(%d)",
+			group_name, vfio_group_fd, ret);
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return ret;
 	}
 
 	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
 		DPAA2_BUS_ERR("VFIO group not viable");
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return -EPERM;
 	}
-	/* Since Group is VIABLE, Store the groupid */
-	vfio_group.groupid = groupid;
 
 	/* check if group does not have a container yet */
 	if (!(status.flags & VFIO_GROUP_FLAGS_CONTAINER_SET)) {
 		/* Now connect this IOMMU group to given container */
-		ret = vfio_connect_container();
-		if (ret) {
-			DPAA2_BUS_ERR("vfio group(%d) connect failed(%d)",
-				groupid, ret);
-			close(vfio_group.fd);
-			vfio_group.fd = 0;
-			return ret;
-		}
+		ret = vfio_connect_container(vfio_container_fd,
+			vfio_group_fd);
+	} else {
+		/* We should be in a secondary process here; the group
+		 * was already attached to the container by the primary
+		 * process.
+		 */
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+			DPAA2_BUS_WARN("Group already attached to a container?");
+		ret = fslmc_vfio_connect_container(vfio_group_fd);
+	}
+	if (ret) {
+		DPAA2_BUS_ERR("vfio group connect failed(%d)", ret);
+		fslmc_vfio_clear_group(vfio_group_fd);
+		return ret;
 	}
 
 	/* Get Device information */
-	ret = ioctl(vfio_group.fd, VFIO_GROUP_GET_DEVICE_FD, fslmc_container);
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_DEVICE_FD, group_name);
 	if (ret < 0) {
-		DPAA2_BUS_ERR("Error getting device %s fd from group %d",
-			      fslmc_container, vfio_group.groupid);
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		DPAA2_BUS_ERR("Error getting device %s fd", group_name);
+		fslmc_vfio_clear_group(vfio_group_fd);
+		return ret;
+	}
+
+	ret = fslmc_vfio_mp_sync_setup();
+	if (ret) {
+		DPAA2_BUS_ERR("VFIO MP sync setup failed!");
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return ret;
 	}
-	container_device_fd = ret;
-	DPAA2_BUS_DEBUG("VFIO Container FD is [0x%X]",
-			container_device_fd);
+
+	DPAA2_BUS_DEBUG("VFIO GROUP FD is %d", vfio_group_fd);
 
 	return 0;
 }
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index b6677bdd18..1695b6c078 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2019-2020 NXP
+ *   Copyright 2016,2019-2023 NXP
  *
  */
 
@@ -20,26 +20,28 @@
 #define DPAA2_MC_DPBP_DEVID	10
 #define DPAA2_MC_DPCI_DEVID	11
 
-typedef struct fslmc_vfio_device {
+struct fslmc_vfio_device {
+	LIST_ENTRY(fslmc_vfio_device) next;
 	int fd; /* fslmc root container device ?? */
 	int index; /*index of child object */
+	char dev_name[64];
 	struct fslmc_vfio_device *child; /* Child object */
-} fslmc_vfio_device;
+};
 
-typedef struct fslmc_vfio_group {
+struct fslmc_vfio_group {
+	LIST_ENTRY(fslmc_vfio_group) next;
 	int fd; /* /dev/vfio/"groupid" */
 	int groupid;
-	struct fslmc_vfio_container *container;
-	int object_index;
-	struct fslmc_vfio_device *vfio_device;
-} fslmc_vfio_group;
+	int connected;
+	char group_name[64]; /* dprc.x */
+	int iommu_type;
+	LIST_HEAD(, fslmc_vfio_device) vfio_devices;
+};
 
-typedef struct fslmc_vfio_container {
+struct fslmc_vfio_container {
 	int fd; /* /dev/vfio/vfio */
-	int used;
-	int index; /* index in group list */
-	struct fslmc_vfio_group *group;
-} fslmc_vfio_container;
+	LIST_HEAD(, fslmc_vfio_group) groups;
+};
 
 extern char *fslmc_container;
 
@@ -57,8 +59,11 @@ int fslmc_vfio_setup_group(void);
 int fslmc_vfio_process_group(void);
 int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
-int fslmc_get_container_group(int *gropuid);
+int fslmc_get_container_group(const char *group_name, int *groupid);
 int rte_fslmc_vfio_dmamap(void);
-int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova,
+		uint64_t size);
+int rte_fslmc_vfio_mem_dmaunmap(uint64_t iova,
+		uint64_t size);
 
 #endif /* _FSLMC_VFIO_H_ */
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index df1143733d..b49bc0a62c 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -118,6 +118,7 @@ INTERNAL {
 	rte_fslmc_get_device_count;
 	rte_fslmc_object_register;
 	rte_global_active_dqs_list;
+	rte_fslmc_vfio_mem_dmaunmap;
 
 	local: *;
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 15/43] bus/fslmc: free VFIO group FD in case of add group failure
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (13 preceding siblings ...)
  2024-10-14 12:00       ` [v3 14/43] bus/fslmc: enhance MC VFIO multiprocess support vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-14 12:00       ` [v3 16/43] bus/fslmc: dynamic IOVA mode configuration vanshika.shukla
                         ` (28 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Free vfio_group_fd if adding the group fails, to avoid a resource leak.
NXP coverity-id: 26661846
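
A minimal sketch of the rule this fix enforces (lines taken from the
hunk below): the caller owns vfio_group_fd until fslmc_vfio_add_group()
accepts it, so the failure path must close it before returning.

	ret = fslmc_vfio_add_group(vfio_group_fd, iommu_group_num,
		group_name);
	if (ret) {
		close(vfio_group_fd);	/* avoid leaking the group fd */
		return ret;
	}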

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index c4be89e3d5..19ad36f5f0 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -347,8 +347,10 @@ fslmc_vfio_open_group_fd(const char *group_name)
 	} else {
 		ret = fslmc_vfio_add_group(vfio_group_fd, iommu_group_num,
 			group_name);
-		if (ret)
+		if (ret) {
+			close(vfio_group_fd);
 			return ret;
+		}
 	}
 
 	return vfio_group_fd;
@@ -1480,6 +1482,8 @@ fslmc_vfio_setup_group(void)
 	if (vfio_group_fd <= 0) {
 		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
 		if (vfio_group_fd <= 0) {
+			if (!vfio_group_fd)
+				close(vfio_group_fd);
 			DPAA2_BUS_ERR("Failed to create MC VFIO group");
 			return -rte_errno;
 		}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 16/43] bus/fslmc: dynamic IOVA mode configuration
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (14 preceding siblings ...)
  2024-10-14 12:00       ` [v3 15/43] bus/fslmc: free VFIO group FD in case of add group failure vanshika.shukla
@ 2024-10-14 12:00       ` vanshika.shukla
  2024-10-15  2:31         ` Stephen Hemminger
  2024-10-14 12:01       ` [v3 17/43] bus/fslmc: remove VFIO IRQ mapping vanshika.shukla
                         ` (27 subsequent siblings)
  43 siblings, 1 reply; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:00 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh
  Cc: Jun Yang, Vanshika Shukla

From: Jun Yang <jun.yang@nxp.com>

IOVA mode should not be configured with CFLAGS because
1) The user can pass "--iova-mode" to configure IOVA.
2) IOVA mode is determined by negotiation between multiple devices.
   EAL is in VA mode only when all devices support VA mode.

Hence:
1) Remove the RTE_LIBRTE_DPAA2_USE_PHYS_IOVA cflag.
   Instead, use the rte_eal_iova_mode API to identify VA or PA mode.
2) Support memory IOMMU mapping and I/O IOMMU mapping (PCI space).
3) For memory IOMMU, in VA mode, IOVA:VA = 1:1;
   in PA mode, IOVA:VA = PA:VA. The mapping policy is determined by
   the EAL memory driver.
4) For I/O IOMMU, IOVA:VA is up to the I/O driver configuration.
   In general, it is aligned with the memory IOMMU mapping.
5) Memory and I/O IOVA tables are created and updated when DMA
   mappings are set up, replacing the dpaax IOVA table (a usage
   sketch follows this list).
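
A minimal sketch of the resulting translation path (helper names are
those added by this patch; handling of the RTE_BAD_IOVA result is an
assumption):

	#include <rte_eal.h>           /* rte_eal_iova_mode() */
	#include <bus_fslmc_driver.h>  /* rte_fslmc_mem_vaddr_to_iova() */

	static uint64_t
	example_buf_iova(void *vaddr)
	{
		/* VA mode: IOVA and VA are identical, no lookup needed */
		if (rte_eal_iova_mode() == RTE_IOVA_VA)
			return (uint64_t)vaddr;

		/* PA mode: consult the per-segment IOVA table */
		return rte_fslmc_mem_vaddr_to_iova(vaddr);
	}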

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     |  29 +-
 drivers/bus/fslmc/fslmc_bus.c            |  33 +-
 drivers/bus/fslmc/fslmc_vfio.c           | 668 ++++++++++++++++++-----
 drivers/bus/fslmc/fslmc_vfio.h           |   4 +
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c |   3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c |  10 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h |   3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h  | 111 ++--
 drivers/bus/fslmc/version.map            |   7 +-
 drivers/dma/dpaa2/dpaa2_qdma.c           |   1 +
 10 files changed, 616 insertions(+), 253 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index a3428fe28b..c6bb5d17aa 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -33,9 +33,6 @@
 
 #include <fslmc_vfio.h>
 
-#include "portal/dpaa2_hw_pvt.h"
-#include "portal/dpaa2_hw_dpio.h"
-
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -149,6 +146,32 @@ struct rte_dpaa2_driver {
 	rte_dpaa2_remove_t remove;
 };
 
+int
+rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+__rte_internal
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size);
+__rte_internal
+uint64_t
+rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
+	uint64_t size);
+__rte_internal
+void *
+rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
+	uint64_t size);
+__rte_internal
+__hot uint64_t
+rte_fslmc_mem_vaddr_to_iova(void *vaddr);
+__rte_internal
+__hot void *
+rte_fslmc_mem_iova_to_vaddr(uint64_t iova);
+__rte_internal
+uint64_t
+rte_fslmc_io_vaddr_to_iova(void *vaddr);
+__rte_internal
+void *
+rte_fslmc_io_iova_to_vaddr(uint64_t iova);
+
 /**
  * Register a DPAA2 driver.
  *
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index a966df1598..107cc70833 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -27,7 +27,6 @@
 #define FSLMC_BUS_NAME	fslmc
 
 struct rte_fslmc_bus rte_fslmc_bus;
-uint8_t dpaa2_virt_mode;
 
 #define DPAA2_SEQN_DYNFIELD_NAME "dpaa2_seqn_dynfield"
 int dpaa2_seqn_dynfield_offset = -1;
@@ -457,22 +456,6 @@ rte_fslmc_probe(void)
 
 	probe_all = rte_fslmc_bus.bus.conf.scan_mode != RTE_BUS_SCAN_ALLOWLIST;
 
-	/* In case of PA, the FD addresses returned by qbman APIs are physical
-	 * addresses, which need conversion into equivalent VA address for
-	 * rte_mbuf. For that, a table (a serial array, in memory) is used to
-	 * increase translation efficiency.
-	 * This has to be done before probe as some device initialization
-	 * (during) probe allocate memory (dpaa2_sec) which needs to be pinned
-	 * to this table.
-	 *
-	 * Error is ignored as relevant logs are handled within dpaax and
-	 * handling for unavailable dpaax table too is transparent to caller.
-	 *
-	 * And, the IOVA table is only applicable in case of PA mode.
-	 */
-	if (rte_eal_iova_mode() == RTE_IOVA_PA)
-		dpaax_iova_table_populate();
-
 	TAILQ_FOREACH(dev, &rte_fslmc_bus.device_list, next) {
 		TAILQ_FOREACH(drv, &rte_fslmc_bus.driver_list, next) {
 			ret = rte_fslmc_match(drv, dev);
@@ -507,9 +490,6 @@ rte_fslmc_probe(void)
 		}
 	}
 
-	if (rte_eal_iova_mode() == RTE_IOVA_VA)
-		dpaa2_virt_mode = 1;
-
 	return 0;
 }
 
@@ -558,12 +538,6 @@ rte_fslmc_driver_register(struct rte_dpaa2_driver *driver)
 void
 rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver)
 {
-	/* Cleanup the PA->VA Translation table; From wherever this function
-	 * is called from.
-	 */
-	if (rte_eal_iova_mode() == RTE_IOVA_PA)
-		dpaax_iova_table_depopulate();
-
 	TAILQ_REMOVE(&rte_fslmc_bus.driver_list, driver, next);
 }
 
@@ -599,13 +573,12 @@ rte_dpaa2_get_iommu_class(void)
 	bool is_vfio_noiommu_enabled = 1;
 	bool has_iova_va;
 
+	if (rte_eal_iova_mode() == RTE_IOVA_PA)
+		return RTE_IOVA_PA;
+
 	if (TAILQ_EMPTY(&rte_fslmc_bus.device_list))
 		return RTE_IOVA_DC;
 
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-	return RTE_IOVA_PA;
-#endif
-
 	/* check if all devices on the bus support Virtual addressing or not */
 	has_iova_va = fslmc_all_device_support_iova();
 
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 19ad36f5f0..4437028470 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -19,6 +19,7 @@
 #include <libgen.h>
 #include <dirent.h>
 #include <sys/eventfd.h>
+#include <ctype.h>
 
 #include <eal_filesystem.h>
 #include <rte_mbuf.h>
@@ -49,9 +50,41 @@
  */
 static struct fslmc_vfio_container s_vfio_container;
 /* Currently we only support single group/process. */
-const char *fslmc_group; /* dprc.x*/
+static const char *fslmc_group; /* dprc.x*/
 static uint32_t *msi_intr_vaddr;
-void *(*rte_mcp_ptr_list);
+static void *(*rte_mcp_ptr_list);
+
+struct fslmc_dmaseg {
+	uint64_t vaddr;
+	uint64_t iova;
+	uint64_t size;
+
+	TAILQ_ENTRY(fslmc_dmaseg) next;
+};
+
+TAILQ_HEAD(fslmc_dmaseg_list, fslmc_dmaseg);
+
+struct fslmc_dmaseg_list fslmc_memsegs =
+		TAILQ_HEAD_INITIALIZER(fslmc_memsegs);
+struct fslmc_dmaseg_list fslmc_iosegs =
+		TAILQ_HEAD_INITIALIZER(fslmc_iosegs);
+
+static uint64_t fslmc_mem_va2iova = RTE_BAD_IOVA;
+static int fslmc_mem_map_num;
+
+struct fslmc_mem_param {
+	struct vfio_mp_param mp_param;
+	struct fslmc_dmaseg_list memsegs;
+	struct fslmc_dmaseg_list iosegs;
+	uint64_t mem_va2iova;
+	int mem_map_num;
+};
+
+enum {
+	FSLMC_VFIO_SOCKET_REQ_CONTAINER = 0x100,
+	FSLMC_VFIO_SOCKET_REQ_GROUP,
+	FSLMC_VFIO_SOCKET_REQ_MEM
+};
 
 void *
 dpaa2_get_mcp_ptr(int portal_idx)
@@ -65,6 +98,64 @@ dpaa2_get_mcp_ptr(int portal_idx)
 static struct rte_dpaa2_object_list dpaa2_obj_list =
 	TAILQ_HEAD_INITIALIZER(dpaa2_obj_list);
 
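+/* Fallback VA->PA lookup for I/O (non-heap) mappings: scan
+ * /proc/self/maps and treat the mapping's file-offset field as the
+ * physical address (an assumption that holds for the device mappings
+ * used here).
+ */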
+static uint64_t
+fslmc_io_virt2phy(const void *virtaddr)
+{
+	FILE *fp = fopen("/proc/self/maps", "r");
+	char *line = NULL;
+	size_t linesz;
+	uint64_t start, end, phy;
+	const uint64_t va = (const uint64_t)virtaddr;
+	char tmp[1024];
+	int ret;
+
+	if (!fp)
+		return RTE_BAD_IOVA;
+	while (getdelim(&line, &linesz, '\n', fp) > 0) {
+		char *ptr = line;
+		int n;
+
+		/** Parse virtual address range. */
+		n = 0;
+		while (*ptr && !isspace(*ptr)) {
+			tmp[n] = *ptr;
+			ptr++;
+			n++;
+		}
+		tmp[n] = 0;
+		ret = sscanf(tmp, "%" SCNx64 "-%" SCNx64, &start, &end);
+		if (ret != 2)
+			continue;
+		if (va < start || va >= end)
+			continue;
+
+		/** This virtual address is in this segment.*/
+		while (*ptr == ' ' || *ptr == 'r' ||
+			*ptr == 'w' || *ptr == 's' ||
+			*ptr == 'p' || *ptr == 'x' ||
+			*ptr == '-')
+			ptr++;
+
+		/** Extract phy address */
+		n = 0;
+		while (*ptr && !isspace(*ptr)) {
+			tmp[n] = *ptr;
+			ptr++;
+			n++;
+		}
+		tmp[n] = 0;
+		phy = strtoul(tmp, 0, 16);
+		if (!phy)
+			continue;
+
+		fclose(fp);
+		return phy + va - start;
+	}
+
+	fclose(fp);
+	return RTE_BAD_IOVA;
+}
+
 /*register a fslmc bus based dpaa2 driver */
 void
 rte_fslmc_object_register(struct rte_dpaa2_object *object)
@@ -271,7 +362,7 @@ fslmc_get_group_id(const char *group_name,
 	ret = rte_vfio_get_group_num(SYSFS_FSL_MC_DEVICES,
 			group_name, groupid);
 	if (ret <= 0) {
-		DPAA2_BUS_ERR("Unable to find %s IOMMU group", group_name);
+		DPAA2_BUS_ERR("Find %s IOMMU group", group_name);
 		if (ret < 0)
 			return ret;
 
@@ -314,7 +405,7 @@ fslmc_vfio_open_group_fd(const char *group_name)
 	/* if we're in a secondary process, request group fd from the primary
 	 * process via mp channel.
 	 */
-	p->req = SOCKET_REQ_GROUP;
+	p->req = FSLMC_VFIO_SOCKET_REQ_GROUP;
 	p->group_num = iommu_group_num;
 	strcpy(mp_req.name, FSLMC_VFIO_MP);
 	mp_req.len_param = sizeof(*p);
@@ -408,7 +499,7 @@ fslmc_vfio_open_container_fd(void)
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR);
 		if (vfio_container_fd < 0) {
-			DPAA2_BUS_ERR("Cannot open VFIO container(%s), err(%d)",
+			DPAA2_BUS_ERR("Open VFIO container(%s), err(%d)",
 				VFIO_CONTAINER_PATH, vfio_container_fd);
 			ret = vfio_container_fd;
 			goto err_exit;
@@ -417,7 +508,7 @@ fslmc_vfio_open_container_fd(void)
 		/* check VFIO API version */
 		ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION);
 		if (ret < 0) {
-			DPAA2_BUS_ERR("Could not get VFIO API version(%d)",
+			DPAA2_BUS_ERR("Get VFIO API version(%d)",
 				ret);
 		} else if (ret != VFIO_API_VERSION) {
 			DPAA2_BUS_ERR("Unsupported VFIO API version(%d)",
@@ -431,7 +522,7 @@ fslmc_vfio_open_container_fd(void)
 
 		ret = fslmc_vfio_check_extensions(vfio_container_fd);
 		if (ret) {
-			DPAA2_BUS_ERR("No supported IOMMU extensions found(%d)",
+			DPAA2_BUS_ERR("Unsupported IOMMU extensions found(%d)",
 				ret);
 			close(vfio_container_fd);
 			goto err_exit;
@@ -443,7 +534,7 @@ fslmc_vfio_open_container_fd(void)
 	 * if we're in a secondary process, request container fd from the
 	 * primary process via mp channel
 	 */
-	p->req = SOCKET_REQ_CONTAINER;
+	p->req = FSLMC_VFIO_SOCKET_REQ_CONTAINER;
 	strcpy(mp_req.name, FSLMC_VFIO_MP);
 	mp_req.len_param = sizeof(*p);
 	mp_req.num_fds = 0;
@@ -473,7 +564,7 @@ fslmc_vfio_open_container_fd(void)
 err_exit:
 	if (mp_reply.msgs)
 		free(mp_reply.msgs);
-	DPAA2_BUS_ERR("Cannot request container fd err(%d)", ret);
+	DPAA2_BUS_ERR("Open container fd err(%d)", ret);
 	return ret;
 }
 
@@ -506,17 +597,19 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 	struct rte_mp_msg reply;
 	struct vfio_mp_param *r = (void *)reply.param;
 	const struct vfio_mp_param *m = (const void *)msg->param;
+	struct fslmc_mem_param *map;
 
 	if (msg->len_param != sizeof(*m)) {
-		DPAA2_BUS_ERR("fslmc vfio received invalid message!");
+		DPAA2_BUS_ERR("Invalid msg size(%d) for req(%d)",
+			msg->len_param, m->req);
 		return -EINVAL;
 	}
 
 	memset(&reply, 0, sizeof(reply));
 
 	switch (m->req) {
-	case SOCKET_REQ_GROUP:
-		r->req = SOCKET_REQ_GROUP;
+	case FSLMC_VFIO_SOCKET_REQ_GROUP:
+		r->req = FSLMC_VFIO_SOCKET_REQ_GROUP;
 		r->group_num = m->group_num;
 		fd = fslmc_vfio_group_fd_by_id(m->group_num);
 		if (fd < 0) {
@@ -530,9 +623,10 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 			reply.num_fds = 1;
 			reply.fds[0] = fd;
 		}
+		reply.len_param = sizeof(*r);
 		break;
-	case SOCKET_REQ_CONTAINER:
-		r->req = SOCKET_REQ_CONTAINER;
+	case FSLMC_VFIO_SOCKET_REQ_CONTAINER:
+		r->req = FSLMC_VFIO_SOCKET_REQ_CONTAINER;
 		fd = fslmc_vfio_container_fd();
 		if (fd <= 0) {
 			r->result = SOCKET_ERR;
@@ -541,20 +635,73 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 			reply.num_fds = 1;
 			reply.fds[0] = fd;
 		}
+		reply.len_param = sizeof(*r);
+		break;
+	case FSLMC_VFIO_SOCKET_REQ_MEM:
+		map = (void *)reply.param;
+		r = &map->mp_param;
+		r->req = FSLMC_VFIO_SOCKET_REQ_MEM;
+		r->result = SOCKET_OK;
+		rte_memcpy(&map->memsegs, &fslmc_memsegs,
+			sizeof(struct fslmc_dmaseg_list));
+		rte_memcpy(&map->iosegs, &fslmc_iosegs,
+			sizeof(struct fslmc_dmaseg_list));
+		map->mem_va2iova = fslmc_mem_va2iova;
+		map->mem_map_num = fslmc_mem_map_num;
+		reply.len_param = sizeof(struct fslmc_mem_param);
 		break;
 	default:
-		DPAA2_BUS_ERR("fslmc vfio received invalid message(%08x)",
+		DPAA2_BUS_ERR("VFIO received invalid message(%08x)",
 			m->req);
 		return -ENOTSUP;
 	}
 
 	strcpy(reply.name, FSLMC_VFIO_MP);
-	reply.len_param = sizeof(*r);
 	ret = rte_mp_reply(&reply, peer);
 
 	return ret;
 }
 
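+/* Secondary process: pull the primary's DMA segment lists and cached
+ * VA->IOVA offset over the multiprocess channel.
+ */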
+static int
+fslmc_vfio_mp_sync_mem_req(void)
+{
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	int ret = 0;
+	struct vfio_mp_param *mp_param;
+	struct fslmc_mem_param *mem_rsp;
+
+	mp_param = (void *)mp_req.param;
+	memset(&mp_req, 0, sizeof(struct rte_mp_msg));
+	mp_param->req = FSLMC_VFIO_SOCKET_REQ_MEM;
+	strcpy(mp_req.name, FSLMC_VFIO_MP);
+	mp_req.len_param = sizeof(struct vfio_mp_param);
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+		mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		mem_rsp = (struct fslmc_mem_param *)mp_rep->param;
+		if (mem_rsp->mp_param.result == SOCKET_OK) {
+			rte_memcpy(&fslmc_memsegs,
+				&mem_rsp->memsegs,
+				sizeof(struct fslmc_dmaseg_list));
+			rte_memcpy(&fslmc_iosegs,
+				&mem_rsp->iosegs,
+				sizeof(struct fslmc_dmaseg_list));
+			fslmc_mem_va2iova = mem_rsp->mem_va2iova;
+			fslmc_mem_map_num = mem_rsp->mem_map_num;
+		} else {
+			DPAA2_BUS_ERR("Bad MEM SEG");
+			ret = -EINVAL;
+		}
+	} else {
+		ret = -EINVAL;
+	}
+	free(mp_reply.msgs);
+
+	return ret;
+}
+
 static int
 fslmc_vfio_mp_sync_setup(void)
 {
@@ -565,6 +712,10 @@ fslmc_vfio_mp_sync_setup(void)
 			fslmc_vfio_mp_primary);
 		if (ret && rte_errno != ENOTSUP)
 			return ret;
+	} else {
+		ret = fslmc_vfio_mp_sync_mem_req();
+		if (ret)
+			return ret;
 	}
 
 	return 0;
@@ -585,30 +736,34 @@ vfio_connect_container(int vfio_container_fd,
 
 	iommu_type = fslmc_vfio_iommu_type(vfio_group_fd);
 	if (iommu_type < 0) {
-		DPAA2_BUS_ERR("Failed to get iommu type(%d)",
-			iommu_type);
+		DPAA2_BUS_ERR("Get iommu type(%d)", iommu_type);
 
 		return iommu_type;
 	}
 
 	/* Check whether support for SMMU type IOMMU present or not */
-	if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type)) {
-		/* Connect group to container */
-		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
+	ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type);
+	if (ret <= 0) {
+		DPAA2_BUS_ERR("Unsupported IOMMU type(%d) ret(%d), err(%d)",
+			iommu_type, ret, -errno);
+		return -EINVAL;
+	}
+
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
 			&vfio_container_fd);
-		if (ret) {
-			DPAA2_BUS_ERR("Failed to setup group container");
-			return -errno;
-		}
+	if (ret) {
+		DPAA2_BUS_ERR("Set group container ret(%d), err(%d)",
+			ret, -errno);
 
-		ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
-		if (ret) {
-			DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-			return -errno;
-		}
-	} else {
-		DPAA2_BUS_ERR("No supported IOMMU available");
-		return -EINVAL;
+		return ret;
+	}
+
+	ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
+	if (ret) {
+		DPAA2_BUS_ERR("Set iommu ret(%d), err(%d)",
+			ret, -errno);
+
+		return ret;
 	}
 
 	return fslmc_vfio_connect_container(vfio_group_fd);
@@ -629,11 +784,11 @@ static int vfio_map_irq_region(void)
 
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected");
@@ -643,8 +798,8 @@ static int vfio_map_irq_region(void)
 	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
 		PROT_READ, MAP_SHARED, fd, 0x6030000);
 	if (vaddr == MAP_FAILED) {
-		DPAA2_BUS_INFO("Unable to map region (errno = %d)", errno);
-		return -errno;
+		DPAA2_BUS_ERR("Unable to map region (errno = %d)", errno);
+		return -ENOMEM;
 	}
 
 	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
@@ -654,141 +809,200 @@ static int vfio_map_irq_region(void)
 		return 0;
 
 	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
-	return -errno;
-}
-
-static int fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
-static int fslmc_unmap_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
-
-static void
-fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
-	size_t len, void *arg __rte_unused)
-{
-	struct rte_memseg_list *msl;
-	struct rte_memseg *ms;
-	size_t cur_len = 0, map_len = 0;
-	uint64_t virt_addr;
-	rte_iova_t iova_addr;
-	int ret;
-
-	msl = rte_mem_virt2memseg_list(addr);
-
-	while (cur_len < len) {
-		const void *va = RTE_PTR_ADD(addr, cur_len);
-
-		ms = rte_mem_virt2memseg(va, msl);
-		iova_addr = ms->iova;
-		virt_addr = ms->addr_64;
-		map_len = ms->len;
-
-		DPAA2_BUS_DEBUG("Request for %s, va=%p, "
-				"virt_addr=0x%" PRIx64 ", "
-				"iova=0x%" PRIx64 ", map_len=%zu",
-				type == RTE_MEM_EVENT_ALLOC ?
-					"alloc" : "dealloc",
-				va, virt_addr, iova_addr, map_len);
-
-		/* iova_addr may be set to RTE_BAD_IOVA */
-		if (iova_addr == RTE_BAD_IOVA) {
-			DPAA2_BUS_DEBUG("Segment has invalid iova, skipping");
-			cur_len += map_len;
-			continue;
-		}
-
-		if (type == RTE_MEM_EVENT_ALLOC)
-			ret = fslmc_map_dma(virt_addr, iova_addr, map_len);
-		else
-			ret = fslmc_unmap_dma(virt_addr, iova_addr, map_len);
-
-		if (ret != 0) {
-			DPAA2_BUS_ERR("DMA Mapping/Unmapping failed. "
-					"Map=%d, addr=%p, len=%zu, err:(%d)",
-					type, va, map_len, ret);
-			return;
-		}
-
-		cur_len += map_len;
-	}
-
-	if (type == RTE_MEM_EVENT_ALLOC)
-		DPAA2_BUS_DEBUG("Total Mapped: addr=%p, len=%zu",
-				addr, len);
-	else
-		DPAA2_BUS_DEBUG("Total Unmapped: addr=%p, len=%zu",
-				addr, len);
+	return ret;
 }
 
 static int
-fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr,
-	size_t len)
+fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len)
 {
 	struct vfio_iommu_type1_dma_map dma_map = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
 		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
 	};
-	int ret, fd;
+	int ret, fd, is_io = 0;
 	const char *group_name = fslmc_vfio_get_group_name();
+	struct fslmc_dmaseg *dmaseg = NULL;
+	uint64_t phy = 0;
+
+	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
+		if (vaddr != iovaddr) {
+			DPAA2_BUS_ERR("IOVA:VA(%" PRIx64 " : %" PRIx64 ") %s",
+				iovaddr, vaddr,
+				"should be 1:1 for VA mode");
+
+			return -EINVAL;
+		}
+	}
 
+	phy = rte_mem_virt2phy((const void *)(uintptr_t)vaddr);
+	if (phy == RTE_BAD_IOVA) {
+		phy = fslmc_io_virt2phy((const void *)(uintptr_t)vaddr);
+		if (phy == RTE_BAD_IOVA)
+			return -ENOMEM;
+		is_io = 1;
+	} else if (fslmc_mem_va2iova != RTE_BAD_IOVA &&
+		fslmc_mem_va2iova != (iovaddr - vaddr)) {
+		DPAA2_BUS_WARN("Multiple MEM PA<->VA conversions.");
+	}
+	DPAA2_BUS_DEBUG("%s(%zu): VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+		is_io ? "DMA IO map size" : "DMA MEM map size",
+		len, vaddr, iovaddr, phy);
+
+	if (is_io)
+		goto io_mapping_check;
+
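+	/* A new mapping must not overlap any existing segment in either
+	 * the VA or the IOVA space.
+	 */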
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (!((vaddr + len) <= dmaseg->vaddr ||
+			(dmaseg->vaddr + dmaseg->size) <= vaddr)) {
+			DPAA2_BUS_ERR("MEM: New VA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				vaddr, vaddr + len);
+			DPAA2_BUS_ERR("MEM: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->vaddr,
+				dmaseg->vaddr + dmaseg->size);
+			return -EEXIST;
+		}
+		if (!((iovaddr + len) <= dmaseg->iova ||
+			(dmaseg->iova + dmaseg->size) <= iovaddr)) {
+			DPAA2_BUS_ERR("MEM: New IOVA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				iovaddr, iovaddr + len);
+			DPAA2_BUS_ERR("MEM: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->iova,
+				dmaseg->iova + dmaseg->size);
+			return -EEXIST;
+		}
+	}
+	goto start_mapping;
+
+io_mapping_check:
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if (!((vaddr + len) <= dmaseg->vaddr ||
+			(dmaseg->vaddr + dmaseg->size) <= vaddr)) {
+			DPAA2_BUS_ERR("IO: New VA Range (%" PRIx64 " ~ %" PRIx64 ")",
+				vaddr, vaddr + len);
+			DPAA2_BUS_ERR("IO: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->vaddr,
+				dmaseg->vaddr + dmaseg->size);
+			return -EEXIST;
+		}
+		if (!((iovaddr + len) <= dmaseg->iova ||
+			(dmaseg->iova + dmaseg->size) <= iovaddr)) {
+			DPAA2_BUS_ERR("IO: New IOVA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				iovaddr, iovaddr + len);
+			DPAA2_BUS_ERR("IO: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->iova,
+				dmaseg->iova + dmaseg->size);
+			return -EEXIST;
+		}
+	}
+
+start_mapping:
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
-		return 0;
+		if (phy != iovaddr) {
+			DPAA2_BUS_ERR("NOIOMMU mode requires IOVA == PA");
+			return -EIO;
+		}
+		goto end_mapping;
 	}
 
 	dma_map.size = len;
 	dma_map.vaddr = vaddr;
 	dma_map.iova = iovaddr;
 
-#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-	if (vaddr != iovaddr) {
-		DPAA2_BUS_WARN("vaddr(0x%lx) != iovaddr(0x%lx)",
-			vaddr, iovaddr);
-	}
-#endif
-
 	/* SET DMA MAP for IOMMU */
 	if (!fslmc_vfio_container_connected(fd)) {
-		DPAA2_BUS_ERR("Container is not connected ");
+		DPAA2_BUS_ERR("Container is not connected");
 		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("--> Map address: 0x%"PRIx64", size: %"PRIu64"",
-			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.size);
 	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA,
 		&dma_map);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO_IOMMU_MAP_DMA API(errno = %d)",
-				errno);
+		DPAA2_BUS_ERR("%s(%d) VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+			is_io ? "DMA IO map err" : "DMA MEM map err",
+			errno, vaddr, iovaddr, phy);
 		return ret;
 	}
 
+end_mapping:
+	dmaseg = malloc(sizeof(struct fslmc_dmaseg));
+	if (!dmaseg) {
+		DPAA2_BUS_ERR("DMA segment malloc failed!");
+		return -ENOMEM;
+	}
+	dmaseg->vaddr = vaddr;
+	dmaseg->iova = iovaddr;
+	dmaseg->size = len;
+	if (is_io) {
+		TAILQ_INSERT_TAIL(&fslmc_iosegs, dmaseg, next);
+	} else {
+		fslmc_mem_map_num++;
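+		/* The constant-offset fast path is valid only while a
+		 * single memory mapping exists; otherwise force the
+		 * per-segment table lookup.
+		 */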
+		if (fslmc_mem_map_num == 1)
+			fslmc_mem_va2iova = iovaddr - vaddr;
+		else
+			fslmc_mem_va2iova = RTE_BAD_IOVA;
+		TAILQ_INSERT_TAIL(&fslmc_memsegs, dmaseg, next);
+	}
+	DPAA2_BUS_LOG(NOTICE,
+		"%s(%zx): VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+		is_io ? "DMA I/O map size" : "DMA MEM map size",
+		len, vaddr, iovaddr, phy);
+
 	return 0;
 }
 
 static int
-fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
+fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr, size_t len)
 {
 	struct vfio_iommu_type1_dma_unmap dma_unmap = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_unmap),
 		.flags = 0,
 	};
-	int ret, fd;
+	int ret, fd, is_io = 0;
 	const char *group_name = fslmc_vfio_get_group_name();
+	struct fslmc_dmaseg *dmaseg = NULL;
+
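+	/* Find the segment to unmap; vaddr == 0 acts as a wildcard so
+	 * callers may unmap by IOVA and length alone.
+	 */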
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (((vaddr && dmaseg->vaddr == vaddr) || !vaddr) &&
+			dmaseg->iova == iovaddr &&
+			dmaseg->size == len) {
+			is_io = 0;
+			break;
+		}
+	}
+
+	if (!dmaseg) {
+		TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+			if (((vaddr && dmaseg->vaddr == vaddr) || !vaddr) &&
+				dmaseg->iova == iovaddr &&
+				dmaseg->size == len) {
+				is_io = 1;
+				break;
+			}
+		}
+	}
+
+	if (!dmaseg) {
+		DPAA2_BUS_ERR("IOVA(%" PRIx64 ") with length(%zx) not mapped",
+			iovaddr, len);
+		return 0;
+	}
 
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
@@ -796,7 +1010,7 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 	}
 
 	dma_unmap.size = len;
-	dma_unmap.iova = vaddr;
+	dma_unmap.iova = iovaddr;
 
 	/* SET DMA MAP for IOMMU */
 	if (!fslmc_vfio_container_connected(fd)) {
@@ -804,19 +1018,164 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("--> Unmap address: 0x%"PRIx64", size: %"PRIu64"",
-			(uint64_t)dma_unmap.iova, (uint64_t)dma_unmap.size);
 	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_UNMAP_DMA,
 		&dma_unmap);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO_IOMMU_UNMAP_DMA API(errno = %d)",
-				errno);
-		return -1;
+		DPAA2_BUS_ERR("DMA un-map IOVA(%" PRIx64 " ~ %" PRIx64 ") err(%d)",
+			iovaddr, iovaddr + len, errno);
+		return ret;
+	}
+
+	if (is_io) {
+		TAILQ_REMOVE(&fslmc_iosegs, dmaseg, next);
+	} else {
+		TAILQ_REMOVE(&fslmc_memsegs, dmaseg, next);
+		fslmc_mem_map_num--;
+		if (TAILQ_EMPTY(&fslmc_memsegs))
+			fslmc_mem_va2iova = RTE_BAD_IOVA;
 	}
 
+	free(dmaseg);
+
 	return 0;
 }
 
+uint64_t
+rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
+	uint64_t size)
+{
+	struct fslmc_dmaseg *dmaseg;
+	uint64_t va;
+
+	va = (uint64_t)vaddr;
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (va >= dmaseg->vaddr &&
+			(va + size) < (dmaseg->vaddr + dmaseg->size)) {
+			return dmaseg->iova + va - dmaseg->vaddr;
+		}
+	}
+
+	return RTE_BAD_IOVA;
+}
+
+void *
+rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
+	uint64_t size)
+{
+	struct fslmc_dmaseg *dmaseg;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (iova >= dmaseg->iova &&
+			(iova + size) < (dmaseg->iova + dmaseg->size))
+			return (void *)((uintptr_t)dmaseg->vaddr
+				+ (uintptr_t)(iova - dmaseg->iova));
+	}
+
+	return NULL;
+}
+
+__hot uint64_t
+rte_fslmc_mem_vaddr_to_iova(void *vaddr)
+{
+	if (likely(fslmc_mem_va2iova != RTE_BAD_IOVA))
+		return (uint64_t)vaddr + fslmc_mem_va2iova;
+
+	return rte_fslmc_cold_mem_vaddr_to_iova(vaddr, 0);
+}
+
+__hot void *
+rte_fslmc_mem_iova_to_vaddr(uint64_t iova)
+{
+	if (likely(fslmc_mem_va2iova != RTE_BAD_IOVA))
+		return (void *)((uintptr_t)iova - (uintptr_t)fslmc_mem_va2iova);
+
+	return rte_fslmc_cold_mem_iova_to_vaddr(iova, 0);
+}
+
+uint64_t
+rte_fslmc_io_vaddr_to_iova(void *vaddr)
+{
+	struct fslmc_dmaseg *dmaseg = NULL;
+	uint64_t va = (uint64_t)vaddr;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if ((va >= dmaseg->vaddr) &&
+			va < dmaseg->vaddr + dmaseg->size)
+			return dmaseg->iova + va - dmaseg->vaddr;
+	}
+
+	return RTE_BAD_IOVA;
+}
+
+void *
+rte_fslmc_io_iova_to_vaddr(uint64_t iova)
+{
+	struct fslmc_dmaseg *dmaseg = NULL;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if ((iova >= dmaseg->iova) &&
+			iova < dmaseg->iova + dmaseg->size)
+			return (void *)((uintptr_t)dmaseg->vaddr
+				+ (uintptr_t)(iova - dmaseg->iova));
+	}
+
+	return NULL;
+}
+
+static void
+fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
+	size_t len, void *arg __rte_unused)
+{
+	struct rte_memseg_list *msl;
+	struct rte_memseg *ms;
+	size_t cur_len = 0, map_len = 0;
+	uint64_t virt_addr;
+	rte_iova_t iova_addr;
+	int ret;
+
+	msl = rte_mem_virt2memseg_list(addr);
+
+	while (cur_len < len) {
+		const void *va = RTE_PTR_ADD(addr, cur_len);
+
+		ms = rte_mem_virt2memseg(va, msl);
+		iova_addr = ms->iova;
+		virt_addr = ms->addr_64;
+		map_len = ms->len;
+
+		DPAA2_BUS_DEBUG("%s, va=%p, virt=%" PRIx64 ", iova=%" PRIx64 ", len=%zu",
+			type == RTE_MEM_EVENT_ALLOC ? "alloc" : "dealloc",
+			va, virt_addr, iova_addr, map_len);
+
+		/* iova_addr may be set to RTE_BAD_IOVA */
+		if (iova_addr == RTE_BAD_IOVA) {
+			DPAA2_BUS_DEBUG("Segment has invalid iova, skipping");
+			cur_len += map_len;
+			continue;
+		}
+
+		if (type == RTE_MEM_EVENT_ALLOC)
+			ret = fslmc_map_dma(virt_addr, iova_addr, map_len);
+		else
+			ret = fslmc_unmap_dma(virt_addr, iova_addr, map_len);
+
+		if (ret != 0) {
+			DPAA2_BUS_ERR("%s: Map=%d, addr=%p, len=%zu, err:(%d)",
+				type == RTE_MEM_EVENT_ALLOC ?
+				"DMA Mapping failed. " :
+				"DMA Unmapping failed. ",
+				type, va, map_len, ret);
+			return;
+		}
+
+		cur_len += map_len;
+	}
+
+	DPAA2_BUS_DEBUG("Total %s: addr=%p, len=%zu",
+		type == RTE_MEM_EVENT_ALLOC ? "Mapped" : "Unmapped",
+		addr, len);
+}
+
 static int
 fslmc_dmamap_seg(const struct rte_memseg_list *msl __rte_unused,
 		const struct rte_memseg *ms, void *arg)
@@ -847,7 +1206,7 @@ rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
 int
 rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
 {
-	return fslmc_unmap_dma(iova, 0, size);
+	return fslmc_unmap_dma(0, iova, size);
 }
 
 int rte_fslmc_vfio_dmamap(void)
@@ -857,9 +1216,10 @@ int rte_fslmc_vfio_dmamap(void)
 	/* Lock before parsing and registering callback to memory subsystem */
 	rte_mcfg_mem_read_lock();
 
-	if (rte_memseg_walk(fslmc_dmamap_seg, &i) < 0) {
+	ret = rte_memseg_walk(fslmc_dmamap_seg, &i);
+	if (ret) {
 		rte_mcfg_mem_read_unlock();
-		return -1;
+		return ret;
 	}
 
 	ret = rte_mem_event_callback_register("fslmc_memevent_clb",
@@ -898,6 +1258,14 @@ fslmc_vfio_setup_device(const char *dev_addr,
 	const char *group_name = fslmc_vfio_get_group_name();
 
 	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd <= 0) {
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, vfio_group_fd);
+		if (vfio_group_fd < 0)
+			return vfio_group_fd;
+		return -EIO;
+	}
+
 	if (!fslmc_vfio_container_connected(vfio_group_fd)) {
 		DPAA2_BUS_ERR("Container is not connected");
 		return -EIO;
@@ -1006,8 +1374,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
 	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
 	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 	if (ret)
-		DPAA2_BUS_ERR(
-			"Error disabling dpaa2 interrupts for fd %d",
+		DPAA2_BUS_ERR("Error disabling dpaa2 interrupts for fd %d",
 			rte_intr_fd_get(intr_handle));
 
 	return ret;
@@ -1032,7 +1399,7 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 		if (ret < 0) {
 			DPAA2_BUS_ERR("Cannot get IRQ(%d) info, error %i (%s)",
 				      i, errno, strerror(errno));
-			return -1;
+			return ret;
 		}
 
 		/* if this vector cannot be used with eventfd,
@@ -1046,8 +1413,8 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 		fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
 		if (fd < 0) {
 			DPAA2_BUS_ERR("Cannot set up eventfd, error %i (%s)",
-				      errno, strerror(errno));
-			return -1;
+				errno, strerror(errno));
+			return fd;
 		}
 
 		if (rte_intr_fd_set(intr_handle, fd))
@@ -1063,7 +1430,7 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 	}
 
 	/* if we're here, we haven't found a suitable interrupt vector */
-	return -1;
+	return -EIO;
 }
 
 static void
@@ -1237,6 +1604,13 @@ fslmc_vfio_close_group(void)
 	const char *group_name = fslmc_vfio_get_group_name();
 
 	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd <= 0) {
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, vfio_group_fd);
+		if (vfio_group_fd < 0)
+			return vfio_group_fd;
+		return -EIO;
+	}
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
 		if (dev->device.devargs &&
@@ -1328,7 +1702,7 @@ fslmc_vfio_process_group(void)
 				ret = fslmc_process_mcp(dev);
 				if (ret) {
 					DPAA2_BUS_ERR("Unable to map MC Portal");
-					return -1;
+					return ret;
 				}
 				found_mportal = 1;
 			}
@@ -1345,7 +1719,7 @@ fslmc_vfio_process_group(void)
 	/* Cannot continue if there is not even a single mportal */
 	if (!found_mportal) {
 		DPAA2_BUS_ERR("No MC Portal device found. Not continuing");
-		return -1;
+		return -EIO;
 	}
 
 	/* Search for DPRC device next as it updates endpoint of
@@ -1357,7 +1731,7 @@ fslmc_vfio_process_group(void)
 			ret = fslmc_process_iodevices(dev);
 			if (ret) {
 				DPAA2_BUS_ERR("Unable to process dprc");
-				return -1;
+				return ret;
 			}
 			TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
 		}
@@ -1414,7 +1788,7 @@ fslmc_vfio_process_group(void)
 			if (ret) {
 				DPAA2_BUS_DEBUG("Dev (%s) init failed",
 						dev->device.name);
-				return -1;
+				return ret;
 			}
 
 			break;
@@ -1438,7 +1812,7 @@ fslmc_vfio_process_group(void)
 			if (ret) {
 				DPAA2_BUS_DEBUG("Dev (%s) init failed",
 						dev->device.name);
-				return -1;
+				return ret;
 			}
 
 			break;
@@ -1467,9 +1841,9 @@ fslmc_vfio_setup_group(void)
 	vfio_container_fd = fslmc_vfio_container_fd();
 	if (vfio_container_fd <= 0) {
 		vfio_container_fd = fslmc_vfio_open_container_fd();
-		if (vfio_container_fd <= 0) {
+		if (vfio_container_fd < 0) {
 			DPAA2_BUS_ERR("Failed to create MC VFIO container");
-			return -rte_errno;
+			return vfio_container_fd;
 		}
 	}
 
@@ -1482,6 +1856,8 @@ fslmc_vfio_setup_group(void)
 	if (vfio_group_fd <= 0) {
 		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
 		if (vfio_group_fd <= 0) {
+			DPAA2_BUS_ERR("%s: open group name(%s) failed(%d)",
+				__func__, group_name, vfio_group_fd);
 			if (!vfio_group_fd)
 				close(vfio_group_fd);
 			DPAA2_BUS_ERR("Failed to create MC VFIO group");
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 1695b6c078..408b35680d 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -11,6 +11,10 @@
 #include <rte_compat.h>
 #include <rte_vfio.h>
 
+#ifndef __hot
+#define __hot __attribute__((hot))
+#endif
+
 /* Pathname of FSL-MC devices directory. */
 #define SYSFS_FSL_MC_DEVICES	"/sys/bus/fsl-mc/devices"
 #define DPAA2_MC_DPNI_DEVID	7
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index bc36607e64..85e4c16c03 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2020 NXP
+ *   Copyright 2016,2020-2023 NXP
  *
  */
 
@@ -28,7 +28,6 @@
 #include "portal/dpaa2_hw_pvt.h"
 #include "portal/dpaa2_hw_dpio.h"
 
-
 TAILQ_HEAD(dpbp_dev_list, dpaa2_dpbp_dev);
 static struct dpbp_dev_list dpbp_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpbp_dev_list); /*!< DPBP device list */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 8265fee497..b52a8c8ba5 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -332,9 +332,8 @@ dpaa2_affine_qbman_swp(void)
 		}
 		RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
 
-		DPAA2_BUS_INFO(
-			"DPAA Portal=%p (%d) is affined to thread %" PRIu64,
-			dpio_dev, dpio_dev->index, tid);
+		DPAA2_BUS_DEBUG("Portal[%d] is affined to thread %" PRIu64,
+			dpio_dev->index, tid);
 	}
 	return 0;
 }
@@ -354,9 +353,8 @@ dpaa2_affine_qbman_ethrx_swp(void)
 		}
 		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
 
-		DPAA2_BUS_INFO(
-			"DPAA Portal=%p (%d) is affined for eth rx to thread %"
-			PRIu64, dpio_dev, dpio_dev->index, tid);
+		DPAA2_BUS_DEBUG("Portal_eth_rx[%d] is affined to thread %" PRIu64,
+			dpio_dev->index, tid);
 	}
 	return 0;
 }
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 7407f8d38d..328e1e788a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -12,6 +12,7 @@
 #include <mc/fsl_mc_sys.h>
 
 #include <rte_compat.h>
+#include <dpaa2_hw_pvt.h>
 
 struct dpaa2_io_portal_t {
 	struct dpaa2_dpio_dev *dpio_dev;
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 4c30e6db18..74a1a8b2fa 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -14,6 +14,7 @@
 
 #include <mc/fsl_mc_sys.h>
 #include <fsl_qbman_portal.h>
+#include <bus_fslmc_driver.h>
 
 #ifndef false
 #define false      0
@@ -80,6 +81,8 @@
 #define DPAA2_PACKET_LAYOUT_ALIGN	64 /*changing from 256 */
 
 #define DPAA2_DPCI_MAX_QUEUES 2
+#define DPAA2_INVALID_FLOW_ID 0xffff
+#define DPAA2_INVALID_CGID 0xff
 
 struct dpaa2_queue;
 
@@ -366,83 +369,63 @@ enum qbman_fd_format {
  */
 #define DPAA2_EQ_RESP_ALWAYS		1
 
-/* Various structures representing contiguous memory maps */
-struct dpaa2_memseg {
-	TAILQ_ENTRY(dpaa2_memseg) next;
-	char *vaddr;
-	rte_iova_t iova;
-	size_t len;
-};
-
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-extern uint8_t dpaa2_virt_mode;
-static void *dpaa2_mem_ptov(phys_addr_t paddr) __rte_unused;
-
-static void *dpaa2_mem_ptov(phys_addr_t paddr)
+static inline uint64_t
+dpaa2_mem_va_to_iova(void *va)
 {
-	void *va;
-
-	if (dpaa2_virt_mode)
-		return (void *)(size_t)paddr;
-
-	va = (void *)dpaax_iova_table_get_va(paddr);
-	if (likely(va != NULL))
-		return va;
-
-	/* If not, Fallback to full memseg list searching */
-	va = rte_mem_iova2virt(paddr);
+	if (likely(rte_eal_iova_mode() == RTE_IOVA_VA))
+		return (uint64_t)va;
 
-	return va;
+	return rte_fslmc_mem_vaddr_to_iova(va);
 }
 
-static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr) __rte_unused;
-
-static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
+static inline void *
+dpaa2_mem_iova_to_va(uint64_t iova)
 {
-	const struct rte_memseg *memseg;
-
-	if (dpaa2_virt_mode)
-		return vaddr;
+	if (likely(rte_eal_iova_mode() == RTE_IOVA_VA))
+		return (void *)(uintptr_t)iova;
 
-	memseg = rte_mem_virt2memseg((void *)(uintptr_t)vaddr, NULL);
-	if (memseg)
-		return memseg->iova + RTE_PTR_DIFF(vaddr, memseg->addr);
-	return (size_t)NULL;
+	return rte_fslmc_mem_iova_to_vaddr(iova);
 }
 
-/**
- * When we are using Physical addresses as IO Virtual Addresses,
- * Need to call conversion routines dpaa2_mem_vtop & dpaa2_mem_ptov
- * wherever required.
- * These routines are called with help of below MACRO's
- */
-
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_iova)
-
-/**
- * macro to convert Virtual address to IOVA
- */
-#define DPAA2_VADDR_TO_IOVA(_vaddr) dpaa2_mem_vtop((size_t)(_vaddr))
-
-/**
- * macro to convert IOVA to Virtual address
- */
-#define DPAA2_IOVA_TO_VADDR(_iova) dpaa2_mem_ptov((size_t)(_iova))
-
-/**
- * macro to convert modify the memory containing IOVA to Virtual address
- */
+#define DPAA2_VADDR_TO_IOVA(_vaddr) \
+	dpaa2_mem_va_to_iova((void *)(uintptr_t)_vaddr)
+#define DPAA2_IOVA_TO_VADDR(_iova) \
+	dpaa2_mem_iova_to_va((uint64_t)_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type) \
-	{_mem = (_type)(dpaa2_mem_ptov((size_t)(_mem))); }
+	{_mem = (_type)DPAA2_IOVA_TO_VADDR(_mem); }
+
+#define DPAA2_VAMODE_VADDR_TO_IOVA(_vaddr) ((uint64_t)_vaddr)
+#define DPAA2_VAMODE_IOVA_TO_VADDR(_iova) ((void *)_iova)
+#define DPAA2_VAMODE_MODIFY_IOVA_TO_VADDR(_mem, _type) \
+	{_mem = (_type)(_mem); }
+
+#define DPAA2_PAMODE_VADDR_TO_IOVA(_vaddr) \
+	rte_fslmc_mem_vaddr_to_iova((void *)_vaddr)
+#define DPAA2_PAMODE_IOVA_TO_VADDR(_iova) \
+	rte_fslmc_mem_iova_to_vaddr((uint64_t)_iova)
+#define DPAA2_PAMODE_MODIFY_IOVA_TO_VADDR(_mem, _type) \
+	{_mem = (_type)rte_fslmc_mem_iova_to_vaddr(_mem); }
+
+static inline uint64_t
+dpaa2_mem_va_to_iova_check(void *va, uint64_t size)
+{
+	uint64_t iova = rte_fslmc_cold_mem_vaddr_to_iova(va, size);
 
-#else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
+	if (iova == RTE_BAD_IOVA)
+		return RTE_BAD_IOVA;
 
-#define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
-#define DPAA2_VADDR_TO_IOVA(_vaddr) (phys_addr_t)(_vaddr)
-#define DPAA2_IOVA_TO_VADDR(_iova) (void *)(_iova)
-#define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
+	/** Double check the iova is valid.*/
+	if (iova != rte_mem_virt2iova(va))
+		return RTE_BAD_IOVA;
+
+	return iova;
+}
 
-#endif /* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
+#define DPAA2_VADDR_TO_IOVA_AND_CHECK(_vaddr, size) \
+	dpaa2_mem_va_to_iova_check(_vaddr, size)
+#define DPAA2_IOVA_TO_VADDR_AND_CHECK(_iova, size) \
+	rte_fslmc_cold_mem_iova_to_vaddr(_iova, size)
 
 static inline
 int check_swp_active_dqs(uint16_t dpio_index)
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index b49bc0a62c..2c36895285 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -24,7 +24,6 @@ INTERNAL {
 	dpaa2_seqn_dynfield_offset;
 	dpaa2_seqn;
 	dpaa2_svr_family;
-	dpaa2_virt_mode;
 	dpbp_disable;
 	dpbp_enable;
 	dpbp_get_attributes;
@@ -119,6 +118,12 @@ INTERNAL {
 	rte_fslmc_object_register;
 	rte_global_active_dqs_list;
 	rte_fslmc_vfio_mem_dmaunmap;
+	rte_fslmc_cold_mem_vaddr_to_iova;
+	rte_fslmc_cold_mem_iova_to_vaddr;
+	rte_fslmc_mem_vaddr_to_iova;
+	rte_fslmc_mem_iova_to_vaddr;
+	rte_fslmc_io_vaddr_to_iova;
+	rte_fslmc_io_iova_to_vaddr;
 
 	local: *;
 };
diff --git a/drivers/dma/dpaa2/dpaa2_qdma.c b/drivers/dma/dpaa2/dpaa2_qdma.c
index 5780e49297..b2cf074c7d 100644
--- a/drivers/dma/dpaa2/dpaa2_qdma.c
+++ b/drivers/dma/dpaa2/dpaa2_qdma.c
@@ -10,6 +10,7 @@
 
 #include <mc/fsl_dpdmai.h>
 
+#include <dpaa2_hw_dpio.h>
 #include "rte_pmd_dpaa2_qdma.h"
 #include "dpaa2_qdma.h"
 #include "dpaa2_qdma_logs.h"
-- 
2.25.1
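
As an aside on the dpaa2_hw_pvt.h hunk above: the new helpers replace the
compile-time RTE_LIBRTE_DPAA2_USE_PHYS_IOVA switch with a runtime check of
the EAL IOVA mode. A minimal usage sketch (assuming the internal driver
header is on the include path; macro names as in the patch):

    #include <stdint.h>
    #include <rte_eal.h>
    #include "portal/dpaa2_hw_pvt.h"

    /* VA -> IOVA for a hardware descriptor: a plain cast in RTE_IOVA_VA
     * mode, otherwise a lookup in the fslmc memory map.
     */
    static uint64_t
    buf_to_hw_addr(void *va)
    {
        return DPAA2_VADDR_TO_IOVA(va);
    }

    /* IOVA -> VA, e.g. when parsing a received frame descriptor. */
    static void *
    hw_addr_to_buf(uint64_t iova)
    {
        return DPAA2_IOVA_TO_VADDR(iova);
    }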


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 17/43] bus/fslmc: remove VFIO IRQ mapping
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (15 preceding siblings ...)
  2024-10-14 12:00       ` [v3 16/43] bus/fslmc: dynamic IOVA mode configuration vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 18/43] bus/fslmc: create dpaa2 device with it's object vanshika.shukla
                         ` (26 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Remove unused GITS translator VFIO mapping.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 50 ----------------------------------
 1 file changed, 50 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 4437028470..99f2bca29e 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -51,7 +51,6 @@
 static struct fslmc_vfio_container s_vfio_container;
 /* Currently we only support single group/process. */
 static const char *fslmc_group; /* dprc.x*/
-static uint32_t *msi_intr_vaddr;
 static void *(*rte_mcp_ptr_list);
 
 struct fslmc_dmaseg {
@@ -769,49 +768,6 @@ vfio_connect_container(int vfio_container_fd,
 	return fslmc_vfio_connect_container(vfio_group_fd);
 }
 
-static int vfio_map_irq_region(void)
-{
-	int ret, fd;
-	unsigned long *vaddr = NULL;
-	struct vfio_iommu_type1_dma_map map = {
-		.argsz = sizeof(map),
-		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
-		.vaddr = 0x6030000,
-		.iova = 0x6030000,
-		.size = 0x1000,
-	};
-	const char *group_name = fslmc_vfio_get_group_name();
-
-	fd = fslmc_vfio_group_fd_by_name(group_name);
-	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
-			__func__, group_name, fd);
-		if (fd < 0)
-			return fd;
-		return -EIO;
-	}
-	if (!fslmc_vfio_container_connected(fd)) {
-		DPAA2_BUS_ERR("Container is not connected");
-		return -EIO;
-	}
-
-	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
-		PROT_READ, MAP_SHARED, fd, 0x6030000);
-	if (vaddr == MAP_FAILED) {
-		DPAA2_BUS_ERR("Unable to map region (errno = %d)", errno);
-		return -ENOMEM;
-	}
-
-	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
-	map.vaddr = (unsigned long)vaddr;
-	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA, &map);
-	if (!ret)
-		return 0;
-
-	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
-	return ret;
-}
-
 static int
 fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len)
 {
@@ -1233,12 +1189,6 @@ int rte_fslmc_vfio_dmamap(void)
 
 	DPAA2_BUS_DEBUG("Total %d segments found.", i);
 
-	/* TODO - This is a W.A. as VFIO currently does not add the mapping of
-	 * the interrupt region to SMMU. This should be removed once the
-	 * support is added in the Kernel.
-	 */
-	vfio_map_irq_region();
-
 	/* Existing segments have been mapped and memory callback for hotplug
 	 * has been installed.
 	 */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 18/43] bus/fslmc: create dpaa2 device with it's object
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (16 preceding siblings ...)
  2024-10-14 12:01       ` [v3 17/43] bus/fslmc: remove VFIO IRQ mapping vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 19/43] bus/fslmc: fix coverity issue vanshika.shukla
                         ` (25 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Create the dpaa2 device with its object instead of the bare object ID.
Assign each dpaa2 object to its container.
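
A minimal sketch of an object create callback under the new signature
(hypothetical callback, following the pattern of the handlers changed
below): the object ID is now read from the device, which also carries
the container assigned by the DPRC handler.

    #include <rte_common.h>
    #include <bus_fslmc_driver.h>

    static int
    example_create_device(int vdev_fd __rte_unused,
        struct vfio_device_info *obj_info __rte_unused,
        struct rte_dpaa2_device *obj)
    {
        int object_id = obj->object_id; /* previously passed as a bare int */

        /* obj->container points to the parent DPRC device. */
        RTE_SET_USED(object_id);
        return 0;
    }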

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     | 39 ++++++++++++------------
 drivers/bus/fslmc/fslmc_vfio.c           |  3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c |  6 ++--
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c |  8 ++---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c |  6 ++--
 drivers/bus/fslmc/portal/dpaa2_hw_dprc.c |  8 +++--
 drivers/event/dpaa2/dpaa2_hw_dpcon.c     |  8 ++---
 drivers/net/dpaa2/dpaa2_mux.c            |  6 ++--
 drivers/net/dpaa2/dpaa2_ptp.c            |  8 ++---
 9 files changed, 47 insertions(+), 45 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index c6bb5d17aa..3fcdfbed1f 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -89,25 +89,6 @@ enum rte_dpaa2_dev_type {
 	DPAA2_DEVTYPE_MAX,
 };
 
-TAILQ_HEAD(rte_dpaa2_object_list, rte_dpaa2_object);
-
-typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
-				      struct vfio_device_info *obj_info,
-				      int object_id);
-
-typedef void (*rte_dpaa2_obj_close_t)(int object_id);
-
-/**
- * A structure describing a DPAA2 object.
- */
-struct rte_dpaa2_object {
-	TAILQ_ENTRY(rte_dpaa2_object) next; /**< Next in list. */
-	const char *name;                   /**< Name of Object. */
-	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
-	rte_dpaa2_obj_create_t create;
-	rte_dpaa2_obj_close_t close;
-};
-
 /**
  * A structure describing a DPAA2 device.
  */
@@ -123,6 +104,7 @@ struct rte_dpaa2_device {
 	enum rte_dpaa2_dev_type dev_type;   /**< Device Type */
 	uint16_t object_id;                 /**< DPAA2 Object ID */
 	enum rte_dpaa2_dev_type ep_dev_type;   /**< Endpoint Device Type */
+	struct dpaa2_dprc_dev *container;
 	uint16_t ep_object_id;                 /**< Endpoint DPAA2 Object ID */
 	char ep_name[RTE_DEV_NAME_MAX_LEN];
 	struct rte_intr_handle *intr_handle; /**< Interrupt handle */
@@ -130,10 +112,29 @@ struct rte_dpaa2_device {
 	char name[FSLMC_OBJECT_MAX_LEN];    /**< DPAA2 Object name*/
 };
 
+typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
+				      struct vfio_device_info *obj_info,
+				      struct rte_dpaa2_device *dev);
+
+typedef void (*rte_dpaa2_obj_close_t)(int object_id);
+
 typedef int (*rte_dpaa2_probe_t)(struct rte_dpaa2_driver *dpaa2_drv,
 				 struct rte_dpaa2_device *dpaa2_dev);
 typedef int (*rte_dpaa2_remove_t)(struct rte_dpaa2_device *dpaa2_dev);
 
+TAILQ_HEAD(rte_dpaa2_object_list, rte_dpaa2_object);
+
+/**
+ * A structure describing a DPAA2 object.
+ */
+struct rte_dpaa2_object {
+	TAILQ_ENTRY(rte_dpaa2_object) next; /**< Next in list. */
+	const char *name;                   /**< Name of Object. */
+	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
+	rte_dpaa2_obj_create_t create;
+	rte_dpaa2_obj_close_t close;
+};
+
 /**
  * A structure describing a DPAA2 driver.
  */
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 99f2bca29e..a7fe59d25b 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1469,8 +1469,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
 	case DPAA2_DPRC:
 		TAILQ_FOREACH(object, &dpaa2_obj_list, next) {
 			if (dev->dev_type == object->dev_type)
-				object->create(dev_fd, &device_info,
-					       dev->object_id);
+				object->create(dev_fd, &device_info, dev);
 			else
 				continue;
 		}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index 85e4c16c03..0ca3b2b2e4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -47,11 +47,11 @@ static struct dpaa2_dpbp_dev *get_dpbp_from_id(uint32_t dpbp_id)
 
 static int
 dpaa2_create_dpbp_device(int vdev_fd __rte_unused,
-			 struct vfio_device_info *obj_info __rte_unused,
-			 int dpbp_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpbp_dev *dpbp_node;
-	int ret;
+	int ret, dpbp_id = obj->object_id;
 	static int register_once;
 
 	/* Allocate DPAA2 dpbp handle */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index 99f2147ccb..9d7108bfdc 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017,2020 NXP
+ *   Copyright 2017, 2020, 2023 NXP
  *
  */
 
@@ -45,15 +45,15 @@ static struct dpaa2_dpci_dev *get_dpci_from_id(uint32_t dpci_id)
 
 static int
 rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
-			     struct vfio_device_info *obj_info __rte_unused,
-			     int dpci_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpci_dev *dpci_node;
 	struct dpci_attr attr;
 	struct dpci_rx_queue_cfg rx_queue_cfg;
 	struct dpci_rx_queue_attr rx_attr;
 	struct dpci_tx_queue_attr tx_attr;
-	int ret, i;
+	int ret, i, dpci_id = obj->object_id;
 
 	/* Allocate DPAA2 dpci handle */
 	dpci_node = rte_malloc(NULL, sizeof(struct dpaa2_dpci_dev), 0);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index b52a8c8ba5..346092a6b4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -391,14 +391,14 @@ dpaa2_close_dpio_device(int object_id)
 
 static int
 dpaa2_create_dpio_device(int vdev_fd,
-			 struct vfio_device_info *obj_info,
-			 int object_id)
+	struct vfio_device_info *obj_info,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info)};
 	struct qbman_swp_desc p_des;
 	struct dpio_attr attr;
-	int ret;
+	int ret, object_id = obj->object_id;
 
 	if (obj_info->num_regions < NUM_DPIO_REGIONS) {
 		DPAA2_BUS_ERR("Not sufficient number of DPIO regions");
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c b/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
index 65e2d799c3..a057cb1309 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
@@ -23,13 +23,13 @@ static struct dprc_dev_list dprc_dev_list
 
 static int
 rte_dpaa2_create_dprc_device(int vdev_fd __rte_unused,
-			     struct vfio_device_info *obj_info __rte_unused,
-			     int dprc_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dprc_dev *dprc_node;
 	struct dprc_endpoint endpoint1, endpoint2;
 	struct rte_dpaa2_device *dev, *dev_tmp;
-	int ret;
+	int ret, dprc_id = obj->object_id;
 
 	/* Allocate DPAA2 dprc handle */
 	dprc_node = rte_malloc(NULL, sizeof(struct dpaa2_dprc_dev), 0);
@@ -50,6 +50,8 @@ rte_dpaa2_create_dprc_device(int vdev_fd __rte_unused,
 	}
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_tmp) {
+		/** DPRC is always created before it's children are created.*/
+		dev->container = dprc_node;
 		if (dev->dev_type == DPAA2_ETH) {
 			int link_state;
 
diff --git a/drivers/event/dpaa2/dpaa2_hw_dpcon.c b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
index 64b0136e24..ea5b0d4b85 100644
--- a/drivers/event/dpaa2/dpaa2_hw_dpcon.c
+++ b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017,2020 NXP
+ *   Copyright 2017, 2020, 2023 NXP
  *
  */
 
@@ -45,12 +45,12 @@ static struct dpaa2_dpcon_dev *get_dpcon_from_id(uint32_t dpcon_id)
 
 static int
 rte_dpaa2_create_dpcon_device(int dev_fd __rte_unused,
-			      struct vfio_device_info *obj_info __rte_unused,
-			      int dpcon_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpcon_dev *dpcon_node;
 	struct dpcon_attr attr;
-	int ret;
+	int ret, dpcon_id = obj->object_id;
 
 	/* Allocate DPAA2 dpcon handle */
 	dpcon_node = rte_malloc(NULL, sizeof(struct dpaa2_dpcon_dev), 0);
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 3693f4b62e..f4b8d481af 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -374,12 +374,12 @@ rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if)
 
 static int
 dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
-			   struct vfio_device_info *obj_info __rte_unused,
-			   int dpdmux_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev;
 	struct dpdmux_attr attr;
-	int ret;
+	int ret, dpdmux_id = obj->object_id;
 	uint16_t maj_ver;
 	uint16_t min_ver;
 	uint8_t skip_reset_flags;
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index c08aa0f3bf..751e558c73 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2019 NXP
+ * Copyright 2019, 2023 NXP
  */
 
 #include <sys/queue.h>
@@ -134,11 +134,11 @@ int dpaa2_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
 #if defined(RTE_LIBRTE_IEEE1588)
 static int
 dpaa2_create_dprtc_device(int vdev_fd __rte_unused,
-			   struct vfio_device_info *obj_info __rte_unused,
-			   int dprtc_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dprtc_attr attr;
-	int ret;
+	int ret, dprtc_id = obj->object_id;
 
 	PMD_INIT_FUNC_TRACE();
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 19/43] bus/fslmc: fix coverity issue
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (17 preceding siblings ...)
  2024-10-14 12:01       ` [v3 18/43] bus/fslmc: create dpaa2 device with it's object vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 20/43] bus/fslmc: fix invalid error FD code vanshika.shukla
                         ` (24 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Fix issues reported by Coverity (NXP internal Coverity scan).
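
All the hunks below fix the same defect class: the result of
qbman_swp_mc_complete() was copied into the output parameter before being
checked, and the subsequent NULL test was applied to the caller's pointer,
which can never be NULL. A generic sketch of the fixed pattern
(illustrative names, not the driver code):

    #include <errno.h>

    struct rslt { unsigned int verb; };

    /* Stand-in for qbman_swp_mc_complete(); may return NULL. */
    static void *
    mc_complete(void)
    {
        return NULL;
    }

    static int
    query_fixed(struct rslt *out)
    {
        struct rslt *r = mc_complete();

        if (!r)        /* test the returned pointer first ... */
            return -EIO;
        *out = *r;     /* ... and copy out only on success */
        return 0;
    }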

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 49 +++++++++++++++++----------
 1 file changed, 32 insertions(+), 17 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index eea06988ff..0e471ec3fd 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2020,2022 NXP
  */
 
 #include "compat.h"
@@ -37,6 +37,7 @@ int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
 		   struct qbman_bp_query_rslt *r)
 {
 	struct qbman_bp_query_desc *p;
+	struct qbman_bp_query_rslt *bp_query_rslt;
 
 	/* Start the management command */
 	p = (struct qbman_bp_query_desc *)qbman_swp_mc_start(s);
@@ -47,14 +48,16 @@ int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
 	p->bpid = bpid;
 
 	/* Complete the management command */
-	*r = *(struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s, p,
-						 QBMAN_BP_QUERY);
-	if (!r) {
+	bp_query_rslt = (struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s,
+						p, QBMAN_BP_QUERY);
+	if (!bp_query_rslt) {
 		pr_err("qbman: Query BPID %d failed, no response\n",
 			bpid);
 		return -EIO;
 	}
 
+	*r = *bp_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_BP_QUERY);
 
@@ -202,20 +205,23 @@ int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
 		   struct qbman_fq_query_rslt *r)
 {
 	struct qbman_fq_query_desc *p;
+	struct qbman_fq_query_rslt *fq_query_rslt;
 
 	p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->fqid = fqid;
-	*r = *(struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s, p,
-					  QBMAN_FQ_QUERY);
-	if (!r) {
+	fq_query_rslt = (struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s,
+					p, QBMAN_FQ_QUERY);
+	if (!fq_query_rslt) {
 		pr_err("qbman: Query FQID %d failed, no response\n",
 			fqid);
 		return -EIO;
 	}
 
+	*r = *fq_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY);
 
@@ -398,20 +404,23 @@ int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
 		    struct qbman_cgr_query_rslt *r)
 {
 	struct qbman_cgr_query_desc *p;
+	struct qbman_cgr_query_rslt *cgr_query_rslt;
 
 	p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->cgid = cgid;
-	*r = *(struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_CGR_QUERY);
-	if (!r) {
+	cgr_query_rslt = (struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s,
+					p, QBMAN_CGR_QUERY);
+	if (!cgr_query_rslt) {
 		pr_err("qbman: Query CGID %d failed, no response\n",
 			cgid);
 		return -EIO;
 	}
 
+	*r = *cgr_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_CGR_QUERY);
 
@@ -473,20 +482,23 @@ int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
 			struct qbman_wred_query_rslt *r)
 {
 	struct qbman_cgr_query_desc *p;
+	struct qbman_wred_query_rslt *wred_query_rslt;
 
 	p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->cgid = cgid;
-	*r = *(struct qbman_wred_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_WRED_QUERY);
-	if (!r) {
+	wred_query_rslt = (struct qbman_wred_query_rslt *)qbman_swp_mc_complete(
+					s, p, QBMAN_WRED_QUERY);
+	if (!wred_query_rslt) {
 		pr_err("qbman: Query CGID WRED %d failed, no response\n",
 			cgid);
 		return -EIO;
 	}
 
+	*r = *wred_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WRED_QUERY);
 
@@ -527,7 +539,7 @@ void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
 	if (mn == 0)
 		*maxth = ma;
 	else
-		*maxth = ((ma+256) * (1<<(mn-1)));
+		*maxth = ((uint64_t)(ma+256) * (1<<(mn-1)));
 
 	if (step_s == 0)
 		*minth = *maxth - step_i;
@@ -630,6 +642,7 @@ int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
 		       struct qbman_wqchan_query_rslt *r)
 {
 	struct qbman_wqchan_query_desc *p;
+	struct qbman_wqchan_query_rslt *wqchan_query_rslt;
 
 	/* Start the management command */
 	p = (struct qbman_wqchan_query_desc *)qbman_swp_mc_start(s);
@@ -640,14 +653,16 @@ int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
 	p->chid = chanid;
 
 	/* Complete the management command */
-	*r = *(struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_WQ_QUERY);
-	if (!r) {
+	wqchan_query_rslt = (struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(
+						s, p, QBMAN_WQ_QUERY);
+	if (!wqchan_query_rslt) {
 		pr_err("qbman: Query WQ Channel %d failed, no response\n",
 			chanid);
 		return -EIO;
 	}
 
+	*r = *wqchan_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WQ_QUERY);
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 20/43] bus/fslmc: fix invalid error FD code
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (18 preceding siblings ...)
  2024-10-14 12:01       ` [v3 19/43] bus/fslmc: fix coverity issue vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 21/43] bus/fslmc: change qbman eq desc from d to desc vanshika.shukla
                         ` (23 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

The error code was being set to 0 in the error case, but 0 is a valid
fd, which caused a memory leak. Fix this by returning a valid negative
error code instead of zero.
CID: 26661848
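
The convention behind the fix: 0 is a valid file descriptor, so an
fd-returning helper must report failure with a negative errno-style value,
and callers must test for 'fd < 0', not 'fd <= 0'. A minimal sketch
(hypothetical helper, not the driver code):

    #include <errno.h>
    #include <fcntl.h>

    /* Returns an open fd (>= 0) on success, -errno on failure. */
    static int
    open_group(const char *path)
    {
        int fd = open(path, O_RDWR);

        if (fd < 0)
            return -errno; /* never report failure as 0: that is a valid fd */
        return fd;
    }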

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index a7fe59d25b..0cebca4f03 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2023 NXP
+ *   Copyright 2016-2024 NXP
  *
  */
 
@@ -41,8 +41,6 @@
 #include "portal/dpaa2_hw_pvt.h"
 #include "portal/dpaa2_hw_dpio.h"
 
-#define FSLMC_CONTAINER_MAX_LEN 8 /**< Of the format dprc.XX */
-
 #define FSLMC_VFIO_MP "fslmc_vfio_mp_sync"
 
 /* Container is composed by multiple groups, however,
@@ -415,18 +413,16 @@ fslmc_vfio_open_group_fd(const char *group_name)
 	    mp_reply.nb_received == 1) {
 		mp_rep = &mp_reply.msgs[0];
 		p = (struct vfio_mp_param *)mp_rep->param;
-		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+		if (p->result == SOCKET_OK && mp_rep->num_fds == 1)
 			vfio_group_fd = mp_rep->fds[0];
-		} else if (p->result == SOCKET_NO_FD) {
+		else if (p->result == SOCKET_NO_FD)
 			DPAA2_BUS_ERR("Bad VFIO group fd");
-			vfio_group_fd = 0;
-		}
 	}
 
 	free(mp_reply.msgs);
 
 add_vfio_group:
-	if (vfio_group_fd <= 0) {
+	if (vfio_group_fd < 0) {
 		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
 				filename, vfio_group_fd);
@@ -1802,14 +1798,11 @@ fslmc_vfio_setup_group(void)
 	}
 
 	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
-	if (vfio_group_fd <= 0) {
+	if (vfio_group_fd < 0) {
 		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
-		if (vfio_group_fd <= 0) {
+		if (vfio_group_fd < 0) {
 			DPAA2_BUS_ERR("%s: open group name(%s) failed(%d)",
 				__func__, group_name, vfio_group_fd);
-			if (!vfio_group_fd)
-				close(vfio_group_fd);
-			DPAA2_BUS_ERR("Failed to create MC VFIO group");
 			return -rte_errno;
 		}
 	}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 21/43] bus/fslmc: change qbman eq desc from d to desc
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (19 preceding siblings ...)
  2024-10-14 12:01       ` [v3 20/43] bus/fslmc: fix invalid error FD code vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 22/43] bus/fslmc: introduce VFIO DMA mapping API for fslmc vanshika.shukla
                         ` (22 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Rename the qbman_eq_desc pointer from 'd' to 'desc' to avoid redefining
the same variable name in a nested scope.
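
For illustration, the hazard being avoided is variable shadowing: a
declaration in an inner scope with the same name silently hides the outer
one (generic example, not the driver code):

    static int
    shadow_demo(int flag)
    {
        int d = 5;         /* outer variable */

        if (flag) {
            int d = 7;     /* shadows the outer 'd'; -Wshadow flags this */
            return d;      /* refers to the inner copy only */
        }
        return d;
    }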

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 3fdca9761d..5d0cedc136 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1008,9 +1008,9 @@ static int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
 				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
 		p[0] = cl[0] | s->eqcr.pi_vb;
 		if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
-			struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+			struct qbman_eq_desc *desc = (struct qbman_eq_desc *)p;
 
-			d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+			desc->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
 				((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
 		}
 		eqcr_pi++;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 22/43] bus/fslmc: introduce VFIO DMA mapping API for fslmc
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (20 preceding siblings ...)
  2024-10-14 12:01       ` [v3 21/43] bus/fslmc: change qbman eq desc from d to desc vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 23/43] net/dpaa2: change miss flow ID macro name vanshika.shukla
                         ` (21 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Declare rte_fslmc_vfio_mem_dmamap and rte_fslmc_vfio_mem_dmaunmap
in bus_fslmc_driver.h for external usage.
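
A minimal usage sketch from outside the bus driver (assuming an
IOVA-contiguous memzone; error handling trimmed):

    #include <stdint.h>
    #include <rte_memzone.h>
    #include <bus_fslmc_driver.h>

    static int
    map_private_region(const struct rte_memzone *mz)
    {
        int ret;

        /* Make the region visible to the hardware through the SMMU. */
        ret = rte_fslmc_vfio_mem_dmamap((uint64_t)(uintptr_t)mz->addr,
                                        mz->iova, mz->len);
        if (ret)
            return ret;

        /* ... use the region for DMA ... */

        return rte_fslmc_vfio_mem_dmaunmap(mz->iova, mz->len);
    }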

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     | 7 ++++++-
 drivers/bus/fslmc/fslmc_bus.c            | 2 +-
 drivers/bus/fslmc/fslmc_vfio.c           | 3 ++-
 drivers/bus/fslmc/fslmc_vfio.h           | 7 +------
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 2 +-
 5 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index 3fcdfbed1f..81e4f28b22 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2016,2021 NXP
+ *   Copyright 2016,2021-2023 NXP
  *
  */
 
@@ -135,6 +135,11 @@ struct rte_dpaa2_object {
 	rte_dpaa2_obj_close_t close;
 };
 
+int
+rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size);
+
 /**
  * A structure describing a DPAA2 driver.
  */
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 107cc70833..fda0a4206d 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -438,7 +438,7 @@ rte_fslmc_probe(void)
 	 * install callback handler.
 	 */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		ret = rte_fslmc_vfio_dmamap();
+		ret = fslmc_vfio_dmamap();
 		if (ret) {
 			DPAA2_BUS_ERR("Unable to DMA map existing VAs: (%d)",
 				      ret);
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 0cebca4f03..997b469698 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1161,7 +1161,8 @@ rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
 	return fslmc_unmap_dma(0, iova, size);
 }
 
-int rte_fslmc_vfio_dmamap(void)
+int
+fslmc_vfio_dmamap(void)
 {
 	int i = 0, ret;
 
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 408b35680d..11efcc036e 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -64,10 +64,5 @@ int fslmc_vfio_process_group(void);
 int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
 int fslmc_get_container_group(const char *group_name, int *gropuid);
-int rte_fslmc_vfio_dmamap(void);
-int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova,
-		uint64_t size);
-int rte_fslmc_vfio_mem_dmaunmap(uint64_t iova,
-		uint64_t size);
-
+int fslmc_vfio_dmamap(void);
 #endif /* _FSLMC_VFIO_H_ */
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 886fb7fbb0..c054988513 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -23,7 +23,7 @@
 #include <dev_driver.h>
 #include "rte_dpaa2_mempool.h"
 
-#include "fslmc_vfio.h"
+#include <bus_fslmc_driver.h>
 #include <fslmc_logs.h>
 #include <mc/fsl_dpbp.h>
 #include <portal/dpaa2_hw_pvt.h>
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 23/43] net/dpaa2: change miss flow ID macro name
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (21 preceding siblings ...)
  2024-10-14 12:01       ` [v3 22/43] bus/fslmc: introduce VFIO DMA mapping API for fslmc vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 24/43] net/dpaa2: flow API refactor vanshika.shukla
                         ` (20 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Change the miss flow ID macro name since DPNI_FS_MISS_DROP conflicts
with the enum of the same name. Also, set the default miss flow ID to 0.
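
For reference, the miss flow ID is read from an environment variable when
the first flow is created, and must be below the number of distribution
queues. A minimal application-side sketch (assuming the variable is set
before the first flow is created, as in the diff below):

    #include <stdlib.h>

    int
    main(void)
    {
        /* Must be set before the first rte_flow is created. */
        setenv("DPAA2_FLOW_CONTROL_MISS_FLOW", "1", 1);

        /* ... rte_eal_init(), port setup, rte_flow_create(), ... */
        return 0;
    }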

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 48e6eedfbc..aab7a71748 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -30,8 +30,7 @@
 int mc_l4_port_identification;
 
 static char *dpaa2_flow_control_log;
-static uint16_t dpaa2_flow_miss_flow_id =
-	DPNI_FS_MISS_DROP;
+static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
 #define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
 
@@ -3994,7 +3993,7 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
 		struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 		dpaa2_flow_miss_flow_id =
-			atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
+			(uint16_t)atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
 		if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
 			DPAA2_PMD_ERR(
 				"The missed flow ID %d exceeds the max flow ID %d",
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 24/43] net/dpaa2: flow API refactor
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (22 preceding siblings ...)
  2024-10-14 12:01       ` [v3 23/43] net/dpaa2: change miss flow ID macro name vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 25/43] net/dpaa2: dump Rx parser result vanshika.shukla
                         ` (19 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

1) Gather redundant code with the same logic from the various protocol
   handlers into common functions.
2) struct dpaa2_key_profile describes each extract's offset and size
   within the rule, which makes it easy to insert a new extract before
   the IP address extracts (see the sketch after this list).
3) The IP address profile describes the IPv4/IPv6 address extracts
   located at the end of the rule.
4) The L4 ports profile describes the positions and offsets of the
   port fields within the rule.
5) Once the extracts of a QoS/FS table are updated, go through all
   the existing flows of that table and update their rule data.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c |   27 +-
 drivers/net/dpaa2/dpaa2_ethdev.h |   90 +-
 drivers/net/dpaa2/dpaa2_flow.c   | 4839 ++++++++++++------------------
 3 files changed, 2030 insertions(+), 2926 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index bd6a578e30..e55de5b614 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2808,39 +2808,20 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	/* Init fields w.r.t. classification */
 	memset(&priv->extract.qos_key_extract, 0,
 		sizeof(struct dpaa2_key_extract));
-	priv->extract.qos_extract_param = (size_t)rte_malloc(NULL, 256, 64);
+	priv->extract.qos_extract_param = rte_malloc(NULL, 256, 64);
 	if (!priv->extract.qos_extract_param) {
-		DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow "
-			    " classification ", ret);
+		DPAA2_PMD_ERR("Memory alloc failed");
 		goto init_err;
 	}
-	priv->extract.qos_key_extract.key_info.ipv4_src_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv4_dst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv6_src_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv6_dst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
 
 	for (i = 0; i < MAX_TCS; i++) {
 		memset(&priv->extract.tc_key_extract[i], 0,
 			sizeof(struct dpaa2_key_extract));
-		priv->extract.tc_extract_param[i] =
-			(size_t)rte_malloc(NULL, 256, 64);
+		priv->extract.tc_extract_param[i] = rte_malloc(NULL, 256, 64);
 		if (!priv->extract.tc_extract_param[i]) {
-			DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow classification",
-				     ret);
+			DPAA2_PMD_ERR("Memory alloc failed");
 			goto init_err;
 		}
-		priv->extract.tc_key_extract[i].key_info.ipv4_src_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv4_dst_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv6_src_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv6_dst_offset =
-			IP_ADDRESS_OFFSET_INVALID;
 	}
 
 	ret = dpni_set_max_frame_length(dpni_dev, CMD_PRI_LOW, priv->token,
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 6625afaba3..ea1c1b5117 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -145,14 +145,6 @@ extern bool dpaa2_enable_ts[];
 extern uint64_t dpaa2_timestamp_rx_dynflag;
 extern int dpaa2_timestamp_dynfield_offset;
 
-#define DPAA2_QOS_TABLE_RECONFIGURE	1
-#define DPAA2_FS_TABLE_RECONFIGURE	2
-
-#define DPAA2_QOS_TABLE_IPADDR_EXTRACT 4
-#define DPAA2_FS_TABLE_IPADDR_EXTRACT 8
-
-#define DPAA2_FLOW_MAX_KEY_SIZE		16
-
 /* Externally defined */
 extern const struct rte_flow_ops dpaa2_flow_ops;
 
@@ -160,29 +152,85 @@ extern const struct rte_tm_ops dpaa2_tm_ops;
 
 extern bool dpaa2_enable_err_queue;
 
-#define IP_ADDRESS_OFFSET_INVALID (-1)
+struct ipv4_sd_addr_extract_rule {
+	uint32_t ipv4_src;
+	uint32_t ipv4_dst;
+};
+
+struct ipv6_sd_addr_extract_rule {
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+};
 
-struct dpaa2_key_info {
+struct ipv4_ds_addr_extract_rule {
+	uint32_t ipv4_dst;
+	uint32_t ipv4_src;
+};
+
+struct ipv6_ds_addr_extract_rule {
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+union ip_addr_extract_rule {
+	struct ipv4_sd_addr_extract_rule ipv4_sd_addr;
+	struct ipv6_sd_addr_extract_rule ipv6_sd_addr;
+	struct ipv4_ds_addr_extract_rule ipv4_ds_addr;
+	struct ipv6_ds_addr_extract_rule ipv6_ds_addr;
+};
+
+union ip_src_addr_extract_rule {
+	uint32_t ipv4_src;
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+union ip_dst_addr_extract_rule {
+	uint32_t ipv4_dst;
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+enum ip_addr_extract_type {
+	IP_NONE_ADDR_EXTRACT,
+	IP_SRC_EXTRACT,
+	IP_DST_EXTRACT,
+	IP_SRC_DST_EXTRACT,
+	IP_DST_SRC_EXTRACT
+};
+
+struct key_prot_field {
+	enum net_prot prot;
+	uint32_t key_field;
+};
+
+struct dpaa2_key_profile {
+	uint8_t num;
 	uint8_t key_offset[DPKG_MAX_NUM_OF_EXTRACTS];
 	uint8_t key_size[DPKG_MAX_NUM_OF_EXTRACTS];
-	/* Special for IP address. */
-	int ipv4_src_offset;
-	int ipv4_dst_offset;
-	int ipv6_src_offset;
-	int ipv6_dst_offset;
-	uint8_t key_total_size;
+
+	enum ip_addr_extract_type ip_addr_type;
+	uint8_t ip_addr_extract_pos;
+	uint8_t ip_addr_extract_off;
+
+	uint8_t l4_src_port_present;
+	uint8_t l4_src_port_pos;
+	uint8_t l4_src_port_offset;
+	uint8_t l4_dst_port_present;
+	uint8_t l4_dst_port_pos;
+	uint8_t l4_dst_port_offset;
+	struct key_prot_field prot_field[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint16_t key_max_size;
 };
 
 struct dpaa2_key_extract {
 	struct dpkg_profile_cfg dpkg;
-	struct dpaa2_key_info key_info;
+	struct dpaa2_key_profile key_profile;
 };
 
 struct extract_s {
 	struct dpaa2_key_extract qos_key_extract;
 	struct dpaa2_key_extract tc_key_extract[MAX_TCS];
-	uint64_t qos_extract_param;
-	uint64_t tc_extract_param[MAX_TCS];
+	uint8_t *qos_extract_param;
+	uint8_t *tc_extract_param[MAX_TCS];
 };
 
 struct dpaa2_dev_priv {
@@ -233,7 +281,8 @@ struct dpaa2_dev_priv {
 	/* Stores correction offset for one step timestamping */
 	uint16_t ptp_correction_offset;
 
-	LIST_HEAD(, rte_flow) flows; /**< Configured flow rule handles. */
+	struct dpaa2_dev_flow *curr;
+	LIST_HEAD(, dpaa2_dev_flow) flows;
 	LIST_HEAD(nodes, dpaa2_tm_node) nodes;
 	LIST_HEAD(shaper_profiles, dpaa2_tm_shaper_profile) shaper_profiles;
 };
@@ -292,7 +341,6 @@ uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
 void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci, struct dpaa2_queue *dpaa2_q);
 void dpaa2_flow_clean(struct rte_eth_dev *dev);
 uint16_t dpaa2_dev_tx_conf(void *queue)  __rte_unused;
-int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev);
 
 int dpaa2_timesync_enable(struct rte_eth_dev *dev);
 int dpaa2_timesync_disable(struct rte_eth_dev *dev);
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index aab7a71748..3b4d5cc8d7 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2022 NXP
  */
 
 #include <sys/queue.h>
@@ -27,41 +27,40 @@
  * MC/WRIOP are not able to identify
  * the l4 protocol with l4 ports.
  */
-int mc_l4_port_identification;
+static int mc_l4_port_identification;
 
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
-#define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
-
-enum flow_rule_ipaddr_type {
-	FLOW_NONE_IPADDR,
-	FLOW_IPV4_ADDR,
-	FLOW_IPV6_ADDR
+enum dpaa2_flow_entry_size {
+	DPAA2_FLOW_ENTRY_MIN_SIZE = (DPNI_MAX_KEY_SIZE / 2),
+	DPAA2_FLOW_ENTRY_MAX_SIZE = DPNI_MAX_KEY_SIZE
 };
 
-struct flow_rule_ipaddr {
-	enum flow_rule_ipaddr_type ipaddr_type;
-	int qos_ipsrc_offset;
-	int qos_ipdst_offset;
-	int fs_ipsrc_offset;
-	int fs_ipdst_offset;
+enum dpaa2_flow_dist_type {
+	DPAA2_FLOW_QOS_TYPE = 1 << 0,
+	DPAA2_FLOW_FS_TYPE = 1 << 1
 };
 
-struct rte_flow {
-	LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
+#define DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT	16
+#define DPAA2_FLOW_MAX_KEY_SIZE			16
+
+struct dpaa2_dev_flow {
+	LIST_ENTRY(dpaa2_dev_flow) next;
 	struct dpni_rule_cfg qos_rule;
+	uint8_t *qos_key_addr;
+	uint8_t *qos_mask_addr;
+	uint16_t qos_rule_size;
 	struct dpni_rule_cfg fs_rule;
 	uint8_t qos_real_key_size;
 	uint8_t fs_real_key_size;
+	uint8_t *fs_key_addr;
+	uint8_t *fs_mask_addr;
+	uint16_t fs_rule_size;
 	uint8_t tc_id; /** Traffic Class ID. */
 	uint8_t tc_index; /** index within this Traffic Class. */
-	enum rte_flow_action_type action;
-	/* Special for IP address to specify the offset
-	 * in key/mask.
-	 */
-	struct flow_rule_ipaddr ipaddr_rule;
-	struct dpni_fs_action_cfg action_cfg;
+	enum rte_flow_action_type action_type;
+	struct dpni_fs_action_cfg fs_action_cfg;
 };
 
 static const
@@ -94,9 +93,6 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
 	RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
 };
 
-/* Max of enum rte_flow_item_type + 1, for both IPv4 and IPv6*/
-#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1)
-
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
 	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
@@ -155,11 +151,12 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
-
 #endif
 
-static inline void dpaa2_prot_field_string(
-	enum net_prot prot, uint32_t field,
+#define DPAA2_FLOW_DUMP printf
+
+static inline void
+dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 	char *string)
 {
 	if (!dpaa2_flow_control_log)
@@ -234,60 +231,84 @@ static inline void dpaa2_prot_field_string(
 	}
 }
 
-static inline void dpaa2_flow_qos_table_extracts_log(
-	const struct dpaa2_dev_priv *priv, FILE *f)
+static inline void
+dpaa2_flow_qos_extracts_log(const struct dpaa2_dev_priv *priv)
 {
 	int idx;
 	char string[32];
+	const struct dpkg_profile_cfg *dpkg =
+		&priv->extract.qos_key_extract.dpkg;
+	const struct dpkg_extract *extract;
+	enum dpkg_extract_type type;
+	enum net_prot prot;
+	uint32_t field;
 
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "Setup QoS table: number of extracts: %d\r\n",
-			priv->extract.qos_key_extract.dpkg.num_extracts);
-	for (idx = 0; idx < priv->extract.qos_key_extract.dpkg.num_extracts;
-		idx++) {
-		dpaa2_prot_field_string(priv->extract.qos_key_extract.dpkg
-			.extracts[idx].extract.from_hdr.prot,
-			priv->extract.qos_key_extract.dpkg.extracts[idx]
-			.extract.from_hdr.field,
-			string);
-		fprintf(f, "%s", string);
-		if ((idx + 1) < priv->extract.qos_key_extract.dpkg.num_extracts)
-			fprintf(f, " / ");
-	}
-	fprintf(f, "\r\n");
+	DPAA2_FLOW_DUMP("QoS table: %d extracts\r\n",
+		dpkg->num_extracts);
+	for (idx = 0; idx < dpkg->num_extracts; idx++) {
+		extract = &dpkg->extracts[idx];
+		type = extract->type;
+		if (type == DPKG_EXTRACT_FROM_HDR) {
+			prot = extract->extract.from_hdr.prot;
+			field = extract->extract.from_hdr.field;
+			dpaa2_prot_field_string(prot, field,
+				string);
+		} else if (type == DPKG_EXTRACT_FROM_DATA) {
+			sprintf(string, "raw offset/len: %d/%d",
+				extract->extract.from_data.offset,
+				extract->extract.from_data.size);
+		}
+		DPAA2_FLOW_DUMP("%s", string);
+		if ((idx + 1) < dpkg->num_extracts)
+			DPAA2_FLOW_DUMP(" / ");
+	}
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_fs_table_extracts_log(
-	const struct dpaa2_dev_priv *priv, int tc_id, FILE *f)
+static inline void
+dpaa2_flow_fs_extracts_log(const struct dpaa2_dev_priv *priv,
+	int tc_id)
 {
 	int idx;
 	char string[32];
+	const struct dpkg_profile_cfg *dpkg =
+		&priv->extract.tc_key_extract[tc_id].dpkg;
+	const struct dpkg_extract *extract;
+	enum dpkg_extract_type type;
+	enum net_prot prot;
+	uint32_t field;
 
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "Setup FS table: number of extracts of TC[%d]: %d\r\n",
-			tc_id, priv->extract.tc_key_extract[tc_id]
-			.dpkg.num_extracts);
-	for (idx = 0; idx < priv->extract.tc_key_extract[tc_id]
-		.dpkg.num_extracts; idx++) {
-		dpaa2_prot_field_string(priv->extract.tc_key_extract[tc_id]
-			.dpkg.extracts[idx].extract.from_hdr.prot,
-			priv->extract.tc_key_extract[tc_id].dpkg.extracts[idx]
-			.extract.from_hdr.field,
-			string);
-		fprintf(f, "%s", string);
-		if ((idx + 1) < priv->extract.tc_key_extract[tc_id]
-			.dpkg.num_extracts)
-			fprintf(f, " / ");
-	}
-	fprintf(f, "\r\n");
+	DPAA2_FLOW_DUMP("FS table: %d extracts in TC[%d]\r\n",
+		dpkg->num_extracts, tc_id);
+	for (idx = 0; idx < dpkg->num_extracts; idx++) {
+		extract = &dpkg->extracts[idx];
+		type = extract->type;
+		if (type == DPKG_EXTRACT_FROM_HDR) {
+			prot = extract->extract.from_hdr.prot;
+			field = extract->extract.from_hdr.field;
+			dpaa2_prot_field_string(prot, field,
+				string);
+		} else if (type == DPKG_EXTRACT_FROM_DATA) {
+			sprintf(string, "raw offset/len: %d/%d",
+				extract->extract.from_data.offset,
+				extract->extract.from_data.size);
+		}
+		DPAA2_FLOW_DUMP("%s", string);
+		if ((idx + 1) < dpkg->num_extracts)
+			DPAA2_FLOW_DUMP(" / ");
+	}
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_qos_entry_log(
-	const char *log_info, const struct rte_flow *flow, int qos_index, FILE *f)
+static inline void
+dpaa2_flow_qos_entry_log(const char *log_info,
+	const struct dpaa2_dev_flow *flow, int qos_index)
 {
 	int idx;
 	uint8_t *key, *mask;
@@ -295,27 +316,34 @@ static inline void dpaa2_flow_qos_entry_log(
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "\r\n%s QoS entry[%d] for TC[%d], extracts size is %d\r\n",
-		log_info, qos_index, flow->tc_id, flow->qos_real_key_size);
-
-	key = (uint8_t *)(size_t)flow->qos_rule.key_iova;
-	mask = (uint8_t *)(size_t)flow->qos_rule.mask_iova;
+	if (qos_index >= 0) {
+		DPAA2_FLOW_DUMP("%s QoS entry[%d](size %d/%d) for TC[%d]\r\n",
+			log_info, qos_index, flow->qos_rule_size,
+			flow->qos_rule.key_size,
+			flow->tc_id);
+	} else {
+		DPAA2_FLOW_DUMP("%s QoS entry(size %d/%d) for TC[%d]\r\n",
+			log_info, flow->qos_rule_size,
+			flow->qos_rule.key_size,
+			flow->tc_id);
+	}
 
-	fprintf(f, "key:\r\n");
-	for (idx = 0; idx < flow->qos_real_key_size; idx++)
-		fprintf(f, "%02x ", key[idx]);
+	key = flow->qos_key_addr;
+	mask = flow->qos_mask_addr;
 
-	fprintf(f, "\r\nmask:\r\n");
-	for (idx = 0; idx < flow->qos_real_key_size; idx++)
-		fprintf(f, "%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("key:\r\n");
+	for (idx = 0; idx < flow->qos_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", key[idx]);
 
-	fprintf(f, "\r\n%s QoS ipsrc: %d, ipdst: %d\r\n", log_info,
-		flow->ipaddr_rule.qos_ipsrc_offset,
-		flow->ipaddr_rule.qos_ipdst_offset);
+	DPAA2_FLOW_DUMP("\r\nmask:\r\n");
+	for (idx = 0; idx < flow->qos_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_fs_entry_log(
-	const char *log_info, const struct rte_flow *flow, FILE *f)
+static inline void
+dpaa2_flow_fs_entry_log(const char *log_info,
+	const struct dpaa2_dev_flow *flow)
 {
 	int idx;
 	uint8_t *key, *mask;
@@ -323,187 +351,432 @@ static inline void dpaa2_flow_fs_entry_log(
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "\r\n%s FS/TC entry[%d] of TC[%d], extracts size is %d\r\n",
-		log_info, flow->tc_index, flow->tc_id, flow->fs_real_key_size);
+	DPAA2_FLOW_DUMP("%s FS/TC entry[%d](size %d/%d) of TC[%d]\r\n",
+		log_info, flow->tc_index,
+		flow->fs_rule_size, flow->fs_rule.key_size,
+		flow->tc_id);
+
+	key = flow->fs_key_addr;
+	mask = flow->fs_mask_addr;
+
+	DPAA2_FLOW_DUMP("key:\r\n");
+	for (idx = 0; idx < flow->fs_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", key[idx]);
+
+	DPAA2_FLOW_DUMP("\r\nmask:\r\n");
+	for (idx = 0; idx < flow->fs_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("\r\n");
+}
 
-	key = (uint8_t *)(size_t)flow->fs_rule.key_iova;
-	mask = (uint8_t *)(size_t)flow->fs_rule.mask_iova;
+static int
+dpaa2_flow_ip_address_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_IPV4 &&
+		(field == NH_FLD_IPV4_SRC_IP ||
+		field == NH_FLD_IPV4_DST_IP))
+		return true;
+	else if (prot == NET_PROT_IPV6 &&
+		(field == NH_FLD_IPV6_SRC_IP ||
+		field == NH_FLD_IPV6_DST_IP))
+		return true;
+	else if (prot == NET_PROT_IP &&
+		(field == NH_FLD_IP_SRC ||
+		field == NH_FLD_IP_DST))
+		return true;
 
-	fprintf(f, "key:\r\n");
-	for (idx = 0; idx < flow->fs_real_key_size; idx++)
-		fprintf(f, "%02x ", key[idx]);
+	return false;
+}
 
-	fprintf(f, "\r\nmask:\r\n");
-	for (idx = 0; idx < flow->fs_real_key_size; idx++)
-		fprintf(f, "%02x ", mask[idx]);
+static int
+dpaa2_flow_l4_src_port_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_TCP &&
+		field == NH_FLD_TCP_PORT_SRC)
+		return true;
+	else if (prot == NET_PROT_UDP &&
+		field == NH_FLD_UDP_PORT_SRC)
+		return true;
+	else if (prot == NET_PROT_SCTP &&
+		field == NH_FLD_SCTP_PORT_SRC)
+		return true;
+
+	return false;
+}
 
-	fprintf(f, "\r\n%s FS ipsrc: %d, ipdst: %d\r\n", log_info,
-		flow->ipaddr_rule.fs_ipsrc_offset,
-		flow->ipaddr_rule.fs_ipdst_offset);
+static int
+dpaa2_flow_l4_dst_port_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_TCP &&
+		field == NH_FLD_TCP_PORT_DST)
+		return true;
+	else if (prot == NET_PROT_UDP &&
+		field == NH_FLD_UDP_PORT_DST)
+		return true;
+	else if (prot == NET_PROT_SCTP &&
+		field == NH_FLD_SCTP_PORT_DST)
+		return true;
+
+	return false;
 }
 
-static inline void dpaa2_flow_extract_key_set(
-	struct dpaa2_key_info *key_info, int index, uint8_t size)
+static int
+dpaa2_flow_add_qos_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow)
 {
-	key_info->key_size[index] = size;
-	if (index > 0) {
-		key_info->key_offset[index] =
-			key_info->key_offset[index - 1] +
-			key_info->key_size[index - 1];
-	} else {
-		key_info->key_offset[index] = 0;
+	uint16_t qos_index;
+	int ret;
+	struct fsl_mc_io *dpni = priv->hw;
+
+	if (priv->num_rx_tc <= 1 &&
+		flow->action_type != RTE_FLOW_ACTION_TYPE_RSS) {
+		DPAA2_PMD_WARN("No QoS Table for FS");
+		return -EINVAL;
 	}
-	key_info->key_total_size += size;
+
+	/* QoS entry added is only effective for multiple TCs.*/
+	qos_index = flow->tc_id * priv->fs_entries + flow->tc_index;
+	if (qos_index >= priv->qos_entries) {
+		DPAA2_PMD_ERR("QoS table full(%d >= %d)",
+			qos_index, priv->qos_entries);
+		return -EINVAL;
+	}
+
+	dpaa2_flow_qos_entry_log("Start add", flow, qos_index);
+
+	ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
+			priv->token, &flow->qos_rule,
+			flow->tc_id, qos_index,
+			0, 0);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("Add entry(%d) to table(%d) failed",
+			qos_index, flow->tc_id);
+		return ret;
+	}
+
+	return 0;
 }
 
-static int dpaa2_flow_extract_add(
-	struct dpaa2_key_extract *key_extract,
-	enum net_prot prot,
-	uint32_t field, uint8_t field_size)
+static int
+dpaa2_flow_add_fs_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow)
 {
-	int index, ip_src = -1, ip_dst = -1;
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
+	int ret;
+	struct fsl_mc_io *dpni = priv->hw;
 
-	if (dpkg->num_extracts >=
-		DPKG_MAX_NUM_OF_EXTRACTS) {
-		DPAA2_PMD_WARN("Number of extracts overflows");
-		return -1;
+	if (flow->tc_index >= priv->fs_entries) {
+		DPAA2_PMD_ERR("FS table full(%d >= %d)",
+			flow->tc_index, priv->fs_entries);
+		return -EINVAL;
 	}
-	/* Before reorder, the IP SRC and IP DST are already last
-	 * extract(s).
-	 */
-	for (index = 0; index < dpkg->num_extracts; index++) {
-		if (dpkg->extracts[index].extract.from_hdr.prot ==
-			NET_PROT_IP) {
-			if (dpkg->extracts[index].extract.from_hdr.field ==
-				NH_FLD_IP_SRC) {
-				ip_src = index;
-			}
-			if (dpkg->extracts[index].extract.from_hdr.field ==
-				NH_FLD_IP_DST) {
-				ip_dst = index;
+
+	dpaa2_flow_fs_entry_log("Start add", flow);
+
+	ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
+			priv->token, flow->tc_id,
+			flow->tc_index, &flow->fs_rule,
+			&flow->fs_action_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("Add rule(%d) to FS table(%d) failed",
+			flow->tc_index, flow->tc_id);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_flow_rule_insert_hole(struct dpaa2_dev_flow *flow,
+	int offset, int size,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int end;
+
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		end = flow->qos_rule_size;
+		if (end > offset) {
+			memmove(flow->qos_key_addr + offset + size,
+					flow->qos_key_addr + offset,
+					end - offset);
+			memset(flow->qos_key_addr + offset,
+					0, size);
+
+			memmove(flow->qos_mask_addr + offset + size,
+					flow->qos_mask_addr + offset,
+					end - offset);
+			memset(flow->qos_mask_addr + offset,
+					0, size);
+		}
+		flow->qos_rule_size += size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		end = flow->fs_rule_size;
+		if (end > offset) {
+			memmove(flow->fs_key_addr + offset + size,
+					flow->fs_key_addr + offset,
+					end - offset);
+			memset(flow->fs_key_addr + offset,
+					0, size);
+
+			memmove(flow->fs_mask_addr + offset + size,
+					flow->fs_mask_addr + offset,
+					end - offset);
+			memset(flow->fs_mask_addr + offset,
+					0, size);
+		}
+		flow->fs_rule_size += size;
+	}
+
+	return 0;
+}
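
The hole insertion above grows a rule key in place: everything from `offset`
onward shifts right by `size` bytes and the gap is zeroed, so a new field can
later be written at the freed offset. A self-contained sketch of the same
memmove/memset pattern, with made-up buffer contents:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void insert_hole(uint8_t *key, int rule_size, int offset, int size)
    {
        if (rule_size > offset) {
            memmove(key + offset + size, key + offset, rule_size - offset);
            memset(key + offset, 0, size);
        }
    }

    int main(void)
    {
        uint8_t key[16] = { 0xaa, 0xbb, 0xcc, 0xdd };
        int rule_size = 4;

        insert_hole(key, rule_size, 2, 3); /* 3-byte hole at offset 2 */
        rule_size += 3;

        for (int i = 0; i < rule_size; i++)
            printf("%02x ", key[i]);
        printf("\n"); /* prints: aa bb 00 00 00 cc dd */
        return 0;
    }
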
+
+static int
+dpaa2_flow_rule_add_all(struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type,
+	uint16_t entry_size, uint8_t tc_id)
+{
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	int ret;
+
+	while (curr) {
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			if (priv->num_rx_tc > 1 ||
+				curr->action_type ==
+				RTE_FLOW_ACTION_TYPE_RSS) {
+				curr->qos_rule.key_size = entry_size;
+				ret = dpaa2_flow_add_qos_rule(priv, curr);
+				if (ret)
+					return ret;
 			}
 		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE &&
+			curr->tc_id == tc_id) {
+			curr->fs_rule.key_size = entry_size;
+			ret = dpaa2_flow_add_fs_rule(priv, curr);
+			if (ret)
+				return ret;
+		}
+		curr = LIST_NEXT(curr, next);
 	}
 
-	if (ip_src >= 0)
-		RTE_ASSERT((ip_src + 2) >= dpkg->num_extracts);
+	return 0;
+}
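
After the extract profile changes, every existing flow is replayed so that the
hardware entries pick up the new key size. A sketch of the list walk using the
same <sys/queue.h> LIST macros; the flow struct here is a stand-in for the
real dpaa2_dev_flow:

    #include <sys/queue.h>
    #include <stdio.h>

    struct flow {
        int tc_id;
        LIST_ENTRY(flow) next;
    };

    LIST_HEAD(flow_list, flow);

    int main(void)
    {
        struct flow_list flows = LIST_HEAD_INITIALIZER(flows);
        struct flow f1 = { .tc_id = 0 }, f2 = { .tc_id = 1 };
        struct flow *curr;

        LIST_INSERT_HEAD(&flows, &f2, next);
        LIST_INSERT_HEAD(&flows, &f1, next);

        /* Re-add each flow with the updated key size, as
         * dpaa2_flow_rule_add_all() does via dpni_add_*_entry().
         */
        LIST_FOREACH(curr, &flows, next)
            printf("re-adding flow in TC %d\n", curr->tc_id);
        return 0;
    }
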
 
-	if (ip_dst >= 0)
-		RTE_ASSERT((ip_dst + 2) >= dpkg->num_extracts);
+static int
+dpaa2_flow_qos_rule_insert_hole(struct dpaa2_dev_priv *priv,
+	int offset, int size)
+{
+	struct dpaa2_dev_flow *curr;
+	int ret;
 
-	if (prot == NET_PROT_IP &&
-		(field == NH_FLD_IP_SRC ||
-		field == NH_FLD_IP_DST)) {
-		index = dpkg->num_extracts;
+	curr = priv->curr;
+	if (!curr) {
+		DPAA2_PMD_ERR("Current qos flow insert hole failed.");
+		return -EINVAL;
 	} else {
-		if (ip_src >= 0 && ip_dst >= 0)
-			index = dpkg->num_extracts - 2;
-		else if (ip_src >= 0 || ip_dst >= 0)
-			index = dpkg->num_extracts - 1;
-		else
-			index = dpkg->num_extracts;
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	dpkg->extracts[index].type =	DPKG_EXTRACT_FROM_HDR;
-	dpkg->extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-	dpkg->extracts[index].extract.from_hdr.prot = prot;
-	dpkg->extracts[index].extract.from_hdr.field = field;
-	if (prot == NET_PROT_IP &&
-		(field == NH_FLD_IP_SRC ||
-		field == NH_FLD_IP_DST)) {
-		dpaa2_flow_extract_key_set(key_info, index, 0);
+	curr = LIST_FIRST(&priv->flows);
+	while (curr) {
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		curr = LIST_NEXT(curr, next);
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_flow_fs_rule_insert_hole(struct dpaa2_dev_priv *priv,
+	int offset, int size, int tc_id)
+{
+	struct dpaa2_dev_flow *curr;
+	int ret;
+
+	curr = priv->curr;
+	if (!curr || curr->tc_id != tc_id) {
+		DPAA2_PMD_ERR("Current flow insert hole failed.");
+		return -EINVAL;
 	} else {
-		dpaa2_flow_extract_key_set(key_info, index, field_size);
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	if (prot == NET_PROT_IP) {
-		if (field == NH_FLD_IP_SRC) {
-			if (key_info->ipv4_dst_offset >= 0) {
-				key_info->ipv4_src_offset =
-					key_info->ipv4_dst_offset +
-					NH_FLD_IPV4_ADDR_SIZE;
-			} else {
-				key_info->ipv4_src_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-			if (key_info->ipv6_dst_offset >= 0) {
-				key_info->ipv6_src_offset =
-					key_info->ipv6_dst_offset +
-					NH_FLD_IPV6_ADDR_SIZE;
-			} else {
-				key_info->ipv6_src_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-		} else if (field == NH_FLD_IP_DST) {
-			if (key_info->ipv4_src_offset >= 0) {
-				key_info->ipv4_dst_offset =
-					key_info->ipv4_src_offset +
-					NH_FLD_IPV4_ADDR_SIZE;
-			} else {
-				key_info->ipv4_dst_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-			if (key_info->ipv6_src_offset >= 0) {
-				key_info->ipv6_dst_offset =
-					key_info->ipv6_src_offset +
-					NH_FLD_IPV6_ADDR_SIZE;
-			} else {
-				key_info->ipv6_dst_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
+	curr = LIST_FIRST(&priv->flows);
+
+	while (curr) {
+		if (curr->tc_id != tc_id) {
+			curr = LIST_NEXT(curr, next);
+			continue;
 		}
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		curr = LIST_NEXT(curr, next);
 	}
 
-	if (index == dpkg->num_extracts) {
-		dpkg->num_extracts++;
-		return 0;
+	return 0;
+}
+
+/* Keep IPv4/IPv6 address extracts at the end of the extract list,
+ * moving them back whenever a new extract is inserted before them.
+ * Current MC/WRIOP only supports a generic IP extract whose address
+ * length is not fixed, so the IP addresses must come last; otherwise
+ * the offsets of the extracts following them could not be identified.
+ */
+static int
+dpaa2_flow_key_profile_advance(enum net_prot prot,
+	uint32_t field, uint8_t field_size,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s only for none IP address extract",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off += field_size;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, field_size);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, field_size, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = field_size;
+	key_profile->prot_field[pos].prot = prot;
+	key_profile->prot_field[pos].key_field = field;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	if (dpaa2_flow_l4_src_port_extract(prot, field)) {
+		key_profile->l4_src_port_present = 1;
+		key_profile->l4_src_port_pos = pos;
+		key_profile->l4_src_port_offset =
+			key_profile->key_offset[pos];
+	} else if (dpaa2_flow_l4_dst_port_extract(prot, field)) {
+		key_profile->l4_dst_port_present = 1;
+		key_profile->l4_dst_port_pos = pos;
+		key_profile->l4_dst_port_offset =
+			key_profile->key_offset[pos];
+	}
+	key_profile->key_max_size += field_size;
+
+	return pos;
+}
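
Key fields are packed back to back: each new extract starts where the
previous one ended, which is what the key_offset computation above does. A
toy version of the accumulation with illustrative field sizes:

    #include <stdio.h>

    #define MAX_EXTRACTS 10

    int main(void)
    {
        int key_size[MAX_EXTRACTS], key_offset[MAX_EXTRACTS];
        int sizes[] = { 2, 1, 2 }; /* e.g. ETH_TYPE, IP_PROTO, UDP dst */
        int num = 0;

        for (int i = 0; i < 3; i++) {
            /* Same rule as dpaa2_flow_key_profile_advance() */
            key_offset[num] = num ?
                key_offset[num - 1] + key_size[num - 1] : 0;
            key_size[num] = sizes[i];
            num++;
        }

        for (int i = 0; i < num; i++)
            printf("extract %d: offset=%d size=%d\n",
                   i, key_offset[i], key_size[i]);
        return 0;
    }
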
+
+static int
+dpaa2_flow_extract_add_hdr(enum net_prot prot,
+	uint32_t field, uint8_t field_size,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s only for none IP address extract",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
 	}
 
-	if (ip_src >= 0) {
-		ip_src++;
-		dpkg->extracts[ip_src].type =
-			DPKG_EXTRACT_FROM_HDR;
-		dpkg->extracts[ip_src].extract.from_hdr.type =
-			DPKG_FULL_FIELD;
-		dpkg->extracts[ip_src].extract.from_hdr.prot =
-			NET_PROT_IP;
-		dpkg->extracts[ip_src].extract.from_hdr.field =
-			NH_FLD_IP_SRC;
-		dpaa2_flow_extract_key_set(key_info, ip_src, 0);
-		key_info->ipv4_src_offset += field_size;
-		key_info->ipv6_src_offset += field_size;
-	}
-	if (ip_dst >= 0) {
-		ip_dst++;
-		dpkg->extracts[ip_dst].type =
-			DPKG_EXTRACT_FROM_HDR;
-		dpkg->extracts[ip_dst].extract.from_hdr.type =
-			DPKG_FULL_FIELD;
-		dpkg->extracts[ip_dst].extract.from_hdr.prot =
-			NET_PROT_IP;
-		dpkg->extracts[ip_dst].extract.from_hdr.field =
-			NH_FLD_IP_DST;
-		dpaa2_flow_extract_key_set(key_info, ip_dst, 0);
-		key_info->ipv4_dst_offset += field_size;
-		key_info->ipv6_dst_offset += field_size;
+	pos = dpaa2_flow_key_profile_advance(prot,
+			field, field_size, priv,
+			dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last position; IP address extract(s) must follow. */
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
 	}
 
+	extracts[pos].type = DPKG_EXTRACT_FROM_HDR;
+	extracts[pos].extract.from_hdr.prot = prot;
+	extracts[pos].extract.from_hdr.type = DPKG_FULL_FIELD;
+	extracts[pos].extract.from_hdr.field = field;
+
 	dpkg->num_extracts++;
 
 	return 0;
 }
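
When the new extract cannot go at the tail (IP address extracts already
occupy it), the existing tail entries are shifted one slot right before the
header extract is written at `pos`. A simplified sketch of that shift, using
ints in place of struct dpkg_extract:

    #include <stdio.h>

    int main(void)
    {
        int extracts[8] = { 1, 2, 99 }; /* 99 = IP address extract */
        int num = 3, pos = 2, newext = 3;

        /* Shift the tail right by one, as dpaa2_flow_extract_add_hdr()
         * does with memcpy over struct dpkg_extract entries.
         */
        for (int i = num - 1; i >= pos; i--)
            extracts[i + 1] = extracts[i];
        extracts[pos] = newext;
        num++;

        for (int i = 0; i < num; i++)
            printf("%d ", extracts[i]); /* prints: 1 2 3 99 */
        printf("\n");
        return 0;
    }
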
 
-static int dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
-				      int size)
+static int
+dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
+	int size)
 {
 	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
+	struct dpaa2_key_profile *key_info = &key_extract->key_profile;
 	int last_extract_size, index;
 
 	if (dpkg->num_extracts != 0 && dpkg->extracts[0].type !=
@@ -531,83 +804,58 @@ static int dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
 			DPAA2_FLOW_MAX_KEY_SIZE * index;
 	}
 
-	key_info->key_total_size = size;
+	key_info->key_max_size = size;
 	return 0;
 }
 
-/* Protocol discrimination.
- * Discriminate IPv4/IPv6/vLan by Eth type.
- * Discriminate UDP/TCP/ICMP by next proto of IP.
- */
 static inline int
-dpaa2_flow_proto_discrimination_extract(
-	struct dpaa2_key_extract *key_extract,
-	enum rte_flow_item_type type)
+dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t key_field)
 {
-	if (type == RTE_FLOW_ITEM_TYPE_ETH) {
-		return dpaa2_flow_extract_add(
-				key_extract, NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				sizeof(rte_be16_t));
-	} else if (type == (enum rte_flow_item_type)
-		DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
-		return dpaa2_flow_extract_add(
-				key_extract, NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				NH_FLD_IP_PROTO_SIZE);
-	}
-
-	return -1;
-}
+	int pos;
+	struct key_prot_field *prot_field;
 
-static inline int dpaa2_flow_extract_search(
-	struct dpkg_profile_cfg *dpkg,
-	enum net_prot prot, uint32_t field)
-{
-	int i;
+	if (dpaa2_flow_ip_address_extract(prot, key_field)) {
+		DPAA2_PMD_ERR("%s only for none IP address extract",
+			__func__);
+		return -EINVAL;
+	}
 
-	for (i = 0; i < dpkg->num_extracts; i++) {
-		if (dpkg->extracts[i].extract.from_hdr.prot == prot &&
-			dpkg->extracts[i].extract.from_hdr.field == field) {
-			return i;
+	prot_field = key_profile->prot_field;
+	for (pos = 0; pos < key_profile->num; pos++) {
+		if (prot_field[pos].prot == prot &&
+			prot_field[pos].key_field == key_field) {
+			return pos;
 		}
 	}
 
-	return -1;
+	if (dpaa2_flow_l4_src_port_extract(prot, key_field)) {
+		if (key_profile->l4_src_port_present)
+			return key_profile->l4_src_port_pos;
+	} else if (dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
+		if (key_profile->l4_dst_port_present)
+			return key_profile->l4_dst_port_pos;
+	}
+
+	return -ENXIO;
 }
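
Note the fallback at the end: TCP, UDP and SCTP source (or destination)
ports share a single extract slot, since the hardware parses a generic L4
port. A toy lookup showing that fallback; the types are simplified stand-ins
for key_prot_field:

    #include <stdio.h>

    struct prot_field { int prot; int field; };

    /* Toy protocol/field IDs standing in for NET_PROT_x and NH_FLD_x */
    enum { PROT_UDP = 1, PROT_TCP = 2 };
    enum { FLD_PORT_SRC = 1 };

    static int search(const struct prot_field *pf, int num,
                      int prot, int field, int l4_src_pos)
    {
        for (int pos = 0; pos < num; pos++)
            if (pf[pos].prot == prot && pf[pos].field == field)
                return pos;
        /* Any L4 source port reuses the shared slot if one exists */
        if (field == FLD_PORT_SRC && l4_src_pos >= 0)
            return l4_src_pos;
        return -1;
    }

    int main(void)
    {
        struct prot_field profile[] = { { PROT_UDP, FLD_PORT_SRC } };

        /* A TCP source port resolves to the UDP slot at position 0 */
        printf("pos=%d\n", search(profile, 1, PROT_TCP, FLD_PORT_SRC, 0));
        return 0;
    }
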
 
-static inline int dpaa2_flow_extract_key_offset(
-	struct dpaa2_key_extract *key_extract,
-	enum net_prot prot, uint32_t field)
+static inline int
+dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t key_field)
 {
 	int i;
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
 
-	if (prot == NET_PROT_IPV4 ||
-		prot == NET_PROT_IPV6)
-		i = dpaa2_flow_extract_search(dpkg, NET_PROT_IP, field);
+	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
+
+	if (i >= 0)
+		return key_profile->key_offset[i];
 	else
-		i = dpaa2_flow_extract_search(dpkg, prot, field);
-
-	if (i >= 0) {
-		if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_SRC)
-			return key_info->ipv4_src_offset;
-		else if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_DST)
-			return key_info->ipv4_dst_offset;
-		else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_SRC)
-			return key_info->ipv6_src_offset;
-		else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_DST)
-			return key_info->ipv6_dst_offset;
-		else
-			return key_info->key_offset[i];
-	} else {
-		return -1;
-	}
+		return i;
 }
 
-struct proto_discrimination {
-	enum rte_flow_item_type type;
+struct prev_proto_field_id {
+	enum net_prot prot;
 	union {
 		rte_be16_t eth_type;
 		uint8_t ip_proto;
@@ -615,103 +863,134 @@ struct proto_discrimination {
 };
 
 static int
-dpaa2_flow_proto_discrimination_rule(
-	struct dpaa2_dev_priv *priv, struct rte_flow *flow,
-	struct proto_discrimination proto, int group)
+dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct prev_proto_field_id *prev_proto,
+	int group,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	enum net_prot prot;
-	uint32_t field;
 	int offset;
-	size_t key_iova;
-	size_t mask_iova;
+	uint8_t *key_addr;
+	uint8_t *mask_addr;
+	uint32_t field = 0;
 	rte_be16_t eth_type;
 	uint8_t ip_proto;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
 
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		prot = NET_PROT_ETH;
+	if (prev_proto->prot == NET_PROT_ETH) {
 		field = NH_FLD_ETH_TYPE;
-	} else if (proto.type == DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
-		prot = NET_PROT_IP;
+	} else if (prev_proto->prot == NET_PROT_IP) {
 		field = NH_FLD_IP_PROTO;
 	} else {
-		DPAA2_PMD_ERR(
-			"Only Eth and IP support to discriminate next proto.");
-		return -1;
-	}
-
-	offset = dpaa2_flow_extract_key_offset(&priv->extract.qos_key_extract,
-			prot, field);
-	if (offset < 0) {
-		DPAA2_PMD_ERR("QoS prot %d field %d extract failed",
-				prot, field);
-		return -1;
-	}
-	key_iova = flow->qos_rule.key_iova + offset;
-	mask_iova = flow->qos_rule.mask_iova + offset;
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		eth_type = proto.eth_type;
-		memcpy((void *)key_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-		eth_type = 0xffff;
-		memcpy((void *)mask_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-	} else {
-		ip_proto = proto.ip_proto;
-		memcpy((void *)key_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-		ip_proto = 0xff;
-		memcpy((void *)mask_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-	}
-
-	offset = dpaa2_flow_extract_key_offset(
-			&priv->extract.tc_key_extract[group],
-			prot, field);
-	if (offset < 0) {
-		DPAA2_PMD_ERR("FS prot %d field %d extract failed",
-				prot, field);
-		return -1;
+		DPAA2_PMD_ERR("Prev proto(%d) not support!",
+			prev_proto->prot);
+		return -EINVAL;
 	}
-	key_iova = flow->fs_rule.key_iova + offset;
-	mask_iova = flow->fs_rule.mask_iova + offset;
 
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		eth_type = proto.eth_type;
-		memcpy((void *)key_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-		eth_type = 0xffff;
-		memcpy((void *)mask_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-	} else {
-		ip_proto = proto.ip_proto;
-		memcpy((void *)key_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-		ip_proto = 0xff;
-		memcpy((void *)mask_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		key_extract = &priv->extract.qos_key_extract;
+		key_profile = &key_extract->key_profile;
+
+		offset = dpaa2_flow_extract_key_offset(key_profile,
+				prev_proto->prot, field);
+		if (offset < 0) {
+			DPAA2_PMD_ERR("%s QoS key extract failed", __func__);
+			return -EINVAL;
+		}
+		key_addr = flow->qos_key_addr + offset;
+		mask_addr = flow->qos_mask_addr + offset;
+		if (prev_proto->prot == NET_PROT_ETH) {
+			eth_type = prev_proto->eth_type;
+			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
+			eth_type = 0xffff;
+			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
+			flow->qos_rule_size += sizeof(rte_be16_t);
+		} else if (prev_proto->prot == NET_PROT_IP) {
+			ip_proto = prev_proto->ip_proto;
+			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
+			ip_proto = 0xff;
+			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
+			flow->qos_rule_size += sizeof(uint8_t);
+		} else {
+			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
+				prev_proto->prot);
+			return -EINVAL;
+		}
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		key_extract = &priv->extract.tc_key_extract[group];
+		key_profile = &key_extract->key_profile;
+
+		offset = dpaa2_flow_extract_key_offset(key_profile,
+				prev_proto->prot, field);
+		if (offset < 0) {
+			DPAA2_PMD_ERR("%s TC[%d] key extract failed",
+				__func__, group);
+			return -EINVAL;
+		}
+		key_addr = flow->fs_key_addr + offset;
+		mask_addr = flow->fs_mask_addr + offset;
+
+		if (prev_proto->prot == NET_PROT_ETH) {
+			eth_type = prev_proto->eth_type;
+			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
+			eth_type = 0xffff;
+			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
+			flow->fs_rule_size += sizeof(rte_be16_t);
+		} else if (prev_proto->prot == NET_PROT_IP) {
+			ip_proto = prev_proto->ip_proto;
+			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
+			ip_proto = 0xff;
+			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
+			flow->fs_rule_size += sizeof(uint8_t);
+		} else {
+			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
+				prev_proto->prot);
+			return -EINVAL;
+		}
 	}
 
 	return 0;
 }
 
 static inline int
-dpaa2_flow_rule_data_set(
-	struct dpaa2_key_extract *key_extract,
-	struct dpni_rule_cfg *rule,
-	enum net_prot prot, uint32_t field,
-	const void *key, const void *mask, int size)
+dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t field, int size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int offset = dpaa2_flow_extract_key_offset(key_extract,
-				prot, field);
+	int offset;
 
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s only for none IP address extract",
+			__func__);
+		return -EINVAL;
+	}
+
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			prot, field);
 	if (offset < 0) {
-		DPAA2_PMD_ERR("prot %d, field %d extract failed",
+		DPAA2_PMD_ERR("P(%d)/F(%d) does not exist!",
 			prot, field);
-		return -1;
+		return -EINVAL;
 	}
 
-	memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
-	memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, size);
+		memcpy((flow->qos_mask_addr + offset), mask, size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size = offset + size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, size);
+		memcpy((flow->fs_mask_addr + offset), mask, size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size = offset + size;
+	}
 
 	return 0;
 }
@@ -728,145 +1007,13 @@ dpaa2_flow_rule_data_set_raw(struct dpni_rule_cfg *rule,
 	return 0;
 }
 
-static inline int
-_dpaa2_flow_rule_move_ipaddr_tail(
-	struct dpaa2_key_extract *key_extract,
-	struct dpni_rule_cfg *rule, int src_offset,
-	uint32_t field, bool ipv4)
-{
-	size_t key_src;
-	size_t mask_src;
-	size_t key_dst;
-	size_t mask_dst;
-	int dst_offset, len;
-	enum net_prot prot;
-	char tmp[NH_FLD_IPV6_ADDR_SIZE];
-
-	if (field != NH_FLD_IP_SRC &&
-		field != NH_FLD_IP_DST) {
-		DPAA2_PMD_ERR("Field of IP addr reorder must be IP SRC/DST");
-		return -1;
-	}
-	if (ipv4)
-		prot = NET_PROT_IPV4;
-	else
-		prot = NET_PROT_IPV6;
-	dst_offset = dpaa2_flow_extract_key_offset(key_extract,
-				prot, field);
-	if (dst_offset < 0) {
-		DPAA2_PMD_ERR("Field %d reorder extract failed", field);
-		return -1;
-	}
-	key_src = rule->key_iova + src_offset;
-	mask_src = rule->mask_iova + src_offset;
-	key_dst = rule->key_iova + dst_offset;
-	mask_dst = rule->mask_iova + dst_offset;
-	if (ipv4)
-		len = sizeof(rte_be32_t);
-	else
-		len = NH_FLD_IPV6_ADDR_SIZE;
-
-	memcpy(tmp, (char *)key_src, len);
-	memset((char *)key_src, 0, len);
-	memcpy((char *)key_dst, tmp, len);
-
-	memcpy(tmp, (char *)mask_src, len);
-	memset((char *)mask_src, 0, len);
-	memcpy((char *)mask_dst, tmp, len);
-
-	return 0;
-}
-
-static inline int
-dpaa2_flow_rule_move_ipaddr_tail(
-	struct rte_flow *flow, struct dpaa2_dev_priv *priv,
-	int fs_group)
+static int
+dpaa2_flow_extract_support(const uint8_t *mask_src,
+	enum rte_flow_item_type type)
 {
-	int ret;
-	enum net_prot prot;
-
-	if (flow->ipaddr_rule.ipaddr_type == FLOW_NONE_IPADDR)
-		return 0;
-
-	if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR)
-		prot = NET_PROT_IPV4;
-	else
-		prot = NET_PROT_IPV6;
-
-	if (flow->ipaddr_rule.qos_ipsrc_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				flow->ipaddr_rule.qos_ipsrc_offset,
-				NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS src address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_SRC);
-	}
-
-	if (flow->ipaddr_rule.qos_ipdst_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				flow->ipaddr_rule.qos_ipdst_offset,
-				NH_FLD_IP_DST, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS dst address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_DST);
-	}
-
-	if (flow->ipaddr_rule.fs_ipsrc_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.tc_key_extract[fs_group],
-				&flow->fs_rule,
-				flow->ipaddr_rule.fs_ipsrc_offset,
-				NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("FS src address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.fs_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[fs_group],
-				prot, NH_FLD_IP_SRC);
-	}
-	if (flow->ipaddr_rule.fs_ipdst_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.tc_key_extract[fs_group],
-				&flow->fs_rule,
-				flow->ipaddr_rule.fs_ipdst_offset,
-				NH_FLD_IP_DST, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("FS dst address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.fs_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[fs_group],
-				prot, NH_FLD_IP_DST);
-	}
-
-	return 0;
-}
-
-static int
-dpaa2_flow_extract_support(
-	const uint8_t *mask_src,
-	enum rte_flow_item_type type)
-{
-	char mask[64];
-	int i, size = 0;
-	const char *mask_support = 0;
+	char mask[64];
+	int i, size = 0;
+	const char *mask_support = 0;
 
 	switch (type) {
 	case RTE_FLOW_ITEM_TYPE_ETH:
@@ -906,7 +1053,7 @@ dpaa2_flow_extract_support(
 		size = sizeof(struct rte_flow_item_gre);
 		break;
 	default:
-		return -1;
+		return -EINVAL;
 	}
 
 	memcpy(mask, mask_support, size);
@@ -921,491 +1068,444 @@ dpaa2_flow_extract_support(
 }
 
 static int
-dpaa2_configure_flow_eth(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct prev_proto_field_id *prev_prot,
+	enum dpaa2_flow_dist_type dist_type,
+	int group, int *recfg)
 {
-	int index, ret;
-	int local_cfg = 0;
-	uint32_t group;
-	const struct rte_flow_item_eth *spec, *mask;
-
-	/* TODO: Currently upper bound of range parameter is not implemented */
-	const struct rte_flow_item_eth *last __rte_unused;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
-
-	group = attr->group;
-
-	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_eth *)pattern->spec;
-	last    = (const struct rte_flow_item_eth *)pattern->last;
-	mask    = (const struct rte_flow_item_eth *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_eth_mask);
-	if (!spec) {
-		/* Don't care any field of eth header,
-		 * only care eth protocol.
-		 */
-		DPAA2_PMD_WARN("No pattern spec for Eth flow, just skip");
-		return 0;
-	}
-
-	/* Get traffic class index and flow id to be configured */
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
-
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
-		DPAA2_PMD_WARN("Extract field(s) of ethernet not support.");
-
-		return -1;
-	}
-
-	if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_SA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_SA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_SA failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_SA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_SA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_SA failed.");
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	int ret, index, local_cfg = 0, size = 0;
+	struct dpaa2_key_extract *extract;
+	struct dpaa2_key_profile *key_profile;
+	enum net_prot prot = prev_prot->prot;
+	uint32_t key_field = 0;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH_SA rule set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_SA,
-				&spec->hdr.src_addr.addr_bytes,
-				&mask->hdr.src_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_SA,
-				&spec->hdr.src_addr.addr_bytes,
-				&mask->hdr.src_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
-			return -1;
-		}
+	if (prot == NET_PROT_ETH) {
+		key_field = NH_FLD_ETH_TYPE;
+		size = sizeof(rte_be16_t);
+	} else if (prot == NET_PROT_IP) {
+		key_field = NH_FLD_IP_PROTO;
+		size = sizeof(uint8_t);
+	} else if (prot == NET_PROT_IPV4) {
+		prot = NET_PROT_IP;
+		key_field = NH_FLD_IP_PROTO;
+		size = sizeof(uint8_t);
+	} else if (prot == NET_PROT_IPV6) {
+		prot = NET_PROT_IP;
+		key_field = NH_FLD_IP_PROTO;
+		size = sizeof(uint8_t);
+	} else {
+		DPAA2_PMD_ERR("Invalid Prev prot(%d)", prot);
+		return -EINVAL;
 	}
 
-	if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_DA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_DA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_DA failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		extract = &priv->extract.qos_key_extract;
+		key_profile = &extract->key_profile;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_DA);
+		index = dpaa2_flow_extract_search(key_profile,
+				prot, key_field);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_DA,
-					RTE_ETHER_ADDR_LEN);
+			ret = dpaa2_flow_extract_add_hdr(prot,
+					key_field, size, priv,
+					DPAA2_FLOW_QOS_TYPE, group,
+					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_DA failed.");
+				DPAA2_PMD_ERR("QOS prev extract add failed");
 
-				return -1;
+				return -EINVAL;
 			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH DA rule set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_DA,
-				&spec->hdr.dst_addr.addr_bytes,
-				&mask->hdr.dst_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
-			return -1;
+			local_cfg |= DPAA2_FLOW_QOS_TYPE;
 		}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_DA,
-				&spec->hdr.dst_addr.addr_bytes,
-				&mask->hdr.dst_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
+		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
-			return -1;
+			DPAA2_PMD_ERR("QoS prev rule set failed");
+			return -EINVAL;
 		}
 	}
 
-	if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_TYPE,
-					RTE_ETHER_TYPE_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_TYPE failed.");
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		extract = &priv->extract.tc_key_extract[group];
+		key_profile = &extract->key_profile;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
+		index = dpaa2_flow_extract_search(key_profile,
+				prot, key_field);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_TYPE,
-					RTE_ETHER_TYPE_LEN);
+			ret = dpaa2_flow_extract_add_hdr(prot,
+					key_field, size, priv,
+					DPAA2_FLOW_FS_TYPE, group,
+					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_TYPE failed.");
+				DPAA2_PMD_ERR("FS[%d] prev extract add failed",
+					group);
 
-				return -1;
+				return -EINVAL;
 			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH TYPE rule set failed");
-				return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				&spec->hdr.ether_type,
-				&mask->hdr.ether_type,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
-			return -1;
+			local_cfg |= DPAA2_FLOW_FS_TYPE;
 		}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				&spec->hdr.ether_type,
-				&mask->hdr.ether_type,
-				sizeof(rte_be16_t));
+		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
-			return -1;
+			DPAA2_PMD_ERR("FS[%d] prev rule set failed",
+				group);
+			return -EINVAL;
 		}
 	}
 
-	(*device_configured) |= local_cfg;
+	if (recfg)
+		*recfg = local_cfg;
 
 	return 0;
 }
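
The discrimination above works by pinning the parent header's type field
with an all-ones mask: EtherType for VLAN/IP items, the IP protocol byte for
L4 items. A minimal sketch of building such a key/mask pair; the flat
two-byte buffer layout is an assumption for illustration:

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint8_t key[2], mask[2];
        uint16_t eth_type = htons(0x8100); /* match VLAN-tagged frames */
        uint16_t all_ones = 0xffff;

        /* Full-field match: only the exact EtherType passes */
        memcpy(key, &eth_type, sizeof(eth_type));
        memcpy(mask, &all_ones, sizeof(all_ones));

        printf("key=%02x%02x mask=%02x%02x\n",
               key[0], key[1], mask[0], mask[1]);
        return 0;
    }
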
 
 static int
-dpaa2_configure_flow_vlan(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
+	enum net_prot prot, uint32_t field,
+	const void *key, const void *mask, int size,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int index, ret;
-	int local_cfg = 0;
-	uint32_t group;
-	const struct rte_flow_item_vlan *spec, *mask;
-
-	const struct rte_flow_item_vlan *last __rte_unused;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-
-	group = attr->group;
+	int index, ret, local_cfg = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
 
-	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_vlan *)pattern->spec;
-	last    = (const struct rte_flow_item_vlan *)pattern->last;
-	mask    = (const struct rte_flow_item_vlan *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_vlan_mask);
+	if (dpaa2_flow_ip_address_extract(prot, field))
+		return -EINVAL;
 
-	/* Get traffic class index and flow id to be configured */
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
 
-	if (!spec) {
-		/* Don't care any field of vlan header,
-		 * only care vlan protocol.
-		 */
-		/* Eth type is actually used for vLan classification.
-		 */
-		struct proto_discrimination proto;
+	key_profile = &key_extract->key_profile;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-						&priv->extract.qos_key_extract,
-						RTE_FLOW_ITEM_TYPE_ETH);
-			if (ret) {
-				DPAA2_PMD_ERR(
-				"QoS Ext ETH_TYPE to discriminate vLan failed");
+	index = dpaa2_flow_extract_search(key_profile,
+			prot, field);
+	if (index < 0) {
+		ret = dpaa2_flow_extract_add_hdr(prot,
+				field, size, priv,
+				dist_type, tc_id, NULL);
+		if (ret) {
+			DPAA2_PMD_ERR("QoS Extract P(%d)/F(%d) failed",
+				prot, field);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+			return ret;
 		}
+		local_cfg |= dist_type;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					RTE_FLOW_ITEM_TYPE_ETH);
-			if (ret) {
-				DPAA2_PMD_ERR(
-				"FS Ext ETH_TYPE to discriminate vLan failed.");
+	ret = dpaa2_flow_hdr_rule_data_set(flow, key_profile,
+			prot, field, size, key, mask, dist_type);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS P(%d)/F(%d) rule data set failed",
+			prot, field);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		return ret;
+	}
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"Move ipaddr before vLan discrimination set failed");
-			return -1;
-		}
+	if (recfg)
+		*recfg |= local_cfg;
 
-		proto.type = RTE_FLOW_ITEM_TYPE_ETH;
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("vLan discrimination rule set failed");
-			return -1;
-		}
+	return 0;
+}
 
-		(*device_configured) |= local_cfg;
+static int
+dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
+	enum net_prot prot, uint32_t field,
+	const void *key, const void *mask, int size,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int local_cfg = 0, num, ipaddr_extract_len = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
+	struct dpkg_profile_cfg *dpkg;
+	uint8_t *key_addr, *mask_addr;
+	union ip_addr_extract_rule *ip_addr_data;
+	union ip_addr_extract_rule *ip_addr_mask;
+	enum net_prot orig_prot;
+	uint32_t orig_field;
+
+	if (prot != NET_PROT_IPV4 && prot != NET_PROT_IPV6)
+		return -EINVAL;
 
-		return 0;
+	if (prot == NET_PROT_IPV4 && field != NH_FLD_IPV4_SRC_IP &&
+		field != NH_FLD_IPV4_DST_IP) {
+		return -EINVAL;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VLAN)) {
-		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
-
-		return -1;
+	if (prot == NET_PROT_IPV6 && field != NH_FLD_IPV6_SRC_IP &&
+		field != NH_FLD_IPV6_DST_IP) {
+		return -EINVAL;
 	}
 
-	if (!mask->hdr.vlan_tci)
-		return 0;
-
-	index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_VLAN, NH_FLD_VLAN_TCI);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-						&priv->extract.qos_key_extract,
-						NET_PROT_VLAN,
-						NH_FLD_VLAN_TCI,
-						sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract add VLAN_TCI failed.");
+	orig_prot = prot;
+	orig_field = field;
 
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+	if (prot == NET_PROT_IPV4 &&
+		field == NH_FLD_IPV4_SRC_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_SRC;
+	} else if (prot == NET_PROT_IPV4 &&
+		field == NH_FLD_IPV4_DST_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_DST;
+	} else if (prot == NET_PROT_IPV6 &&
+		field == NH_FLD_IPV6_SRC_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_SRC;
+	} else if (prot == NET_PROT_IPV6 &&
+		field == NH_FLD_IPV6_DST_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_DST;
+	} else {
+		DPAA2_PMD_ERR("Inval P(%d)/F(%d) to extract ip address",
+			prot, field);
+		return -EINVAL;
 	}
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_VLAN, NH_FLD_VLAN_TCI);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.tc_key_extract[group],
-				NET_PROT_VLAN,
-				NH_FLD_VLAN_TCI,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract add VLAN_TCI failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		key_extract = &priv->extract.qos_key_extract;
+		key_profile = &key_extract->key_profile;
+		dpkg = &key_extract->dpkg;
+		num = key_profile->num;
+		key_addr = flow->qos_key_addr;
+		mask_addr = flow->qos_mask_addr;
+	} else {
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+		key_profile = &key_extract->key_profile;
+		dpkg = &key_extract->dpkg;
+		num = key_profile->num;
+		key_addr = flow->fs_key_addr;
+		mask_addr = flow->fs_mask_addr;
 	}
 
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before VLAN TCI rule set failed");
-		return -1;
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_VLAN,
-				NH_FLD_VLAN_TCI,
-				&spec->hdr.vlan_tci,
-				&mask->hdr.vlan_tci,
-				sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
-		return -1;
+	if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT) {
+		if (field == NH_FLD_IP_SRC)
+			key_profile->ip_addr_type = IP_SRC_EXTRACT;
+		else
+			key_profile->ip_addr_type = IP_DST_EXTRACT;
+		ipaddr_extract_len = size;
+
+		key_profile->ip_addr_extract_pos = num;
+		if (num > 0) {
+			key_profile->ip_addr_extract_off =
+				key_profile->key_offset[num - 1] +
+				key_profile->key_size[num - 1];
+		} else {
+			key_profile->ip_addr_extract_off = 0;
+		}
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	} else if (key_profile->ip_addr_type == IP_SRC_EXTRACT) {
+		if (field == NH_FLD_IP_SRC) {
+			ipaddr_extract_len = size;
+			goto rule_configure;
+		}
+		key_profile->ip_addr_type = IP_SRC_DST_EXTRACT;
+		ipaddr_extract_len = size * 2;
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	} else if (key_profile->ip_addr_type == IP_DST_EXTRACT) {
+		if (field == NH_FLD_IP_DST) {
+			ipaddr_extract_len = size;
+			goto rule_configure;
+		}
+		key_profile->ip_addr_type = IP_DST_SRC_EXTRACT;
+		ipaddr_extract_len = size * 2;
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	}
+	key_profile->num++;
+
+	dpkg->extracts[num].extract.from_hdr.prot = prot;
+	dpkg->extracts[num].extract.from_hdr.field = field;
+	dpkg->extracts[num].extract.from_hdr.type = DPKG_FULL_FIELD;
+	dpkg->num_extracts++;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		local_cfg = DPAA2_FLOW_QOS_TYPE;
+	else
+		local_cfg = DPAA2_FLOW_FS_TYPE;
+
+rule_configure:
+	key_addr += key_profile->ip_addr_extract_off;
+	ip_addr_data = (union ip_addr_extract_rule *)key_addr;
+	mask_addr += key_profile->ip_addr_extract_off;
+	ip_addr_mask = (union ip_addr_extract_rule *)mask_addr;
+
+	if (orig_prot == NET_PROT_IPV4 &&
+		orig_field == NH_FLD_IPV4_SRC_IP) {
+		if (key_profile->ip_addr_type == IP_SRC_EXTRACT ||
+			key_profile->ip_addr_type == IP_SRC_DST_EXTRACT) {
+			memcpy(&ip_addr_data->ipv4_sd_addr.ipv4_src,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_sd_addr.ipv4_src,
+				mask, size);
+		} else {
+			memcpy(&ip_addr_data->ipv4_ds_addr.ipv4_src,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_ds_addr.ipv4_src,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV4 &&
+		orig_field == NH_FLD_IPV4_DST_IP) {
+		if (key_profile->ip_addr_type == IP_DST_EXTRACT ||
+			key_profile->ip_addr_type == IP_DST_SRC_EXTRACT) {
+			memcpy(&ip_addr_data->ipv4_ds_addr.ipv4_dst,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_ds_addr.ipv4_dst,
+				mask, size);
+		} else {
+			memcpy(&ip_addr_data->ipv4_sd_addr.ipv4_dst,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_sd_addr.ipv4_dst,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV6 &&
+		orig_field == NH_FLD_IPV6_SRC_IP) {
+		if (key_profile->ip_addr_type == IP_SRC_EXTRACT ||
+			key_profile->ip_addr_type == IP_SRC_DST_EXTRACT) {
+			memcpy(ip_addr_data->ipv6_sd_addr.ipv6_src,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_sd_addr.ipv6_src,
+				mask, size);
+		} else {
+			memcpy(ip_addr_data->ipv6_ds_addr.ipv6_src,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_ds_addr.ipv6_src,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV6 &&
+		orig_field == NH_FLD_IPV6_DST_IP) {
+		if (key_profile->ip_addr_type == IP_DST_EXTRACT ||
+			key_profile->ip_addr_type == IP_DST_SRC_EXTRACT) {
+			memcpy(ip_addr_data->ipv6_ds_addr.ipv6_dst,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_ds_addr.ipv6_dst,
+				mask, size);
+		} else {
+			memcpy(ip_addr_data->ipv6_sd_addr.ipv6_dst,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_sd_addr.ipv6_dst,
+				mask, size);
+		}
 	}
 
-	ret = dpaa2_flow_rule_data_set(
-			&priv->extract.tc_key_extract[group],
-			&flow->fs_rule,
-			NET_PROT_VLAN,
-			NH_FLD_VLAN_TCI,
-			&spec->hdr.vlan_tci,
-			&mask->hdr.vlan_tci,
-			sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
-		return -1;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		flow->qos_rule_size =
+			key_profile->ip_addr_extract_off + ipaddr_extract_len;
+	} else {
+		flow->fs_rule_size =
+			key_profile->ip_addr_extract_off + ipaddr_extract_len;
 	}
 
-	(*device_configured) |= local_cfg;
+	if (recfg)
+		*recfg |= local_cfg;
 
 	return 0;
 }
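
IP addresses land in a fixed union at the tail of the key, so the effective
rule size is the tail offset plus however much of the union is actually
populated. A sketch of that sizing; the union below mirrors
ip_addr_extract_rule in spirit only:

    #include <stdint.h>
    #include <stdio.h>

    union ip_tail {
        struct {
            uint32_t ipv4_src;
            uint32_t ipv4_dst;
        } v4_sd;
        struct {
            uint8_t ipv6_src[16];
            uint8_t ipv6_dst[16];
        } v6_sd;
    };

    int main(void)
    {
        int ip_addr_extract_off = 3; /* non-IP key bytes before the tail */
        int ipaddr_extract_len = 8;  /* IPv4 src + dst actually filled */
        int rule_size = ip_addr_extract_off + ipaddr_extract_len;

        printf("tail union: %zu bytes, effective rule size: %d\n",
               sizeof(union ip_tail), rule_size);
        return 0;
    }
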
 
 static int
-dpaa2_configure_flow_ip_discrimation(
-	struct dpaa2_dev_priv *priv, struct rte_flow *flow,
+dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
-	int *local_cfg,	int *device_configured,
-	uint32_t group)
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	struct proto_discrimination proto;
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_eth *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.qos_key_extract.dpkg,
-			NET_PROT_ETH, NH_FLD_ETH_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.qos_key_extract,
-				RTE_FLOW_ITEM_TYPE_ETH);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"QoS Extract ETH_TYPE to discriminate IP failed.");
-			return -1;
-		}
-		(*local_cfg) |= DPAA2_QOS_TABLE_RECONFIGURE;
-	}
+	group = attr->group;
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_ETH, NH_FLD_ETH_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				RTE_FLOW_ITEM_TYPE_ETH);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"FS Extract ETH_TYPE to discriminate IP failed.");
-			return -1;
-		}
-		(*local_cfg) |= DPAA2_FS_TABLE_RECONFIGURE;
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+			pattern->mask : &dpaa2_flow_item_eth_mask;
+	if (!spec) {
+		DPAA2_PMD_WARN("No pattern spec for Eth flow");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before IP discrimination set failed");
-		return -1;
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH)) {
+		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
+
+		return -EINVAL;
 	}
 
-	proto.type = RTE_FLOW_ITEM_TYPE_ETH;
-	if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4)
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
-	else
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	ret = dpaa2_flow_proto_discrimination_rule(priv, flow, proto, group);
-	if (ret) {
-		DPAA2_PMD_ERR("IP discrimination rule set failed");
-		return -1;
+	if (memcmp((const char *)&mask->src,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_SA, &spec->src.addr_bytes,
+			&mask->src.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_SA, &spec->src.addr_bytes,
+			&mask->src.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->dst,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_DA, &spec->dst.addr_bytes,
+			&mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_DA, &spec->dst.addr_bytes,
+			&mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->type,
+		zero_cmp, sizeof(rte_be16_t))) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_TYPE, &spec->type,
+			&mask->type, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_TYPE, &spec->type,
+			&mask->type, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	(*device_configured) |= (*local_cfg);
+	(*device_configured) |= local_cfg;
 
 	return 0;
 }
 
-
 static int
-dpaa2_configure_flow_generic_ip(
-	struct rte_flow *flow,
+dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
@@ -1413,419 +1513,338 @@ dpaa2_configure_flow_generic_ip(
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
-	const struct rte_flow_item_ipv4 *spec_ipv4 = 0,
-		*mask_ipv4 = 0;
-	const struct rte_flow_item_ipv6 *spec_ipv6 = 0,
-		*mask_ipv6 = 0;
-	const void *key, *mask;
-	enum net_prot prot;
-
+	const struct rte_flow_item_vlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
-	int size;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
-		spec_ipv4 = (const struct rte_flow_item_ipv4 *)pattern->spec;
-		mask_ipv4 = (const struct rte_flow_item_ipv4 *)
-			(pattern->mask ? pattern->mask :
-					&dpaa2_flow_item_ipv4_mask);
-	} else {
-		spec_ipv6 = (const struct rte_flow_item_ipv6 *)pattern->spec;
-		mask_ipv6 = (const struct rte_flow_item_ipv6 *)
-			(pattern->mask ? pattern->mask :
-					&dpaa2_flow_item_ipv6_mask);
-	}
+	spec = pattern->spec;
+	mask = pattern->mask ? pattern->mask : &dpaa2_flow_item_vlan_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	ret = dpaa2_configure_flow_ip_discrimation(priv,
-			flow, pattern, &local_cfg,
-			device_configured, group);
-	if (ret) {
-		DPAA2_PMD_ERR("IP discrimination failed!");
-		return -1;
+	if (!spec) {
+		struct prev_proto_field_id prev_proto;
+
+		prev_proto.prot = NET_PROT_ETH;
+		prev_proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_proto,
+				DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+				       RTE_FLOW_ITEM_TYPE_VLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
+		return -EINVAL;
 	}
 
-	if (!spec_ipv4 && !spec_ipv6)
+	if (!mask->tci)
 		return 0;
 
-	if (mask_ipv4) {
-		if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
-			RTE_FLOW_ITEM_TYPE_IPV4)) {
-			DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
+					      NH_FLD_VLAN_TCI, &spec->tci,
+					      &mask->tci, sizeof(rte_be16_t),
+					      priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
 
-			return -1;
-		}
-	}
-
-	if (mask_ipv6) {
-		if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
-			RTE_FLOW_ITEM_TYPE_IPV6)) {
-			DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
-
-			return -1;
-		}
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
+					      NH_FLD_VLAN_TCI, &spec->tci,
+					      &mask->tci, sizeof(rte_be16_t),
+					      priv, group, &local_cfg,
+					      DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
 
-	if (mask_ipv4 && (mask_ipv4->hdr.src_addr ||
-		mask_ipv4->hdr.dst_addr)) {
-		flow->ipaddr_rule.ipaddr_type = FLOW_IPV4_ADDR;
-	} else if (mask_ipv6 &&
-		(memcmp((const char *)mask_ipv6->hdr.src_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE) ||
-		memcmp((const char *)mask_ipv6->hdr.dst_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		flow->ipaddr_rule.ipaddr_type = FLOW_IPV6_ADDR;
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.src_addr) ||
-		(mask_ipv6 &&
-			memcmp((const char *)mask_ipv6->hdr.src_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_IP,
-					NH_FLD_IP_SRC,
-					0);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_SRC failed.");
+	(*device_configured) |= local_cfg;
+	return 0;
+}
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+static int
+dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
+			  const struct rte_flow_attr *attr,
+			  const struct rte_flow_item *pattern,
+			  const struct rte_flow_action actions[] __rte_unused,
+			  struct rte_flow_error *error __rte_unused,
+			  int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ipv4 *spec_ipv4 = 0, *mask_ipv4 = 0;
+	const void *key, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	int size;
+	struct prev_proto_field_id prev_prot;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_SRC,
-					0);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_SRC failed.");
+	group = attr->group;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	/* Parse pattern list to get the matching parameters */
+	spec_ipv4 = pattern->spec;
+	mask_ipv4 = pattern->mask ?
+		    pattern->mask : &dpaa2_flow_item_ipv4_mask;
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.src_addr;
-		else
-			key = &spec_ipv6->hdr.src_addr[0];
-		if (mask_ipv4) {
-			mask = &mask_ipv4->hdr.src_addr;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-			prot = NET_PROT_IPV4;
-		} else {
-			mask = &mask_ipv6->hdr.src_addr[0];
-			size = NH_FLD_IPV6_ADDR_SIZE;
-			prot = NET_PROT_IPV6;
-		}
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				prot, NH_FLD_IP_SRC,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_SRC rule data set failed");
-			return -1;
-		}
+	prev_prot.prot = NET_PROT_ETH;
+	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				prot, NH_FLD_IP_SRC,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_SRC rule data set failed");
-			return -1;
-		}
+	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE, group,
+			&local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("IPv4 identification failed!");
+		return ret;
+	}
 
-		flow->ipaddr_rule.qos_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_SRC);
-		flow->ipaddr_rule.fs_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[group],
-				prot, NH_FLD_IP_SRC);
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.dst_addr) ||
-		(mask_ipv6 &&
-			memcmp((const char *)mask_ipv6->hdr.dst_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_DST);
-		if (index < 0) {
-			if (mask_ipv4)
-				size = NH_FLD_IPV4_ADDR_SIZE;
-			else
-				size = NH_FLD_IPV6_ADDR_SIZE;
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_IP,
-					NH_FLD_IP_DST,
-					size);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
+	if (!spec_ipv4)
+		return 0;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
+				       RTE_FLOW_ITEM_TYPE_IPV4)) {
+		DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
+		return -EINVAL;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_DST);
-		if (index < 0) {
-			if (mask_ipv4)
-				size = NH_FLD_IPV4_ADDR_SIZE;
-			else
-				size = NH_FLD_IPV6_ADDR_SIZE;
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_DST,
-					size);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_DST failed.");
+	if (mask_ipv4->hdr.src_addr) {
+		key = &spec_ipv4->hdr.src_addr;
+		mask = &mask_ipv4->hdr.src_addr;
+		size = sizeof(rte_be32_t);
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv4->hdr.dst_addr) {
+		key = &spec_ipv4->hdr.dst_addr;
+		mask = &mask_ipv4->hdr.dst_addr;
+		size = sizeof(rte_be32_t);
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv4->hdr.next_proto_id) {
+		key = &spec_ipv4->hdr.next_proto_id;
+		mask = &mask_ipv4->hdr.next_proto_id;
+		size = sizeof(uint8_t);
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	(*device_configured) |= local_cfg;
+	return 0;
+}
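
For reference, a minimal caller-side sketch of an IPv4 item that exercises
both the address-extract and the protocol-extract paths above (addresses and
values are hypothetical; needs rte_flow.h, rte_ip.h and rte_byteorder.h):

	struct rte_flow_item_ipv4 ipv4_spec = {
		.hdr = {
			.src_addr = RTE_BE32(RTE_IPV4(192, 168, 0, 1)),
			.next_proto_id = IPPROTO_UDP,
		},
	};
	struct rte_flow_item_ipv4 ipv4_mask = {
		.hdr = {
			.src_addr = RTE_BE32(UINT32_MAX),
			.next_proto_id = UINT8_MAX,
		},
	};
	struct rte_flow_item item = {
		.type = RTE_FLOW_ITEM_TYPE_IPV4,
		.spec = &ipv4_spec,
		.mask = &ipv4_mask,
	};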
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.dst_addr;
-		else
-			key = spec_ipv6->hdr.dst_addr;
-		if (mask_ipv4) {
-			mask = &mask_ipv4->hdr.dst_addr;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-			prot = NET_PROT_IPV4;
-		} else {
-			mask = &mask_ipv6->hdr.dst_addr[0];
-			size = NH_FLD_IPV6_ADDR_SIZE;
-			prot = NET_PROT_IPV6;
-		}
+static int
+dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
+			  const struct rte_flow_attr *attr,
+			  const struct rte_flow_item *pattern,
+			  const struct rte_flow_action actions[] __rte_unused,
+			  struct rte_flow_error *error __rte_unused,
+			  int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ipv6 *spec_ipv6 = 0, *mask_ipv6 = 0;
+	const void *key, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
+	int size;
+	struct prev_proto_field_id prev_prot;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				prot, NH_FLD_IP_DST,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_DST rule data set failed");
-			return -1;
-		}
+	group = attr->group;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				prot, NH_FLD_IP_DST,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_DST rule data set failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_DST);
-		flow->ipaddr_rule.fs_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[group],
-				prot, NH_FLD_IP_DST);
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.next_proto_id) ||
-		(mask_ipv6 && mask_ipv6->hdr.proto)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-				&priv->extract.qos_key_extract,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				NH_FLD_IP_PROTO_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
+	/* Parse pattern list to get the matching parameters */
+	spec_ipv6 = pattern->spec;
+	mask_ipv6 = pattern->mask ? pattern->mask : &dpaa2_flow_item_ipv6_mask;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_PROTO,
-					NH_FLD_IP_PROTO_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_DST failed.");
+	prev_prot.prot = NET_PROT_ETH;
+	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("IPv6 identification failed!");
+		return ret;
+	}
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr after NH_FLD_IP_PROTO rule set failed");
-			return -1;
-		}
+	if (!spec_ipv6)
+		return 0;
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.next_proto_id;
-		else
-			key = &spec_ipv6->hdr.proto;
-		if (mask_ipv4)
-			mask = &mask_ipv4->hdr.next_proto_id;
-		else
-			mask = &mask_ipv6->hdr.proto;
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				key,	mask, NH_FLD_IP_PROTO_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_PROTO rule data set failed");
-			return -1;
-		}
+	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
+				       RTE_FLOW_ITEM_TYPE_IPV6)) {
+		DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
+		return -EINVAL;
+	}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				key,	mask, NH_FLD_IP_PROTO_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_PROTO rule data set failed");
-			return -1;
-		}
+	if (memcmp(mask_ipv6->hdr.src_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
+		key = &spec_ipv6->hdr.src_addr[0];
+		mask = &mask_ipv6->hdr.src_addr[0];
+		size = NH_FLD_IPV6_ADDR_SIZE;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp(mask_ipv6->hdr.dst_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
+		key = &spec_ipv6->hdr.dst_addr[0];
+		mask = &mask_ipv6->hdr.dst_addr[0];
+		size = NH_FLD_IPV6_ADDR_SIZE;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv6->hdr.proto) {
+		key = &spec_ipv6->hdr.proto;
+		mask = &mask_ipv6->hdr.proto;
+		size = sizeof(uint8_t);
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
-
 	return 0;
 }
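
Both tables are always programmed in tandem above. A hypothetical helper
(not part of this patch; signature inferred from the call sites) could fold
each QoS/FS pair into one call:

	static int
	dpaa2_flow_add_ipaddr_both(struct dpaa2_dev_flow *flow,
		enum net_prot prot, uint32_t field, const void *key,
		const void *mask, int size, struct dpaa2_dev_priv *priv,
		int group, int *local_cfg)
	{
		int ret;

		/* QoS table selects the traffic class... */
		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, prot, field,
				key, mask, size, priv, group, local_cfg,
				DPAA2_FLOW_QOS_TYPE);
		if (ret)
			return ret;

		/* ...FS table distributes within that traffic class. */
		return dpaa2_flow_add_ipaddr_extract_rule(flow, prot, field,
				key, mask, size, priv, group, local_cfg,
				DPAA2_FLOW_FS_TYPE);
	}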
 
 static int
-dpaa2_configure_flow_icmp(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_icmp *spec, *mask;
-
-	const struct rte_flow_item_icmp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_icmp *)pattern->spec;
-	last    = (const struct rte_flow_item_icmp *)pattern->last;
-	mask    = (const struct rte_flow_item_icmp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_icmp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_icmp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		/* Don't care any field of ICMP header,
-		 * only care ICMP protocol.
-		 * Example: flow create 0 ingress pattern icmp /
-		 */
 		/* Next proto of generic IP is actually used
 		 * for ICMP identification.
+		 * Example: flow create 0 ingress pattern icmp
 		 */
-		struct proto_discrimination proto;
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate ICMP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate ICMP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before ICMP discrimination set failed");
-			return -1;
-		}
+		struct prev_proto_field_id prev_proto;
 
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_ICMP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("ICMP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_ICMP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
-
 		return 0;
 	}
 
@@ -1833,145 +1852,39 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_ICMP)) {
 		DPAA2_PMD_WARN("Extract field(s) of ICMP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.icmp_type) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_TYPE,
-					NH_FLD_ICMP_TYPE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ICMP_TYPE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_TYPE,
-					NH_FLD_ICMP_TYPE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ICMP_TYPE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ICMP TYPE set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_TYPE, &spec->hdr.icmp_type,
+			&mask->hdr.icmp_type, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_TYPE,
-				&spec->hdr.icmp_type,
-				&mask->hdr.icmp_type,
-				NH_FLD_ICMP_TYPE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ICMP_TYPE rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_TYPE,
-				&spec->hdr.icmp_type,
-				&mask->hdr.icmp_type,
-				NH_FLD_ICMP_TYPE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ICMP_TYPE rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_TYPE, &spec->hdr.icmp_type,
+			&mask->hdr.icmp_type, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.icmp_code) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_CODE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_CODE,
-					NH_FLD_ICMP_CODE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ICMP_CODE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_CODE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_CODE,
-					NH_FLD_ICMP_CODE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ICMP_CODE failed.");
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_CODE, &spec->hdr.icmp_code,
+			&mask->hdr.icmp_code, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr after ICMP CODE set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_CODE,
-				&spec->hdr.icmp_code,
-				&mask->hdr.icmp_code,
-				NH_FLD_ICMP_CODE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ICMP_CODE rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_CODE,
-				&spec->hdr.icmp_code,
-				&mask->hdr.icmp_code,
-				NH_FLD_ICMP_CODE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ICMP_CODE rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_CODE, &spec->hdr.icmp_code,
+			&mask->hdr.icmp_code, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -1980,84 +1893,41 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
 }
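
A matching caller-side ICMP item for the type/code branches above would be
(echo-request shown; values hypothetical, needs rte_flow.h and rte_icmp.h):

	struct rte_flow_item_icmp icmp_spec = {
		.hdr = {
			.icmp_type = RTE_IP_ICMP_ECHO_REQUEST,
			.icmp_code = 0,
		},
	};
	struct rte_flow_item_icmp icmp_mask = {
		.hdr = {
			.icmp_type = UINT8_MAX,
			.icmp_code = UINT8_MAX,
		},
	};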
 
 static int
-dpaa2_configure_flow_udp(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_udp *spec, *mask;
-
-	const struct rte_flow_item_udp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_udp *)pattern->spec;
-	last    = (const struct rte_flow_item_udp *)pattern->last;
-	mask    = (const struct rte_flow_item_udp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_udp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_udp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate UDP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate UDP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before UDP discrimination set failed");
-			return -1;
-		}
+		struct prev_proto_field_id prev_proto;
 
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_UDP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("UDP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_UDP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2069,149 +1939,40 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_UDP)) {
 		DPAA2_PMD_WARN("Extract field(s) of UDP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add UDP_SRC failed.");
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_SRC,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add UDP_SRC failed.");
+	if (mask->hdr.dst_port) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before UDP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_UDP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_UDP_PORT_SRC rule data set failed");
-			return -1;
-		}
-	}
-
-	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_DST,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add UDP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_DST,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add UDP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before UDP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_UDP_PORT_DST rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_UDP_PORT_DST rule data set failed");
-			return -1;
-		}
-	}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
 	(*device_configured) |= local_cfg;
 
@@ -2219,84 +1980,41 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
 }
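
Likewise for UDP: a fully-masked destination port (value hypothetical) takes
the port-extract branches above, while an empty spec falls back to the
IP-protocol identification:

	struct rte_flow_item_udp udp_spec = {
		.hdr = { .dst_port = RTE_BE16(4789) },
	};
	struct rte_flow_item_udp udp_mask = {
		.hdr = { .dst_port = RTE_BE16(0xffff) },
	};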
 
 static int
-dpaa2_configure_flow_tcp(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_tcp *spec, *mask;
-
-	const struct rte_flow_item_tcp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_tcp *)pattern->spec;
-	last    = (const struct rte_flow_item_tcp *)pattern->last;
-	mask    = (const struct rte_flow_item_tcp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_tcp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_tcp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate TCP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate TCP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before TCP discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_TCP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("TCP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_TCP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2308,149 +2026,39 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_TCP)) {
 		DPAA2_PMD_WARN("Extract field(s) of TCP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_SRC,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add TCP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_SRC,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add TCP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before TCP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_TCP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_TCP_PORT_SRC rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_DST,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add TCP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_DST,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add TCP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before TCP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_TCP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_TCP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -2459,85 +2067,41 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_sctp(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_sctp *spec, *mask;
-
-	const struct rte_flow_item_sctp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_sctp *)pattern->spec;
-	last    = (const struct rte_flow_item_sctp *)pattern->last;
-	mask    = (const struct rte_flow_item_sctp *)
-			(pattern->mask ? pattern->mask :
-				&dpaa2_flow_item_sctp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_sctp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate SCTP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate SCTP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_SCTP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("SCTP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_SCTP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2553,145 +2117,35 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_SRC,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add SCTP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_SRC,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add SCTP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_SCTP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_SCTP_PORT_SRC rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_DST,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add SCTP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_DST,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add SCTP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_SCTP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_SCTP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -2700,88 +2154,46 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_gre(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_gre *spec, *mask;
-
-	const struct rte_flow_item_gre *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_gre *)pattern->spec;
-	last    = (const struct rte_flow_item_gre *)pattern->last;
-	mask    = (const struct rte_flow_item_gre *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_gre_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_gre_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate GRE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate GRE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before GRE discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_GRE;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("GRE discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_GRE;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
 		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2794,74 +2206,19 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 	if (!mask->protocol)
 		return 0;
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.qos_key_extract.dpkg,
-			NET_PROT_GRE, NH_FLD_GRE_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.qos_key_extract,
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract add GRE_TYPE failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-	}
-
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_GRE, NH_FLD_GRE_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.tc_key_extract[group],
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract add GRE_TYPE failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-	}
-
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before GRE_TYPE set failed");
-		return -1;
-	}
-
-	ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				&spec->protocol,
-				&mask->protocol,
-				sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"QoS NH_FLD_GRE_TYPE rule data set failed");
-		return -1;
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GRE,
+			NH_FLD_GRE_TYPE, &spec->protocol,
+			&mask->protocol, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_rule_data_set(
-			&priv->extract.tc_key_extract[group],
-			&flow->fs_rule,
-			NET_PROT_GRE,
-			NH_FLD_GRE_TYPE,
-			&spec->protocol,
-			&mask->protocol,
-			sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"FS NH_FLD_GRE_TYPE rule data set failed");
-		return -1;
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GRE,
+			NH_FLD_GRE_TYPE, &spec->protocol,
+			&mask->protocol, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
 
 	(*device_configured) |= local_cfg;
 
@@ -2869,404 +2226,109 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 }
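
Only the 16-bit GRE protocol field is matchable here; a caller-side sketch
(inner protocol value hypothetical):

	struct rte_flow_item_gre gre_spec = {
		.protocol = RTE_BE16(RTE_ETHER_TYPE_IPV4),
	};
	struct rte_flow_item_gre gre_mask = {
		.protocol = RTE_BE16(0xffff),
	};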
 
 static int
-dpaa2_configure_flow_raw(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_raw *spec = pattern->spec;
 	const struct rte_flow_item_raw *mask = pattern->mask;
 	int prev_key_size =
-		priv->extract.qos_key_extract.key_info.key_total_size;
+		priv->extract.qos_key_extract.key_profile.key_max_size;
 	int local_cfg = 0, ret;
 	uint32_t group;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
-		DPAA2_PMD_ERR("spec or mask not present.");
-		return -EINVAL;
-	}
-	/* Only supports non-relative with offset 0 */
-	if (spec->relative || spec->offset != 0 ||
-	    spec->search || spec->limit) {
-		DPAA2_PMD_ERR("relative and non zero offset not supported.");
-		return -EINVAL;
-	}
-	/* Spec len and mask len should be same */
-	if (spec->length != mask->length) {
-		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
-		return -EINVAL;
-	}
-
-	/* Get traffic class index and flow id to be configured */
-	group = attr->group;
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
-
-	if (prev_key_size <= spec->length) {
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
-						 spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-
-		ret = dpaa2_flow_extract_add_raw(
-					&priv->extract.tc_key_extract[group],
-					spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-	}
-
-	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
-					   mask->pattern, spec->length);
-	if (ret) {
-		DPAA2_PMD_ERR("QoS RAW rule data set failed");
-		return -1;
-	}
-
-	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
-					   mask->pattern, spec->length);
-	if (ret) {
-		DPAA2_PMD_ERR("FS RAW rule data set failed");
-		return -1;
-	}
-
-	(*device_configured) |= local_cfg;
-
-	return 0;
-}
-
-static inline int
-dpaa2_fs_action_supported(enum rte_flow_action_type action)
-{
-	int i;
-
-	for (i = 0; i < (int)(sizeof(dpaa2_supported_fs_action_type) /
-					sizeof(enum rte_flow_action_type)); i++) {
-		if (action == dpaa2_supported_fs_action_type[i])
-			return 1;
-	}
-
-	return 0;
-}
-/* The existing QoS/FS entry with IP address(es)
- * needs update after
- * new extract(s) are inserted before IP
- * address(es) extract(s).
- */
-static int
-dpaa2_flow_entry_update(
-	struct dpaa2_dev_priv *priv, uint8_t tc_id)
-{
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
-	int ret;
-	int qos_ipsrc_offset = -1, qos_ipdst_offset = -1;
-	int fs_ipsrc_offset = -1, fs_ipdst_offset = -1;
-	struct dpaa2_key_extract *qos_key_extract =
-		&priv->extract.qos_key_extract;
-	struct dpaa2_key_extract *tc_key_extract =
-		&priv->extract.tc_key_extract[tc_id];
-	char ipsrc_key[NH_FLD_IPV6_ADDR_SIZE];
-	char ipdst_key[NH_FLD_IPV6_ADDR_SIZE];
-	char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE];
-	char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE];
-	int extend = -1, extend1, size = -1;
-	uint16_t qos_index;
-
-	while (curr) {
-		if (curr->ipaddr_rule.ipaddr_type ==
-			FLOW_NONE_IPADDR) {
-			curr = LIST_NEXT(curr, next);
-			continue;
-		}
-
-		if (curr->ipaddr_rule.ipaddr_type ==
-			FLOW_IPV4_ADDR) {
-			qos_ipsrc_offset =
-				qos_key_extract->key_info.ipv4_src_offset;
-			qos_ipdst_offset =
-				qos_key_extract->key_info.ipv4_dst_offset;
-			fs_ipsrc_offset =
-				tc_key_extract->key_info.ipv4_src_offset;
-			fs_ipdst_offset =
-				tc_key_extract->key_info.ipv4_dst_offset;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-		} else {
-			qos_ipsrc_offset =
-				qos_key_extract->key_info.ipv6_src_offset;
-			qos_ipdst_offset =
-				qos_key_extract->key_info.ipv6_dst_offset;
-			fs_ipsrc_offset =
-				tc_key_extract->key_info.ipv6_src_offset;
-			fs_ipdst_offset =
-				tc_key_extract->key_info.ipv6_dst_offset;
-			size = NH_FLD_IPV6_ADDR_SIZE;
-		}
-
-		qos_index = curr->tc_id * priv->fs_entries +
-			curr->tc_index;
-
-		dpaa2_flow_qos_entry_log("Before update", curr, qos_index, stdout);
-
-		if (priv->num_rx_tc > 1) {
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
-					priv->token, &curr->qos_rule);
-			if (ret) {
-				DPAA2_PMD_ERR("Qos entry remove failed.");
-				return -1;
-			}
-		}
-
-		extend = -1;
-
-		if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
-			RTE_ASSERT(qos_ipsrc_offset >=
-				curr->ipaddr_rule.qos_ipsrc_offset);
-			extend1 = qos_ipsrc_offset -
-				curr->ipaddr_rule.qos_ipsrc_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-
-			memcpy(ipsrc_key,
-				(char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				0, size);
-
-			memcpy(ipsrc_mask,
-				(char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				0, size);
-
-			curr->ipaddr_rule.qos_ipsrc_offset = qos_ipsrc_offset;
-		}
-
-		if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
-			RTE_ASSERT(qos_ipdst_offset >=
-				curr->ipaddr_rule.qos_ipdst_offset);
-			extend1 = qos_ipdst_offset -
-				curr->ipaddr_rule.qos_ipdst_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-
-			memcpy(ipdst_key,
-				(char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				0, size);
-
-			memcpy(ipdst_mask,
-				(char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				0, size);
-
-			curr->ipaddr_rule.qos_ipdst_offset = qos_ipdst_offset;
-		}
-
-		if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-			memcpy((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				ipsrc_key,
-				size);
-			memcpy((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				ipsrc_mask,
-				size);
-		}
-		if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-			memcpy((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				ipdst_key,
-				size);
-			memcpy((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				ipdst_mask,
-				size);
-		}
-
-		if (extend >= 0)
-			curr->qos_real_key_size += extend;
-
-		curr->qos_rule.key_size = FIXED_ENTRY_SIZE;
-
-		dpaa2_flow_qos_entry_log("Start update", curr, qos_index, stdout);
-
-		if (priv->num_rx_tc > 1) {
-			ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
-					priv->token, &curr->qos_rule,
-					curr->tc_id, qos_index,
-					0, 0);
-			if (ret) {
-				DPAA2_PMD_ERR("Qos entry update failed.");
-				return -1;
-			}
-		}
-
-		if (!dpaa2_fs_action_supported(curr->action)) {
-			curr = LIST_NEXT(curr, next);
-			continue;
-		}
+		DPAA2_PMD_ERR("spec or mask not present.");
+		return -EINVAL;
+	}
+	/* Only supports non-relative with offset 0 */
+	if (spec->relative || spec->offset != 0 ||
+	    spec->search || spec->limit) {
+		DPAA2_PMD_ERR("relative and non zero offset not supported.");
+		return -EINVAL;
+	}
+	/* Spec len and mask len should be same */
+	if (spec->length != mask->length) {
+		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
+		return -EINVAL;
+	}
 
-		dpaa2_flow_fs_entry_log("Before update", curr, stdout);
-		extend = -1;
+	/* Get traffic class index and flow id to be configured */
+	group = attr->group;
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW,
-				priv->token, curr->tc_id, &curr->fs_rule);
+	if (prev_key_size <= spec->length) {
+		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
+						 spec->length);
 		if (ret) {
-			DPAA2_PMD_ERR("FS entry remove failed.");
+			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
 			return -1;
 		}
+		local_cfg |= DPAA2_FLOW_QOS_TYPE;
 
-		if (curr->ipaddr_rule.fs_ipsrc_offset >= 0 &&
-			tc_id == curr->tc_id) {
-			RTE_ASSERT(fs_ipsrc_offset >=
-				curr->ipaddr_rule.fs_ipsrc_offset);
-			extend1 = fs_ipsrc_offset -
-				curr->ipaddr_rule.fs_ipsrc_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			memcpy(ipsrc_key,
-				(char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				0, size);
-
-			memcpy(ipsrc_mask,
-				(char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				0, size);
-
-			curr->ipaddr_rule.fs_ipsrc_offset = fs_ipsrc_offset;
+		ret = dpaa2_flow_extract_add_raw(&priv->extract.tc_key_extract[group],
+					spec->length);
+		if (ret) {
+			DPAA2_PMD_ERR("FS Extract RAW add failed.");
+			return -1;
 		}
+		local_cfg |= DPAA2_FLOW_FS_TYPE;
+	}
 
-		if (curr->ipaddr_rule.fs_ipdst_offset >= 0 &&
-			tc_id == curr->tc_id) {
-			RTE_ASSERT(fs_ipdst_offset >=
-				curr->ipaddr_rule.fs_ipdst_offset);
-			extend1 = fs_ipdst_offset -
-				curr->ipaddr_rule.fs_ipdst_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			memcpy(ipdst_key,
-				(char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				0, size);
-
-			memcpy(ipdst_mask,
-				(char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				0, size);
-
-			curr->ipaddr_rule.fs_ipdst_offset = fs_ipdst_offset;
-		}
+	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
+					   mask->pattern, spec->length);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS RAW rule data set failed");
+		return -1;
+	}
 
-		if (curr->ipaddr_rule.fs_ipsrc_offset >= 0) {
-			memcpy((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				ipsrc_key,
-				size);
-			memcpy((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				ipsrc_mask,
-				size);
-		}
-		if (curr->ipaddr_rule.fs_ipdst_offset >= 0) {
-			memcpy((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				ipdst_key,
-				size);
-			memcpy((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				ipdst_mask,
-				size);
-		}
+	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
+					   mask->pattern, spec->length);
+	if (ret) {
+		DPAA2_PMD_ERR("FS RAW rule data set failed");
+		return -1;
+	}
 
-		if (extend >= 0)
-			curr->fs_real_key_size += extend;
-		curr->fs_rule.key_size = FIXED_ENTRY_SIZE;
+	(*device_configured) |= local_cfg;
 
-		dpaa2_flow_fs_entry_log("Start update", curr, stdout);
+	return 0;
+}
 
-		ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
-				priv->token, curr->tc_id, curr->tc_index,
-				&curr->fs_rule, &curr->action_cfg);
-		if (ret) {
-			DPAA2_PMD_ERR("FS entry update failed.");
-			return -1;
-		}
+static inline int
+dpaa2_fs_action_supported(enum rte_flow_action_type action)
+{
+	int i;
+	int action_num = sizeof(dpaa2_supported_fs_action_type) /
+		sizeof(enum rte_flow_action_type);
 
-		curr = LIST_NEXT(curr, next);
+	for (i = 0; i < action_num; i++) {
+		if (action == dpaa2_supported_fs_action_type[i])
+			return true;
 	}
 
-	return 0;
+	return false;
 }
 
 static inline int
-dpaa2_flow_verify_attr(
-	struct dpaa2_dev_priv *priv,
+dpaa2_flow_verify_attr(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr)
 {
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
 
 	while (curr) {
 		if (curr->tc_id == attr->group &&
 			curr->tc_index == attr->priority) {
-			DPAA2_PMD_ERR(
-				"Flow with group %d and priority %d already exists.",
+			DPAA2_PMD_ERR("Flow(TC[%d].entry[%d] exists",
 				attr->group, attr->priority);
 
-			return -1;
+			return -EINVAL;
 		}
 		curr = LIST_NEXT(curr, next);
 	}
@@ -3279,18 +2341,16 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_action *action)
 {
 	const struct rte_flow_action_port_id *port_id;
+	const struct rte_flow_action_ethdev *ethdev;
 	int idx = -1;
 	struct rte_eth_dev *dest_dev;
 
 	if (action->type == RTE_FLOW_ACTION_TYPE_PORT_ID) {
-		port_id = (const struct rte_flow_action_port_id *)
-					action->conf;
+		port_id = action->conf;
 		if (!port_id->original)
 			idx = port_id->id;
 	} else if (action->type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT) {
-		const struct rte_flow_action_ethdev *ethdev;
-
-		ethdev = (const struct rte_flow_action_ethdev *)action->conf;
+		ethdev = action->conf;
 		idx = ethdev->port_id;
 	} else {
 		return NULL;
@@ -3310,8 +2370,7 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 }
 
 static inline int
-dpaa2_flow_verify_action(
-	struct dpaa2_dev_priv *priv,
+dpaa2_flow_verify_action(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_action actions[])
 {
@@ -3323,15 +2382,14 @@ dpaa2_flow_verify_action(
 	while (!end_of_list) {
 		switch (actions[j].type) {
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
-			dest_queue = (const struct rte_flow_action_queue *)
-					(actions[j].conf);
+			dest_queue = actions[j].conf;
 			rxq = priv->rx_vq[dest_queue->index];
 			if (attr->group != rxq->tc_index) {
-				DPAA2_PMD_ERR(
-					"RXQ[%d] does not belong to the group %d",
-					dest_queue->index, attr->group);
+				DPAA2_PMD_ERR("FSQ(%d.%d) not in TC[%d]",
+					rxq->tc_index, rxq->flow_id,
+					attr->group);
 
-				return -1;
+				return -ENOTSUP;
 			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
@@ -3345,20 +2403,17 @@ dpaa2_flow_verify_action(
 			rss_conf = (const struct rte_flow_action_rss *)
 					(actions[j].conf);
 			if (rss_conf->queue_num > priv->dist_queues) {
-				DPAA2_PMD_ERR(
-					"RSS number exceeds the distribution size");
+				DPAA2_PMD_ERR("RSS number too large");
 				return -ENOTSUP;
 			}
 			for (i = 0; i < (int)rss_conf->queue_num; i++) {
 				if (rss_conf->queue[i] >= priv->nb_rx_queues) {
-					DPAA2_PMD_ERR(
-						"RSS queue index exceeds the number of RXQs");
+					DPAA2_PMD_ERR("RSS queue not in range");
 					return -ENOTSUP;
 				}
 				rxq = priv->rx_vq[rss_conf->queue[i]];
 				if (rxq->tc_index != attr->group) {
-					DPAA2_PMD_ERR(
-						"Queue/Group combination are not supported");
+					DPAA2_PMD_ERR("RSS queue not in group");
 					return -ENOTSUP;
 				}
 			}
@@ -3378,28 +2433,248 @@ dpaa2_flow_verify_action(
 }
 
 static int
-dpaa2_generic_flow_set(struct rte_flow *flow,
-		       struct rte_eth_dev *dev,
-		       const struct rte_flow_attr *attr,
-		       const struct rte_flow_item pattern[],
-		       const struct rte_flow_action actions[],
-		       struct rte_flow_error *error)
+dpaa2_configure_flow_fs_action(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct rte_flow_action *rte_action)
 {
+	struct rte_eth_dev *dest_dev;
+	struct dpaa2_dev_priv *dest_priv;
 	const struct rte_flow_action_queue *dest_queue;
+	struct dpaa2_queue *dest_q;
+
+	memset(&flow->fs_action_cfg, 0,
+		sizeof(struct dpni_fs_action_cfg));
+	flow->action_type = rte_action->type;
+
+	if (flow->action_type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		dest_queue = rte_action->conf;
+		dest_q = priv->rx_vq[dest_queue->index];
+		flow->fs_action_cfg.flow_id = dest_q->flow_id;
+	} else if (flow->action_type == RTE_FLOW_ACTION_TYPE_PORT_ID ||
+		   flow->action_type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT) {
+		dest_dev = dpaa2_flow_redirect_dev(priv, rte_action);
+		if (!dest_dev) {
+			DPAA2_PMD_ERR("Invalid device to redirect");
+			return -EINVAL;
+		}
+
+		dest_priv = dest_dev->data->dev_private;
+		dest_q = dest_priv->tx_vq[0];
+		flow->fs_action_cfg.options =
+			DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
+		flow->fs_action_cfg.redirect_obj_token =
+			dest_priv->token;
+		flow->fs_action_cfg.flow_id = dest_q->flow_id;
+	}
+
+	return 0;
+}
+
+static inline uint16_t
+dpaa2_flow_entry_size(uint16_t key_max_size)
+{
+	if (key_max_size > DPAA2_FLOW_ENTRY_MAX_SIZE) {
+		DPAA2_PMD_ERR("Key size(%d) > max(%d)",
+			key_max_size,
+			DPAA2_FLOW_ENTRY_MAX_SIZE);
+
+		return 0;
+	}
+
+	if (key_max_size > DPAA2_FLOW_ENTRY_MIN_SIZE)
+		return DPAA2_FLOW_ENTRY_MAX_SIZE;
+
+	/* Current MC only supports a fixed entry size (56). */
+	return DPAA2_FLOW_ENTRY_MAX_SIZE;
+}
+
+static inline int
+dpaa2_flow_clear_fs_table(struct dpaa2_dev_priv *priv,
+	uint8_t tc_id)
+{
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	int need_clear = 0, ret;
+	struct fsl_mc_io *dpni = priv->hw;
+
+	while (curr) {
+		if (curr->tc_id == tc_id) {
+			need_clear = 1;
+			break;
+		}
+		curr = LIST_NEXT(curr, next);
+	}
+
+	if (need_clear) {
+		ret = dpni_clear_fs_entries(dpni, CMD_PRI_LOW,
+				priv->token, tc_id);
+		if (ret) {
+			DPAA2_PMD_ERR("TC[%d] clear failed", tc_id);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_configure_fs_rss_table(struct dpaa2_dev_priv *priv,
+	uint8_t tc_id, uint16_t dist_size, int rss_dist)
+{
+	struct dpaa2_key_extract *tc_extract;
+	uint8_t *key_cfg_buf;
+	uint64_t key_cfg_iova;
+	int ret;
+	struct dpni_rx_dist_cfg tc_cfg;
+	struct fsl_mc_io *dpni = priv->hw;
+	uint16_t entry_size;
+	uint16_t key_max_size;
+
+	ret = dpaa2_flow_clear_fs_table(priv, tc_id);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] clear failed", tc_id);
+		return ret;
+	}
+
+	tc_extract = &priv->extract.tc_key_extract[tc_id];
+	key_cfg_buf = priv->extract.tc_extract_param[tc_id];
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+
+	key_max_size = tc_extract->key_profile.key_max_size;
+	entry_size = dpaa2_flow_entry_size(key_max_size);
+
+	dpaa2_flow_fs_extracts_log(priv, tc_id);
+	ret = dpkg_prepare_key_cfg(&tc_extract->dpkg,
+			key_cfg_buf);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] prepare key failed", tc_id);
+		return ret;
+	}
+
+	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
+	tc_cfg.dist_size = dist_size;
+	tc_cfg.key_cfg_iova = key_cfg_iova;
+	if (rss_dist)
+		tc_cfg.enable = true;
+	else
+		tc_cfg.enable = false;
+	tc_cfg.tc = tc_id;
+	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
+			priv->token, &tc_cfg);
+	if (ret < 0) {
+		if (rss_dist) {
+			DPAA2_PMD_ERR("RSS TC[%d] set failed",
+				tc_id);
+		} else {
+			DPAA2_PMD_ERR("FS TC[%d] hash disable failed",
+				tc_id);
+		}
+
+		return ret;
+	}
+
+	if (rss_dist)
+		return 0;
+
+	tc_cfg.enable = true;
+	tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
+	ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
+			priv->token, &tc_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] FS configured failed", tc_id);
+		return ret;
+	}
+
+	ret = dpaa2_flow_rule_add_all(priv, DPAA2_FLOW_FS_TYPE,
+			entry_size, tc_id);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
+	int rss_dist)
+{
+	struct dpaa2_key_extract *qos_extract;
+	uint8_t *key_cfg_buf;
+	uint64_t key_cfg_iova;
+	int ret;
+	struct dpni_qos_tbl_cfg qos_cfg;
+	struct fsl_mc_io *dpni = priv->hw;
+	uint16_t entry_size;
+	uint16_t key_max_size;
+
+	if (!rss_dist && priv->num_rx_tc <= 1) {
+		/* QoS table is effective for FS with multiple TCs or for RSS. */
+		return 0;
+	}
+
+	if (LIST_FIRST(&priv->flows)) {
+		ret = dpni_clear_qos_table(dpni, CMD_PRI_LOW,
+				priv->token);
+		if (ret < 0) {
+			DPAA2_PMD_ERR("QoS table clear failed");
+			return ret;
+		}
+	}
+
+	qos_extract = &priv->extract.qos_key_extract;
+	key_cfg_buf = priv->extract.qos_extract_param;
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+
+	key_max_size = qos_extract->key_profile.key_max_size;
+	entry_size = dpaa2_flow_entry_size(key_max_size);
+
+	dpaa2_flow_qos_extracts_log(priv);
+
+	ret = dpkg_prepare_key_cfg(&qos_extract->dpkg,
+			key_cfg_buf);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("QoS prepare extract failed");
+		return ret;
+	}
+	memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
+	qos_cfg.keep_entries = true;
+	qos_cfg.key_cfg_iova = key_cfg_iova;
+	if (rss_dist) {
+		qos_cfg.discard_on_miss = true;
+	} else {
+		qos_cfg.discard_on_miss = false;
+		qos_cfg.default_tc = 0;
+	}
+
+	ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
+			priv->token, &qos_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("QoS table set failed");
+		return ret;
+	}
+
+	ret = dpaa2_flow_rule_add_all(priv, DPAA2_FLOW_QOS_TYPE,
+			entry_size, 0);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item pattern[],
+	const struct rte_flow_action actions[],
+	struct rte_flow_error *error)
+{
 	const struct rte_flow_action_rss *rss_conf;
 	int is_keycfg_configured = 0, end_of_list = 0;
 	int ret = 0, i = 0, j = 0;
-	struct dpni_rx_dist_cfg tc_cfg;
-	struct dpni_qos_tbl_cfg qos_cfg;
-	struct dpni_fs_action_cfg action;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct dpaa2_queue *dest_q;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
-	size_t param;
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
-	uint16_t qos_index;
-	struct rte_eth_dev *dest_dev;
-	struct dpaa2_dev_priv *dest_priv;
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	uint16_t dist_size, key_size;
+	struct dpaa2_key_extract *qos_key_extract;
+	struct dpaa2_key_extract *tc_key_extract;
 
 	ret = dpaa2_flow_verify_attr(priv, attr);
 	if (ret)
@@ -3417,7 +2692,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("ETH flow configuration failed!");
+				DPAA2_PMD_ERR("ETH flow config failed!");
 				return ret;
 			}
 			break;
@@ -3426,17 +2701,25 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("vLan flow configuration failed!");
+				DPAA2_PMD_ERR("vLan flow config failed!");
 				return ret;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
+			ret = dpaa2_configure_flow_ipv4(flow,
+					dev, attr, &pattern[i], actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("IPV4 flow config failed!");
+				return ret;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
-			ret = dpaa2_configure_flow_generic_ip(flow,
+			ret = dpaa2_configure_flow_ipv6(flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("IP flow configuration failed!");
+				DPAA2_PMD_ERR("IPV6 flow config failed!");
 				return ret;
 			}
 			break;
@@ -3445,7 +2728,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("ICMP flow configuration failed!");
+				DPAA2_PMD_ERR("ICMP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3454,7 +2737,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("UDP flow configuration failed!");
+				DPAA2_PMD_ERR("UDP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3463,7 +2746,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("TCP flow configuration failed!");
+				DPAA2_PMD_ERR("TCP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3472,7 +2755,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("SCTP flow configuration failed!");
+				DPAA2_PMD_ERR("SCTP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3481,17 +2764,17 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("GRE flow configuration failed!");
+				DPAA2_PMD_ERR("GRE flow config failed!");
 				return ret;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow,
-						       dev, attr, &pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					dev, attr, &pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("RAW flow configuration failed!");
+				DPAA2_PMD_ERR("RAW flow config failed!");
 				return ret;
 			}
 			break;
@@ -3506,6 +2789,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 		i++;
 	}
 
+	qos_key_extract = &priv->extract.qos_key_extract;
+	key_size = qos_key_extract->key_profile.key_max_size;
+	flow->qos_rule.key_size = dpaa2_flow_entry_size(key_size);
+
+	tc_key_extract = &priv->extract.tc_key_extract[flow->tc_id];
+	key_size = tc_key_extract->key_profile.key_max_size;
+	flow->fs_rule.key_size = dpaa2_flow_entry_size(key_size);
+
 	/* Let's parse action on matching traffic */
 	end_of_list = 0;
 	while (!end_of_list) {
@@ -3513,150 +2804,33 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
 		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
 		case RTE_FLOW_ACTION_TYPE_PORT_ID:
-			memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
-			flow->action = actions[j].type;
-
-			if (actions[j].type == RTE_FLOW_ACTION_TYPE_QUEUE) {
-				dest_queue = (const struct rte_flow_action_queue *)
-								(actions[j].conf);
-				dest_q = priv->rx_vq[dest_queue->index];
-				action.flow_id = dest_q->flow_id;
-			} else {
-				dest_dev = dpaa2_flow_redirect_dev(priv,
-								   &actions[j]);
-				if (!dest_dev) {
-					DPAA2_PMD_ERR("Invalid destination device to redirect!");
-					return -1;
-				}
-
-				dest_priv = dest_dev->data->dev_private;
-				dest_q = dest_priv->tx_vq[0];
-				action.options =
-						DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
-				action.redirect_obj_token = dest_priv->token;
-				action.flow_id = dest_q->flow_id;
-			}
+			ret = dpaa2_configure_flow_fs_action(priv, flow,
+							     &actions[j]);
+			if (ret)
+				return ret;
 
 			/* Configure FS table first*/
-			if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
-				dpaa2_flow_fs_table_extracts_log(priv,
-							flow->tc_id, stdout);
-				if (dpkg_prepare_key_cfg(
-				&priv->extract.tc_key_extract[flow->tc_id].dpkg,
-				(uint8_t *)(size_t)priv->extract
-				.tc_extract_param[flow->tc_id]) < 0) {
-					DPAA2_PMD_ERR(
-					"Unable to prepare extract parameters");
-					return -1;
-				}
-
-				memset(&tc_cfg, 0,
-					sizeof(struct dpni_rx_dist_cfg));
-				tc_cfg.dist_size = priv->nb_rx_queues / priv->num_rx_tc;
-				tc_cfg.key_cfg_iova =
-					(uint64_t)priv->extract.tc_extract_param[flow->tc_id];
-				tc_cfg.tc = flow->tc_id;
-				tc_cfg.enable = false;
-				ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
-						priv->token, &tc_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"TC hash cannot be disabled.(%d)",
-						ret);
-					return -1;
-				}
-				tc_cfg.enable = true;
-				tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
-				ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
-							 priv->token, &tc_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"TC distribution cannot be configured.(%d)",
-						ret);
-					return -1;
-				}
+			dist_size = priv->nb_rx_queues / priv->num_rx_tc;
+			if (is_keycfg_configured & DPAA2_FLOW_FS_TYPE) {
+				ret = dpaa2_configure_fs_rss_table(priv,
+								   flow->tc_id,
+								   dist_size,
+								   false);
+				if (ret)
+					return ret;
 			}
 
 			/* Configure QoS table then.*/
-			if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
-				dpaa2_flow_qos_table_extracts_log(priv, stdout);
-				if (dpkg_prepare_key_cfg(
-					&priv->extract.qos_key_extract.dpkg,
-					(uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
-					DPAA2_PMD_ERR(
-						"Unable to prepare extract parameters");
-					return -1;
-				}
-
-				memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
-				qos_cfg.discard_on_miss = false;
-				qos_cfg.default_tc = 0;
-				qos_cfg.keep_entries = true;
-				qos_cfg.key_cfg_iova =
-					(size_t)priv->extract.qos_extract_param;
-				/* QoS table is effective for multiple TCs. */
-				if (priv->num_rx_tc > 1) {
-					ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
-						priv->token, &qos_cfg);
-					if (ret < 0) {
-						DPAA2_PMD_ERR(
-						"RSS QoS table can not be configured(%d)",
-							ret);
-						return -1;
-					}
-				}
-			}
-
-			flow->qos_real_key_size = priv->extract
-				.qos_key_extract.key_info.key_total_size;
-			if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) {
-				if (flow->ipaddr_rule.qos_ipdst_offset >=
-					flow->ipaddr_rule.qos_ipsrc_offset) {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipdst_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				} else {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipsrc_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				}
-			} else if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV6_ADDR) {
-				if (flow->ipaddr_rule.qos_ipdst_offset >=
-					flow->ipaddr_rule.qos_ipsrc_offset) {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipdst_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				} else {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipsrc_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				}
+			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
+				ret = dpaa2_configure_qos_table(priv, false);
+				if (ret)
+					return ret;
 			}
 
-			/* QoS entry added is only effective for multiple TCs.*/
 			if (priv->num_rx_tc > 1) {
-				qos_index = flow->tc_id * priv->fs_entries +
-					flow->tc_index;
-				if (qos_index >= priv->qos_entries) {
-					DPAA2_PMD_ERR("QoS table with %d entries full",
-						priv->qos_entries);
-					return -1;
-				}
-				flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
-
-				dpaa2_flow_qos_entry_log("Start add", flow,
-							qos_index, stdout);
-
-				ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
-						priv->token, &flow->qos_rule,
-						flow->tc_id, qos_index,
-						0, 0);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"Error in adding entry to QoS table(%d)", ret);
+				ret = dpaa2_flow_add_qos_rule(priv, flow);
+				if (ret)
 					return ret;
-				}
 			}
 
 			if (flow->tc_index >= priv->fs_entries) {
@@ -3665,140 +2839,47 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 				return -1;
 			}
 
-			flow->fs_real_key_size =
-				priv->extract.tc_key_extract[flow->tc_id]
-				.key_info.key_total_size;
-
-			if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV4_ADDR) {
-				if (flow->ipaddr_rule.fs_ipdst_offset >=
-					flow->ipaddr_rule.fs_ipsrc_offset) {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipdst_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				} else {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipsrc_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				}
-			} else if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV6_ADDR) {
-				if (flow->ipaddr_rule.fs_ipdst_offset >=
-					flow->ipaddr_rule.fs_ipsrc_offset) {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipdst_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				} else {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipsrc_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				}
-			}
-
-			flow->fs_rule.key_size = FIXED_ENTRY_SIZE;
-
-			dpaa2_flow_fs_entry_log("Start add", flow, stdout);
-
-			ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token,
-						flow->tc_id, flow->tc_index,
-						&flow->fs_rule, &action);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"Error in adding entry to FS table(%d)", ret);
+			ret = dpaa2_flow_add_fs_rule(priv, flow);
+			if (ret)
 				return ret;
-			}
-			memcpy(&flow->action_cfg, &action,
-				sizeof(struct dpni_fs_action_cfg));
+
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
-			rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf);
+			rss_conf = actions[j].conf;
+			flow->action_type = RTE_FLOW_ACTION_TYPE_RSS;
 
-			flow->action = RTE_FLOW_ACTION_TYPE_RSS;
 			ret = dpaa2_distset_to_dpkg_profile_cfg(rss_conf->types,
-					&priv->extract.tc_key_extract[flow->tc_id].dpkg);
+					&tc_key_extract->dpkg);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"unable to set flow distribution.please check queue config");
+				DPAA2_PMD_ERR("TC[%d] distset RSS failed",
+					      flow->tc_id);
 				return ret;
 			}
 
-			/* Allocate DMA'ble memory to write the rules */
-			param = (size_t)rte_malloc(NULL, 256, 64);
-			if (!param) {
-				DPAA2_PMD_ERR("Memory allocation failure");
-				return -1;
-			}
-
-			if (dpkg_prepare_key_cfg(
-				&priv->extract.tc_key_extract[flow->tc_id].dpkg,
-				(uint8_t *)param) < 0) {
-				DPAA2_PMD_ERR(
-				"Unable to prepare extract parameters");
-				rte_free((void *)param);
-				return -1;
-			}
-
-			memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
-			tc_cfg.dist_size = rss_conf->queue_num;
-			tc_cfg.key_cfg_iova = (size_t)param;
-			tc_cfg.enable = true;
-			tc_cfg.tc = flow->tc_id;
-			ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
-						 priv->token, &tc_cfg);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"RSS TC table cannot be configured: %d",
-					ret);
-				rte_free((void *)param);
-				return -1;
+			dist_size = rss_conf->queue_num;
+			if (is_keycfg_configured & DPAA2_FLOW_FS_TYPE) {
+				ret = dpaa2_configure_fs_rss_table(priv,
+								   flow->tc_id,
+								   dist_size,
+								   true);
+				if (ret)
+					return ret;
 			}
 
-			rte_free((void *)param);
-			if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
-				if (dpkg_prepare_key_cfg(
-					&priv->extract.qos_key_extract.dpkg,
-					(uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
-					DPAA2_PMD_ERR(
-					"Unable to prepare extract parameters");
-					return -1;
-				}
-				memset(&qos_cfg, 0,
-					sizeof(struct dpni_qos_tbl_cfg));
-				qos_cfg.discard_on_miss = true;
-				qos_cfg.keep_entries = true;
-				qos_cfg.key_cfg_iova =
-					(size_t)priv->extract.qos_extract_param;
-				ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
-							 priv->token, &qos_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-					"RSS QoS dist can't be configured-%d",
-					ret);
-					return -1;
-				}
+			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
+				ret = dpaa2_configure_qos_table(priv, true);
+				if (ret)
+					return ret;
 			}
 
-			/* Add Rule into QoS table */
-			qos_index = flow->tc_id * priv->fs_entries +
-				flow->tc_index;
-			if (qos_index >= priv->qos_entries) {
-				DPAA2_PMD_ERR("QoS table with %d entries full",
-					priv->qos_entries);
-				return -1;
-			}
+			ret = dpaa2_flow_add_qos_rule(priv, flow);
+			if (ret)
+				return ret;
 
-			flow->qos_real_key_size =
-			  priv->extract.qos_key_extract.key_info.key_total_size;
-			flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
-			ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-						&flow->qos_rule, flow->tc_id,
-						qos_index, 0, 0);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"Error in entry addition in QoS table(%d)",
-				ret);
+			ret = dpaa2_flow_add_fs_rule(priv, flow);
+			if (ret)
 				return ret;
-			}
+
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
@@ -3812,16 +2893,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 	}
 
 	if (!ret) {
-		if (is_keycfg_configured &
-			(DPAA2_QOS_TABLE_RECONFIGURE |
-			DPAA2_FS_TABLE_RECONFIGURE)) {
-			ret = dpaa2_flow_entry_update(priv, flow->tc_id);
-			if (ret) {
-				DPAA2_PMD_ERR("Flow entry update failed.");
-
-				return -1;
-			}
-		}
 		/* New rules are inserted. */
 		if (!curr) {
 			LIST_INSERT_HEAD(&priv->flows, flow, next);
@@ -3836,7 +2907,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 
 static inline int
 dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
-		      const struct rte_flow_attr *attr)
+	const struct rte_flow_attr *attr)
 {
 	int ret = 0;
 
@@ -3910,18 +2981,18 @@ dpaa2_dev_verify_actions(const struct rte_flow_action actions[])
 	}
 	for (j = 0; actions[j].type != RTE_FLOW_ACTION_TYPE_END; j++) {
 		if (actions[j].type != RTE_FLOW_ACTION_TYPE_DROP &&
-				!actions[j].conf)
+		    !actions[j].conf)
 			ret = -EINVAL;
 	}
 	return ret;
 }
 
-static
-int dpaa2_flow_validate(struct rte_eth_dev *dev,
-			const struct rte_flow_attr *flow_attr,
-			const struct rte_flow_item pattern[],
-			const struct rte_flow_action actions[],
-			struct rte_flow_error *error)
+static int
+dpaa2_flow_validate(struct rte_eth_dev *dev,
+	const struct rte_flow_attr *flow_attr,
+	const struct rte_flow_item pattern[],
+	const struct rte_flow_action actions[],
+	struct rte_flow_error *error)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpni_attr dpni_attr;
@@ -3975,127 +3046,128 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static
-struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
-				   const struct rte_flow_attr *attr,
-				   const struct rte_flow_item pattern[],
-				   const struct rte_flow_action actions[],
-				   struct rte_flow_error *error)
+static struct rte_flow *
+dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+		  const struct rte_flow_item pattern[],
+		  const struct rte_flow_action actions[],
+		  struct rte_flow_error *error)
 {
-	struct rte_flow *flow = NULL;
-	size_t key_iova = 0, mask_iova = 0;
+	struct dpaa2_dev_flow *flow = NULL;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int ret;
 
 	dpaa2_flow_control_log =
 		getenv("DPAA2_FLOW_CONTROL_LOG");
 
 	if (getenv("DPAA2_FLOW_CONTROL_MISS_FLOW")) {
-		struct dpaa2_dev_priv *priv = dev->data->dev_private;
-
 		dpaa2_flow_miss_flow_id =
 			(uint16_t)atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
 		if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
-			DPAA2_PMD_ERR(
-				"The missed flow ID %d exceeds the max flow ID %d",
-				dpaa2_flow_miss_flow_id,
-				priv->dist_queues - 1);
+			DPAA2_PMD_ERR("Missed flow ID %d >= dist size(%d)",
+				      dpaa2_flow_miss_flow_id,
+				      priv->dist_queues);
 			return NULL;
 		}
 	}
 
-	flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
+	flow = rte_zmalloc(NULL, sizeof(struct dpaa2_dev_flow),
+			   RTE_CACHE_LINE_SIZE);
 	if (!flow) {
 		DPAA2_PMD_ERR("Failure to allocate memory for flow");
 		goto mem_failure;
 	}
-	/* Allocate DMA'ble memory to write the rules */
-	key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!key_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+
+	/* Allocate DMA'ble memory to write the qos rules */
+	flow->qos_key_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->qos_key_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!mask_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+	flow->qos_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->qos_key_addr);
+
+	flow->qos_mask_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->qos_mask_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
+	flow->qos_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->qos_mask_addr);
 
-	flow->qos_rule.key_iova = key_iova;
-	flow->qos_rule.mask_iova = mask_iova;
-
-	/* Allocate DMA'ble memory to write the rules */
-	key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!key_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+	/* Allocate DMA'ble memory to write the FS rules */
+	flow->fs_key_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->fs_key_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!mask_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+	flow->fs_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->fs_key_addr);
+
+	flow->fs_mask_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->fs_mask_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
+	flow->fs_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->fs_mask_addr);
 
-	flow->fs_rule.key_iova = key_iova;
-	flow->fs_rule.mask_iova = mask_iova;
-
-	flow->ipaddr_rule.ipaddr_type = FLOW_NONE_IPADDR;
-	flow->ipaddr_rule.qos_ipsrc_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.qos_ipdst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.fs_ipsrc_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.fs_ipdst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
+	priv->curr = flow;
 
-	ret = dpaa2_generic_flow_set(flow, dev, attr, pattern,
-			actions, error);
+	ret = dpaa2_generic_flow_set(flow, dev, attr, pattern, actions, error);
 	if (ret < 0) {
 		if (error && error->type > RTE_FLOW_ERROR_TYPE_ACTION)
 			rte_flow_error_set(error, EPERM,
-					RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					attr, "unknown");
-		DPAA2_PMD_ERR("Failure to create flow, return code (%d)", ret);
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   attr, "unknown");
+		DPAA2_PMD_ERR("Create flow failed (%d)", ret);
 		goto creation_error;
 	}
 
-	return flow;
+	priv->curr = NULL;
+	return (struct rte_flow *)flow;
+
 mem_failure:
-	rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-			   NULL, "memory alloc");
+	rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "memory alloc");
+
 creation_error:
-	rte_free((void *)flow);
-	rte_free((void *)key_iova);
-	rte_free((void *)mask_iova);
+	if (flow) {
+		if (flow->qos_key_addr)
+			rte_free(flow->qos_key_addr);
+		if (flow->qos_mask_addr)
+			rte_free(flow->qos_mask_addr);
+		if (flow->fs_key_addr)
+			rte_free(flow->fs_key_addr);
+		if (flow->fs_mask_addr)
+			rte_free(flow->fs_mask_addr);
+		rte_free(flow);
+	}
+	priv->curr = NULL;
 
 	return NULL;
 }
 
-static
-int dpaa2_flow_destroy(struct rte_eth_dev *dev,
-		       struct rte_flow *flow,
-		       struct rte_flow_error *error)
+static int
+dpaa2_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *_flow,
+		   struct rte_flow_error *error)
 {
 	int ret = 0;
+	struct dpaa2_dev_flow *flow;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
+	struct fsl_mc_io *dpni = priv->hw;
 
-	switch (flow->action) {
+	flow = (struct dpaa2_dev_flow *)_flow;
+
+	switch (flow->action_type) {
 	case RTE_FLOW_ACTION_TYPE_QUEUE:
 	case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
 	case RTE_FLOW_ACTION_TYPE_PORT_ID:
 		if (priv->num_rx_tc > 1) {
 			/* Remove entry from QoS table first */
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-					&flow->qos_rule);
+			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+						    priv->token,
+						    &flow->qos_rule);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"Error in removing entry from QoS table(%d)", ret);
+				DPAA2_PMD_ERR("Remove FS QoS entry failed");
+				dpaa2_flow_qos_entry_log("Delete failed", flow,
+							 -1);
 				goto error;
 			}
 		}
@@ -4104,34 +3176,37 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
 		ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW, priv->token,
 					   flow->tc_id, &flow->fs_rule);
 		if (ret < 0) {
-			DPAA2_PMD_ERR(
-				"Error in removing entry from FS table(%d)", ret);
+			DPAA2_PMD_ERR("Remove entry from FS[%d] failed",
+				      flow->tc_id);
 			goto error;
 		}
 		break;
 	case RTE_FLOW_ACTION_TYPE_RSS:
 		if (priv->num_rx_tc > 1) {
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-					&flow->qos_rule);
+			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+						    priv->token,
+						    &flow->qos_rule);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"Error in entry addition in QoS table(%d)", ret);
+				DPAA2_PMD_ERR("Remove RSS QoS entry failed");
 				goto error;
 			}
 		}
 		break;
 	default:
-		DPAA2_PMD_ERR(
-		"Action type (%d) is not supported", flow->action);
+		DPAA2_PMD_ERR("Action(%d) not supported", flow->action_type);
 		ret = -ENOTSUP;
 		break;
 	}
 
 	LIST_REMOVE(flow, next);
-	rte_free((void *)(size_t)flow->qos_rule.key_iova);
-	rte_free((void *)(size_t)flow->qos_rule.mask_iova);
-	rte_free((void *)(size_t)flow->fs_rule.key_iova);
-	rte_free((void *)(size_t)flow->fs_rule.mask_iova);
+	if (flow->qos_key_addr)
+		rte_free(flow->qos_key_addr);
+	if (flow->qos_mask_addr)
+		rte_free(flow->qos_mask_addr);
+	if (flow->fs_key_addr)
+		rte_free(flow->fs_key_addr);
+	if (flow->fs_mask_addr)
+		rte_free(flow->fs_mask_addr);
 	/* Now free the flow */
 	rte_free(flow);
 
@@ -4156,12 +3231,12 @@ dpaa2_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct rte_flow *flow = LIST_FIRST(&priv->flows);
+	struct dpaa2_dev_flow *flow = LIST_FIRST(&priv->flows);
 
 	while (flow) {
-		struct rte_flow *next = LIST_NEXT(flow, next);
+		struct dpaa2_dev_flow *next = LIST_NEXT(flow, next);
 
-		dpaa2_flow_destroy(dev, flow, error);
+		dpaa2_flow_destroy(dev, (struct rte_flow *)flow, error);
 		flow = next;
 	}
 	return 0;
@@ -4169,10 +3244,10 @@ dpaa2_flow_flush(struct rte_eth_dev *dev,
 
 static int
 dpaa2_flow_query(struct rte_eth_dev *dev __rte_unused,
-		struct rte_flow *flow __rte_unused,
-		const struct rte_flow_action *actions __rte_unused,
-		void *data __rte_unused,
-		struct rte_flow_error *error __rte_unused)
+	struct rte_flow *_flow __rte_unused,
+	const struct rte_flow_action *actions __rte_unused,
+	void *data __rte_unused,
+	struct rte_flow_error *error __rte_unused)
 {
 	return 0;
 }
@@ -4189,11 +3264,11 @@ dpaa2_flow_query(struct rte_eth_dev *dev __rte_unused,
 void
 dpaa2_flow_clean(struct rte_eth_dev *dev)
 {
-	struct rte_flow *flow;
+	struct dpaa2_dev_flow *flow;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	while ((flow = LIST_FIRST(&priv->flows)))
-		dpaa2_flow_destroy(dev, flow, NULL);
+		dpaa2_flow_destroy(dev, (struct rte_flow *)flow, NULL);
 }
 
 const struct rte_flow_ops dpaa2_flow_ops = {
-- 
2.25.1



* [v3 25/43] net/dpaa2: dump Rx parser result
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (23 preceding siblings ...)
  2024-10-14 12:01       ` [v3 24/43] net/dpaa2: flow API refactor vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 26/43] net/dpaa2: enhancement of raw flow extract vanshika.shukla
                         ` (18 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

export DPAA2_PRINT_RX_PARSER_RESULT=1 is used to dump the
Rx parser result and the frame attribute flags generated by
the hardware parser and the soft parser.
The parser results are converted to big endian as described in the RM.
The areas set by the soft parser are dumped as well.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
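
A minimal sketch (not part of this patch) of how a FAF bit index from
the new enum maps into the dumped parse-result byte array; the helper
name and its use are made up for illustration:

  static int faf_bit_is_set(const uint8_t *pr, int faf_bit)
  {
          /* FAF bits follow the 2-byte FAFE field (DPAA2_FAFE_PSR_OFFSET) */
          int byte_pos = faf_bit / 8 + DPAA2_FAFE_PSR_OFFSET;
          int bit_pos = faf_bit % 8;

          /* bit index 0 maps to the MSB of its byte in the BE dump */
          return !!(pr[byte_pos] & (1 << (7 - bit_pos)));
  }

For example, faf_bit_is_set(fapr.pr, FAF_IPV4_FRAM) would report
whether the parser flagged an IPv4 header in the frame.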
 drivers/net/dpaa2/dpaa2_ethdev.c     |   5 +
 drivers/net/dpaa2/dpaa2_ethdev.h     |  90 ++++++++++
 drivers/net/dpaa2/dpaa2_parse_dump.h | 248 +++++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_rxtx.c       |   7 +
 4 files changed, 350 insertions(+)
 create mode 100644 drivers/net/dpaa2/dpaa2_parse_dump.h

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index e55de5b614..187b648799 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -75,6 +75,8 @@ int dpaa2_timestamp_dynfield_offset = -1;
 /* Enable error queue */
 bool dpaa2_enable_err_queue;
 
+bool dpaa2_print_parser_result;
+
 #define MAX_NB_RX_DESC		11264
 int total_nb_rx_desc;
 
@@ -2730,6 +2732,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		DPAA2_PMD_INFO("Enable error queue");
 	}
 
+	if (getenv("DPAA2_PRINT_RX_PARSER_RESULT"))
+		dpaa2_print_parser_result = 1;
+
 	/* Allocate memory for hardware structure for queues */
 	ret = dpaa2_alloc_rx_tx_queues(eth_dev);
 	if (ret) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index ea1c1b5117..c864859b3f 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -19,6 +19,8 @@
 #include <mc/fsl_dpni.h>
 #include <mc/fsl_mc_sys.h>
 
+#include "base/dpaa2_hw_dpni_annot.h"
+
 #define DPAA2_MIN_RX_BUF_SIZE 512
 #define DPAA2_MAX_RX_PKT_LEN  10240 /*WRIOP support*/
 #define NET_DPAA2_PMD_DRIVER_NAME net_dpaa2
@@ -152,6 +154,88 @@ extern const struct rte_tm_ops dpaa2_tm_ops;
 
 extern bool dpaa2_enable_err_queue;
 
+extern bool dpaa2_print_parser_result;
+
+#define DPAA2_FAPR_SIZE \
+	(sizeof(struct dpaa2_annot_hdr) - \
+	offsetof(struct dpaa2_annot_hdr, word3))
+
+#define DPAA2_PR_NXTHDR_OFFSET 0
+
+#define DPAA2_FAFE_PSR_OFFSET 2
+#define DPAA2_FAFE_PSR_SIZE 2
+
+#define DPAA2_FAF_PSR_OFFSET 4
+#define DPAA2_FAF_PSR_SIZE 12
+
+#define DPAA2_FAF_TOTAL_SIZE \
+	(DPAA2_FAFE_PSR_SIZE + DPAA2_FAF_PSR_SIZE)
+
+/* Only the most common frame attribute flags (FAF) are listed here. */
+enum dpaa2_rx_faf_offset {
+	/* Set by SP start*/
+	FAFE_VXLAN_IN_VLAN_FRAM = 0,
+	FAFE_VXLAN_IN_IPV4_FRAM = 1,
+	FAFE_VXLAN_IN_IPV6_FRAM = 2,
+	FAFE_VXLAN_IN_UDP_FRAM = 3,
+	FAFE_VXLAN_IN_TCP_FRAM = 4,
+	/* Set by SP end*/
+
+	FAF_GTP_PRIMED_FRAM = 1 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_PTP_FRAM = 3 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_VXLAN_FRAM = 4 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ETH_FRAM = 10 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_LLC_SNAP_FRAM = 18 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_VLAN_FRAM = 21 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_PPPOE_PPP_FRAM = 25 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_MPLS_FRAM = 27 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ARP_FRAM = 30 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPV4_FRAM = 34 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPV6_FRAM = 42 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IP_FRAM = 48 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ICMP_FRAM = 57 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IGMP_FRAM = 58 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_GRE_FRAM = 65 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_UDP_FRAM = 70 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_TCP_FRAM = 72 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_FRAM = 77 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_ESP_FRAM = 78 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_AH_FRAM = 79 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_SCTP_FRAM = 81 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_DCCP_FRAM = 83 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_GTP_FRAM = 87 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ESP_FRAM = 89 + DPAA2_FAFE_PSR_SIZE * 8,
+};
+
+#define DPAA2_PR_ETH_OFF_OFFSET 19
+#define DPAA2_PR_TCI_OFF_OFFSET 21
+#define DPAA2_PR_LAST_ETYPE_OFFSET 23
+#define DPAA2_PR_L3_OFF_OFFSET 27
+#define DPAA2_PR_L4_OFF_OFFSET 30
+#define DPAA2_PR_L5_OFF_OFFSET 31
+#define DPAA2_PR_NXTHDR_OFF_OFFSET 34
+
+/* Set by SP for vxlan distribution start*/
+#define DPAA2_VXLAN_IN_TCI_OFFSET 16
+
+#define DPAA2_VXLAN_IN_DADDR0_OFFSET 20
+#define DPAA2_VXLAN_IN_DADDR1_OFFSET 22
+#define DPAA2_VXLAN_IN_DADDR2_OFFSET 24
+#define DPAA2_VXLAN_IN_DADDR3_OFFSET 25
+#define DPAA2_VXLAN_IN_DADDR4_OFFSET 26
+#define DPAA2_VXLAN_IN_DADDR5_OFFSET 28
+
+#define DPAA2_VXLAN_IN_SADDR0_OFFSET 29
+#define DPAA2_VXLAN_IN_SADDR1_OFFSET 32
+#define DPAA2_VXLAN_IN_SADDR2_OFFSET 33
+#define DPAA2_VXLAN_IN_SADDR3_OFFSET 35
+#define DPAA2_VXLAN_IN_SADDR4_OFFSET 41
+#define DPAA2_VXLAN_IN_SADDR5_OFFSET 42
+
+#define DPAA2_VXLAN_VNI_OFFSET 43
+#define DPAA2_VXLAN_IN_TYPE_OFFSET 46
+/* Set by SP for vxlan distribution end*/
+
 struct ipv4_sd_addr_extract_rule {
 	uint32_t ipv4_src;
 	uint32_t ipv4_dst;
@@ -197,7 +281,13 @@ enum ip_addr_extract_type {
 	IP_DST_SRC_EXTRACT
 };
 
+enum key_prot_type {
+	DPAA2_NET_PROT_KEY,
+	DPAA2_FAF_KEY
+};
+
 struct key_prot_field {
+	enum key_prot_type type;
 	enum net_prot prot;
 	uint32_t key_field;
 };
diff --git a/drivers/net/dpaa2/dpaa2_parse_dump.h b/drivers/net/dpaa2/dpaa2_parse_dump.h
new file mode 100644
index 0000000000..f1cdc003de
--- /dev/null
+++ b/drivers/net/dpaa2/dpaa2_parse_dump.h
@@ -0,0 +1,248 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ *   Copyright 2022 NXP
+ *
+ */
+
+#ifndef _DPAA2_PARSE_DUMP_H
+#define _DPAA2_PARSE_DUMP_H
+
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_pmd_dpaa2.h>
+
+#include <dpaa2_hw_pvt.h>
+#include "dpaa2_tm.h"
+
+#include <mc/fsl_dpni.h>
+#include <mc/fsl_mc_sys.h>
+
+#include "base/dpaa2_hw_dpni_annot.h"
+
+#define DPAA2_PR_PRINT printf
+
+struct dpaa2_faf_bit_info {
+	const char *name;
+	int position;
+};
+
+struct dpaa2_fapr_field_info {
+	const char *name;
+	uint16_t value;
+};
+
+struct dpaa2_fapr_array {
+	union {
+		uint64_t pr_64[DPAA2_FAPR_SIZE / 8];
+		uint8_t pr[DPAA2_FAPR_SIZE];
+	};
+};
+
+#define NEXT_HEADER_NAME "Next Header"
+#define ETH_OFF_NAME "ETH OFFSET"
+#define VLAN_TCI_OFF_NAME "VLAN TCI OFFSET"
+#define LAST_ENTRY_OFF_NAME "LAST ETYPE Offset"
+#define L3_OFF_NAME "L3 Offset"
+#define L4_OFF_NAME "L4 Offset"
+#define L5_OFF_NAME "L5 Offset"
+#define NEXT_HEADER_OFF_NAME "Next Header Offset"
+
+static const
+struct dpaa2_fapr_field_info support_dump_fields[] = {
+	{
+		.name = NEXT_HEADER_NAME,
+	},
+	{
+		.name = ETH_OFF_NAME,
+	},
+	{
+		.name = VLAN_TCI_OFF_NAME,
+	},
+	{
+		.name = LAST_ENTRY_OFF_NAME,
+	},
+	{
+		.name = L3_OFF_NAME,
+	},
+	{
+		.name = L4_OFF_NAME,
+	},
+	{
+		.name = L5_OFF_NAME,
+	},
+	{
+		.name = NEXT_HEADER_OFF_NAME,
+	}
+};
+
+static inline void
+dpaa2_print_faf(struct dpaa2_fapr_array *fapr)
+{
+	const int faf_bit_len = DPAA2_FAF_TOTAL_SIZE * 8;
+	struct dpaa2_faf_bit_info faf_bits[faf_bit_len];
+	int i, byte_pos, bit_pos, vxlan = 0, vxlan_vlan = 0;
+	struct rte_ether_hdr vxlan_in_eth;
+	uint16_t vxlan_vlan_tci;
+
+	for (i = 0; i < faf_bit_len; i++) {
+		faf_bits[i].position = i;
+		if (i == FAFE_VXLAN_IN_VLAN_FRAM)
+			faf_bits[i].name = "VXLAN VLAN Present";
+		else if (i == FAFE_VXLAN_IN_IPV4_FRAM)
+			faf_bits[i].name = "VXLAN IPV4 Present";
+		else if (i == FAFE_VXLAN_IN_IPV6_FRAM)
+			faf_bits[i].name = "VXLAN IPV6 Present";
+		else if (i == FAFE_VXLAN_IN_UDP_FRAM)
+			faf_bits[i].name = "VXLAN UDP Present";
+		else if (i == FAFE_VXLAN_IN_TCP_FRAM)
+			faf_bits[i].name = "VXLAN TCP Present";
+		else if (i == FAF_VXLAN_FRAM)
+			faf_bits[i].name = "VXLAN Present";
+		else if (i == FAF_ETH_FRAM)
+			faf_bits[i].name = "Ethernet MAC Present";
+		else if (i == FAF_VLAN_FRAM)
+			faf_bits[i].name = "VLAN 1 Present";
+		else if (i == FAF_IPV4_FRAM)
+			faf_bits[i].name = "IPv4 1 Present";
+		else if (i == FAF_IPV6_FRAM)
+			faf_bits[i].name = "IPv6 1 Present";
+		else if (i == FAF_UDP_FRAM)
+			faf_bits[i].name = "UDP Present";
+		else if (i == FAF_TCP_FRAM)
+			faf_bits[i].name = "TCP Present";
+		else
+			faf_bits[i].name = "Check RM for this unusual frame";
+	}
+
+	DPAA2_PR_PRINT("Frame Annotation Flags:\r\n");
+	for (i = 0; i < faf_bit_len; i++) {
+		byte_pos = i / 8 + DPAA2_FAFE_PSR_OFFSET;
+		bit_pos = i % 8;
+		if (fapr->pr[byte_pos] & (1 << (7 - bit_pos))) {
+			DPAA2_PR_PRINT("FAF bit %d : %s\r\n",
+				faf_bits[i].position, faf_bits[i].name);
+			if (i == FAF_VXLAN_FRAM)
+				vxlan = 1;
+		}
+	}
+
+	if (vxlan) {
+		vxlan_in_eth.dst_addr.addr_bytes[0] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR0_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[1] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR1_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[2] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR2_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[3] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR3_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[4] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR4_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[5] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR5_OFFSET];
+
+		vxlan_in_eth.src_addr.addr_bytes[0] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR0_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[1] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR1_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[2] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR2_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[3] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR3_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[4] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR4_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[5] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR5_OFFSET];
+
+		vxlan_in_eth.ether_type =
+			fapr->pr[DPAA2_VXLAN_IN_TYPE_OFFSET];
+		vxlan_in_eth.ether_type =
+			vxlan_in_eth.ether_type << 8;
+		vxlan_in_eth.ether_type |=
+			fapr->pr[DPAA2_VXLAN_IN_TYPE_OFFSET + 1];
+
+		if (vxlan_in_eth.ether_type == RTE_ETHER_TYPE_VLAN)
+			vxlan_vlan = 1;
+		DPAA2_PR_PRINT("VXLAN inner eth:\r\n");
+		DPAA2_PR_PRINT("dst addr: ");
+		for (i = 0; i < RTE_ETHER_ADDR_LEN; i++) {
+			if (i != 0)
+				DPAA2_PR_PRINT(":");
+			DPAA2_PR_PRINT("%02x",
+				vxlan_in_eth.dst_addr.addr_bytes[i]);
+		}
+		DPAA2_PR_PRINT("\r\n");
+		DPAA2_PR_PRINT("src addr: ");
+		for (i = 0; i < RTE_ETHER_ADDR_LEN; i++) {
+			if (i != 0)
+				DPAA2_PR_PRINT(":");
+			DPAA2_PR_PRINT("%02x",
+				vxlan_in_eth.src_addr.addr_bytes[i]);
+		}
+		DPAA2_PR_PRINT("\r\n");
+		DPAA2_PR_PRINT("type: 0x%04x\r\n",
+			vxlan_in_eth.ether_type);
+		if (vxlan_vlan) {
+			vxlan_vlan_tci = fapr->pr[DPAA2_VXLAN_IN_TCI_OFFSET];
+			vxlan_vlan_tci = vxlan_vlan_tci << 8;
+			vxlan_vlan_tci |=
+				fapr->pr[DPAA2_VXLAN_IN_TCI_OFFSET + 1];
+
+			DPAA2_PR_PRINT("vlan tci: 0x%04x\r\n",
+				vxlan_vlan_tci);
+		}
+	}
+}
+
+static inline void
+dpaa2_print_parse_result(struct dpaa2_annot_hdr *annotation)
+{
+	struct dpaa2_fapr_array fapr;
+	struct dpaa2_fapr_field_info
+		fapr_fields[sizeof(support_dump_fields) /
+		sizeof(struct dpaa2_fapr_field_info)];
+	uint64_t len, i;
+
+	memcpy(&fapr, &annotation->word3, DPAA2_FAPR_SIZE);
+	for (i = 0; i < (DPAA2_FAPR_SIZE / 8); i++)
+		fapr.pr_64[i] = rte_cpu_to_be_64(fapr.pr_64[i]);
+
+	memcpy(fapr_fields, support_dump_fields,
+		sizeof(support_dump_fields));
+
+	for (i = 0;
+		i < sizeof(fapr_fields) /
+		sizeof(struct dpaa2_fapr_field_info);
+		i++) {
+		if (!strcmp(fapr_fields[i].name, NEXT_HEADER_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_NXTHDR_OFFSET];
+			fapr_fields[i].value = fapr_fields[i].value << 8;
+			fapr_fields[i].value |=
+				fapr.pr[DPAA2_PR_NXTHDR_OFFSET + 1];
+		} else if (!strcmp(fapr_fields[i].name, ETH_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_ETH_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, VLAN_TCI_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_TCI_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, LAST_ENTRY_OFF_NAME)) {
+			fapr_fields[i].value =
+				fapr.pr[DPAA2_PR_LAST_ETYPE_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L3_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L3_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L4_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L4_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L5_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L5_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, NEXT_HEADER_OFF_NAME)) {
+			fapr_fields[i].value =
+				fapr.pr[DPAA2_PR_NXTHDR_OFF_OFFSET];
+		}
+	}
+
+	len = sizeof(fapr_fields) / sizeof(struct dpaa2_fapr_field_info);
+	DPAA2_PR_PRINT("Parse Result:\r\n");
+	for (i = 0; i < len; i++) {
+		DPAA2_PR_PRINT("%21s : 0x%02x\r\n",
+			fapr_fields[i].name, fapr_fields[i].value);
+	}
+	dpaa2_print_faf(&fapr);
+}
+
+#endif
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 92e9dd40dc..71b2b4a427 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -25,6 +25,7 @@
 #include "dpaa2_pmd_logs.h"
 #include "dpaa2_ethdev.h"
 #include "base/dpaa2_hw_dpni_annot.h"
+#include "dpaa2_parse_dump.h"
 
 static inline uint32_t __rte_hot
 dpaa2_dev_rx_parse_slow(struct rte_mbuf *mbuf,
@@ -57,6 +58,9 @@ dpaa2_dev_rx_parse_new(struct rte_mbuf *m, const struct qbman_fd *fd,
 	struct dpaa2_annot_hdr *annotation =
 			(struct dpaa2_annot_hdr *)hw_annot_addr;
 
+	if (unlikely(dpaa2_print_parser_result))
+		dpaa2_print_parse_result(annotation);
+
 	m->packet_type = RTE_PTYPE_UNKNOWN;
 	switch (frc) {
 	case DPAA2_PKT_TYPE_ETHER:
@@ -252,6 +256,9 @@ dpaa2_dev_rx_parse(struct rte_mbuf *mbuf, void *hw_annot_addr)
 	else
 		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
+	if (unlikely(dpaa2_print_parser_result))
+		dpaa2_print_parse_result(annotation);
+
 	if (dpaa2_enable_ts[mbuf->port]) {
 		*dpaa2_timestamp_dynfield(mbuf) = annotation->word2;
 		mbuf->ol_flags |= dpaa2_timestamp_rx_dynflag;
-- 
2.25.1



* [v3 26/43] net/dpaa2: enhancement of raw flow extract
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (24 preceding siblings ...)
  2024-10-14 12:01       ` [v3 25/43] net/dpaa2: dump Rx parser result vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 27/43] net/dpaa2: frame attribute flags parser vanshika.shukla
                         ` (17 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support combining a RAW extract with header extracts.
A RAW extract can start from any absolute offset.

TBD: relative offset support.
To support a relative offset from a previous L3 protocol item,
the extracts should be expanded to identify whether the frame is
VLAN or non-VLAN.

To support a relative offset from a previous L4 protocol item,
the extracts should be expanded to identify whether the frame is
VLAN/IPv4, VLAN/IPv6, non-VLAN/IPv4 or non-VLAN/IPv6.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
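
A minimal sketch (not part of this patch) of the kind of rte_flow rule
this enhancement serves: a header item combined with a RAW item at an
absolute offset. The offset and pattern bytes below are made up, and
per the driver checks the spec and mask lengths must match:

  static const uint8_t raw_bytes[] = { 0x12, 0x34 };
  static const uint8_t raw_msk[] = { 0xff, 0xff };

  static const struct rte_flow_item_raw raw_spec = {
          .relative = 0,               /* absolute offset from frame start */
          .offset = 64,
          .length = sizeof(raw_bytes),
          .pattern = raw_bytes,
  };
  static const struct rte_flow_item_raw raw_mask = {
          .relative = 0,
          .offset = 64,
          .length = sizeof(raw_msk),
          .pattern = raw_msk,
  };

  const struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          { .type = RTE_FLOW_ITEM_TYPE_RAW,
            .spec = &raw_spec, .mask = &raw_mask },
          { .type = RTE_FLOW_ITEM_TYPE_END },
  };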
 drivers/net/dpaa2/dpaa2_ethdev.h |  10 +
 drivers/net/dpaa2/dpaa2_flow.c   | 385 ++++++++++++++++++++++++++-----
 2 files changed, 340 insertions(+), 55 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index c864859b3f..8f548467a4 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -292,6 +292,11 @@ struct key_prot_field {
 	uint32_t key_field;
 };
 
+struct dpaa2_raw_region {
+	uint8_t raw_start;
+	uint8_t raw_size;
+};
+
 struct dpaa2_key_profile {
 	uint8_t num;
 	uint8_t key_offset[DPKG_MAX_NUM_OF_EXTRACTS];
@@ -301,6 +306,10 @@ struct dpaa2_key_profile {
 	uint8_t ip_addr_extract_pos;
 	uint8_t ip_addr_extract_off;
 
+	uint8_t raw_extract_pos;
+	uint8_t raw_extract_off;
+	uint8_t raw_extract_num;
+
 	uint8_t l4_src_port_present;
 	uint8_t l4_src_port_pos;
 	uint8_t l4_src_port_offset;
@@ -309,6 +318,7 @@ struct dpaa2_key_profile {
 	uint8_t l4_dst_port_offset;
 	struct key_prot_field prot_field[DPKG_MAX_NUM_OF_EXTRACTS];
 	uint16_t key_max_size;
+	struct dpaa2_raw_region raw_region;
 };
 
 struct dpaa2_key_extract {
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 3b4d5cc8d7..69faf36a8c 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -772,42 +772,272 @@ dpaa2_flow_extract_add_hdr(enum net_prot prot,
 }
 
 static int
-dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
-	int size)
+dpaa2_flow_extract_new_raw(struct dpaa2_dev_priv *priv,
+	int offset, int size,
+	enum dpaa2_flow_dist_type dist_type, int tc_id)
 {
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_profile *key_info = &key_extract->key_profile;
-	int last_extract_size, index;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpaa2_key_profile *key_profile;
+	int last_extract_size, index, pos, item_size;
+	uint8_t num_extracts;
+	uint32_t field;
 
-	if (dpkg->num_extracts != 0 && dpkg->extracts[0].type !=
-	    DPKG_EXTRACT_FROM_DATA) {
-		DPAA2_PMD_WARN("RAW extract cannot be combined with others");
-		return -1;
-	}
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	key_profile = &key_extract->key_profile;
+
+	key_profile->raw_region.raw_start = 0;
+	key_profile->raw_region.raw_size = 0;
 
 	last_extract_size = (size % DPAA2_FLOW_MAX_KEY_SIZE);
-	dpkg->num_extracts = (size / DPAA2_FLOW_MAX_KEY_SIZE);
+	num_extracts = (size / DPAA2_FLOW_MAX_KEY_SIZE);
 	if (last_extract_size)
-		dpkg->num_extracts++;
+		num_extracts++;
 	else
 		last_extract_size = DPAA2_FLOW_MAX_KEY_SIZE;
 
-	for (index = 0; index < dpkg->num_extracts; index++) {
-		dpkg->extracts[index].type = DPKG_EXTRACT_FROM_DATA;
-		if (index == dpkg->num_extracts - 1)
-			dpkg->extracts[index].extract.from_data.size =
-				last_extract_size;
+	for (index = 0; index < num_extracts; index++) {
+		if (index == num_extracts - 1)
+			item_size = last_extract_size;
 		else
-			dpkg->extracts[index].extract.from_data.size =
-				DPAA2_FLOW_MAX_KEY_SIZE;
-		dpkg->extracts[index].extract.from_data.offset =
-			DPAA2_FLOW_MAX_KEY_SIZE * index;
+			item_size = DPAA2_FLOW_MAX_KEY_SIZE;
+		field = offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+		field |= item_size;
+
+		pos = dpaa2_flow_key_profile_advance(NET_PROT_PAYLOAD,
+				field, item_size, priv, dist_type,
+				tc_id, NULL);
+		if (pos < 0)
+			return pos;
+
+		dpkg->extracts[pos].type = DPKG_EXTRACT_FROM_DATA;
+		dpkg->extracts[pos].extract.from_data.size = item_size;
+		dpkg->extracts[pos].extract.from_data.offset = offset;
+
+		if (index == 0) {
+			key_profile->raw_extract_pos = pos;
+			key_profile->raw_extract_off =
+				key_profile->key_offset[pos];
+			key_profile->raw_region.raw_start = offset;
+		}
+		key_profile->raw_extract_num++;
+		key_profile->raw_region.raw_size +=
+			key_profile->key_size[pos];
+
+		offset += item_size;
+		dpkg->num_extracts++;
 	}
 
-	key_info->key_max_size = size;
 	return 0;
 }
 
+static int
+dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
+	int offset, int size, enum dpaa2_flow_dist_type dist_type,
+	int tc_id, int *recfg)
+{
+	struct dpaa2_key_profile *key_profile;
+	struct dpaa2_raw_region *raw_region;
+	int end = offset + size, ret = 0, extract_extended, sz_extend;
+	int start_cmp, end_cmp, new_size, index, pos, end_pos;
+	int last_extract_size, item_size, num_extracts, bk_num = 0;
+	struct dpkg_extract extract_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint8_t key_offset_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint8_t key_size_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	struct key_prot_field prot_field_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	struct dpaa2_raw_region raw_hole;
+	struct dpkg_profile_cfg *dpkg;
+	enum net_prot prot;
+	uint32_t field;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+		dpkg = &priv->extract.qos_key_extract.dpkg;
+	} else {
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+		dpkg = &priv->extract.tc_key_extract[tc_id].dpkg;
+	}
+
+	raw_region = &key_profile->raw_region;
+	if (!raw_region->raw_size) {
+		/* New RAW region*/
+		ret = dpaa2_flow_extract_new_raw(priv, offset, size,
+			dist_type, tc_id);
+		if (!ret && recfg)
+			(*recfg) |= dist_type;
+
+		return ret;
+	}
+	start_cmp = raw_region->raw_start;
+	end_cmp = raw_region->raw_start + raw_region->raw_size;
+
+	if (offset >= start_cmp && end <= end_cmp)
+		return 0;
+
+	sz_extend = 0;
+	new_size = raw_region->raw_size;
+	if (offset < start_cmp) {
+		sz_extend += start_cmp - offset;
+		new_size += (start_cmp - offset);
+	}
+	if (end > end_cmp) {
+		sz_extend += end - end_cmp;
+		new_size += (end - end_cmp);
+	}
+
+	last_extract_size = (new_size % DPAA2_FLOW_MAX_KEY_SIZE);
+	num_extracts = (new_size / DPAA2_FLOW_MAX_KEY_SIZE);
+	if (last_extract_size)
+		num_extracts++;
+	else
+		last_extract_size = DPAA2_FLOW_MAX_KEY_SIZE;
+
+	if ((key_profile->num + num_extracts -
+		key_profile->raw_extract_num) >=
+		DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("%s Failed to expand raw extracts",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (offset < start_cmp) {
+		raw_hole.raw_start = key_profile->raw_extract_off;
+		raw_hole.raw_size = start_cmp - offset;
+		raw_region->raw_start = offset;
+		raw_region->raw_size += start_cmp - offset;
+
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size);
+			if (ret)
+				return ret;
+		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE) {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size, tc_id);
+			if (ret)
+				return ret;
+		}
+	}
+
+	if (end > end_cmp) {
+		raw_hole.raw_start =
+			key_profile->raw_extract_off +
+			raw_region->raw_size;
+		raw_hole.raw_size = end - end_cmp;
+		raw_region->raw_size += end - end_cmp;
+
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size);
+			if (ret)
+				return ret;
+		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE) {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size, tc_id);
+			if (ret)
+				return ret;
+		}
+	}
+
+	end_pos = key_profile->raw_extract_pos +
+		key_profile->raw_extract_num;
+	if (key_profile->num > end_pos) {
+		bk_num = key_profile->num - end_pos;
+		memcpy(extract_bk, &dpkg->extracts[end_pos],
+			bk_num * sizeof(struct dpkg_extract));
+		memcpy(key_offset_bk, &key_profile->key_offset[end_pos],
+			bk_num * sizeof(uint8_t));
+		memcpy(key_size_bk, &key_profile->key_size[end_pos],
+			bk_num * sizeof(uint8_t));
+		memcpy(prot_field_bk, &key_profile->prot_field[end_pos],
+			bk_num * sizeof(struct key_prot_field));
+
+		for (index = 0; index < bk_num; index++) {
+			key_offset_bk[index] += sz_extend;
+			prot = prot_field_bk[index].prot;
+			field = prot_field_bk[index].key_field;
+			if (dpaa2_flow_l4_src_port_extract(prot,
+				field)) {
+				key_profile->l4_src_port_present = 1;
+				key_profile->l4_src_port_pos = end_pos + index;
+				key_profile->l4_src_port_offset =
+					key_offset_bk[index];
+			} else if (dpaa2_flow_l4_dst_port_extract(prot,
+				field)) {
+				key_profile->l4_dst_port_present = 1;
+				key_profile->l4_dst_port_pos = end_pos + index;
+				key_profile->l4_dst_port_offset =
+					key_offset_bk[index];
+			}
+		}
+	}
+
+	pos = key_profile->raw_extract_pos;
+
+	for (index = 0; index < num_extracts; index++) {
+		if (index == num_extracts - 1)
+			item_size = last_extract_size;
+		else
+			item_size = DPAA2_FLOW_MAX_KEY_SIZE;
+		field = offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+		field |= item_size;
+
+		if (pos > 0) {
+			key_profile->key_offset[pos] =
+				key_profile->key_offset[pos - 1] +
+				key_profile->key_size[pos - 1];
+		} else {
+			key_profile->key_offset[pos] = 0;
+		}
+		key_profile->key_size[pos] = item_size;
+		key_profile->prot_field[pos].prot = NET_PROT_PAYLOAD;
+		key_profile->prot_field[pos].key_field = field;
+
+		dpkg->extracts[pos].type = DPKG_EXTRACT_FROM_DATA;
+		dpkg->extracts[pos].extract.from_data.size = item_size;
+		dpkg->extracts[pos].extract.from_data.offset = offset;
+		offset += item_size;
+		pos++;
+	}
+
+	if (bk_num) {
+		memcpy(&dpkg->extracts[pos], extract_bk,
+			bk_num * sizeof(struct dpkg_extract));
+		memcpy(&key_profile->key_offset[end_pos],
+			key_offset_bk, bk_num * sizeof(uint8_t));
+		memcpy(&key_profile->key_size[end_pos],
+			key_size_bk, bk_num * sizeof(uint8_t));
+		memcpy(&key_profile->prot_field[end_pos],
+			prot_field_bk, bk_num * sizeof(struct key_prot_field));
+	}
+
+	extract_extended = num_extracts - key_profile->raw_extract_num;
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		key_profile->ip_addr_extract_pos += extract_extended;
+		key_profile->ip_addr_extract_off += sz_extend;
+	}
+	key_profile->raw_extract_num = num_extracts;
+	key_profile->num += extract_extended;
+	key_profile->key_max_size += sz_extend;
+
+	dpkg->num_extracts += extract_extended;
+	if (!ret && recfg)
+		(*recfg) |= dist_type;
+
+	return ret;
+}
+
 static inline int
 dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 	enum net_prot prot, uint32_t key_field)
@@ -847,7 +1077,6 @@ dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
 	int i;
 
 	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
-
 	if (i >= 0)
 		return key_profile->key_offset[i];
 	else
@@ -996,13 +1225,37 @@ dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 }
 
 static inline int
-dpaa2_flow_rule_data_set_raw(struct dpni_rule_cfg *rule,
-			     const void *key, const void *mask, int size)
+dpaa2_flow_raw_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	uint32_t extract_offset, int size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int offset = 0;
+	int extract_size = size > DPAA2_FLOW_MAX_KEY_SIZE ?
+		DPAA2_FLOW_MAX_KEY_SIZE : size;
+	int offset, field;
+
+	field = extract_offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+	field |= extract_size;
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			NET_PROT_PAYLOAD, field);
+	if (offset < 0) {
+		DPAA2_PMD_ERR("offset(%d)/size(%d) raw extract failed",
+			extract_offset, size);
+		return -EINVAL;
+	}
 
-	memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
-	memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, size);
+		memcpy((flow->qos_mask_addr + offset), mask, size);
+		flow->qos_rule_size = offset + size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, size);
+		memcpy((flow->fs_mask_addr + offset), mask, size);
+		flow->fs_rule_size = offset + size;
+	}
 
 	return 0;
 }
@@ -2237,22 +2490,36 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_raw *spec = pattern->spec;
 	const struct rte_flow_item_raw *mask = pattern->mask;
-	int prev_key_size =
-		priv->extract.qos_key_extract.key_profile.key_max_size;
 	int local_cfg = 0, ret;
 	uint32_t group;
+	struct dpaa2_key_extract *qos_key_extract;
+	struct dpaa2_key_extract *tc_key_extract;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
 		DPAA2_PMD_ERR("spec or mask not present.");
 		return -EINVAL;
 	}
-	/* Only supports non-relative with offset 0 */
-	if (spec->relative || spec->offset != 0 ||
-	    spec->search || spec->limit) {
-		DPAA2_PMD_ERR("relative and non zero offset not supported.");
+
+	if (spec->relative) {
+		/* TBD: relative offset support.
+		 * To support relative offset of previous L3 protocol item,
+		 * extracts should be expanded to identify if the frame is:
+		 * vlan or non-vlan.
+		 *
+		 * To support relative offset of previous L4 protocol item,
+		 * extracts should be expanded to identify if the frame is:
+		 * vlan/IPv4 or vlan/IPv6 or non-vlan/IPv4 or non-vlan/IPv6.
+		 */
+		DPAA2_PMD_ERR("relative not supported.");
+		return -EINVAL;
+	}
+
+	if (spec->search) {
+		DPAA2_PMD_ERR("search not supported.");
 		return -EINVAL;
 	}
+
 	/* Spec len and mask len should be same */
 	if (spec->length != mask->length) {
 		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
@@ -2264,36 +2531,44 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (prev_key_size <= spec->length) {
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
-						 spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FLOW_QOS_TYPE;
+	qos_key_extract = &priv->extract.qos_key_extract;
+	tc_key_extract = &priv->extract.tc_key_extract[group];
 
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.tc_key_extract[group],
-					spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FLOW_FS_TYPE;
+	ret = dpaa2_flow_extract_add_raw(priv,
+			spec->offset, spec->length,
+			DPAA2_FLOW_QOS_TYPE, 0, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS Extract RAW add failed.");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
-					   mask->pattern, spec->length);
+	ret = dpaa2_flow_extract_add_raw(priv,
+			spec->offset, spec->length,
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("FS[%d] Extract RAW add failed.",
+			group);
+		return -EINVAL;
+	}
+
+	ret = dpaa2_flow_raw_rule_data_set(flow,
+			&qos_key_extract->key_profile,
+			spec->offset, spec->length,
+			spec->pattern, mask->pattern,
+			DPAA2_FLOW_QOS_TYPE);
 	if (ret) {
 		DPAA2_PMD_ERR("QoS RAW rule data set failed");
-		return -1;
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
-					   mask->pattern, spec->length);
+	ret = dpaa2_flow_raw_rule_data_set(flow,
+			&tc_key_extract->key_profile,
+			spec->offset, spec->length,
+			spec->pattern, mask->pattern,
+			DPAA2_FLOW_FS_TYPE);
 	if (ret) {
 		DPAA2_PMD_ERR("FS RAW rule data set failed");
-		return -1;
+		return -EINVAL;
 	}
 
 	(*device_configured) |= local_cfg;
-- 
2.25.1



* [v3 27/43] net/dpaa2: frame attribute flags parser
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (25 preceding siblings ...)
  2024-10-14 12:01       ` [v3 26/43] net/dpaa2: enhancement of raw flow extract vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 28/43] net/dpaa2: add VXLAN distribution support vanshika.shukla
                         ` (16 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

FAF (frame attribute flags) parser extracts are used to identify the
protocol type, instead of extracts keyed on the previous protocol's
type. FAF starts from offset 2 so as to include the user-defined
flags, which will be used for soft protocol distribution.
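
For review convenience, a minimal standalone sketch of the FAF bit
arithmetic this patch relies on (faf_bit_to_key() is a hypothetical
helper, not part of the patch; the driver applies the same math on the
rule key/mask at the extract's key offset):

  #include <stdint.h>

  /* A FAF bit offset selects one byte of the parser result and one
   * bit within it; bits are numbered MSB-first inside each byte,
   * hence the "7 -" inversion also seen in dpaa2_flow_faf_add_rule().
   */
  static void faf_bit_to_key(uint32_t faf_bit_off, uint8_t *key, uint8_t *mask)
  {
          uint8_t faf_byte = faf_bit_off / 8;
          uint8_t bit = 7 - (faf_bit_off % 8);

          /* Match exactly this FAF bit of the extracted FAF byte. */
          key[faf_byte] |= (uint8_t)(1 << bit);
          mask[faf_byte] |= (uint8_t)(1 << bit);
  }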

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 475 +++++++++++++++++++--------------
 1 file changed, 273 insertions(+), 202 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 69faf36a8c..04720a1277 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -22,13 +22,6 @@
 #include <dpaa2_ethdev.h>
 #include <dpaa2_pmd_logs.h>
 
-/* Workaround to discriminate the UDP/TCP/SCTP
- * with next protocol of l3.
- * MC/WRIOP are not able to identify
- * the l4 protocol with l4 ports.
- */
-static int mc_l4_port_identification;
-
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
@@ -260,6 +253,10 @@ dpaa2_flow_qos_extracts_log(const struct dpaa2_dev_priv *priv)
 			sprintf(string, "raw offset/len: %d/%d",
 				extract->extract.from_data.offset,
 				extract->extract.from_data.size);
+		} else if (type == DPKG_EXTRACT_FROM_PARSE) {
+			sprintf(string, "parse offset/len: %d/%d",
+				extract->extract.from_parse.offset,
+				extract->extract.from_parse.size);
 		}
 		DPAA2_FLOW_DUMP("%s", string);
 		if ((idx + 1) < dpkg->num_extracts)
@@ -298,6 +295,10 @@ dpaa2_flow_fs_extracts_log(const struct dpaa2_dev_priv *priv,
 			sprintf(string, "raw offset/len: %d/%d",
 				extract->extract.from_data.offset,
 				extract->extract.from_data.size);
+		} else if (type == DPKG_EXTRACT_FROM_PARSE) {
+			sprintf(string, "parse offset/len: %d/%d",
+				extract->extract.from_parse.offset,
+				extract->extract.from_parse.size);
 		}
 		DPAA2_FLOW_DUMP("%s", string);
 		if ((idx + 1) < dpkg->num_extracts)
@@ -631,6 +632,66 @@ dpaa2_flow_fs_rule_insert_hole(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_faf_advance(struct dpaa2_dev_priv *priv,
+	int faf_byte, enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off++;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, 1);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, 1, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = 1;
+	key_profile->prot_field[pos].type = DPAA2_FAF_KEY;
+	key_profile->prot_field[pos].key_field = faf_byte;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	key_profile->key_max_size++;
+
+	return pos;
+}
+
 /* Move IPv4/IPv6 addresses to fill new extract previous IP address.
  * Current MC/WRIOP only support generic IP extract but IP address
  * is not fixed, so we have to put them at end of extracts, otherwise,
@@ -692,6 +753,7 @@ dpaa2_flow_key_profile_advance(enum net_prot prot,
 	}
 
 	key_profile->key_size[pos] = field_size;
+	key_profile->prot_field[pos].type = DPAA2_NET_PROT_KEY;
 	key_profile->prot_field[pos].prot = prot;
 	key_profile->prot_field[pos].key_field = field;
 	key_profile->num++;
@@ -715,6 +777,55 @@ dpaa2_flow_key_profile_advance(enum net_prot prot,
 	return pos;
 }
 
+static int
+dpaa2_flow_faf_add_hdr(int faf_byte,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i, offset;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	pos = dpaa2_flow_faf_advance(priv,
+			faf_byte, dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last pos, must have IP address extract. */
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
+	}
+
+	offset = DPAA2_FAFE_PSR_OFFSET + faf_byte;
+
+	extracts[pos].type = DPKG_EXTRACT_FROM_PARSE;
+	extracts[pos].extract.from_parse.offset = offset;
+	extracts[pos].extract.from_parse.size = 1;
+
+	dpkg->num_extracts++;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_extract_add_hdr(enum net_prot prot,
 	uint32_t field, uint8_t field_size,
@@ -1001,6 +1112,7 @@ dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
 			key_profile->key_offset[pos] = 0;
 		}
 		key_profile->key_size[pos] = item_size;
+		key_profile->prot_field[pos].type = DPAA2_NET_PROT_KEY;
 		key_profile->prot_field[pos].prot = NET_PROT_PAYLOAD;
 		key_profile->prot_field[pos].key_field = field;
 
@@ -1040,7 +1152,7 @@ dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
 
 static inline int
 dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
-	enum net_prot prot, uint32_t key_field)
+	enum key_prot_type type, enum net_prot prot, uint32_t key_field)
 {
 	int pos;
 	struct key_prot_field *prot_field;
@@ -1053,16 +1165,23 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 
 	prot_field = key_profile->prot_field;
 	for (pos = 0; pos < key_profile->num; pos++) {
-		if (prot_field[pos].prot == prot &&
-			prot_field[pos].key_field == key_field) {
+		if (type == DPAA2_NET_PROT_KEY &&
+			prot_field[pos].prot == prot &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
+			return pos;
+		else if (type == DPAA2_FAF_KEY &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
 			return pos;
-		}
 	}
 
-	if (dpaa2_flow_l4_src_port_extract(prot, key_field)) {
+	if (type == DPAA2_NET_PROT_KEY &&
+		dpaa2_flow_l4_src_port_extract(prot, key_field)) {
 		if (key_profile->l4_src_port_present)
 			return key_profile->l4_src_port_pos;
-	} else if (dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
+	} else if (type == DPAA2_NET_PROT_KEY &&
+		dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
 		if (key_profile->l4_dst_port_present)
 			return key_profile->l4_dst_port_pos;
 	}
@@ -1072,80 +1191,53 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 
 static inline int
 dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
-	enum net_prot prot, uint32_t key_field)
+	enum key_prot_type type, enum net_prot prot, uint32_t key_field)
 {
 	int i;
 
-	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
+	i = dpaa2_flow_extract_search(key_profile, type, prot, key_field);
 	if (i >= 0)
 		return key_profile->key_offset[i];
 	else
 		return i;
 }
 
-struct prev_proto_field_id {
-	enum net_prot prot;
-	union {
-		rte_be16_t eth_type;
-		uint8_t ip_proto;
-	};
-};
-
 static int
-dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
+dpaa2_flow_faf_add_rule(struct dpaa2_dev_priv *priv,
 	struct dpaa2_dev_flow *flow,
-	const struct prev_proto_field_id *prev_proto,
+	enum dpaa2_rx_faf_offset faf_bit_off,
 	int group,
 	enum dpaa2_flow_dist_type dist_type)
 {
 	int offset;
 	uint8_t *key_addr;
 	uint8_t *mask_addr;
-	uint32_t field = 0;
-	rte_be16_t eth_type;
-	uint8_t ip_proto;
 	struct dpaa2_key_extract *key_extract;
 	struct dpaa2_key_profile *key_profile;
+	uint8_t faf_byte = faf_bit_off / 8;
+	uint8_t faf_bit_in_byte = faf_bit_off % 8;
 
-	if (prev_proto->prot == NET_PROT_ETH) {
-		field = NH_FLD_ETH_TYPE;
-	} else if (prev_proto->prot == NET_PROT_IP) {
-		field = NH_FLD_IP_PROTO;
-	} else {
-		DPAA2_PMD_ERR("Prev proto(%d) not support!",
-			prev_proto->prot);
-		return -EINVAL;
-	}
+	faf_bit_in_byte = 7 - faf_bit_in_byte;
 
 	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
 		key_extract = &priv->extract.qos_key_extract;
 		key_profile = &key_extract->key_profile;
 
 		offset = dpaa2_flow_extract_key_offset(key_profile,
-				prev_proto->prot, field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (offset < 0) {
 			DPAA2_PMD_ERR("%s QoS key extract failed", __func__);
 			return -EINVAL;
 		}
 		key_addr = flow->qos_key_addr + offset;
 		mask_addr = flow->qos_mask_addr + offset;
-		if (prev_proto->prot == NET_PROT_ETH) {
-			eth_type = prev_proto->eth_type;
-			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
-			eth_type = 0xffff;
-			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
-			flow->qos_rule_size += sizeof(rte_be16_t);
-		} else if (prev_proto->prot == NET_PROT_IP) {
-			ip_proto = prev_proto->ip_proto;
-			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
-			ip_proto = 0xff;
-			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
-			flow->qos_rule_size += sizeof(uint8_t);
-		} else {
-			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
-				prev_proto->prot);
-			return -EINVAL;
-		}
+
+		if (!(*key_addr) &&
+			key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size++;
+
+		*key_addr |= (1 << faf_bit_in_byte);
+		*mask_addr |= (1 << faf_bit_in_byte);
 	}
 
 	if (dist_type & DPAA2_FLOW_FS_TYPE) {
@@ -1153,7 +1245,7 @@ dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
 		key_profile = &key_extract->key_profile;
 
 		offset = dpaa2_flow_extract_key_offset(key_profile,
-				prev_proto->prot, field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (offset < 0) {
 			DPAA2_PMD_ERR("%s TC[%d] key extract failed",
 				__func__, group);
@@ -1162,23 +1254,12 @@ dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
 		key_addr = flow->fs_key_addr + offset;
 		mask_addr = flow->fs_mask_addr + offset;
 
-		if (prev_proto->prot == NET_PROT_ETH) {
-			eth_type = prev_proto->eth_type;
-			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
-			eth_type = 0xffff;
-			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
-			flow->fs_rule_size += sizeof(rte_be16_t);
-		} else if (prev_proto->prot == NET_PROT_IP) {
-			ip_proto = prev_proto->ip_proto;
-			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
-			ip_proto = 0xff;
-			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
-			flow->fs_rule_size += sizeof(uint8_t);
-		} else {
-			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
-				prev_proto->prot);
-			return -EINVAL;
-		}
+		if (!(*key_addr) &&
+			key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size++;
+
+		*key_addr |= (1 << faf_bit_in_byte);
+		*mask_addr |= (1 << faf_bit_in_byte);
 	}
 
 	return 0;
@@ -1200,7 +1281,7 @@ dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 	}
 
 	offset = dpaa2_flow_extract_key_offset(key_profile,
-			prot, field);
+			DPAA2_NET_PROT_KEY, prot, field);
 	if (offset < 0) {
 		DPAA2_PMD_ERR("P(%d)/F(%d) does not exist!",
 			prot, field);
@@ -1238,7 +1319,7 @@ dpaa2_flow_raw_rule_data_set(struct dpaa2_dev_flow *flow,
 	field = extract_offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
 	field |= extract_size;
 	offset = dpaa2_flow_extract_key_offset(key_profile,
-			NET_PROT_PAYLOAD, field);
+			DPAA2_NET_PROT_KEY, NET_PROT_PAYLOAD, field);
 	if (offset < 0) {
 		DPAA2_PMD_ERR("offset(%d)/size(%d) raw extract failed",
 			extract_offset, size);
@@ -1321,60 +1402,39 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 }
 
 static int
-dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
+dpaa2_flow_identify_by_faf(struct dpaa2_dev_priv *priv,
 	struct dpaa2_dev_flow *flow,
-	const struct prev_proto_field_id *prev_prot,
+	enum dpaa2_rx_faf_offset faf_off,
 	enum dpaa2_flow_dist_type dist_type,
 	int group, int *recfg)
 {
-	int ret, index, local_cfg = 0, size = 0;
+	int ret, index, local_cfg = 0;
 	struct dpaa2_key_extract *extract;
 	struct dpaa2_key_profile *key_profile;
-	enum net_prot prot = prev_prot->prot;
-	uint32_t key_field = 0;
-
-	if (prot == NET_PROT_ETH) {
-		key_field = NH_FLD_ETH_TYPE;
-		size = sizeof(rte_be16_t);
-	} else if (prot == NET_PROT_IP) {
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else if (prot == NET_PROT_IPV4) {
-		prot = NET_PROT_IP;
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else if (prot == NET_PROT_IPV6) {
-		prot = NET_PROT_IP;
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else {
-		DPAA2_PMD_ERR("Invalid Prev prot(%d)", prot);
-		return -EINVAL;
-	}
+	uint8_t faf_byte = faf_off / 8;
 
 	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
 		extract = &priv->extract.qos_key_extract;
 		key_profile = &extract->key_profile;
 
 		index = dpaa2_flow_extract_search(key_profile,
-				prot, key_field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add_hdr(prot,
-					key_field, size, priv,
-					DPAA2_FLOW_QOS_TYPE, group,
+			ret = dpaa2_flow_faf_add_hdr(faf_byte,
+					priv, DPAA2_FLOW_QOS_TYPE, group,
 					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("QOS prev extract add failed");
+				DPAA2_PMD_ERR("QOS faf extract add failed");
 
 				return -EINVAL;
 			}
 			local_cfg |= DPAA2_FLOW_QOS_TYPE;
 		}
 
-		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+		ret = dpaa2_flow_faf_add_rule(priv, flow, faf_off, group,
 				DPAA2_FLOW_QOS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("QoS prev rule set failed");
+			DPAA2_PMD_ERR("QoS faf rule set failed");
 			return -EINVAL;
 		}
 	}
@@ -1384,14 +1444,13 @@ dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
 		key_profile = &extract->key_profile;
 
 		index = dpaa2_flow_extract_search(key_profile,
-				prot, key_field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add_hdr(prot,
-					key_field, size, priv,
-					DPAA2_FLOW_FS_TYPE, group,
+			ret = dpaa2_flow_faf_add_hdr(faf_byte,
+					priv, DPAA2_FLOW_FS_TYPE, group,
 					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS[%d] prev extract add failed",
+				DPAA2_PMD_ERR("FS[%d] faf extract add failed",
 					group);
 
 				return -EINVAL;
@@ -1399,17 +1458,17 @@ dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
 			local_cfg |= DPAA2_FLOW_FS_TYPE;
 		}
 
-		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+		ret = dpaa2_flow_faf_add_rule(priv, flow, faf_off, group,
 				DPAA2_FLOW_FS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS[%d] prev rule set failed",
+			DPAA2_PMD_ERR("FS[%d] faf rule set failed",
 				group);
 			return -EINVAL;
 		}
 	}
 
 	if (recfg)
-		*recfg = local_cfg;
+		*recfg |= local_cfg;
 
 	return 0;
 }
@@ -1436,7 +1495,7 @@ dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
 	key_profile = &key_extract->key_profile;
 
 	index = dpaa2_flow_extract_search(key_profile,
-			prot, field);
+			DPAA2_NET_PROT_KEY, prot, field);
 	if (index < 0) {
 		ret = dpaa2_flow_extract_add_hdr(prot,
 				field, size, priv,
@@ -1575,6 +1634,7 @@ dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
 		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
 	}
 	key_profile->num++;
+	key_profile->prot_field[num].type = DPAA2_NET_PROT_KEY;
 
 	dpkg->extracts[num].extract.from_hdr.prot = prot;
 	dpkg->extracts[num].extract.from_hdr.field = field;
@@ -1685,15 +1745,28 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 	spec = pattern->spec;
 	mask = pattern->mask ?
 			pattern->mask : &dpaa2_flow_item_eth_mask;
-	if (!spec) {
-		DPAA2_PMD_WARN("No pattern spec for Eth flow");
-		return -EINVAL;
-	}
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ETH_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ETH_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
 		RTE_FLOW_ITEM_TYPE_ETH)) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
@@ -1782,15 +1855,18 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_ETH;
-		prev_proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
-		ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_proto,
-				DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-				group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
 		if (ret)
 			return ret;
+
 		(*device_configured) |= local_cfg;
 		return 0;
 	}
@@ -1837,7 +1913,6 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	const void *key, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int size;
-	struct prev_proto_field_id prev_prot;
 
 	group = attr->group;
 
@@ -1850,19 +1925,21 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	prev_prot.prot = NET_PROT_ETH;
-	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
+					 DPAA2_FLOW_QOS_TYPE, group,
+					 &local_cfg);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE, group,
-			&local_cfg);
-	if (ret) {
-		DPAA2_PMD_ERR("IPv4 identification failed!");
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
+					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret)
 		return ret;
-	}
 
-	if (!spec_ipv4)
+	if (!spec_ipv4) {
+		(*device_configured) |= local_cfg;
 		return 0;
+	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
 				       RTE_FLOW_ITEM_TYPE_IPV4)) {
@@ -1954,7 +2031,6 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
 	int size;
-	struct prev_proto_field_id prev_prot;
 
 	group = attr->group;
 
@@ -1966,19 +2042,21 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	prev_prot.prot = NET_PROT_ETH;
-	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
+					 DPAA2_FLOW_QOS_TYPE, group,
+					 &local_cfg);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
-	if (ret) {
-		DPAA2_PMD_ERR("IPv6 identification failed!");
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
+					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret)
 		return ret;
-	}
 
-	if (!spec_ipv6)
+	if (!spec_ipv6) {
+		(*device_configured) |= local_cfg;
 		return 0;
+	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
 				       RTE_FLOW_ITEM_TYPE_IPV6)) {
@@ -2082,18 +2160,15 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		/* Next proto of Generical IP is actually used
-		 * for ICMP identification.
-		 * Example: flow create 0 ingress pattern icmp
-		 */
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ICMP_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_ICMP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ICMP_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
 		if (ret)
 			return ret;
 
@@ -2170,22 +2245,21 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_UDP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_UDP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_UDP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2257,22 +2331,21 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_TCP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_TCP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_TCP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2344,22 +2417,21 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_SCTP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_SCTP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_SCTP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2432,21 +2504,20 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GRE_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_GRE;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GRE_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
 		if (ret)
 			return ret;
 
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-- 
2.25.1



* [v3 28/43] net/dpaa2: add VXLAN distribution support
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (26 preceding siblings ...)
  2024-10-14 12:01       ` [v3 27/43] net/dpaa2: frame attribute flags parser vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 29/43] net/dpaa2: protocol inside tunnel distribution vanshika.shukla
                         ` (15 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Extract fields from the VXLAN header for distribution.
The VXLAN header is stored by the soft parser code in the
soft parser context, located at offset 43 of the parser results:

<assign-variable name="$softparsectx[0:3]" value="vxlan.vnid"/>

The VXLAN protocol is identified by the VXLAN bit of the frame
attribute flags. Parser-result extracts are added for this
functionality.

Example:
flow create 0 ingress pattern vxlan / end actions pf / queue index 4 / end
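
As a note for reviewers: the driver identifies each parser-result
extract by packing its offset and size into one 32-bit key field (see
dpaa2_flow_pr_advance() in the diff). A minimal sketch, assuming the
VNI sits at parser-result offset 43 as described above (the helper
name is hypothetical):

  #include <assert.h>
  #include <stdint.h>

  /* Key field identifying a parser-result extract. */
  static uint32_t pr_key_field(uint32_t pr_offset, uint32_t pr_size)
  {
          return (pr_offset << 16) | pr_size;
  }

  int main(void)
  {
          /* 3-byte VNI at parser-result offset 43. */
          assert(pr_key_field(43, 3) == 0x002B0003);
          return 0;
  }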

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h |   6 +-
 drivers/net/dpaa2/dpaa2_flow.c   | 313 +++++++++++++++++++++++++++++++
 2 files changed, 318 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 8f548467a4..aeddcfdfa9 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -282,8 +282,12 @@ enum ip_addr_extract_type {
 };
 
 enum key_prot_type {
+	/* HW extracts from standard protocol fields */
 	DPAA2_NET_PROT_KEY,
-	DPAA2_FAF_KEY
+	/* HW extracts from the FAF area of the parser result (PR) */
+	DPAA2_FAF_KEY,
+	/* HW extracts from the PR other than FAF */
+	DPAA2_PR_KEY
 };
 
 struct key_prot_field {
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 04720a1277..da40be8328 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -38,6 +38,8 @@ enum dpaa2_flow_dist_type {
 #define DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT	16
 #define DPAA2_FLOW_MAX_KEY_SIZE			16
 
+#define VXLAN_HF_VNI 0x08
+
 struct dpaa2_dev_flow {
 	LIST_ENTRY(dpaa2_dev_flow) next;
 	struct dpni_rule_cfg qos_rule;
@@ -144,6 +146,11 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
+
+static const struct rte_flow_item_vxlan dpaa2_flow_item_vxlan_mask = {
+	.flags = 0xff,
+	.vni = "\xff\xff\xff",
+};
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -692,6 +699,68 @@ dpaa2_flow_faf_advance(struct dpaa2_dev_priv *priv,
 	return pos;
 }
 
+static int
+dpaa2_flow_pr_advance(struct dpaa2_dev_priv *priv,
+	uint32_t pr_offset, uint32_t pr_size,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off += pr_size;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, pr_size);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, pr_size, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = pr_size;
+	key_profile->prot_field[pos].type = DPAA2_PR_KEY;
+	key_profile->prot_field[pos].key_field =
+		(pr_offset << 16) | pr_size;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	key_profile->key_max_size += pr_size;
+
+	return pos;
+}
+
 /* Move IPv4/IPv6 addresses to fill new extract previous IP address.
  * Current MC/WRIOP only support generic IP extract but IP address
  * is not fixed, so we have to put them at end of extracts, otherwise,
@@ -826,6 +895,59 @@ dpaa2_flow_faf_add_hdr(int faf_byte,
 	return 0;
 }
 
+static int
+dpaa2_flow_pr_add_hdr(uint32_t pr_offset,
+	uint32_t pr_size, struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if ((pr_offset + pr_size) > DPAA2_FAPR_SIZE) {
+		DPAA2_PMD_ERR("PR extracts(%d:%d) overflow",
+			pr_offset, pr_size);
+		return -EINVAL;
+	}
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	pos = dpaa2_flow_pr_advance(priv,
+			pr_offset, pr_size, dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last pos, must have IP address extract. */
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
+	}
+
+	extracts[pos].type = DPKG_EXTRACT_FROM_PARSE;
+	extracts[pos].extract.from_parse.offset = pr_offset;
+	extracts[pos].extract.from_parse.size = pr_size;
+
+	dpkg->num_extracts++;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_extract_add_hdr(enum net_prot prot,
 	uint32_t field, uint8_t field_size,
@@ -1174,6 +1296,10 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 			prot_field[pos].key_field == key_field &&
 			prot_field[pos].type == type)
 			return pos;
+		else if (type == DPAA2_PR_KEY &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
+			return pos;
 	}
 
 	if (type == DPAA2_NET_PROT_KEY &&
@@ -1265,6 +1391,41 @@ dpaa2_flow_faf_add_rule(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static inline int
+dpaa2_flow_pr_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	uint32_t pr_offset, uint32_t pr_size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int offset;
+	uint32_t pr_field = pr_offset << 16 | pr_size;
+
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			DPAA2_PR_KEY, NET_PROT_NONE, pr_field);
+	if (offset < 0) {
+		DPAA2_PMD_ERR("PR off(%d)/size(%d) does not exist!",
+			pr_offset, pr_size);
+		return -EINVAL;
+	}
+
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, pr_size);
+		memcpy((flow->qos_mask_addr + offset), mask, pr_size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size = offset + pr_size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, pr_size);
+		memcpy((flow->fs_mask_addr + offset), mask, pr_size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size = offset + pr_size;
+	}
+
+	return 0;
+}
+
 static inline int
 dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 	struct dpaa2_key_profile *key_profile,
@@ -1386,6 +1547,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_gre_mask;
 		size = sizeof(struct rte_flow_item_gre);
 		break;
+	case RTE_FLOW_ITEM_TYPE_VXLAN:
+		mask_support = (const char *)&dpaa2_flow_item_vxlan_mask;
+		size = sizeof(struct rte_flow_item_vxlan);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -1473,6 +1638,55 @@ dpaa2_flow_identify_by_faf(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_add_pr_extract_rule(struct dpaa2_dev_flow *flow,
+	uint32_t pr_offset, uint32_t pr_size,
+	const void *key, const void *mask,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int index, ret, local_cfg = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
+	uint32_t pr_field = pr_offset << 16 | pr_size;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	key_profile = &key_extract->key_profile;
+
+	index = dpaa2_flow_extract_search(key_profile,
+			DPAA2_PR_KEY, NET_PROT_NONE, pr_field);
+	if (index < 0) {
+		ret = dpaa2_flow_pr_add_hdr(pr_offset,
+				pr_size, priv,
+				dist_type, tc_id, NULL);
+		if (ret) {
+			DPAA2_PMD_ERR("PR add off(%d)/size(%d) failed",
+				pr_offset, pr_size);
+
+			return ret;
+		}
+		local_cfg |= dist_type;
+	}
+
+	ret = dpaa2_flow_pr_rule_data_set(flow, key_profile,
+			pr_offset, pr_size, key, mask, dist_type);
+	if (ret) {
+		DPAA2_PMD_ERR("PR off(%d)/size(%d) rule data set failed",
+			pr_offset, pr_size);
+
+		return ret;
+	}
+
+	if (recfg)
+		*recfg |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
 	enum net_prot prot, uint32_t field,
@@ -2549,6 +2763,90 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_vxlan *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_vxlan_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_VXLAN_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_VXLAN_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VXLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of VXLAN not support.");
+
+		return -1;
+	}
+
+	if (mask->flags) {
+		if (spec->flags != VXLAN_HF_VNI) {
+			DPAA2_PMD_ERR("vxlan flag(0x%02x) must be 0x%02x.",
+				spec->flags, VXLAN_HF_VNI);
+			return -EINVAL;
+		}
+		if (mask->flags != 0xff) {
+			DPAA2_PMD_ERR("Not support to extract vxlan flag.");
+			return -EINVAL;
+		}
+	}
+
+	if (mask->vni[0] || mask->vni[1] || mask->vni[2]) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_VNI_OFFSET,
+			sizeof(mask->vni), spec->vni,
+			mask->vni,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_VNI_OFFSET,
+			sizeof(mask->vni), spec->vni,
+			mask->vni,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -2764,6 +3062,9 @@ dpaa2_flow_verify_action(struct dpaa2_dev_priv *priv,
 				}
 			}
 
+			break;
+		case RTE_FLOW_ACTION_TYPE_PF:
+			/* Skip this action; added to support vxlan flows */
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
@@ -3114,6 +3415,15 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				return ret;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+			ret = dpaa2_configure_flow_vxlan(flow,
+					dev, attr, &pattern[i], actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("VXLAN flow config failed!");
+				return ret;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow,
 					dev, attr, &pattern[i],
@@ -3226,6 +3536,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			if (ret)
 				return ret;
 
+			break;
+		case RTE_FLOW_ACTION_TYPE_PF:
+			/* Skip this action; added to support vxlan flows */
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
-- 
2.25.1



* [v3 29/43] net/dpaa2: protocol inside tunnel distribution
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (27 preceding siblings ...)
  2024-10-14 12:01       ` [v3 28/43] net/dpaa2: add VXLAN distribution support vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 30/43] net/dpaa2: eCPRI support by parser result vanshika.shukla
                         ` (14 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Distribute flows by the protocols inside a tunnel.
The tunnel flow items applied by the application are ordered from
outer to inner, and the inner items start after the tunnel item
itself (vxlan, GRE, etc.).

For example:
flow create 0 ingress pattern ipv4 / vxlan / ipv6 / end
	actions pf / queue index 2 / end

So the items following the tunnel item are tagged as "inner".
The inner items are extracted from the parser results, which are set
by the soft parser.
So far only the vxlan tunnel is supported. Limited by the soft parser
area, only the ethernet and vlan headers inside the tunnel can be used
for flow distribution. IPv4, IPv6, UDP and TCP inside the tunnel can be
detected via user-defined FAF bits set by the soft parser and used for
flow distribution.
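
A minimal sketch of the inner-item tagging described above
(illustrative only; tag_inner_items() is a hypothetical helper, while
the patch itself wraps the pattern into struct rte_dpaa2_flow_item):

  #include <stdbool.h>
  #include <rte_flow.h>

  struct tagged_item {
          struct rte_flow_item generic_item;
          int in_tunnel;
  };

  /* Items following the tunnel item (here: VXLAN) are tagged inner. */
  static void tag_inner_items(const struct rte_flow_item pattern[],
                              struct tagged_item out[], int nb_items)
  {
          bool in_tunnel = false;
          int i;

          for (i = 0; i < nb_items; i++) {
                  out[i].generic_item = pattern[i];
                  out[i].in_tunnel = in_tunnel;
                  if (pattern[i].type == RTE_FLOW_ITEM_TYPE_VXLAN)
                          in_tunnel = true;
          }
  }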

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 587 +++++++++++++++++++++++++++++----
 1 file changed, 519 insertions(+), 68 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index da40be8328..f3fccf6c71 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -58,6 +58,11 @@ struct dpaa2_dev_flow {
 	struct dpni_fs_action_cfg fs_action_cfg;
 };
 
+struct rte_dpaa2_flow_item {
+	struct rte_flow_item generic_item;
+	int in_tunnel;
+};
+
 static const
 enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_END,
@@ -1939,10 +1944,203 @@ dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+dpaa2_configure_flow_tunnel_eth(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_eth *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+			pattern->mask : &dpaa2_flow_item_eth_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec)
+		return 0;
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH)) {
+		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
+
+		return -EINVAL;
+	}
+
+	if (memcmp((const char *)&mask->src,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		/*SRC[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR0_OFFSET,
+			1, &spec->src.addr_bytes[0],
+			&mask->src.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[1:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR1_OFFSET,
+			2, &spec->src.addr_bytes[1],
+			&mask->src.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[3:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR3_OFFSET,
+			1, &spec->src.addr_bytes[3],
+			&mask->src.addr_bytes[3],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[4:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR4_OFFSET,
+			2, &spec->src.addr_bytes[4],
+			&mask->src.addr_bytes[4],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		/*SRC[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR0_OFFSET,
+			1, &spec->src.addr_bytes[0],
+			&mask->src.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[1:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR1_OFFSET,
+			2, &spec->src.addr_bytes[1],
+			&mask->src.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[3:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR3_OFFSET,
+			1, &spec->src.addr_bytes[3],
+			&mask->src.addr_bytes[3],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[4:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR4_OFFSET,
+			2, &spec->src.addr_bytes[4],
+			&mask->src.addr_bytes[4],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->dst,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		/*DST[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR0_OFFSET,
+			1, &spec->dst.addr_bytes[0],
+			&mask->dst.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[1:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR1_OFFSET,
+			1, &spec->dst.addr_bytes[1],
+			&mask->dst.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[2:3]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR2_OFFSET,
+			3, &spec->dst.addr_bytes[2],
+			&mask->dst.addr_bytes[2],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[5:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR5_OFFSET,
+			1, &spec->dst.addr_bytes[5],
+			&mask->dst.addr_bytes[5],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		/*DST[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR0_OFFSET,
+			1, &spec->dst.addr_bytes[0],
+			&mask->dst.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[1:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR1_OFFSET,
+			1, &spec->dst.addr_bytes[1],
+			&mask->dst.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[2:3]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR2_OFFSET,
+			3, &spec->dst.addr_bytes[2],
+			&mask->dst.addr_bytes[2],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[5:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR5_OFFSET,
+			1, &spec->dst.addr_bytes[5],
+			&mask->dst.addr_bytes[5],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->type,
+		zero_cmp, sizeof(rte_be16_t))) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TYPE_OFFSET,
+			sizeof(rte_be16_t), &spec->type, &mask->type,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TYPE_OFFSET,
+			sizeof(rte_be16_t), &spec->type, &mask->type,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -1952,6 +2150,13 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 	const struct rte_flow_item_eth *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	if (dpaa2_pattern->in_tunnel) {
+		return dpaa2_configure_flow_tunnel_eth(flow,
+				dev, attr, pattern, device_configured);
+	}
 
 	group = attr->group;
 
@@ -2045,10 +2250,81 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
+dpaa2_configure_flow_tunnel_vlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_vlan *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_vlan_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAFE_VXLAN_IN_VLAN_FRAM,
+				DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAFE_VXLAN_IN_VLAN_FRAM,
+				DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of VLAN not supported.");
+
+		return -EINVAL;
+	}
+
+	if (!mask->tci)
+		return 0;
+
+	ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TCI_OFFSET,
+			sizeof(rte_be16_t), &spec->tci, &mask->tci,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TCI_OFFSET,
+			sizeof(rte_be16_t), &spec->tci, &mask->tci,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2057,6 +2333,13 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_vlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	if (dpaa2_pattern->in_tunnel) {
+		return dpaa2_configure_flow_tunnel_vlan(flow,
+				dev, attr, pattern, device_configured);
+	}
 
 	group = attr->group;
 
@@ -2116,7 +2399,7 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 static int
 dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
+			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
 			  const struct rte_flow_action actions[] __rte_unused,
 			  struct rte_flow_error *error __rte_unused,
 			  int *device_configured)
@@ -2127,6 +2410,7 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	const void *key, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int size;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2135,6 +2419,26 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	mask_ipv4 = pattern->mask ?
 		    pattern->mask : &dpaa2_flow_item_ipv4_mask;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec_ipv4) {
+			DPAA2_PMD_ERR("Tunnel-IPv4 distribution not supported");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV4_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV4_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
@@ -2233,7 +2537,7 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 static int
 dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
+			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
 			  const struct rte_flow_action actions[] __rte_unused,
 			  struct rte_flow_error *error __rte_unused,
 			  int *device_configured)
@@ -2245,6 +2549,7 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
 	int size;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2256,6 +2561,26 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec_ipv6) {
+			DPAA2_PMD_ERR("Tunnel-IPv6 distribution not supported");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV6_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV6_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
 					 DPAA2_FLOW_QOS_TYPE, group,
 					 &local_cfg);
@@ -2352,7 +2677,7 @@ static int
 dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2361,6 +2686,7 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_icmp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2373,6 +2699,11 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ICMP distribution not supported");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_ICMP_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2438,7 +2769,7 @@ static int
 dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2447,6 +2778,7 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_udp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2459,6 +2791,26 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec) {
+			DPAA2_PMD_ERR("Tunnel-UDP distribution not supported");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_UDP_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_UDP_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_UDP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2524,7 +2876,7 @@ static int
 dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2533,6 +2885,7 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_tcp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2545,6 +2898,26 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec) {
+			DPAA2_PMD_ERR("Tunnel-TCP distribution not supported");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_TCP_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_TCP_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_TCP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2610,7 +2983,7 @@ static int
 dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2619,6 +2992,7 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_sctp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2631,6 +3005,11 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-SCTP distribution not supported");
+		return -ENOTSUP;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_SCTP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2696,7 +3075,7 @@ static int
 dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2705,6 +3084,7 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_gre *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2717,6 +3097,11 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-GRE distribution not supported");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_GRE_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2767,7 +3152,7 @@ static int
 dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2776,6 +3161,7 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_vxlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2788,6 +3174,11 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-VXLAN distribution not supported");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_VXLAN_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2851,18 +3242,19 @@ static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const struct rte_flow_item_raw *spec = pattern->spec;
-	const struct rte_flow_item_raw *mask = pattern->mask;
 	int local_cfg = 0, ret;
 	uint32_t group;
 	struct dpaa2_key_extract *qos_key_extract;
 	struct dpaa2_key_extract *tc_key_extract;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
+	const struct rte_flow_item_raw *spec = pattern->spec;
+	const struct rte_flow_item_raw *mask = pattern->mask;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
@@ -3306,6 +3698,45 @@ dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_item_convert(const struct rte_flow_item pattern[],
+			struct rte_dpaa2_flow_item **dpaa2_pattern)
+{
+	struct rte_dpaa2_flow_item *new_pattern;
+	int num = 0, tunnel_start = 0;
+
+	while (pattern[num].type != RTE_FLOW_ITEM_TYPE_END)
+		num++;
+
+	/* One extra entry is needed for the trailing END item below. */
+	new_pattern = rte_malloc(NULL,
+			sizeof(struct rte_dpaa2_flow_item) * (num + 1),
+			RTE_CACHE_LINE_SIZE);
+	if (!new_pattern) {
+		DPAA2_PMD_ERR("Failed to alloc %d flow items", num);
+		return -ENOMEM;
+	}
+
+	num = 0;
+	while (pattern[num].type != RTE_FLOW_ITEM_TYPE_END) {
+		memcpy(&new_pattern[num].generic_item, &pattern[num],
+		       sizeof(struct rte_flow_item));
+		new_pattern[num].in_tunnel = 0;
+
+		if (pattern[num].type == RTE_FLOW_ITEM_TYPE_VXLAN)
+			tunnel_start = 1;
+		else if (tunnel_start)
+			new_pattern[num].in_tunnel = 1;
+		num++;
+	}
+
+	new_pattern[num].generic_item.type = RTE_FLOW_ITEM_TYPE_END;
+	*dpaa2_pattern = new_pattern;
+
+	return 0;
+}
+
 static int
 dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3322,6 +3753,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	uint16_t dist_size, key_size;
 	struct dpaa2_key_extract *qos_key_extract;
 	struct dpaa2_key_extract *tc_key_extract;
+	struct rte_dpaa2_flow_item *dpaa2_pattern = NULL;
 
 	ret = dpaa2_flow_verify_attr(priv, attr);
 	if (ret)
@@ -3331,107 +3763,121 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	if (ret)
 		return ret;
 
+	ret = dpaa2_flow_item_convert(pattern, &dpaa2_pattern);
+	if (ret)
+		return ret;
+
 	/* Parse pattern list to get the matching parameters */
 	while (!end_of_list) {
 		switch (pattern[i].type) {
 		case RTE_FLOW_ITEM_TYPE_ETH:
-			ret = dpaa2_configure_flow_eth(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_eth(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ETH flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_VLAN:
-			ret = dpaa2_configure_flow_vlan(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_vlan(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("vLan flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
-			ret = dpaa2_configure_flow_ipv4(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_ipv4(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV4 flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
-			ret = dpaa2_configure_flow_ipv6(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_ipv6(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV6 flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_ICMP:
-			ret = dpaa2_configure_flow_icmp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_icmp(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ICMP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_UDP:
-			ret = dpaa2_configure_flow_udp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_udp(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("UDP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
-			ret = dpaa2_configure_flow_tcp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_tcp(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("TCP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_SCTP:
-			ret = dpaa2_configure_flow_sctp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_sctp(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("SCTP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
-			ret = dpaa2_configure_flow_gre(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_gre(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("GRE flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = dpaa2_configure_flow_vxlan(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_vxlan(flow, dev, attr,
+							 &dpaa2_pattern[i],
+							 actions, error,
+							 &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("VXLAN flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
-			ret = dpaa2_configure_flow_raw(flow,
-					dev, attr, &pattern[i],
-					actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_raw(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("RAW flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_END:
@@ -3463,7 +3909,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			ret = dpaa2_configure_flow_fs_action(priv, flow,
 							     &actions[j]);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			/* Configure FS table first*/
 			dist_size = priv->nb_rx_queues / priv->num_rx_tc;
@@ -3473,20 +3919,20 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 								   dist_size,
 								   false);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			/* Configure QoS table then.*/
 			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
 				ret = dpaa2_configure_qos_table(priv, false);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (priv->num_rx_tc > 1) {
 				ret = dpaa2_flow_add_qos_rule(priv, flow);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (flow->tc_index >= priv->fs_entries) {
@@ -3497,7 +3943,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 
 			ret = dpaa2_flow_add_fs_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
@@ -3509,7 +3955,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			if (ret < 0) {
 				DPAA2_PMD_ERR("TC[%d] distset RSS failed",
 					      flow->tc_id);
-				return ret;
+				goto end_flow_set;
 			}
 
 			dist_size = rss_conf->queue_num;
@@ -3519,22 +3965,22 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 								   dist_size,
 								   true);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
 				ret = dpaa2_configure_qos_table(priv, true);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			ret = dpaa2_flow_add_qos_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			ret = dpaa2_flow_add_fs_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			break;
 		case RTE_FLOW_ACTION_TYPE_PF:
@@ -3551,6 +3997,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 		j++;
 	}
 
+end_flow_set:
 	if (!ret) {
 		/* New rules are inserted. */
 		if (!curr) {
@@ -3561,6 +4008,10 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			LIST_INSERT_AFTER(curr, flow, next);
 		}
 	}
+
+	if (dpaa2_pattern)
+		rte_free(dpaa2_pattern);
+
 	return ret;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
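
A note on dpaa2_flow_item_convert() above: it copies the caller's
pattern once and tags every item after a VXLAN item with in_tunnel = 1,
so the per-protocol handlers take their tunnel branches. A minimal
sketch of such a pattern (spec/mask values are illustrative, not from
this patch); after conversion, the trailing ETH item carries
in_tunnel = 1 and is routed to dpaa2_configure_flow_tunnel_eth(), which
extracts the inner destination MAC through the DPAA2_VXLAN_IN_DADDR*
parser-result offsets shown at the top of this message:

	struct rte_flow_item_eth inner_eth_spec = {
		.dst.addr_bytes = { 0x00, 0x04, 0x9f, 0x01, 0x02, 0x03 },
	};
	struct rte_flow_item_eth inner_eth_mask = {
		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },   /* outer, in_tunnel = 0 */
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },  /* outer, in_tunnel = 0 */
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },   /* outer, in_tunnel = 0 */
		{ .type = RTE_FLOW_ITEM_TYPE_VXLAN }, /* tunnel starts here */
		{ .type = RTE_FLOW_ITEM_TYPE_ETH,     /* inner, in_tunnel = 1 */
		  .spec = &inner_eth_spec, .mask = &inner_eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};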

* [v3 30/43] net/dpaa2: eCPRI support by parser result
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (28 preceding siblings ...)
  2024-10-14 12:01       ` [v3 29/43] net/dpaa2: protocol inside tunnel distribution vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 31/43] net/dpaa2: add GTP flow support vanshika.shukla
                         ` (13 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

The soft parser extracts the ECPRI header and message into specified
areas of the parser result.
Flows are then classified according to the ECPRI extracts taken from
the parser result.
This implementation supports ECPRI over Ethernet/VLAN/UDP and the
various type/message combinations.
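
As a usage sketch of what this enables (port_id, queue index and field
values are illustrative, not mandated by the patch), an application
could steer eCPRI IQ-data messages with a given PC ID to a dedicated
queue roughly as follows. Internally the patch encodes the message
type into a single FAFE byte: with FAFE_ECPRI_FRAM = 7,
ECPRI_FAFE_TYPE_n = (8 - 7) | (n << 1) = 2n + 1, i.e. 0x01 for IQ data
through 0x0f for event indication.

	struct rte_flow_attr attr = { .group = 0, .priority = 0, .ingress = 1 };
	struct rte_flow_item_ecpri ecpri_spec = {
		.hdr.common.type = RTE_ECPRI_MSG_TYPE_IQ_DATA,
		.hdr.type0.pc_id = RTE_BE16(0x10),
	};
	struct rte_flow_item_ecpri ecpri_mask = {
		.hdr.common.type = 0xff,
		.hdr.type0.pc_id = RTE_BE16(0xffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_ECPRI,
		  .spec = &ecpri_spec, .mask = &ecpri_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;
	struct rte_flow *f = rte_flow_create(port_id, &attr,
					     pattern, actions, &error);
	if (!f)
		printf("eCPRI flow creation failed: %s\n",
		       error.message ? error.message : "unknown");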

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h |  18 ++
 drivers/net/dpaa2/dpaa2_flow.c   | 348 ++++++++++++++++++++++++++++++-
 2 files changed, 365 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index aeddcfdfa9..eaa653d266 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -179,6 +179,8 @@ enum dpaa2_rx_faf_offset {
 	FAFE_VXLAN_IN_IPV6_FRAM = 2,
 	FAFE_VXLAN_IN_UDP_FRAM = 3,
 	FAFE_VXLAN_IN_TCP_FRAM = 4,
+
+	FAFE_ECPRI_FRAM = 7,
 	/* Set by SP end*/
 
 	FAF_GTP_PRIMED_FRAM = 1 + DPAA2_FAFE_PSR_SIZE * 8,
@@ -207,6 +209,17 @@ enum dpaa2_rx_faf_offset {
 	FAF_ESP_FRAM = 89 + DPAA2_FAFE_PSR_SIZE * 8,
 };
 
+enum dpaa2_ecpri_fafe_type {
+	ECPRI_FAFE_TYPE_0 = (8 - FAFE_ECPRI_FRAM),
+	ECPRI_FAFE_TYPE_1 = (8 - FAFE_ECPRI_FRAM) | (1 << 1),
+	ECPRI_FAFE_TYPE_2 = (8 - FAFE_ECPRI_FRAM) | (2 << 1),
+	ECPRI_FAFE_TYPE_3 = (8 - FAFE_ECPRI_FRAM) | (3 << 1),
+	ECPRI_FAFE_TYPE_4 = (8 - FAFE_ECPRI_FRAM) | (4 << 1),
+	ECPRI_FAFE_TYPE_5 = (8 - FAFE_ECPRI_FRAM) | (5 << 1),
+	ECPRI_FAFE_TYPE_6 = (8 - FAFE_ECPRI_FRAM) | (6 << 1),
+	ECPRI_FAFE_TYPE_7 = (8 - FAFE_ECPRI_FRAM) | (7 << 1)
+};
+
 #define DPAA2_PR_ETH_OFF_OFFSET 19
 #define DPAA2_PR_TCI_OFF_OFFSET 21
 #define DPAA2_PR_LAST_ETYPE_OFFSET 23
@@ -236,6 +249,11 @@ enum dpaa2_rx_faf_offset {
 #define DPAA2_VXLAN_IN_TYPE_OFFSET 46
 /* Set by SP for vxlan distribution end*/
 
+/* ECPRI shares SP context with VXLAN*/
+#define DPAA2_ECPRI_MSG_OFFSET DPAA2_VXLAN_VNI_OFFSET
+
+#define DPAA2_ECPRI_MAX_EXTRACT_NB 8
+
 struct ipv4_sd_addr_extract_rule {
 	uint32_t ipv4_src;
 	uint32_t ipv4_dst;
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index f3fccf6c71..f64562340c 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -156,6 +156,13 @@ static const struct rte_flow_item_vxlan dpaa2_flow_item_vxlan_mask = {
 	.flags = 0xff,
 	.vni = "\xff\xff\xff",
 };
+
+static const struct rte_flow_item_ecpri dpaa2_flow_item_ecpri_mask = {
+	.hdr.common.type = 0xff,
+	.hdr.dummy[0] = RTE_BE32(0xffffffff),
+	.hdr.dummy[1] = RTE_BE32(0xffffffff),
+	.hdr.dummy[2] = RTE_BE32(0xffffffff),
+};
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -1556,6 +1563,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_vxlan_mask;
 		size = sizeof(struct rte_flow_item_vxlan);
 		break;
+	case RTE_FLOW_ITEM_TYPE_ECPRI:
+		mask_support = (const char *)&dpaa2_flow_item_ecpri_mask;
+		size = sizeof(struct rte_flow_item_ecpri);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -3238,6 +3249,330 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ecpri *spec, *mask;
+	struct rte_flow_item_ecpri local_mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+	uint8_t extract_nb = 0, i;
+	uint64_t rule_data[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint64_t mask_data[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint8_t extract_size[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint8_t extract_off[DPAA2_ECPRI_MAX_EXTRACT_NB];
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	if (pattern->mask) {
+		memcpy(&local_mask, pattern->mask,
+			sizeof(struct rte_flow_item_ecpri));
+		local_mask.hdr.common.u32 =
+			rte_be_to_cpu_32(local_mask.hdr.common.u32);
+		mask = &local_mask;
+	} else {
+		mask = &dpaa2_flow_item_ecpri_mask;
+	}
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ECPRI distribution not supported");
+		return -ENOTSUP;
+	}
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAFE_ECPRI_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAFE_ECPRI_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ECPRI)) {
+		DPAA2_PMD_WARN("Extract field(s) of ECPRI not supported.");
+
+		return -1;
+	}
+
+	if (mask->hdr.common.type != 0xff) {
+		DPAA2_PMD_WARN("ECPRI header type not specified.");
+
+		return -1;
+	}
+
+	if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_IQ_DATA) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_0;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type0.pc_id) {
+			rule_data[extract_nb] = spec->hdr.type0.pc_id;
+			mask_data[extract_nb] = mask->hdr.type0.pc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_iq_data, pc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type0.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type0.seq_id;
+			mask_data[extract_nb] = mask->hdr.type0.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_iq_data, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_BIT_SEQ) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_1;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type1.pc_id) {
+			rule_data[extract_nb] = spec->hdr.type1.pc_id;
+			mask_data[extract_nb] = mask->hdr.type1.pc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_bit_seq, pc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type1.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type1.seq_id;
+			mask_data[extract_nb] = mask->hdr.type1.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_bit_seq, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RTC_CTRL) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_2;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type2.rtc_id) {
+			rule_data[extract_nb] = spec->hdr.type2.rtc_id;
+			mask_data[extract_nb] = mask->hdr.type2.rtc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_rtc_ctrl, rtc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type2.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type2.seq_id;
+			mask_data[extract_nb] = mask->hdr.type2.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_rtc_ctrl, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_GEN_DATA) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_3;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type3.pc_id || mask->hdr.type3.seq_id)
+			DPAA2_PMD_WARN("Extract of type3 msg not supported.");
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RM_ACC) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_4;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type4.rma_id) {
+			rule_data[extract_nb] = spec->hdr.type4.rma_id;
+			mask_data[extract_nb] = mask->hdr.type4.rma_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET + 0;
+				/* offsetof() cannot be applied to the
+				 * rma_id bit-field of
+				 * struct rte_ecpri_msg_rm_access,
+				 * hence the literal offset 0.
+				 */
+			extract_nb++;
+		}
+		if (mask->hdr.type4.ele_id) {
+			rule_data[extract_nb] = spec->hdr.type4.ele_id;
+			mask_data[extract_nb] = mask->hdr.type4.ele_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET + 2;
+				/* offsetof() cannot be applied to the
+				 * ele_id bit-field of
+				 * struct rte_ecpri_msg_rm_access,
+				 * hence the literal offset 2.
+				 */
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_DLY_MSR) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_5;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type5.msr_id) {
+			rule_data[extract_nb] = spec->hdr.type5.msr_id;
+			mask_data[extract_nb] = mask->hdr.type5.msr_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_delay_measure,
+					msr_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type5.act_type) {
+			rule_data[extract_nb] = spec->hdr.type5.act_type;
+			mask_data[extract_nb] = mask->hdr.type5.act_type;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_delay_measure,
+					act_type);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RMT_RST) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_6;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type6.rst_id) {
+			rule_data[extract_nb] = spec->hdr.type6.rst_id;
+			mask_data[extract_nb] = mask->hdr.type6.rst_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_remote_reset,
+					rst_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type6.rst_op) {
+			rule_data[extract_nb] = spec->hdr.type6.rst_op;
+			mask_data[extract_nb] = mask->hdr.type6.rst_op;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_remote_reset,
+					rst_op);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_EVT_IND) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_7;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type7.evt_id) {
+			rule_data[extract_nb] = spec->hdr.type7.evt_id;
+			mask_data[extract_nb] = mask->hdr.type7.evt_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					evt_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.evt_type) {
+			rule_data[extract_nb] = spec->hdr.type7.evt_type;
+			mask_data[extract_nb] = mask->hdr.type7.evt_type;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					evt_type);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.seq) {
+			rule_data[extract_nb] = spec->hdr.type7.seq;
+			mask_data[extract_nb] = mask->hdr.type7.seq;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					seq);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.number) {
+			rule_data[extract_nb] = spec->hdr.type7.number;
+			mask_data[extract_nb] = mask->hdr.type7.number;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					number);
+			extract_nb++;
+		}
+	} else {
+		DPAA2_PMD_ERR("Invalid ecpri header type(%d)",
+				spec->hdr.common.type);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < extract_nb; i++) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			extract_off[i],
+			extract_size[i], &rule_data[i], &mask_data[i],
+			priv, group,
+			device_configured,
+			DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			extract_off[i],
+			extract_size[i], &rule_data[i], &mask_data[i],
+			priv, group,
+			device_configured,
+			DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3870,6 +4205,16 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_ECPRI:
+			ret = dpaa2_configure_flow_ecpri(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("ECPRI flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow, dev, attr,
 						       &dpaa2_pattern[i],
@@ -3884,7 +4229,8 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			end_of_list = 1;
 			break; /*End of List*/
 		default:
-			DPAA2_PMD_ERR("Invalid action type");
+			DPAA2_PMD_ERR("Invalid flow item[%d] type(%d)",
+				i, pattern[i].type);
 			ret = -ENOTSUP;
 			break;
 		}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 31/43] net/dpaa2: add GTP flow support
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (29 preceding siblings ...)
  2024-10-14 12:01       ` [v3 30/43] net/dpaa2: eCPRI support by parser result vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 32/43] net/dpaa2: check if Soft parser is loaded vanshika.shukla
                         ` (12 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Configure GTP flows to support RSS and FS.
Check the FAF bits of the parser result to identify GTP frames.
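
A usage sketch (TEID value illustrative): a rule distributing on the
GTP TEID only needs the item below; with no spec at all, the rule
instead matches any GTP frame purely by the FAF_GTP_FRAM bit. Actions
and the rte_flow_create() call are as in the earlier eCPRI example.

	struct rte_flow_item_gtp gtp_spec = {
		.teid = RTE_BE32(0x1234),
	};
	struct rte_flow_item_gtp gtp_mask = {
		.teid = RTE_BE32(0xffffffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_GTP,
		  .spec = &gtp_spec, .mask = &gtp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};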

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 170 ++++++++++++++++++++++++++-------
 1 file changed, 137 insertions(+), 33 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index f64562340c..ce551e8174 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -75,6 +75,7 @@ enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_TCP,
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
+	RTE_FLOW_ITEM_TYPE_GTP
 };
 
 static const
@@ -163,6 +164,11 @@ static const struct rte_flow_item_ecpri dpaa2_flow_item_ecpri_mask = {
 	.hdr.dummy[1] = RTE_BE32(0xffffffff),
 	.hdr.dummy[2] = RTE_BE32(0xffffffff),
 };
+
+static const struct rte_flow_item_gtp dpaa2_flow_item_gtp_mask = {
+	.teid = RTE_BE32(0xffffffff),
+};
+
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -238,6 +244,12 @@ dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 			strcat(string, ".type");
 		else
 			strcat(string, ".unknown field");
+	} else if (prot == NET_PROT_GTP) {
+		strcpy(string, "gtp");
+		if (field == NH_FLD_GTP_TEID)
+			strcat(string, ".teid");
+		else
+			strcat(string, ".unknown field");
 	} else {
 		strcpy(string, "unknown protocol");
 	}
@@ -1567,6 +1579,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_ecpri_mask;
 		size = sizeof(struct rte_flow_item_ecpri);
 		break;
+	case RTE_FLOW_ITEM_TYPE_GTP:
+		mask_support = (const char *)&dpaa2_flow_item_gtp_mask;
+		size = sizeof(struct rte_flow_item_gtp);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -3573,6 +3589,84 @@ dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_gtp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_gtp *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_gtp_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-GTP distribution not supported");
+		return -ENOTSUP;
+	}
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GTP_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GTP_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GTP)) {
+		DPAA2_PMD_WARN("Extract field(s) of GTP not supported.");
+
+		return -1;
+	}
+
+	if (!mask->teid)
+		return 0;
+
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GTP,
+			NH_FLD_GTP_TEID, &spec->teid,
+			&mask->teid, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GTP,
+			NH_FLD_GTP_TEID, &spec->teid,
+			&mask->teid, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -4107,9 +4201,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 		switch (pattern[i].type) {
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			ret = dpaa2_configure_flow_eth(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ETH flow config failed!");
 				goto end_flow_set;
@@ -4117,9 +4211,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_VLAN:
 			ret = dpaa2_configure_flow_vlan(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("vLan flow config failed!");
 				goto end_flow_set;
@@ -4127,9 +4221,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			ret = dpaa2_configure_flow_ipv4(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV4 flow config failed!");
 				goto end_flow_set;
@@ -4137,9 +4231,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
 			ret = dpaa2_configure_flow_ipv6(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV6 flow config failed!");
 				goto end_flow_set;
@@ -4147,9 +4241,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_ICMP:
 			ret = dpaa2_configure_flow_icmp(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ICMP flow config failed!");
 				goto end_flow_set;
@@ -4157,9 +4251,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			ret = dpaa2_configure_flow_udp(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("UDP flow config failed!");
 				goto end_flow_set;
@@ -4167,9 +4261,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			ret = dpaa2_configure_flow_tcp(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("TCP flow config failed!");
 				goto end_flow_set;
@@ -4177,9 +4271,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_SCTP:
 			ret = dpaa2_configure_flow_sctp(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("SCTP flow config failed!");
 				goto end_flow_set;
@@ -4187,9 +4281,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
 			ret = dpaa2_configure_flow_gre(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("GRE flow config failed!");
 				goto end_flow_set;
@@ -4197,9 +4291,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			ret = dpaa2_configure_flow_vxlan(flow, dev, attr,
-							 &dpaa2_pattern[i],
-							 actions, error,
-							 &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("VXLAN flow config failed!");
 				goto end_flow_set;
@@ -4215,11 +4309,21 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_GTP:
+			ret = dpaa2_configure_flow_gtp(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("GTP flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("RAW flow config failed!");
 				goto end_flow_set;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 32/43] net/dpaa2: check if Soft parser is loaded
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (30 preceding siblings ...)
  2024-10-14 12:01       ` [v3 31/43] net/dpaa2: add GTP flow support vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 33/43] net/dpaa2: soft parser flow verification vanshika.shukla
                         ` (11 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang, Vanshika Shukla

From: Jun Yang <jun.yang@nxp.com>

Access the soft parser instruction area to check whether a soft parser
image is loaded: any nonzero byte among the first instruction words
indicates one is present.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c |  4 ++
 drivers/net/dpaa2/dpaa2_ethdev.h |  2 +
 drivers/net/dpaa2/dpaa2_flow.c   | 88 ++++++++++++++++++++++++++++++++
 3 files changed, 94 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 187b648799..da0ea57ed2 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2861,6 +2861,10 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 			return ret;
 		}
 	}
+
+	ret = dpaa2_soft_parser_loaded();
+	if (ret > 0)
+		DPAA2_PMD_INFO("soft parser is loaded");
 	DPAA2_PMD_INFO("%s: netdev created, connected to %s",
 		eth_dev->data->name, dpaa2_dev->ep_name);
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index eaa653d266..db918725a7 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -479,6 +479,8 @@ int dpaa2_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
 
 int dpaa2_dev_recycle_config(struct rte_eth_dev *eth_dev);
 int dpaa2_dev_recycle_deconfig(struct rte_eth_dev *eth_dev);
+int dpaa2_soft_parser_loaded(void);
+
 int dpaa2_dev_recycle_qp_setup(struct rte_dpaa2_device *dpaa2_dev,
 	uint16_t qidx, uint64_t cntx,
 	eth_rx_burst_t tx_lpbk, eth_tx_burst_t rx_lpbk,
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index ce551e8174..88a04f237f 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -9,6 +9,8 @@
 #include <string.h>
 #include <unistd.h>
 #include <stdarg.h>
+#include <fcntl.h>
+#include <sys/mman.h>
 
 #include <rte_ethdev.h>
 #include <rte_log.h>
@@ -24,6 +25,7 @@
 
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
+static int dpaa2_sp_loaded = -1;
 
 enum dpaa2_flow_entry_size {
 	DPAA2_FLOW_ENTRY_MIN_SIZE = (DPNI_MAX_KEY_SIZE / 2),
@@ -401,6 +403,92 @@ dpaa2_flow_fs_entry_log(const char *log_info,
 	DPAA2_FLOW_DUMP("\r\n");
 }
 
+/* For LX2160A, LS2088A and LS1088A */
+#define WRIOP_CCSR_BASE 0x8b80000
+#define WRIOP_CCSR_CTLU_OFFSET 0
+#define WRIOP_CCSR_CTLU_PARSER_OFFSET 0
+#define WRIOP_CCSR_CTLU_PARSER_INGRESS_OFFSET 0
+
+#define WRIOP_INGRESS_PARSER_PHY \
+	(WRIOP_CCSR_BASE + WRIOP_CCSR_CTLU_OFFSET + \
+	WRIOP_CCSR_CTLU_PARSER_OFFSET + \
+	WRIOP_CCSR_CTLU_PARSER_INGRESS_OFFSET)
+
+struct dpaa2_parser_ccsr {
+	uint32_t psr_cfg;
+	uint32_t psr_idle;
+	uint32_t psr_pclm;
+	uint8_t psr_ver_min;
+	uint8_t psr_ver_maj;
+	uint8_t psr_id1_l;
+	uint8_t psr_id1_h;
+	uint32_t psr_rev2;
+	uint8_t rsv[0x2c];
+	uint8_t sp_ins[4032];
+};
+
+int
+dpaa2_soft_parser_loaded(void)
+{
+	int fd, i, ret = 0;
+	struct dpaa2_parser_ccsr *parser_ccsr = NULL;
+
+	dpaa2_flow_control_log = getenv("DPAA2_FLOW_CONTROL_LOG");
+
+	if (dpaa2_sp_loaded >= 0)
+		return dpaa2_sp_loaded;
+
+	fd = open("/dev/mem", O_RDWR | O_SYNC);
+	if (fd < 0) {
+		DPAA2_PMD_ERR("open \"/dev/mem\" ERROR(%d)", fd);
+		ret = fd;
+		goto exit;
+	}
+
+	parser_ccsr = mmap(NULL, sizeof(struct dpaa2_parser_ccsr),
+		PROT_READ | PROT_WRITE, MAP_SHARED, fd,
+		WRIOP_INGRESS_PARSER_PHY);
+	if (parser_ccsr == MAP_FAILED) {
+		DPAA2_PMD_ERR("Map 0x%" PRIx64 "(size=0x%x) failed",
+			(uint64_t)WRIOP_INGRESS_PARSER_PHY,
+			(uint32_t)sizeof(struct dpaa2_parser_ccsr));
+		parser_ccsr = NULL;
+		ret = -ENOBUFS;
+		goto exit;
+	}
+
+	DPAA2_PMD_INFO("Parser ID:0x%02x%02x, Rev:major(%02x), minor(%02x)",
+		parser_ccsr->psr_id1_h, parser_ccsr->psr_id1_l,
+		parser_ccsr->psr_ver_maj, parser_ccsr->psr_ver_min);
+
+	if (dpaa2_flow_control_log) {
+		for (i = 0; i < 64; i++) {
+			DPAA2_FLOW_DUMP("%02x ",
+				parser_ccsr->sp_ins[i]);
+			if (!((i + 1) % 16))
+				DPAA2_FLOW_DUMP("\r\n");
+		}
+	}
+
+	for (i = 0; i < 16; i++) {
+		if (parser_ccsr->sp_ins[i]) {
+			dpaa2_sp_loaded = 1;
+			break;
+		}
+	}
+	if (dpaa2_sp_loaded < 0)
+		dpaa2_sp_loaded = 0;
+
+	ret = dpaa2_sp_loaded;
+
+exit:
+	if (parser_ccsr)
+		munmap(parser_ccsr, sizeof(struct dpaa2_parser_ccsr));
+	if (fd >= 0)
+		close(fd);
+
+	return ret;
+}
+
 static int
 dpaa2_flow_ip_address_extract(enum net_prot prot,
 	uint32_t field)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 33/43] net/dpaa2: soft parser flow verification
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (31 preceding siblings ...)
  2024-10-14 12:01       ` [v3 32/43] net/dpaa2: check if Soft parser is loaded vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 34/43] net/dpaa2: add flow support for IPsec AH and ESP vanshika.shukla
                         ` (10 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Add the flow item types supported by the soft parser to the pattern
verification list.
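
With this change an application probing a VXLAN or ECPRI pattern on a
platform without the soft parser gets a clean validation failure
(-ENOTSUP) instead of a later creation error. A minimal sketch, with
attr/pattern/actions built as in the earlier examples:

	struct rte_flow_error error;
	int ret = rte_flow_validate(port_id, &attr, pattern,
				    actions, &error);
	if (ret)
		printf("pattern rejected (%d): %s\n", ret,
		       error.message ? error.message : "unknown");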

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 84 +++++++++++++++++++++-------------
 1 file changed, 51 insertions(+), 33 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 88a04f237f..72075473fc 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -66,7 +66,7 @@ struct rte_dpaa2_flow_item {
 };
 
 static const
-enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
+enum rte_flow_item_type dpaa2_hp_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_END,
 	RTE_FLOW_ITEM_TYPE_ETH,
 	RTE_FLOW_ITEM_TYPE_VLAN,
@@ -77,7 +77,14 @@ enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_TCP,
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
-	RTE_FLOW_ITEM_TYPE_GTP
+	RTE_FLOW_ITEM_TYPE_GTP,
+	RTE_FLOW_ITEM_TYPE_RAW
+};
+
+static const
+enum rte_flow_item_type dpaa2_sp_supported_pattern_type[] = {
+	RTE_FLOW_ITEM_TYPE_VXLAN,
+	RTE_FLOW_ITEM_TYPE_ECPRI
 };
 
 static const
@@ -4560,16 +4567,17 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
 	int ret = 0;
 
 	if (unlikely(attr->group >= dpni_attr->num_rx_tcs)) {
-		DPAA2_PMD_ERR("Priority group is out of range");
+		DPAA2_PMD_ERR("Group/TC(%d) is out of range(%d)",
+			attr->group, dpni_attr->num_rx_tcs);
 		ret = -ENOTSUP;
 	}
 	if (unlikely(attr->priority >= dpni_attr->fs_entries)) {
-		DPAA2_PMD_ERR("Priority within the group is out of range");
+		DPAA2_PMD_ERR("Priority(%d) within group is out of range(%d)",
+			attr->priority, dpni_attr->fs_entries);
 		ret = -ENOTSUP;
 	}
 	if (unlikely(attr->egress)) {
-		DPAA2_PMD_ERR(
-			"Flow configuration is not supported on egress side");
+		DPAA2_PMD_ERR("Egress flow configuration is not supported");
 		ret = -ENOTSUP;
 	}
 	if (unlikely(!attr->ingress)) {
@@ -4584,27 +4592,41 @@ dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[])
 {
 	unsigned int i, j, is_found = 0;
 	int ret = 0;
+	const enum rte_flow_item_type *hp_supported;
+	const enum rte_flow_item_type *sp_supported;
+	uint64_t hp_supported_num, sp_supported_num;
+
+	hp_supported = dpaa2_hp_supported_pattern_type;
+	hp_supported_num = RTE_DIM(dpaa2_hp_supported_pattern_type);
+
+	sp_supported = dpaa2_sp_supported_pattern_type;
+	sp_supported_num = RTE_DIM(dpaa2_sp_supported_pattern_type);
 
 	for (j = 0; pattern[j].type != RTE_FLOW_ITEM_TYPE_END; j++) {
-		for (i = 0; i < RTE_DIM(dpaa2_supported_pattern_type); i++) {
-			if (dpaa2_supported_pattern_type[i]
-					== pattern[j].type) {
+		is_found = 0;
+		for (i = 0; i < hp_supported_num; i++) {
+			if (hp_supported[i] == pattern[j].type) {
 				is_found = 1;
 				break;
 			}
 		}
+		if (is_found)
+			continue;
+		if (dpaa2_sp_loaded > 0) {
+			for (i = 0; i < sp_supported_num; i++) {
+				if (sp_supported[i] == pattern[j].type) {
+					is_found = 1;
+					break;
+				}
+			}
+		}
 		if (!is_found) {
+			DPAA2_PMD_WARN("Flow type(%d) not supported",
+				pattern[j].type);
 			ret = -ENOTSUP;
 			break;
 		}
 	}
-	/* Lets verify other combinations of given pattern rules */
-	for (j = 0; pattern[j].type != RTE_FLOW_ITEM_TYPE_END; j++) {
-		if (!pattern[j].spec) {
-			ret = -EINVAL;
-			break;
-		}
-	}
 
 	return ret;
 }
@@ -4651,43 +4673,39 @@ dpaa2_flow_validate(struct rte_eth_dev *dev,
 	memset(&dpni_attr, 0, sizeof(struct dpni_attr));
 	ret = dpni_get_attributes(dpni, CMD_PRI_LOW, token, &dpni_attr);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Failure to get dpni@%p attribute, err code  %d",
-			dpni, ret);
+		DPAA2_PMD_ERR("Get dpni@%d attribute failed(%d)",
+			priv->hw_id, ret);
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ATTR,
-			   flow_attr, "invalid");
+			RTE_FLOW_ERROR_TYPE_ATTR,
+			flow_attr, "invalid");
 		return ret;
 	}
 
 	/* Verify input attributes */
 	ret = dpaa2_dev_verify_attr(&dpni_attr, flow_attr);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid attributes are given");
+		DPAA2_PMD_ERR("Invalid attributes are given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ATTR,
-			   flow_attr, "invalid");
+			RTE_FLOW_ERROR_TYPE_ATTR,
+			flow_attr, "invalid");
 		goto not_valid_params;
 	}
 	/* Verify input pattern list */
 	ret = dpaa2_dev_verify_patterns(pattern);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid pattern list is given");
+		DPAA2_PMD_ERR("Invalid pattern list is given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ITEM,
-			   pattern, "invalid");
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			pattern, "invalid");
 		goto not_valid_params;
 	}
 	/* Verify input action list */
 	ret = dpaa2_dev_verify_actions(actions);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid action list is given");
+		DPAA2_PMD_ERR("Invalid action list is given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ACTION,
-			   actions, "invalid");
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			actions, "invalid");
 		goto not_valid_params;
 	}
 not_valid_params:
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 34/43] net/dpaa2: add flow support for IPsec AH and ESP
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (32 preceding siblings ...)
  2024-10-14 12:01       ` [v3 33/43] net/dpaa2: soft parser flow verification vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 35/43] net/dpaa2: fix memory corruption in TM vanshika.shukla
                         ` (9 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support AH and ESP flows matching on the SPI field.
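
Below is a minimal, hypothetical usage sketch of what this enables
through the generic rte_flow API: steering ESP packets by SPI to a
given Rx queue. The port id, queue index and SPI value are
placeholders, not values taken from this patch.

	#include <stdio.h>
	#include <rte_flow.h>

	/* Match ESP packets carrying SPI 0x1000 and send them to
	 * Rx queue 2 (illustrative values).
	 */
	struct rte_flow_item_esp esp_spec = {
		.hdr.spi = RTE_BE32(0x1000),
	};
	struct rte_flow_item_esp esp_mask = {
		.hdr.spi = RTE_BE32(0xffffffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ESP,
		  .spec = &esp_spec, .mask = &esp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 2 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_error flow_err;
	uint16_t port_id = 0;	/* placeholder port */
	struct rte_flow *flow;

	flow = rte_flow_create(port_id, &attr, pattern, actions,
			       &flow_err);
	if (!flow)
		printf("flow create failed: %s\n", flow_err.message);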

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 528 ++++++++++++++++++++++++---------
 1 file changed, 385 insertions(+), 143 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 72075473fc..3afe331023 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -78,6 +78,8 @@ enum rte_flow_item_type dpaa2_hp_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
 	RTE_FLOW_ITEM_TYPE_GTP,
+	RTE_FLOW_ITEM_TYPE_ESP,
+	RTE_FLOW_ITEM_TYPE_AH,
 	RTE_FLOW_ITEM_TYPE_RAW
 };
 
@@ -158,6 +160,17 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 	},
 };
 
+static const struct rte_flow_item_esp dpaa2_flow_item_esp_mask = {
+	.hdr = {
+		.spi = RTE_BE32(0xffffffff),
+		.seq = RTE_BE32(0xffffffff),
+	},
+};
+
+static const struct rte_flow_item_ah dpaa2_flow_item_ah_mask = {
+	.spi = RTE_BE32(0xffffffff),
+};
+
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
@@ -259,8 +272,16 @@ dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 			strcat(string, ".teid");
 		else
 			strcat(string, ".unknown field");
+	} else if (prot == NET_PROT_IPSEC_ESP) {
+		strcpy(string, "esp");
+		if (field == NH_FLD_IPSEC_ESP_SPI)
+			strcat(string, ".spi");
+		else if (field == NH_FLD_IPSEC_ESP_SEQUENCE_NUM)
+			strcat(string, ".seq");
+		else
+			strcat(string, ".unknown field");
 	} else {
-		strcpy(string, "unknown protocol");
+		sprintf(string, "unknown protocol(%d)", prot);
 	}
 }
 
@@ -1658,6 +1679,14 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_tcp_mask;
 		size = sizeof(struct rte_flow_item_tcp);
 		break;
+	case RTE_FLOW_ITEM_TYPE_ESP:
+		mask_support = (const char *)&dpaa2_flow_item_esp_mask;
+		size = sizeof(struct rte_flow_item_esp);
+		break;
+	case RTE_FLOW_ITEM_TYPE_AH:
+		mask_support = (const char *)&dpaa2_flow_item_ah_mask;
+		size = sizeof(struct rte_flow_item_ah);
+		break;
 	case RTE_FLOW_ITEM_TYPE_SCTP:
 		mask_support = (const char *)&dpaa2_flow_item_sctp_mask;
 		size = sizeof(struct rte_flow_item_sctp);
@@ -1688,7 +1717,7 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask[i] = (mask[i] | mask_src[i]);
 
 	if (memcmp(mask, mask_support, size))
-		return -1;
+		return -ENOTSUP;
 
 	return 0;
 }
@@ -2092,11 +2121,12 @@ dpaa2_configure_flow_tunnel_eth(struct dpaa2_dev_flow *flow,
 	if (!spec)
 		return 0;
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp((const char *)&mask->src,
@@ -2308,11 +2338,12 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp((const char *)&mask->src,
@@ -2413,11 +2444,12 @@ dpaa2_configure_flow_tunnel_vlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (!mask->tci)
@@ -2475,14 +2507,14 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
@@ -2490,27 +2522,28 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-				       RTE_FLOW_ITEM_TYPE_VLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+			RTE_FLOW_ITEM_TYPE_VLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (!mask->tci)
 		return 0;
 
 	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
-					      NH_FLD_VLAN_TCI, &spec->tci,
-					      &mask->tci, sizeof(rte_be16_t),
-					      priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+			NH_FLD_VLAN_TCI, &spec->tci,
+			&mask->tci, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
-					      NH_FLD_VLAN_TCI, &spec->tci,
-					      &mask->tci, sizeof(rte_be16_t),
-					      priv, group, &local_cfg,
-					      DPAA2_FLOW_FS_TYPE);
+			NH_FLD_VLAN_TCI, &spec->tci,
+			&mask->tci, sizeof(rte_be16_t),
+			priv, group, &local_cfg,
+			DPAA2_FLOW_FS_TYPE);
 	if (ret)
 		return ret;
 
@@ -2519,12 +2552,13 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	int ret, local_cfg = 0;
 	uint32_t group;
@@ -2548,16 +2582,16 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV4_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV4_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV4_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV4_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
@@ -2566,13 +2600,13 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_index = attr->priority;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
-					 DPAA2_FLOW_QOS_TYPE, group,
-					 &local_cfg);
+			DPAA2_FLOW_QOS_TYPE, group,
+			&local_cfg);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
-					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
 	if (ret)
 		return ret;
 
@@ -2581,10 +2615,11 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
-				       RTE_FLOW_ITEM_TYPE_IPV4)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
+			RTE_FLOW_ITEM_TYPE_IPV4);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask_ipv4->hdr.src_addr) {
@@ -2593,18 +2628,18 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(rte_be32_t);
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV4_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV4_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2615,17 +2650,17 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(rte_be32_t);
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV4_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV4_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2636,18 +2671,18 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(uint8_t);
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2657,12 +2692,13 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 }
 
 static int
-dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	int ret, local_cfg = 0;
 	uint32_t group;
@@ -2690,27 +2726,27 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV6_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV6_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV6_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV6_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
-					 DPAA2_FLOW_QOS_TYPE, group,
-					 &local_cfg);
+			DPAA2_FLOW_QOS_TYPE, group,
+			&local_cfg);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
-					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
 	if (ret)
 		return ret;
 
@@ -2719,10 +2755,11 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
-				       RTE_FLOW_ITEM_TYPE_IPV6)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
+			RTE_FLOW_ITEM_TYPE_IPV6);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp(mask_ipv6->hdr.src_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
@@ -2731,18 +2768,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = NH_FLD_IPV6_ADDR_SIZE;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV6_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV6_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2753,18 +2790,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = NH_FLD_IPV6_ADDR_SIZE;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV6_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV6_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2775,18 +2812,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(uint8_t);
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2843,11 +2880,12 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ICMP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ICMP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ICMP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.icmp_type) {
@@ -2920,16 +2958,16 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_UDP_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_UDP_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_UDP_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_UDP_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
@@ -2950,11 +2988,12 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_UDP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_UDP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of UDP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3027,9 +3066,9 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_TCP_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_TCP_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
@@ -3057,11 +3096,12 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_TCP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_TCP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of TCP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3101,6 +3141,183 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_esp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_esp *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_esp_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ESP distribution not support");
+		return -ENOTSUP;
+	}
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_ESP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_ESP_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	if (!spec) {
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ESP);
+	if (ret) {
+		DPAA2_PMD_WARN("Extract field(s) of ESP not support.");
+
+		return ret;
+	}
+
+	if (mask->hdr.spi) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SPI, &spec->hdr.spi,
+			&mask->hdr.spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SPI, &spec->hdr.spi,
+			&mask->hdr.spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask->hdr.seq) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SEQUENCE_NUM, &spec->hdr.seq,
+			&mask->hdr.seq, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SEQUENCE_NUM, &spec->hdr.seq,
+			&mask->hdr.seq, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_ah(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ah *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_ah_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-AH distribution not support");
+		return -ENOTSUP;
+	}
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_AH_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_AH_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	if (!spec) {
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_AH);
+	if (ret) {
+		DPAA2_PMD_WARN("Extract field(s) of AH not support.");
+
+		return ret;
+	}
+
+	if (mask->spi) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_AH,
+			NH_FLD_IPSEC_AH_SPI, &spec->spi,
+			&mask->spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_AH,
+			NH_FLD_IPSEC_AH_SPI, &spec->spi,
+			&mask->spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask->seq_num) {
+		DPAA2_PMD_ERR("AH seq distribution not support");
+		return -ENOTSUP;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3149,11 +3366,12 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_SCTP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_SCTP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of SCTP not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3241,11 +3459,12 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_GRE)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GRE);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of GRE not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (!mask->protocol)
@@ -3318,11 +3537,12 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VXLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VXLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of VXLAN not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->flags) {
@@ -3422,17 +3642,18 @@ dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ECPRI)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ECPRI);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ECPRI not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->hdr.common.type != 0xff) {
 		DPAA2_PMD_WARN("ECPRI header type not specified.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_IQ_DATA) {
@@ -3733,11 +3954,12 @@ dpaa2_configure_flow_gtp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_GTP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GTP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of GTP not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (!mask->teid)
@@ -4374,6 +4596,26 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_ESP:
+			ret = dpaa2_configure_flow_esp(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("ESP flow config failed!");
+				goto end_flow_set;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_AH:
+			ret = dpaa2_configure_flow_ah(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("AH flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
 			ret = dpaa2_configure_flow_gre(flow, dev, attr,
 					&dpaa2_pattern[i],
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 35/43] net/dpaa2: fix memory corruption in TM
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (33 preceding siblings ...)
  2024-10-14 12:01       ` [v3 34/43] net/dpaa2: add flow support for IPsec AH and ESP vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 36/43] net/dpaa2: support software taildrop vanshika.shukla
                         ` (8 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh; +Cc: stable

From: Gagandeep Singh <g.singh@nxp.com>

The driver was reserving memory in an array for only 8 queues,
but it can support configurations with many more queues.

This patch fixes the memory corruption by defining the queue
array with the correct size.
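
The bug pattern, reduced to its essentials (the identifiers below
match the driver, but this is a sketch, not the full function):

	/* Before: conf[] was sized by the number of traffic classes
	 * (DPNI_MAX_TC == 8) but indexed by the leaf node id, i.e. the
	 * Tx queue id, which can exceed 8 and corrupt the stack.
	 */
	int conf[DPNI_MAX_TC];

	conf[leaf_node->id] = 1;	/* id may be >= DPNI_MAX_TC */

	/* After: size the array by the number of configured Tx queues
	 * so every valid leaf node id stays in bounds.
	 */
	int conf[priv->nb_tx_queues];	/* VLA, one slot per Tx queue */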

Fixes: 72100f0dee21 ("net/dpaa2: support level 2 in traffic management")
Cc: g.singh@nxp.com
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/net/dpaa2/dpaa2_tm.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index 97d65e7181..14c47b41be 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -684,6 +684,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 	struct dpaa2_tm_node *leaf_node, *temp_leaf_node, *channel_node;
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
 	int ret, t;
+	bool conf_schedule = false;
 
 	/* Populate TCs */
 	LIST_FOREACH(channel_node, &priv->nodes, next) {
@@ -757,7 +758,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 	}
 
 	LIST_FOREACH(channel_node, &priv->nodes, next) {
-		int wfq_grp = 0, is_wfq_grp = 0, conf[DPNI_MAX_TC];
+		int wfq_grp = 0, is_wfq_grp = 0, conf[priv->nb_tx_queues];
 		struct dpni_tx_priorities_cfg prio_cfg;
 
 		memset(&prio_cfg, 0, sizeof(prio_cfg));
@@ -767,6 +768,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 		if (channel_node->level_id != CHANNEL_LEVEL)
 			continue;
 
+		conf_schedule = false;
 		LIST_FOREACH(leaf_node, &priv->nodes, next) {
 			struct dpaa2_queue *leaf_dpaa2_q;
 			uint8_t leaf_tc_id;
@@ -789,6 +791,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 			if (leaf_node->parent != channel_node)
 				continue;
 
+			conf_schedule = true;
 			leaf_dpaa2_q =  (struct dpaa2_queue *)dev->data->tx_queues[leaf_node->id];
 			leaf_tc_id = leaf_dpaa2_q->tc_index;
 			/* Process sibling leaf nodes */
@@ -829,8 +832,8 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 						goto out;
 					}
 					is_wfq_grp = 1;
-					conf[temp_leaf_node->id] = 1;
 				}
+				conf[temp_leaf_node->id] = 1;
 			}
 			if (is_wfq_grp) {
 				if (wfq_grp == 0) {
@@ -851,6 +854,9 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 			}
 			conf[leaf_node->id] = 1;
 		}
+		if (!conf_schedule)
+			continue;
+
 		if (wfq_grp > 1) {
 			prio_cfg.separate_groups = 1;
 			if (prio_cfg.prio_group_B < prio_cfg.prio_group_A) {
@@ -864,6 +870,16 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 
 		prio_cfg.prio_group_A = 1;
 		prio_cfg.channel_idx = channel_node->channel_id;
+		DPAA2_PMD_DEBUG("########################################");
+		DPAA2_PMD_DEBUG("Channel idx = %d", prio_cfg.channel_idx);
+		for (t = 0; t < DPNI_MAX_TC; t++)
+			DPAA2_PMD_DEBUG("tc = %d mode = %d, delta = %d", t,
+					prio_cfg.tc_sched[t].mode,
+					prio_cfg.tc_sched[t].delta_bandwidth);
+
+		DPAA2_PMD_DEBUG("prioritya = %d, priorityb = %d, separate grps"
+				" = %d", prio_cfg.prio_group_A,
+				prio_cfg.prio_group_B, prio_cfg.separate_groups);
 		ret = dpni_set_tx_priorities(dpni, 0, priv->token, &prio_cfg);
 		if (ret) {
 			ret = -rte_tm_error_set(error, EINVAL,
@@ -871,15 +887,6 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 					"Scheduling Failed\n");
 			goto out;
 		}
-		DPAA2_PMD_DEBUG("########################################");
-		DPAA2_PMD_DEBUG("Channel idx = %d", prio_cfg.channel_idx);
-		for (t = 0; t < DPNI_MAX_TC; t++) {
-			DPAA2_PMD_DEBUG("tc = %d mode = %d ", t, prio_cfg.tc_sched[t].mode);
-			DPAA2_PMD_DEBUG("delta = %d", prio_cfg.tc_sched[t].delta_bandwidth);
-		}
-		DPAA2_PMD_DEBUG("prioritya = %d", prio_cfg.prio_group_A);
-		DPAA2_PMD_DEBUG("priorityb = %d", prio_cfg.prio_group_B);
-		DPAA2_PMD_DEBUG("separate grps = %d", prio_cfg.separate_groups);
 	}
 	return 0;
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 36/43] net/dpaa2: support software taildrop
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (34 preceding siblings ...)
  2024-10-14 12:01       ` [v3 35/43] net/dpaa2: fix memory corruption in TM vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 37/43] net/dpaa2: check IOVA before sending MC command vanshika.shukla
                         ` (7 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

Add software-based taildrop support.
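
In reduced form, the Tx path change behaves as below (a sketch that
omits the extra pass freeing external-buffer mbufs among the frames
already enqueued; retry_count, CONG_RETRY_COUNT and the CSCN polling
already exist in dpaa2_dev_tx):

	if (retry_count > CONG_RETRY_COUNT) {
		if (!dpaa2_q->tm_sw_td)
			goto skip_tx;	/* hand unsent mbufs back */
		/* Software taildrop: free the pending mbufs here and
		 * count them as consumed so the caller does not keep
		 * retrying into a congested TM queue.
		 */
		while (nb_pkts) {
			rte_pktmbuf_free(*bufs);
			bufs++;
			nb_pkts--;
			num_tx++;
		}
		dpaa2_q->tx_pkts += num_tx;
		return num_tx;
	}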

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  2 +-
 drivers/net/dpaa2/dpaa2_rxtx.c          | 24 +++++++++++++++++++++++-
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 74a1a8b2fa..b6cd1f00c4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -179,7 +179,7 @@ struct __rte_cache_aligned dpaa2_queue {
 	struct dpaa2_queue *tx_conf_queue;
 	int32_t eventfd;	/*!< Event Fd of this queue */
 	uint16_t nb_desc;
-	uint16_t resv;
+	uint16_t tm_sw_td;	/*!< TM software taildrop */
 	uint64_t offloads;
 	uint64_t lpbk_cntx;
 	uint8_t data_stashing_off;
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 71b2b4a427..fd07a75a40 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1297,8 +1297,11 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		while (qbman_result_SCN_state(dpaa2_q->cscn)) {
 			retry_count++;
 			/* Retry for some time before giving up */
-			if (retry_count > CONG_RETRY_COUNT)
+			if (retry_count > CONG_RETRY_COUNT) {
+				if (dpaa2_q->tm_sw_td)
+					goto sw_td;
 				goto skip_tx;
+			}
 		}
 
 		frames_to_send = (nb_pkts > dpaa2_eqcr_size) ?
@@ -1490,6 +1493,25 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
 	}
 
+	return num_tx;
+sw_td:
+	loop = 0;
+	while (loop < num_tx) {
+		if (unlikely(RTE_MBUF_HAS_EXTBUF(*bufs)))
+			rte_pktmbuf_free(*bufs);
+		bufs++;
+		loop++;
+	}
+
+	/* free the pending buffers */
+	while (nb_pkts) {
+		rte_pktmbuf_free(*bufs);
+		bufs++;
+		nb_pkts--;
+		num_tx++;
+	}
+	dpaa2_q->tx_pkts += num_tx;
+
 	return num_tx;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 37/43] net/dpaa2: check IOVA before sending MC command
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (35 preceding siblings ...)
  2024-10-14 12:01       ` [v3 36/43] net/dpaa2: support software taildrop vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 38/43] net/dpaa2: improve DPDMUX error behavior settings vanshika.shukla
                         ` (6 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Convert VA to IOVA and check the IOVA before sending a parameter
to the MC. An invalid parameter IOVA sent to the MC hangs the
system, which cannot recover without a power reset.
The IOVA is not checked in the data path because:
1) The MC is not involved there and errors can be recovered.
2) The IOVA check would slightly impact performance.
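
The check follows one idiom throughout the control path, shown here
as it appears in the hunks below:

	/* Translate the virtual address and verify that the IOMMU
	 * mapping covers the whole buffer; fail before the MC ever
	 * sees a bad IOVA.
	 */
	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
		DIST_PARAM_IOVA_SIZE);
	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
			__func__, p_params);
		rte_free(p_params);
		return -ENOBUFS;
	}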

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c |  63 +++--
 drivers/net/dpaa2/dpaa2_ethdev.c       | 338 +++++++++++++------------
 drivers/net/dpaa2/dpaa2_ethdev.h       |   3 +
 drivers/net/dpaa2/dpaa2_flow.c         |  67 ++++-
 drivers/net/dpaa2/dpaa2_sparser.c      |  25 +-
 drivers/net/dpaa2/dpaa2_tm.c           |  43 ++--
 6 files changed, 320 insertions(+), 219 deletions(-)

diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 4d33b51fea..20b37a97bb 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -30,8 +30,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 
 int
 rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
-			      uint16_t offset,
-			      uint8_t size)
+	uint16_t offset, uint8_t size)
 {
 	struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
@@ -52,8 +51,8 @@ rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	p_params = rte_zmalloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_zmalloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -73,17 +72,23 @@ rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
 	}
 
 	memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
-	tc_cfg.key_cfg_iova = (size_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.dist_size = eth_dev->data->nb_rx_queues;
 	tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
 
 	ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, priv->token, tc_index,
-				  &tc_cfg);
+			&tc_cfg);
 	rte_free(p_params);
 	if (ret) {
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("Set RX TC dist failed(err=%d)", ret);
 		return ret;
 	}
 
@@ -115,8 +120,8 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 	if (tc_dist_queues > priv->dist_queues)
 		tc_dist_queues = priv->dist_queues;
 
-	p_params = rte_malloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_malloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -133,7 +138,15 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 		return ret;
 	}
 
-	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.dist_size = tc_dist_queues;
 	tc_cfg.enable = true;
 	tc_cfg.tc = tc_index;
@@ -148,17 +161,15 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW, priv->token, &tc_cfg);
 	rte_free(p_params);
 	if (ret) {
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("RX Hash dist for failed(err=%d)", ret);
 		return ret;
 	}
 
 	return 0;
 }
 
-int dpaa2_remove_flow_dist(
-	struct rte_eth_dev *eth_dev,
+int
+dpaa2_remove_flow_dist(struct rte_eth_dev *eth_dev,
 	uint8_t tc_index)
 {
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
@@ -168,8 +179,8 @@ int dpaa2_remove_flow_dist(
 	void *p_params;
 	int ret;
 
-	p_params = rte_malloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_malloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -177,7 +188,15 @@ int dpaa2_remove_flow_dist(
 
 	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
 	tc_cfg.dist_size = 0;
-	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.enable = true;
 	tc_cfg.tc = tc_index;
 
@@ -194,9 +213,7 @@ int dpaa2_remove_flow_dist(
 			&tc_cfg);
 	rte_free(p_params);
 	if (ret)
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("RX hash dist failed(err=%d)", ret);
 	return ret;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index da0ea57ed2..7a3937346c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -123,9 +123,9 @@ dpaa2_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
-		return -1;
+		return -EINVAL;
 	}
 
 	if (on)
@@ -174,8 +174,8 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 static int
 dpaa2_vlan_tpid_set(struct rte_eth_dev *dev,
-		      enum rte_vlan_type vlan_type __rte_unused,
-		      uint16_t tpid)
+	enum rte_vlan_type vlan_type __rte_unused,
+	uint16_t tpid)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct fsl_mc_io *dpni = dev->process_private;
@@ -212,8 +212,7 @@ dpaa2_vlan_tpid_set(struct rte_eth_dev *dev,
 
 static int
 dpaa2_fw_version_get(struct rte_eth_dev *dev,
-		     char *fw_version,
-		     size_t fw_size)
+	char *fw_version, size_t fw_size)
 {
 	int ret;
 	struct fsl_mc_io *dpni = dev->process_private;
@@ -245,7 +244,8 @@ dpaa2_fw_version_get(struct rte_eth_dev *dev,
 }
 
 static int
-dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+dpaa2_dev_info_get(struct rte_eth_dev *dev,
+	struct rte_eth_dev_info *dev_info)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
@@ -291,8 +291,8 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 static int
 dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
-			__rte_unused uint16_t queue_id,
-			struct rte_eth_burst_mode *mode)
+	__rte_unused uint16_t queue_id,
+	struct rte_eth_burst_mode *mode)
 {
 	struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
 	int ret = -EINVAL;
@@ -368,7 +368,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	uint8_t num_rxqueue_per_tc;
 	struct dpaa2_queue *mc_q, *mcq;
 	uint32_t tot_queues;
-	int i;
+	int i, ret;
 	struct dpaa2_queue *dpaa2_q;
 
 	PMD_INIT_FUNC_TRACE();
@@ -382,7 +382,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 			  RTE_CACHE_LINE_SIZE);
 	if (!mc_q) {
 		DPAA2_PMD_ERR("Memory allocation failed for rx/tx queues");
-		return -1;
+		return -ENOBUFS;
 	}
 
 	for (i = 0; i < priv->nb_rx_queues; i++) {
@@ -404,8 +404,10 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	if (dpaa2_enable_err_queue) {
 		priv->rx_err_vq = rte_zmalloc("dpni_rx_err",
 			sizeof(struct dpaa2_queue), 0);
-		if (!priv->rx_err_vq)
+		if (!priv->rx_err_vq) {
+			ret = -ENOBUFS;
 			goto fail;
+		}
 
 		dpaa2_q = (struct dpaa2_queue *)priv->rx_err_vq;
 		dpaa2_q->q_storage = rte_malloc("err_dq_storage",
@@ -424,13 +426,15 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 
 	for (i = 0; i < priv->nb_tx_queues; i++) {
 		mc_q->eth_data = dev->data;
-		mc_q->flow_id = 0xffff;
+		mc_q->flow_id = DPAA2_INVALID_FLOW_ID;
 		priv->tx_vq[i] = mc_q++;
 		dpaa2_q = (struct dpaa2_queue *)priv->tx_vq[i];
 		dpaa2_q->cscn = rte_malloc(NULL,
 					   sizeof(struct qbman_result), 16);
-		if (!dpaa2_q->cscn)
+		if (!dpaa2_q->cscn) {
+			ret = -ENOBUFS;
 			goto fail_tx;
+		}
 	}
 
 	if (priv->flags & DPAA2_TX_CONF_ENABLE) {
@@ -498,7 +502,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	}
 
 	rte_free(mc_q);
-	return -1;
+	return ret;
 }
 
 static void
@@ -718,14 +722,14 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
  */
 static int
 dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
-			 uint16_t rx_queue_id,
-			 uint16_t nb_rx_desc,
-			 unsigned int socket_id __rte_unused,
-			 const struct rte_eth_rxconf *rx_conf,
-			 struct rte_mempool *mb_pool)
+	uint16_t rx_queue_id,
+	uint16_t nb_rx_desc,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_rxconf *rx_conf,
+	struct rte_mempool *mb_pool)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct dpaa2_queue *dpaa2_q;
 	struct dpni_queue cfg;
 	uint8_t options = 0;
@@ -747,8 +751,8 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* Rx deferred start is not supported */
 	if (rx_conf->rx_deferred_start) {
-		DPAA2_PMD_ERR("%p:Rx deferred start not supported",
-				(void *)dev);
+		DPAA2_PMD_ERR("%s:Rx deferred start not supported",
+			dev->data->name);
 		return -EINVAL;
 	}
 
@@ -764,7 +768,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		if (ret)
 			return ret;
 	}
-	dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+	dpaa2_q = priv->rx_vq[rx_queue_id];
 	dpaa2_q->mb_pool = mb_pool; /**< mbuf pool to populate RX ring. */
 	dpaa2_q->bp_array = rte_dpaa2_bpid_info;
 	dpaa2_q->nb_desc = UINT16_MAX;
@@ -790,7 +794,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		cfg.cgid = i;
 		dpaa2_q->cgid = cfg.cgid;
 	} else {
-		dpaa2_q->cgid = 0xff;
+		dpaa2_q->cgid = DPAA2_INVALID_CGID;
 	}
 
 	/*if ls2088 or rev2 device, enable the stashing */
@@ -814,10 +818,10 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		}
 	}
 	ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_RX,
-			     dpaa2_q->tc_index, flow_id, options, &cfg);
+			dpaa2_q->tc_index, flow_id, options, &cfg);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in setting the rx flow: = %d", ret);
-		return -1;
+		return ret;
 	}
 
 	if (!(priv->flags & DPAA2_RX_TAILDROP_OFF)) {
@@ -830,7 +834,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		 * There is no HW restriction, but number of CGRs are limited,
 		 * hence this restriction is placed.
 		 */
-		if (dpaa2_q->cgid != 0xff) {
+		if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 			/*enabling per rx queue congestion control */
 			taildrop.threshold = nb_rx_desc;
 			taildrop.units = DPNI_CONGESTION_UNIT_FRAMES;
@@ -856,15 +860,15 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		}
 		if (ret) {
 			DPAA2_PMD_ERR("Error in setting taildrop. err=(%d)",
-				      ret);
-			return -1;
+				ret);
+			return ret;
 		}
 	} else { /* Disable tail Drop */
 		struct dpni_taildrop taildrop = {0};
 		DPAA2_PMD_INFO("Tail drop is disabled on queue");
 
 		taildrop.enable = 0;
-		if (dpaa2_q->cgid != 0xff) {
+		if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
 					DPNI_CP_CONGESTION_GROUP, DPNI_QUEUE_RX,
 					dpaa2_q->tc_index,
@@ -876,8 +880,8 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		}
 		if (ret) {
 			DPAA2_PMD_ERR("Error in setting taildrop. err=(%d)",
-				      ret);
-			return -1;
+				ret);
+			return ret;
 		}
 	}
 
@@ -887,16 +891,14 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 static int
 dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
-			 uint16_t tx_queue_id,
-			 uint16_t nb_tx_desc,
-			 unsigned int socket_id __rte_unused,
-			 const struct rte_eth_txconf *tx_conf)
+	uint16_t tx_queue_id,
+	uint16_t nb_tx_desc,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_txconf *tx_conf)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct dpaa2_queue *dpaa2_q = (struct dpaa2_queue *)
-		priv->tx_vq[tx_queue_id];
-	struct dpaa2_queue *dpaa2_tx_conf_q = (struct dpaa2_queue *)
-		priv->tx_conf_vq[tx_queue_id];
+	struct dpaa2_queue *dpaa2_q = priv->tx_vq[tx_queue_id];
+	struct dpaa2_queue *dpaa2_tx_conf_q = priv->tx_conf_vq[tx_queue_id];
 	struct fsl_mc_io *dpni = dev->process_private;
 	struct dpni_queue tx_conf_cfg;
 	struct dpni_queue tx_flow_cfg;
@@ -906,13 +908,14 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	struct dpni_queue_id qid;
 	uint32_t tc_id;
 	int ret;
+	uint64_t iova;
 
 	PMD_INIT_FUNC_TRACE();
 
 	/* Tx deferred start is not supported */
 	if (tx_conf->tx_deferred_start) {
-		DPAA2_PMD_ERR("%p:Tx deferred start not supported",
-				(void *)dev);
+		DPAA2_PMD_ERR("%s:Tx deferred start not supported",
+			dev->data->name);
 		return -EINVAL;
 	}
 
@@ -920,7 +923,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	dpaa2_q->offloads = tx_conf->offloads;
 
 	/* Return if queue already configured */
-	if (dpaa2_q->flow_id != 0xffff) {
+	if (dpaa2_q->flow_id != DPAA2_INVALID_FLOW_ID) {
 		dev->data->tx_queues[tx_queue_id] = dpaa2_q;
 		return 0;
 	}
@@ -962,7 +965,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		DPAA2_PMD_ERR("Error in setting the tx flow: "
 			"tc_id=%d, flow=%d err=%d",
 			tc_id, flow_id, ret);
-			return -1;
+			return ret;
 	}
 
 	dpaa2_q->flow_id = flow_id;
@@ -970,11 +973,11 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	dpaa2_q->tc_index = tc_id;
 
 	ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX, ((channel_id << 8) | dpaa2_q->tc_index),
-			     dpaa2_q->flow_id, &tx_flow_cfg, &qid);
+			DPNI_QUEUE_TX, ((channel_id << 8) | dpaa2_q->tc_index),
+			dpaa2_q->flow_id, &tx_flow_cfg, &qid);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-		return -1;
+		return ret;
 	}
 	dpaa2_q->fqid = qid.fqid;
 
@@ -990,8 +993,17 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		 */
 		cong_notif_cfg.threshold_exit = (nb_tx_desc * 9) / 10;
 		cong_notif_cfg.message_ctx = 0;
-		cong_notif_cfg.message_iova =
-				(size_t)DPAA2_VADDR_TO_IOVA(dpaa2_q->cscn);
+
+		iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(dpaa2_q->cscn,
+			sizeof(struct qbman_result));
+		if (iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("No IOMMU map for cscn(%p)(size=%x)",
+				dpaa2_q->cscn, (uint32_t)sizeof(struct qbman_result));
+
+			return -ENOBUFS;
+		}
+
+		cong_notif_cfg.message_iova = iova;
 		cong_notif_cfg.dest_cfg.dest_type = DPNI_DEST_NONE;
 		cong_notif_cfg.notification_mode =
 					 DPNI_CONG_OPT_WRITE_MEM_ON_ENTER |
@@ -999,16 +1011,13 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 					 DPNI_CONG_OPT_COHERENT_WRITE;
 		cong_notif_cfg.cg_point = DPNI_CP_QUEUE;
 
-		ret = dpni_set_congestion_notification(dpni, CMD_PRI_LOW,
-						       priv->token,
-						       DPNI_QUEUE_TX,
-						       ((channel_id << 8) | tc_id),
-						       &cong_notif_cfg);
+		ret = dpni_set_congestion_notification(dpni,
+				CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
+				((channel_id << 8) | tc_id), &cong_notif_cfg);
 		if (ret) {
-			DPAA2_PMD_ERR(
-			   "Error in setting tx congestion notification: "
-			   "err=%d", ret);
-			return -ret;
+			DPAA2_PMD_ERR("Set TX congestion notification err=%d",
+			   ret);
+			return ret;
 		}
 	}
 	dpaa2_q->cb_eqresp_free = dpaa2_dev_free_eqresp_buf;
@@ -1019,22 +1028,24 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		options = options | DPNI_QUEUE_OPT_USER_CTX;
 		tx_conf_cfg.user_context = (size_t)(dpaa2_q);
 		ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX_CONFIRM, ((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
-			     dpaa2_tx_conf_q->flow_id, options, &tx_conf_cfg);
+				DPNI_QUEUE_TX_CONFIRM,
+				((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
+				dpaa2_tx_conf_q->flow_id,
+				options, &tx_conf_cfg);
 		if (ret) {
-			DPAA2_PMD_ERR("Error in setting the tx conf flow: "
-			      "tc_index=%d, flow=%d err=%d",
-			      dpaa2_tx_conf_q->tc_index,
-			      dpaa2_tx_conf_q->flow_id, ret);
-			return -1;
+			DPAA2_PMD_ERR("Set TC[%d].TX[%d] conf flow err=%d",
+				dpaa2_tx_conf_q->tc_index,
+				dpaa2_tx_conf_q->flow_id, ret);
+			return ret;
 		}
 
 		ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX_CONFIRM, ((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
-			     dpaa2_tx_conf_q->flow_id, &tx_conf_cfg, &qid);
+				DPNI_QUEUE_TX_CONFIRM,
+				((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
+				dpaa2_tx_conf_q->flow_id, &tx_conf_cfg, &qid);
 		if (ret) {
 			DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-			return -1;
+			return ret;
 		}
 		dpaa2_tx_conf_q->fqid = qid.fqid;
 	}
@@ -1046,8 +1057,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct dpaa2_queue *dpaa2_q = dev->data->rx_queues[rx_queue_id];
 	struct dpaa2_dev_priv *priv = dpaa2_q->eth_data->dev_private;
-	struct fsl_mc_io *dpni =
-		(struct fsl_mc_io *)priv->eth_dev->process_private;
+	struct fsl_mc_io *dpni = priv->eth_dev->process_private;
 	uint8_t options = 0;
 	int ret;
 	struct dpni_queue cfg;
@@ -1057,7 +1067,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	total_nb_rx_desc -= dpaa2_q->nb_desc;
 
-	if (dpaa2_q->cgid != 0xff) {
+	if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 		options = DPNI_QUEUE_OPT_CLEAR_CGID;
 		cfg.cgid = dpaa2_q->cgid;
 
@@ -1069,7 +1079,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 			DPAA2_PMD_ERR("Unable to clear CGR from q=%u err=%d",
 					dpaa2_q->fqid, ret);
 		priv->cgid_in_use[dpaa2_q->cgid] = 0;
-		dpaa2_q->cgid = 0xff;
+		dpaa2_q->cgid = DPAA2_INVALID_CGID;
 	}
 }
 
@@ -1233,10 +1243,10 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
 	dpaa2_dev_set_link_up(dev);
 
 	for (i = 0; i < data->nb_rx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)data->rx_queues[i];
+		dpaa2_q = data->rx_queues[i];
 		ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-				     DPNI_QUEUE_RX, dpaa2_q->tc_index,
-				       dpaa2_q->flow_id, &cfg, &qid);
+				DPNI_QUEUE_RX, dpaa2_q->tc_index,
+				dpaa2_q->flow_id, &cfg, &qid);
 		if (ret) {
 			DPAA2_PMD_ERR("Error in getting flow information: "
 				      "err=%d", ret);
@@ -1253,7 +1263,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
 						ret);
 			return ret;
 		}
-		dpaa2_q = (struct dpaa2_queue *)priv->rx_err_vq;
+		dpaa2_q = priv->rx_err_vq;
 		dpaa2_q->fqid = qid.fqid;
 		dpaa2_q->eth_data = dev->data;
 
@@ -1318,7 +1328,7 @@ static int
 dpaa2_dev_stop(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int ret;
 	struct rte_eth_link link;
 	struct rte_device *rdev = dev->device;
@@ -1371,7 +1381,7 @@ static int
 dpaa2_dev_close(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int i, ret;
 	struct rte_eth_link link;
 
@@ -1382,7 +1392,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 
 	if (!dpni) {
 		DPAA2_PMD_WARN("Already closed or not started");
-		return -1;
+		return -EINVAL;
 	}
 
 	dpaa2_tm_deinit(dev);
@@ -1391,7 +1401,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	ret = dpni_reset(dpni, CMD_PRI_LOW, priv->token);
 	if (ret) {
 		DPAA2_PMD_ERR("Failure cleaning dpni device: err=%d", ret);
-		return -1;
+		return ret;
 	}
 
 	memset(&link, 0, sizeof(link));
@@ -1403,7 +1413,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	ret = dpni_close(dpni, CMD_PRI_LOW, priv->token);
 	if (ret) {
 		DPAA2_PMD_ERR("Failure closing dpni device with err code %d",
-			      ret);
+			ret);
 	}
 
 	/* Free the allocated memory for ethernet private data and dpni*/
@@ -1412,18 +1422,17 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	rte_free(dpni);
 
 	for (i = 0; i < MAX_TCS; i++)
-		rte_free((void *)(size_t)priv->extract.tc_extract_param[i]);
+		rte_free(priv->extract.tc_extract_param[i]);
 
 	if (priv->extract.qos_extract_param)
-		rte_free((void *)(size_t)priv->extract.qos_extract_param);
+		rte_free(priv->extract.qos_extract_param);
 
 	DPAA2_PMD_INFO("%s: netdev deleted", dev->data->name);
 	return 0;
 }
 
 static int
-dpaa2_dev_promiscuous_enable(
-		struct rte_eth_dev *dev)
+dpaa2_dev_promiscuous_enable(struct rte_eth_dev *dev)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
@@ -1483,7 +1492,7 @@ dpaa2_dev_allmulticast_enable(
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1504,7 +1513,7 @@ dpaa2_dev_allmulticast_disable(struct rte_eth_dev *dev)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1529,13 +1538,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
 				+ VLAN_TAG_SIZE;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1547,7 +1556,7 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 					frame_size - RTE_ETHER_CRC_LEN);
 	if (ret) {
 		DPAA2_PMD_ERR("Setting the max frame length failed");
-		return -1;
+		return ret;
 	}
 	dev->data->mtu = mtu;
 	DPAA2_PMD_INFO("MTU configured for the device: %d", mtu);
@@ -1556,36 +1565,35 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 static int
 dpaa2_dev_add_mac_addr(struct rte_eth_dev *dev,
-		       struct rte_ether_addr *addr,
-		       __rte_unused uint32_t index,
-		       __rte_unused uint32_t pool)
+	struct rte_ether_addr *addr,
+	__rte_unused uint32_t index,
+	__rte_unused uint32_t pool)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
-		return -1;
+		return -EINVAL;
 	}
 
 	ret = dpni_add_mac_addr(dpni, CMD_PRI_LOW, priv->token,
 				addr->addr_bytes, 0, 0, 0);
 	if (ret)
-		DPAA2_PMD_ERR(
-			"error: Adding the MAC ADDR failed: err = %d", ret);
-	return 0;
+		DPAA2_PMD_ERR("ERR(%d) Adding the MAC ADDR failed", ret);
+	return ret;
 }
 
 static void
 dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
-			  uint32_t index)
+	uint32_t index)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct rte_eth_dev_data *data = dev->data;
 	struct rte_ether_addr *macaddr;
 
@@ -1593,7 +1601,7 @@ dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
 
 	macaddr = &data->mac_addrs[index];
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return;
 	}
@@ -1607,15 +1615,15 @@ dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
 
 static int
 dpaa2_dev_set_mac_addr(struct rte_eth_dev *dev,
-		       struct rte_ether_addr *addr)
+	struct rte_ether_addr *addr)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1624,19 +1632,18 @@ dpaa2_dev_set_mac_addr(struct rte_eth_dev *dev,
 					priv->token, addr->addr_bytes);
 
 	if (ret)
-		DPAA2_PMD_ERR(
-			"error: Setting the MAC ADDR failed %d", ret);
+		DPAA2_PMD_ERR("ERR(%d) Setting the MAC ADDR failed", ret);
 
 	return ret;
 }
 
-static
-int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
-			 struct rte_eth_stats *stats)
+static int
+dpaa2_dev_stats_get(struct rte_eth_dev *dev,
+	struct rte_eth_stats *stats)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
-	int32_t  retcode;
+	struct fsl_mc_io *dpni = dev->process_private;
+	int32_t retcode;
 	uint8_t page0 = 0, page1 = 1, page2 = 2;
 	union dpni_statistics value;
 	int i;
@@ -1691,8 +1698,8 @@ int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
 	/* Fill in per queue stats */
 	for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) &&
 		(i < priv->nb_rx_queues || i < priv->nb_tx_queues); ++i) {
-		dpaa2_rxq = (struct dpaa2_queue *)priv->rx_vq[i];
-		dpaa2_txq = (struct dpaa2_queue *)priv->tx_vq[i];
+		dpaa2_rxq = priv->rx_vq[i];
+		dpaa2_txq = priv->tx_vq[i];
 		if (dpaa2_rxq)
 			stats->q_ipackets[i] = dpaa2_rxq->rx_pkts;
 		if (dpaa2_txq)
@@ -1711,19 +1718,20 @@ int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
 };
 
 static int
-dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
-		     unsigned int n)
+dpaa2_dev_xstats_get(struct rte_eth_dev *dev,
+	struct rte_eth_xstat *xstats, unsigned int n)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
-	int32_t  retcode;
+	int32_t retcode;
 	union dpni_statistics value[5] = {};
 	unsigned int i = 0, num = RTE_DIM(dpaa2_xstats_strings);
+	uint8_t page_id, stats_id;
 
 	if (n < num)
 		return num;
 
-	if (xstats == NULL)
+	if (!xstats)
 		return 0;
 
 	/* Get Counters from page_0*/
@@ -1758,8 +1766,9 @@ dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 
 	for (i = 0; i < num; i++) {
 		xstats[i].id = i;
-		xstats[i].value = value[dpaa2_xstats_strings[i].page_id].
-			raw.counter[dpaa2_xstats_strings[i].stats_id];
+		page_id = dpaa2_xstats_strings[i].page_id;
+		stats_id = dpaa2_xstats_strings[i].stats_id;
+		xstats[i].value = value[page_id].raw.counter[stats_id];
 	}
 	return i;
 err:
@@ -1769,8 +1778,8 @@ dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 
 static int
 dpaa2_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
-		       struct rte_eth_xstat_name *xstats_names,
-		       unsigned int limit)
+	struct rte_eth_xstat_name *xstats_names,
+	unsigned int limit)
 {
 	unsigned int i, stat_cnt = RTE_DIM(dpaa2_xstats_strings);
 
@@ -1788,16 +1797,16 @@ dpaa2_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 
 static int
 dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
-		       uint64_t *values, unsigned int n)
+	uint64_t *values, unsigned int n)
 {
 	unsigned int i, stat_cnt = RTE_DIM(dpaa2_xstats_strings);
 	uint64_t values_copy[stat_cnt];
+	uint8_t page_id, stats_id;
 
 	if (!ids) {
 		struct dpaa2_dev_priv *priv = dev->data->dev_private;
-		struct fsl_mc_io *dpni =
-			(struct fsl_mc_io *)dev->process_private;
-		int32_t  retcode;
+		struct fsl_mc_io *dpni = dev->process_private;
+		int32_t retcode;
 		union dpni_statistics value[5] = {};
 
 		if (n < stat_cnt)
@@ -1831,8 +1840,9 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 			return 0;
 
 		for (i = 0; i < stat_cnt; i++) {
-			values[i] = value[dpaa2_xstats_strings[i].page_id].
-				raw.counter[dpaa2_xstats_strings[i].stats_id];
+			page_id = dpaa2_xstats_strings[i].page_id;
+			stats_id = dpaa2_xstats_strings[i].stats_id;
+			values[i] = value[page_id].raw.counter[stats_id];
 		}
 		return stat_cnt;
 	}
@@ -1842,7 +1852,7 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 	for (i = 0; i < n; i++) {
 		if (ids[i] >= stat_cnt) {
 			DPAA2_PMD_ERR("xstats id value isn't valid");
-			return -1;
+			return -EINVAL;
 		}
 		values[i] = values_copy[ids[i]];
 	}
@@ -1850,8 +1860,7 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 }
 
 static int
-dpaa2_xstats_get_names_by_id(
-	struct rte_eth_dev *dev,
+dpaa2_xstats_get_names_by_id(struct rte_eth_dev *dev,
 	const uint64_t *ids,
 	struct rte_eth_xstat_name *xstats_names,
 	unsigned int limit)
@@ -1878,14 +1887,14 @@ static int
 dpaa2_dev_stats_reset(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int retcode;
 	int i;
 	struct dpaa2_queue *dpaa2_q;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1896,13 +1905,13 @@ dpaa2_dev_stats_reset(struct rte_eth_dev *dev)
 
 	/* Reset the per queue stats in dpaa2_queue structure */
 	for (i = 0; i < priv->nb_rx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[i];
+		dpaa2_q = priv->rx_vq[i];
 		if (dpaa2_q)
 			dpaa2_q->rx_pkts = 0;
 	}
 
 	for (i = 0; i < priv->nb_tx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)priv->tx_vq[i];
+		dpaa2_q = priv->tx_vq[i];
 		if (dpaa2_q)
 			dpaa2_q->tx_pkts = 0;
 	}
@@ -1921,12 +1930,12 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct rte_eth_link link;
 	struct dpni_link_state state = {0};
 	uint8_t count;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return 0;
 	}
@@ -1936,7 +1945,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 					  &state);
 		if (ret < 0) {
 			DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
-			return -1;
+			return ret;
 		}
 		if (state.up == RTE_ETH_LINK_DOWN &&
 		    wait_to_complete)
@@ -1955,7 +1964,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	ret = rte_eth_linkstatus_set(dev, &link);
-	if (ret == -1)
+	if (ret < 0)
 		DPAA2_PMD_DEBUG("No change in status");
 	else
 		DPAA2_PMD_INFO("Port %d Link is %s", dev->data->port_id,
@@ -1978,9 +1987,9 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	struct dpni_link_state state = {0};
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return ret;
 	}
@@ -2040,9 +2049,9 @@ dpaa2_dev_set_link_down(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("Device has not yet been configured");
 		return ret;
 	}
@@ -2094,9 +2103,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL || fc_conf == NULL) {
+	if (!dpni || !fc_conf) {
 		DPAA2_PMD_ERR("device not configured");
 		return ret;
 	}
@@ -2149,9 +2158,9 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return ret;
 	}
@@ -2394,10 +2403,10 @@ dpaa2_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 {
 	struct dpaa2_queue *rxq;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	uint16_t max_frame_length;
 
-	rxq = (struct dpaa2_queue *)dev->data->rx_queues[queue_id];
+	rxq = dev->data->rx_queues[queue_id];
 
 	qinfo->mp = rxq->mb_pool;
 	qinfo->scattered_rx = dev->data->scattered_rx;
@@ -2513,10 +2522,10 @@ static struct eth_dev_ops dpaa2_ethdev_ops = {
  * Returns the table of MAC entries (multiple entries)
  */
 static int
-populate_mac_addr(struct fsl_mc_io *dpni_dev, struct dpaa2_dev_priv *priv,
-		  struct rte_ether_addr *mac_entry)
+populate_mac_addr(struct fsl_mc_io *dpni_dev,
+	struct dpaa2_dev_priv *priv, struct rte_ether_addr *mac_entry)
 {
-	int ret;
+	int ret = 0;
 	struct rte_ether_addr phy_mac, prime_mac;
 
 	memset(&phy_mac, 0, sizeof(struct rte_ether_addr));
@@ -2574,7 +2583,7 @@ populate_mac_addr(struct fsl_mc_io *dpni_dev, struct dpaa2_dev_priv *priv,
 	return 0;
 
 cleanup:
-	return -1;
+	return ret;
 }
 
 static int
@@ -2633,7 +2642,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		return -1;
 	}
 	dpni_dev->regs = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
-	eth_dev->process_private = (void *)dpni_dev;
+	eth_dev->process_private = dpni_dev;
 
 	/* For secondary processes, the primary has done all the work */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
@@ -2662,7 +2671,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 			     "Failure in opening dpni@%d with err code %d",
 			     hw_id, ret);
 		rte_free(dpni_dev);
-		return -1;
+		return ret;
 	}
 
 	if (eth_dev->data->dev_conf.lpbk_mode)
@@ -2813,7 +2822,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	/* Init fields w.r.t. classification */
 	memset(&priv->extract.qos_key_extract, 0,
 		sizeof(struct dpaa2_key_extract));
-	priv->extract.qos_extract_param = rte_malloc(NULL, 256, 64);
+	priv->extract.qos_extract_param = rte_malloc(NULL,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE,
+		RTE_CACHE_LINE_SIZE);
 	if (!priv->extract.qos_extract_param) {
 		DPAA2_PMD_ERR("Memory alloc failed");
 		goto init_err;
@@ -2822,7 +2833,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	for (i = 0; i < MAX_TCS; i++) {
 		memset(&priv->extract.tc_key_extract[i], 0,
 			sizeof(struct dpaa2_key_extract));
-		priv->extract.tc_extract_param[i] = rte_malloc(NULL, 256, 64);
+		priv->extract.tc_extract_param[i] = rte_malloc(NULL,
+			DPAA2_EXTRACT_PARAM_MAX_SIZE,
+			RTE_CACHE_LINE_SIZE);
 		if (!priv->extract.tc_extract_param[i]) {
 			DPAA2_PMD_ERR("Memory alloc failed");
 			goto init_err;
@@ -2982,12 +2995,11 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 
 	if ((DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE) >
 		RTE_PKTMBUF_HEADROOM) {
-		DPAA2_PMD_ERR(
-		"RTE_PKTMBUF_HEADROOM(%d) shall be > DPAA2 Annotation req(%d)",
-		RTE_PKTMBUF_HEADROOM,
-		DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE);
+		DPAA2_PMD_ERR("RTE_PKTMBUF_HEADROOM(%d) < DPAA2 Annotation(%d)",
+			RTE_PKTMBUF_HEADROOM,
+			DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE);
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index db918725a7..a2b9fc5678 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -31,6 +31,9 @@
 #define MAX_DPNI		8
 #define DPAA2_MAX_CHANNELS	16
 
+#define DPAA2_EXTRACT_PARAM_MAX_SIZE 256
+#define DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE 256
+
 #define DPAA2_RX_DEFAULT_NBDESC 512
 
 #define DPAA2_ETH_MAX_LEN (RTE_ETHER_MTU + \
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 3afe331023..54f38e2e25 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -4322,7 +4322,14 @@ dpaa2_configure_fs_rss_table(struct dpaa2_dev_priv *priv,
 
 	tc_extract = &priv->extract.tc_key_extract[tc_id];
 	key_cfg_buf = priv->extract.tc_extract_param[tc_id];
-	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_buf,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_buf);
+
+		return -ENOBUFS;
+	}
 
 	key_max_size = tc_extract->key_profile.key_max_size;
 	entry_size = dpaa2_flow_entry_size(key_max_size);
@@ -4406,7 +4413,14 @@ dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
 
 	qos_extract = &priv->extract.qos_key_extract;
 	key_cfg_buf = priv->extract.qos_extract_param;
-	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_buf,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_buf);
+
+		return -ENOBUFS;
+	}
 
 	key_max_size = qos_extract->key_profile.key_max_size;
 	entry_size = dpaa2_flow_entry_size(key_max_size);
@@ -4963,6 +4977,7 @@ dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	struct dpaa2_dev_flow *flow = NULL;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int ret;
+	uint64_t iova;
 
 	dpaa2_flow_control_log =
 		getenv("DPAA2_FLOW_CONTROL_LOG");
@@ -4986,34 +5001,66 @@ dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	}
 
 	/* Allocate DMA'ble memory to write the qos rules */
-	flow->qos_key_addr = rte_zmalloc(NULL, 256, 64);
+	flow->qos_key_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->qos_key_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->qos_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->qos_key_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->qos_key_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for qos key(%p)",
+			__func__, flow->qos_key_addr);
+		goto mem_failure;
+	}
+	flow->qos_rule.key_iova = iova;
 
-	flow->qos_mask_addr = rte_zmalloc(NULL, 256, 64);
+	flow->qos_mask_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->qos_mask_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->qos_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->qos_mask_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->qos_mask_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for qos mask(%p)",
+			__func__, flow->qos_mask_addr);
+		goto mem_failure;
+	}
+	flow->qos_rule.mask_iova = iova;
 
 	/* Allocate DMA'ble memory to write the FS rules */
-	flow->fs_key_addr = rte_zmalloc(NULL, 256, 64);
+	flow->fs_key_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->fs_key_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->fs_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->fs_key_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->fs_key_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for fs key(%p)",
+			__func__, flow->fs_key_addr);
+		goto mem_failure;
+	}
+	flow->fs_rule.key_iova = iova;
 
-	flow->fs_mask_addr = rte_zmalloc(NULL, 256, 64);
+	flow->fs_mask_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->fs_mask_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->fs_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->fs_mask_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->fs_mask_addr,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for fs mask(%p)",
+			__func__, flow->fs_mask_addr);
+		goto mem_failure;
+	}
+	flow->fs_rule.mask_iova = iova;
 
 	priv->curr = flow;
 
diff --git a/drivers/net/dpaa2/dpaa2_sparser.c b/drivers/net/dpaa2/dpaa2_sparser.c
index 59f7a172c6..265c9b5c57 100644
--- a/drivers/net/dpaa2/dpaa2_sparser.c
+++ b/drivers/net/dpaa2/dpaa2_sparser.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2023 NXP
  */
 
 #include <rte_mbuf.h>
@@ -170,7 +170,14 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 	}
 
 	memcpy(addr, sp_param.byte_code, sp_param.size);
-	cfg.ss_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(addr));
+	cfg.ss_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(addr, sp_param.size);
+	if (cfg.ss_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("No IOMMU map for soft sequence(%p), size=%d",
+			addr, sp_param.size);
+		rte_free(addr);
+
+		return -ENOBUFS;
+	}
 
 	ret = dpni_load_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
@@ -179,7 +186,7 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 		return ret;
 	}
 
-	priv->ss_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(addr));
+	priv->ss_iova = cfg.ss_iova;
 	priv->ss_offset += sp_param.size;
 	DPAA2_PMD_INFO("Soft parser loaded for dpni@%d", priv->hw_id);
 
@@ -219,7 +226,15 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 		}
 
 		memcpy(param_addr, sp_param.param_array, cfg.param_size);
-		cfg.param_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(param_addr));
+		cfg.param_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(param_addr,
+			cfg.param_size);
+		if (cfg.param_iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("%s: No IOMMU map for %p, size=%d",
+				__func__, param_addr, cfg.param_size);
+			rte_free(param_addr);
+
+			return -ENOBUFS;
+		}
 		priv->ss_param_iova = cfg.param_iova;
 	} else {
 		cfg.param_iova = 0;
@@ -227,7 +242,7 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 
 	ret = dpni_enable_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("dpni_enable_sw_sequence failed for dpni%d",
+		DPAA2_PMD_ERR("Enabling soft parser for dpni@%d failed",
 			priv->hw_id);
 		rte_free(param_addr);
 		return ret;
diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index 14c47b41be..724f83cb78 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2020-2021 NXP
+ * Copyright 2020-2023 NXP
  */
 
 #include <rte_ethdev.h>
@@ -572,41 +572,42 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q;
+	uint64_t iova;
 
 	memset(&tx_flow_cfg, 0, sizeof(struct dpni_queue));
-	dpaa2_q =  (struct dpaa2_queue *)dev->data->tx_queues[node->id];
+	dpaa2_q = (struct dpaa2_queue *)dev->data->tx_queues[node->id];
 	tc_id = node->parent->tc_id;
 	node->parent->tc_id++;
 	flow_id = 0;
 
-	if (dpaa2_q == NULL) {
-		DPAA2_PMD_ERR("Queue is not configured for node = %d", node->id);
-		return -1;
+	if (!dpaa2_q) {
+		DPAA2_PMD_ERR("Queue is not configured for node = %d",
+			node->id);
+		return -ENOMEM;
 	}
 
 	DPAA2_PMD_DEBUG("tc_id = %d, channel = %d", tc_id,
 			node->parent->channel_id);
 	ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
-			     ((node->parent->channel_id << 8) | tc_id),
-			     flow_id, options, &tx_flow_cfg);
+			((node->parent->channel_id << 8) | tc_id),
+			flow_id, options, &tx_flow_cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("Error in setting the tx flow: "
-		       "channel id  = %d tc_id= %d, param = 0x%x "
-		       "flow=%d err=%d", node->parent->channel_id, tc_id,
-		       ((node->parent->channel_id << 8) | tc_id), flow_id,
-		       ret);
-		return -1;
+		DPAA2_PMD_ERR("Failed to set TC[%d].ch[%d] TX flow[%d] (err=%d)",
+			tc_id, node->parent->channel_id, flow_id,
+			ret);
+		return ret;
 	}
 
 	dpaa2_q->flow_id = flow_id;
 	dpaa2_q->tc_index = tc_id;
 
 	ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-		DPNI_QUEUE_TX, ((node->parent->channel_id << 8) | dpaa2_q->tc_index),
-		dpaa2_q->flow_id, &tx_flow_cfg, &qid);
+			DPNI_QUEUE_TX,
+			((node->parent->channel_id << 8) | dpaa2_q->tc_index),
+			dpaa2_q->flow_id, &tx_flow_cfg, &qid);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-		return -1;
+		return ret;
 	}
 	dpaa2_q->fqid = qid.fqid;
 
@@ -621,8 +622,13 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 		 */
 		cong_notif_cfg.threshold_exit = (dpaa2_q->nb_desc * 9) / 10;
 		cong_notif_cfg.message_ctx = 0;
-		cong_notif_cfg.message_iova =
-			(size_t)DPAA2_VADDR_TO_IOVA(dpaa2_q->cscn);
+		iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(dpaa2_q->cscn,
+				sizeof(struct qbman_result));
+		if (iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("No IOMMU map for cscn(%p)", dpaa2_q->cscn);
+			return -ENOBUFS;
+		}
+		cong_notif_cfg.message_iova = iova;
 		cong_notif_cfg.dest_cfg.dest_type = DPNI_DEST_NONE;
 		cong_notif_cfg.notification_mode =
 					DPNI_CONG_OPT_WRITE_MEM_ON_ENTER |
@@ -641,6 +647,7 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 			return -ret;
 		}
 	}
+	dpaa2_q->tm_sw_td = true;
 
 	return 0;
 }
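
The recurring change across the hunks above: every virtual-to-IOVA
conversion handed to the MC firmware is now validated before use. A
minimal sketch of the idiom, assuming a hypothetical buffer "buf" of
"len" bytes (the macro, the RTE_BAD_IOVA check and the -ENOBUFS
convention come from the hunks; the destination field is illustrative):

	uint64_t iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(buf, len);

	if (iova == RTE_BAD_IOVA) {
		DPAA2_PMD_ERR("No IOMMU map for buf(%p)", buf);
		/* refuse to hand an unmapped address to the MC */
		return -ENOBUFS;
	}
	cfg.key_iova = iova;	/* illustrative destination field */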
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 38/43] net/dpaa2: improve DPDMUX error behavior settings
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (36 preceding siblings ...)
  2024-10-14 12:01       ` [v3 37/43] net/dpaa2: check IOVA before sending MC command vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 39/43] net/dpaa2: store drop priority in mbuf vanshika.shukla
                         ` (5 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena

From: Sachin Saxena <sachin.saxena@nxp.com>

This change is compatible with MC v10.36 or later.

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index f4b8d481af..13de7d5783 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2021,2023 NXP
  */
 
 #include <sys/queue.h>
@@ -448,13 +448,12 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 		struct dpdmux_error_cfg mux_err_cfg;
 
 		memset(&mux_err_cfg, 0, sizeof(mux_err_cfg));
+		/* Note: Discarded flag(DPDMUX_ERROR_DISC) has effect only when
+		 * ERROR_ACTION is set to DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE.
+		 */
+		mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
 		mux_err_cfg.error_action = DPDMUX_ERROR_ACTION_CONTINUE;
 
-		if (attr.method != DPDMUX_METHOD_C_VLAN_MAC)
-			mux_err_cfg.errors = DPDMUX_ERROR_DISC;
-		else
-			mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
-
 		ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
 				CMD_PRI_LOW,
 				dpdmux_dev->token, DPAA2_DPDMUX_DPMAC_IDX,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 39/43] net/dpaa2: store drop priority in mbuf
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (37 preceding siblings ...)
  2024-10-14 12:01       ` [v3 38/43] net/dpaa2: improve DPDMUX error behavior settings vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 40/43] net/dpaa2: add API to get endpoint name vanshika.shukla
                         ` (4 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Apeksha Gupta

From: Apeksha Gupta <apeksha.gupta@nxp.com>

Store the drop priority from the frame descriptor (FD) into the mbuf.

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 1 +
 drivers/net/dpaa2/dpaa2_rxtx.c          | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index b6cd1f00c4..cd22974752 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -329,6 +329,7 @@ enum qbman_fd_format {
 #define DPAA2_GET_FD_BPID(fd)	(((fd)->simple.bpid_offset & 0x00003FFF))
 #define DPAA2_GET_FD_IVP(fd)   (((fd)->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	(((fd)->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_GET_FD_DROPP(fd)  (((fd)->simple.ctrl & 0x07000000) >> 24)
 #define DPAA2_GET_FD_FRC(fd)   ((fd)->simple.frc)
 #define DPAA2_GET_FD_FLC(fd) \
 	(((uint64_t)((fd)->simple.flc_hi) << 32) + (fd)->simple.flc_lo)
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index fd07a75a40..01e699d282 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -388,6 +388,7 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 	mbuf->pkt_len = mbuf->data_len;
 	mbuf->port = port_id;
 	mbuf->next = NULL;
+	mbuf->hash.sched.color = DPAA2_GET_FD_DROPP(fd);
 	rte_mbuf_refcnt_set(mbuf, 1);
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
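
With this change the frame drop priority becomes visible to applications
as the mbuf scheduler color. A minimal receive-side sketch (the field
assignment mirrors eth_fd_to_mbuf() above; port id, queue id and burst
size are placeholders, and handle_pkt() is a hypothetical consumer):

	struct rte_mbuf *mbufs[32];
	uint16_t i, nb = rte_eth_rx_burst(port_id, 0, mbufs, 32);

	for (i = 0; i < nb; i++) {
		/* 3-bit drop priority extracted from the FD by the PMD */
		uint8_t dropp = mbufs[i]->hash.sched.color;

		handle_pkt(mbufs[i], dropp);
	}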
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 40/43] net/dpaa2: add API to get endpoint name
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (38 preceding siblings ...)
  2024-10-14 12:01       ` [v3 39/43] net/dpaa2: store drop priority in mbuf vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 41/43] net/dpaa2: support VLAN traffic splitting vanshika.shukla
                         ` (3 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Export an API in rte_pmd_dpaa2.h to fetch the endpoint name of a
DPAA2 port.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 24 ++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h  |  4 ++++
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  3 +++
 drivers/net/dpaa2/version.map     |  1 +
 4 files changed, 32 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 7a3937346c..137e116963 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2903,6 +2903,30 @@ rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+const char *
+rte_pmd_dpaa2_ep_name(uint32_t eth_id)
+{
+	struct rte_eth_dev *dev;
+	struct dpaa2_dev_priv *priv;
+
+	if (eth_id >= RTE_MAX_ETHPORTS)
+		return NULL;
+
+	if (!rte_pmd_dpaa2_dev_is_dpaa2(eth_id))
+		return NULL;
+
+	dev = &rte_eth_devices[eth_id];
+	if (!dev->data)
+		return NULL;
+
+	if (!dev->data->dev_private)
+		return NULL;
+
+	priv = dev->data->dev_private;
+
+	return priv->ep_name;
+}
+
 #if defined(RTE_LIBRTE_IEEE1588)
 int
 rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index a2b9fc5678..fd6bad7f74 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -385,6 +385,10 @@ struct dpaa2_dev_priv {
 	uint8_t max_cgs;
 	uint8_t cgid_in_use[MAX_RX_QUEUES];
 
+	enum rte_dpaa2_dev_type ep_dev_type;   /**< Endpoint Device Type */
+	uint16_t ep_object_id;                 /**< Endpoint DPAA2 Object ID */
+	char ep_name[RTE_DEV_NAME_MAX_LEN];
+
 	struct extract_s extract;
 
 	uint16_t ss_offset;
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index fc52a9218e..f93af1c65f 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -130,6 +130,9 @@ rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
 __rte_experimental
 int
 rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id);
+__rte_experimental
+const char *
+rte_pmd_dpaa2_ep_name(uint32_t eth_id);
 
 #if defined(RTE_LIBRTE_IEEE1588)
 __rte_experimental
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 233c6e6b2c..35815f7777 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -18,6 +18,7 @@ EXPERIMENTAL {
 	rte_pmd_dpaa2_get_tlu_hash;
 	# added in 24.11
 	rte_pmd_dpaa2_dev_is_dpaa2;
+	rte_pmd_dpaa2_ep_name;
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
 	rte_pmd_dpaa2_mux_dump_counter;
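
A usage sketch for the new API (it returns NULL for an out-of-range id,
a non-dpaa2 port, or missing private data, as the implementation above
shows; eth_id is a placeholder):

	const char *ep_name = rte_pmd_dpaa2_ep_name(eth_id);

	if (ep_name)
		printf("port %u endpoint: %s\n", eth_id, ep_name);
	else
		printf("port %u: no dpaa2 endpoint name\n", eth_id);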
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 41/43] net/dpaa2: support VLAN traffic splitting
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (39 preceding siblings ...)
  2024-10-14 12:01       ` [v3 40/43] net/dpaa2: add API to get endpoint name vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 42/43] net/dpaa2: add support for C-VLAN and MAC vanshika.shukla
                         ` (2 subsequent siblings)
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for adding DPDMUX rules that split VLAN
traffic based on VLAN IDs.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 13de7d5783..c8f1d46bb2 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -118,6 +118,26 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	}
 	break;
 
+	case RTE_FLOW_ITEM_TYPE_VLAN:
+	{
+		const struct rte_flow_item_vlan *spec;
+
+		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
+		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_VLAN;
+		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_VLAN_TCI;
+		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FROM_FIELD;
+		kg_cfg.extracts[0].extract.from_hdr.offset = 1;
+		kg_cfg.extracts[0].extract.from_hdr.size = 1;
+		kg_cfg.num_extracts = 1;
+
+		spec = (const struct rte_flow_item_vlan *)pattern[0]->spec;
+		memcpy((void *)key_iova, (const void *)(&spec->hdr.vlan_tci),
+			sizeof(uint16_t));
+		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
+		key_size = sizeof(uint16_t);
+	}
+	break;
+
 	case RTE_FLOW_ITEM_TYPE_UDP:
 	{
 		const struct rte_flow_item_udp *spec;
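
A caller-side sketch for the new VLAN case, assuming the flow-create
prototype in use at this point of the series (arrays of item/action
pointers, returning a flow handle); the dpdmux id, TCI value and
destination interface are placeholders:

	struct rte_flow_item_vlan vlan_spec = {
		.hdr.vlan_tci = RTE_BE16(42),
	};
	struct rte_flow_item_vlan vlan_mask = {
		.hdr.vlan_tci = RTE_BE16(0x0fff),
	};
	struct rte_flow_item vlan_item = {
		.type = RTE_FLOW_ITEM_TYPE_VLAN,
		.spec = &vlan_spec,
		.mask = &vlan_mask,
	};
	struct rte_flow_item *pattern[] = { &vlan_item };
	struct rte_flow_action_vf vf_conf = { .id = 1 };
	struct rte_flow_action vf_action = {
		.type = RTE_FLOW_ACTION_TYPE_VF,
		.conf = &vf_conf,
	};
	struct rte_flow_action *actions[] = { &vf_action };
	struct rte_flow *flow;

	flow = rte_pmd_dpaa2_mux_flow_create(dpdmux_id, pattern, actions);
	if (!flow)
		printf("dpdmux VLAN rule creation failed\n");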
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 42/43] net/dpaa2: add support for C-VLAN and MAC
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (40 preceding siblings ...)
  2024-10-14 12:01       ` [v3 41/43] net/dpaa2: support VLAN traffic splitting vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-14 12:01       ` [v3 43/43] net/dpaa2: dpdmux single flow/multiple rules support vanshika.shukla
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
  43 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for the DPDMUX_METHOD_C_VLAN_MAC method,
which implements DPDMUX classification based on C-VLAN and MAC
address.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     |  2 +-
 drivers/net/dpaa2/mc/fsl_dpdmux.h | 16 ++++++++++++++++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index c8f1d46bb2..6e10739dd3 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021,2023 NXP
+ * Copyright 2018-2024 NXP
  */
 
 #include <sys/queue.h>
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 97b09e59f9..70b81f3b3b 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -593,6 +593,22 @@ int dpdmux_dump_table(struct fsl_mc_io *mc_io,
  */
 #define DPDMUX__ERROR_L4CE			0x00000001
 
+#define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
+				 DPDMUX__ERROR_L4CV | \
+				 DPDMUX__ERROR_L3CE | \
+				 DPDMUX__ERROR_L3CV | \
+				 DPDMUX_ERROR_BLE | \
+				 DPDMUX_ERROR_PHE | \
+				 DPDMUX_ERROR_ISP | \
+				 DPDMUX_ERROR_PTE | \
+				 DPDMUX_ERROR_FPE | \
+				 DPDMUX_ERROR_FLE | \
+				 DPDMUX_ERROR_PIEE | \
+				 DPDMUX_ERROR_TIDE | \
+				 DPDMUX_ERROR_MNLE | \
+				 DPDMUX_ERROR_EOFHE | \
+				 DPDMUX_ERROR_KSE)
+
 #define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
 				 DPDMUX__ERROR_L4CV | \
 				 DPDMUX__ERROR_L3CE | \
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v3 43/43] net/dpaa2: dpdmux single flow/multiple rules support
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (41 preceding siblings ...)
  2024-10-14 12:01       ` [v3 42/43] net/dpaa2: add support for C-VLAN and MAC vanshika.shukla
@ 2024-10-14 12:01       ` vanshika.shukla
  2024-10-15  2:32         ` Stephen Hemminger
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
  43 siblings, 1 reply; 229+ messages in thread
From: vanshika.shukla @ 2024-10-14 12:01 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support multiple extractions, and derive the key layout from hardware
descriptions instead of hard-coded values.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h     |   1 +
 drivers/net/dpaa2/dpaa2_flow.c       |  22 --
 drivers/net/dpaa2/dpaa2_mux.c        | 395 ++++++++++++++++-----------
 drivers/net/dpaa2/dpaa2_parse_dump.h |   2 +
 drivers/net/dpaa2/rte_pmd_dpaa2.h    |   8 +-
 5 files changed, 247 insertions(+), 181 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index fd6bad7f74..fd3119247a 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -198,6 +198,7 @@ enum dpaa2_rx_faf_offset {
 	FAF_IPV4_FRAM = 34 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IPV6_FRAM = 42 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IP_FRAM = 48 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IP_FRAG_FRAM = 50 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_ICMP_FRAM = 57 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IGMP_FRAM = 58 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_GRE_FRAM = 65 + DPAA2_FAFE_PSR_SIZE * 8,
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 54f38e2e25..9dd9163880 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -98,13 +98,6 @@ enum rte_flow_action_type dpaa2_supported_action_type[] = {
 	RTE_FLOW_ACTION_TYPE_RSS
 };
 
-static const
-enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
-	RTE_FLOW_ACTION_TYPE_QUEUE,
-	RTE_FLOW_ACTION_TYPE_PORT_ID,
-	RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
-};
-
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
 	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
@@ -4083,21 +4076,6 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
-static inline int
-dpaa2_fs_action_supported(enum rte_flow_action_type action)
-{
-	int i;
-	int action_num = sizeof(dpaa2_supported_fs_action_type) /
-		sizeof(enum rte_flow_action_type);
-
-	for (i = 0; i < action_num; i++) {
-		if (action == dpaa2_supported_fs_action_type[i])
-			return true;
-	}
-
-	return false;
-}
-
 static inline int
 dpaa2_flow_verify_attr(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr)
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 6e10739dd3..79a1c7f981 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -32,8 +32,9 @@ struct dpaa2_dpdmux_dev {
 	uint8_t num_ifs;   /* Number of interfaces in DPDMUX */
 };
 
-struct rte_flow {
-	struct dpdmux_rule_cfg rule;
+#define DPAA2_MUX_FLOW_MAX_RULE_NUM 8
+struct dpaa2_mux_flow {
+	struct dpdmux_rule_cfg rule[DPAA2_MUX_FLOW_MAX_RULE_NUM];
 };
 
 TAILQ_HEAD(dpdmux_dev_list, dpaa2_dpdmux_dev);
@@ -53,204 +54,287 @@ static struct dpaa2_dpdmux_dev *get_dpdmux_from_id(uint32_t dpdmux_id)
 	return dpdmux_dev;
 }
 
-struct rte_flow *
+int
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
-			      struct rte_flow_item *pattern[],
-			      struct rte_flow_action *actions[])
+	struct rte_flow_item pattern[],
+	struct rte_flow_action actions[])
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev;
+	static struct dpkg_profile_cfg s_kg_cfg;
 	struct dpkg_profile_cfg kg_cfg;
 	const struct rte_flow_action_vf *vf_conf;
 	struct dpdmux_cls_action dpdmux_action;
-	struct rte_flow *flow = NULL;
-	void *key_iova, *mask_iova, *key_cfg_iova = NULL;
+	uint8_t *key_va = NULL, *mask_va = NULL;
+	void *key_cfg_va = NULL;
+	uint64_t key_iova, mask_iova, key_cfg_iova;
 	uint8_t key_size = 0;
-	int ret;
-	static int i;
+	int ret = 0, loop = 0;
+	static int s_i;
+	struct dpkg_extract *extract;
+	struct dpdmux_rule_cfg rule;
 
-	if (!pattern || !actions || !pattern[0] || !actions[0])
-		return NULL;
+	memset(&kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
 
 	/* Find the DPDMUX from dpdmux_id in our list */
 	dpdmux_dev = get_dpdmux_from_id(dpdmux_id);
 	if (!dpdmux_dev) {
 		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
-		return NULL;
+		ret = -ENODEV;
+		goto creation_error;
 	}
 
-	key_cfg_iova = rte_zmalloc(NULL, DIST_PARAM_IOVA_SIZE,
-				   RTE_CACHE_LINE_SIZE);
-	if (!key_cfg_iova) {
-		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
-		return NULL;
+	key_cfg_va = rte_zmalloc(NULL, DIST_PARAM_IOVA_SIZE,
+				RTE_CACHE_LINE_SIZE);
+	if (!key_cfg_va) {
+		DPAA2_PMD_ERR("Unable to allocate key configure buffer");
+		ret = -ENOMEM;
+		goto creation_error;
+	}
+
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_va,
+		DIST_PARAM_IOVA_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_va);
+		ret = -ENOBUFS;
+		goto creation_error;
 	}
-	flow = rte_zmalloc(NULL, sizeof(struct rte_flow) +
-			   (2 * DIST_PARAM_IOVA_SIZE), RTE_CACHE_LINE_SIZE);
-	if (!flow) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+
+	key_va = rte_zmalloc(NULL, (2 * DIST_PARAM_IOVA_SIZE),
+		RTE_CACHE_LINE_SIZE);
+	if (!key_va) {
+		DPAA2_PMD_ERR("Unable to allocate flow dist parameter");
+		ret = -ENOMEM;
 		goto creation_error;
 	}
-	key_iova = (void *)((size_t)flow + sizeof(struct rte_flow));
-	mask_iova = (void *)((size_t)key_iova + DIST_PARAM_IOVA_SIZE);
+
+	key_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_va,
+		(2 * DIST_PARAM_IOVA_SIZE));
+	if (key_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU mapping for address(%p)",
+			__func__, key_va);
+		ret = -ENOBUFS;
+		goto creation_error;
+	}
+
+	mask_va = key_va + DIST_PARAM_IOVA_SIZE;
+	mask_iova = key_iova + DIST_PARAM_IOVA_SIZE;
 
 	/* Currently taking only IP protocol as an extract type.
-	 * This can be extended to other fields using pattern->type.
+	 * This can be exended to other fields using pattern->type.
 	 */
 	memset(&kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
 
-	switch (pattern[0]->type) {
-	case RTE_FLOW_ITEM_TYPE_IPV4:
-	{
-		const struct rte_flow_item_ipv4 *spec;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_IP;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_IP_PROTO;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_ipv4 *)pattern[0]->spec;
-		memcpy(key_iova, (const void *)(&spec->hdr.next_proto_id),
-			sizeof(uint8_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint8_t));
-		key_size = sizeof(uint8_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_VLAN:
-	{
-		const struct rte_flow_item_vlan *spec;
-
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_VLAN;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_VLAN_TCI;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FROM_FIELD;
-		kg_cfg.extracts[0].extract.from_hdr.offset = 1;
-		kg_cfg.extracts[0].extract.from_hdr.size = 1;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_vlan *)pattern[0]->spec;
-		memcpy((void *)key_iova, (const void *)(&spec->hdr.vlan_tci),
-			sizeof(uint16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_UDP:
-	{
-		const struct rte_flow_item_udp *spec;
-		uint16_t udp_dst_port;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_UDP;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_udp *)pattern[0]->spec;
-		udp_dst_port = rte_constant_bswap16(spec->hdr.dst_port);
-		memcpy((void *)key_iova, (const void *)&udp_dst_port,
-							sizeof(rte_be16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_ETH:
-	{
-		const struct rte_flow_item_eth *spec;
-		uint16_t eth_type;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_ETH;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_ETH_TYPE;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_eth *)pattern[0]->spec;
-		eth_type = rte_constant_bswap16(spec->hdr.ether_type);
-		memcpy((void *)key_iova, (const void *)&eth_type,
-							sizeof(rte_be16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_RAW:
-	{
-		const struct rte_flow_item_raw *spec;
-
-		spec = (const struct rte_flow_item_raw *)pattern[0]->spec;
-		kg_cfg.extracts[0].extract.from_data.offset = spec->offset;
-		kg_cfg.extracts[0].extract.from_data.size = spec->length;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_DATA;
-		kg_cfg.num_extracts = 1;
-		memcpy((void *)key_iova, (const void *)spec->pattern,
-							spec->length);
-		memcpy(mask_iova, pattern[0]->mask, spec->length);
-
-		key_size = spec->length;
-	}
-	break;
+	while (pattern[loop].type != RTE_FLOW_ITEM_TYPE_END) {
+		if (kg_cfg.num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+			DPAA2_PMD_ERR("Too many extracts(%d)",
+				kg_cfg.num_extracts);
+			ret = -ENOTSUP;
+			goto creation_error;
+		}
+		switch (pattern[loop].type) {
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+		{
+			const struct rte_flow_item_ipv4 *spec;
+			const struct rte_flow_item_ipv4 *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_IP;
+			extract->extract.from_hdr.field = NH_FLD_IP_PROTO;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->hdr.next_proto_id, sizeof(uint8_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->hdr.next_proto_id,
+					sizeof(uint8_t));
+			} else {
+				mask_va[key_size] = 0xff;
+			}
+			key_size += sizeof(uint8_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+		{
+			const struct rte_flow_item_vlan *spec;
+			const struct rte_flow_item_vlan *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_VLAN;
+			extract->extract.from_hdr.field = NH_FLD_VLAN_TCI;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->tci, sizeof(uint16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->tci, sizeof(uint16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(uint16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_UDP:
+		{
+			const struct rte_flow_item_udp *spec;
+			const struct rte_flow_item_udp *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_UDP;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+			extract->extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->hdr.dst_port, sizeof(rte_be16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->hdr.dst_port,
+					sizeof(rte_be16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(rte_be16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_ETH:
+		{
+			const struct rte_flow_item_eth *spec;
+			const struct rte_flow_item_eth *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_ETH;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+			extract->extract.from_hdr.field = NH_FLD_ETH_TYPE;
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->type, sizeof(rte_be16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->type, sizeof(rte_be16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(rte_be16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_RAW:
+		{
+			const struct rte_flow_item_raw *spec;
+			const struct rte_flow_item_raw *mask;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_DATA;
+			extract->extract.from_data.offset = spec->offset;
+			extract->extract.from_data.size = spec->length;
+			kg_cfg.num_extracts++;
+
+			rte_memcpy(&key_va[key_size],
+				spec->pattern, spec->length);
+			if (mask && mask->pattern) {
+				rte_memcpy(&mask_va[key_size],
+					mask->pattern, spec->length);
+			} else {
+				memset(&mask_va[key_size], 0xff, spec->length);
+			}
+
+			key_size += spec->length;
+		}
+		break;
 
-	default:
-		DPAA2_PMD_ERR("Not supported pattern type: %d",
-				pattern[0]->type);
-		goto creation_error;
+		default:
+			DPAA2_PMD_ERR("Not supported pattern[%d] type: %d",
+				loop, pattern[loop].type);
+			ret = -ENOTSUP;
+			goto creation_error;
+		}
+		loop++;
 	}
 
-	ret = dpkg_prepare_key_cfg(&kg_cfg, key_cfg_iova);
+	ret = dpkg_prepare_key_cfg(&kg_cfg, key_cfg_va);
 	if (ret) {
 		DPAA2_PMD_ERR("dpkg_prepare_key_cfg failed: err(%d)", ret);
 		goto creation_error;
 	}
 
-	/* Multiple rules with same DPKG extracts (kg_cfg.extracts) like same
-	 * offset and length values in raw is supported right now. Different
-	 * values of kg_cfg may not work.
-	 */
-	if (i == 0) {
-		ret = dpdmux_set_custom_key(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-					    dpdmux_dev->token,
-				(uint64_t)(DPAA2_VADDR_TO_IOVA(key_cfg_iova)));
+	if (!s_i) {
+		ret = dpdmux_set_custom_key(&dpdmux_dev->dpdmux,
+				CMD_PRI_LOW, dpdmux_dev->token, key_cfg_iova);
 		if (ret) {
 			DPAA2_PMD_ERR("dpdmux_set_custom_key failed: err(%d)",
-					ret);
+				ret);
+			goto creation_error;
+		}
+		rte_memcpy(&s_kg_cfg, &kg_cfg, sizeof(struct dpkg_profile_cfg));
+	} else {
+		if (memcmp(&s_kg_cfg, &kg_cfg,
+			sizeof(struct dpkg_profile_cfg))) {
+			DPAA2_PMD_ERR("%s: Single flow support only.",
+				__func__);
+			ret = -ENOTSUP;
 			goto creation_error;
 		}
 	}
-	/* As now our key extract parameters are set, let us configure
-	 * the rule.
-	 */
-	flow->rule.key_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(key_iova));
-	flow->rule.mask_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(mask_iova));
-	flow->rule.key_size = key_size;
-	flow->rule.entry_index = i++;
 
-	vf_conf = (const struct rte_flow_action_vf *)(actions[0]->conf);
+	vf_conf = actions[0].conf;
 	if (vf_conf->id == 0 || vf_conf->id > dpdmux_dev->num_ifs) {
-		DPAA2_PMD_ERR("Invalid destination id");
+		DPAA2_PMD_ERR("Invalid destination id(%d)", vf_conf->id);
 		goto creation_error;
 	}
 	dpdmux_action.dest_if = vf_conf->id;
 
-	ret = dpdmux_add_custom_cls_entry(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-					  dpdmux_dev->token, &flow->rule,
-					  &dpdmux_action);
+	rule.key_iova = key_iova;
+	rule.mask_iova = mask_iova;
+	rule.key_size = key_size;
+	rule.entry_index = s_i;
+	s_i++;
+
+	/* As now our key extract parameters are set, let us configure
+	 * the rule.
+	 */
+	ret = dpdmux_add_custom_cls_entry(&dpdmux_dev->dpdmux,
+			CMD_PRI_LOW, dpdmux_dev->token,
+			&rule, &dpdmux_action);
 	if (ret) {
-		DPAA2_PMD_ERR("dpdmux_add_custom_cls_entry failed: err(%d)",
-			      ret);
+		DPAA2_PMD_ERR("Add classification entry failed:err(%d)", ret);
 		goto creation_error;
 	}
 
-	return flow;
-
 creation_error:
-	rte_free((void *)key_cfg_iova);
-	rte_free((void *)flow);
-	return NULL;
+	if (key_cfg_va)
+		rte_free(key_cfg_va);
+	if (key_va)
+		rte_free(key_va);
+
+	return ret;
 }
 
 int
@@ -407,10 +491,11 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	PMD_INIT_FUNC_TRACE();
 
 	/* Allocate DPAA2 dpdmux handle */
-	dpdmux_dev = rte_malloc(NULL, sizeof(struct dpaa2_dpdmux_dev), 0);
+	dpdmux_dev = rte_zmalloc(NULL,
+		sizeof(struct dpaa2_dpdmux_dev), RTE_CACHE_LINE_SIZE);
 	if (!dpdmux_dev) {
 		DPAA2_PMD_ERR("Memory allocation failed for DPDMUX Device");
-		return -1;
+		return -ENOMEM;
 	}
 
 	/* Open the dpdmux object */
diff --git a/drivers/net/dpaa2/dpaa2_parse_dump.h b/drivers/net/dpaa2/dpaa2_parse_dump.h
index f1cdc003de..78fd3b768c 100644
--- a/drivers/net/dpaa2/dpaa2_parse_dump.h
+++ b/drivers/net/dpaa2/dpaa2_parse_dump.h
@@ -105,6 +105,8 @@ dpaa2_print_faf(struct dpaa2_fapr_array *fapr)
 			faf_bits[i].name = "IPv4 1 Present";
 		else if (i == FAF_IPV6_FRAM)
 			faf_bits[i].name = "IPv6 1 Present";
+		else if (i == FAF_IP_FRAG_FRAM)
+			faf_bits[i].name = "IP fragment Present";
 		else if (i == FAF_UDP_FRAM)
 			faf_bits[i].name = "UDP Present";
 		else if (i == FAF_TCP_FRAM)
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index f93af1c65f..237c3cd6e7 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -26,12 +26,12 @@
  *    Associated actions.
  *
  * @return
- *    A valid handle in case of success, NULL otherwise.
+ *    0 in case of success,  otherwise failure.
  */
-struct rte_flow *
+int
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
-			      struct rte_flow_item *pattern[],
-			      struct rte_flow_action *actions[]);
+	struct rte_flow_item pattern[],
+	struct rte_flow_action actions[]);
 int
 rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
 	uint16_t entry_index);
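
A caller-side sketch against the reworked prototype (the pattern array
must now end with RTE_FLOW_ITEM_TYPE_END, per the parsing loop above; a
NULL mask defaults to an all-ones mask; ids are placeholders):

	struct rte_flow_item_vlan vlan_spec = {
		.hdr.vlan_tci = RTE_BE16(42),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN, .spec = &vlan_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_vf vf_conf = { .id = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_VF, .conf = &vf_conf },
	};
	int ret = rte_pmd_dpaa2_mux_flow_create(dpdmux_id, pattern, actions);

	if (ret)
		printf("dpdmux rule creation failed: %d\n", ret);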
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* Re: [v3 13/43] bus/fslmc: get MC VFIO group FD directly
  2024-10-14 12:00       ` [v3 13/43] bus/fslmc: get MC VFIO group FD directly vanshika.shukla
@ 2024-10-15  2:27         ` Stephen Hemminger
  0 siblings, 0 replies; 229+ messages in thread
From: Stephen Hemminger @ 2024-10-15  2:27 UTC (permalink / raw)
  To: vanshika.shukla
  Cc: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov, Jun Yang

On Mon, 14 Oct 2024 17:30:56 +0530
vanshika.shukla@nxp.com wrote:

> +static int
> +fslmc_vfio_open_group_fd(int iommu_group_num)
> +{
> +	int vfio_group_fd;
> +	char filename[PATH_MAX];
> +	struct rte_mp_msg mp_req, *mp_rep;
> +	struct rte_mp_reply mp_reply = {0};
> +	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
> +	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
> +
> +	/* if primary, try to open the group */
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> +		/* try regular group format */
> +		snprintf(filename, sizeof(filename),
> +			VFIO_GROUP_FMT, iommu_group_num);
> +		vfio_group_fd = open(filename, O_RDWR);
> +		if (vfio_group_fd <= 0) {
> +			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
> +				filename, vfio_group_fd);
> +		}
> +
> +		return vfio_group_fd;
> +	}
> +	/* if we're in a secondary process, request group fd from the primary
> +	 * process via mp channel.
> +	 */
> +	p->req = SOCKET_REQ_GROUP;
> +	p->group_num = iommu_group_num;
> +	strcpy(mp_req.name, EAL_VFIO_MP);

Later versions of checkpatch complain that strcpy() should not be used.
Instead use strlcpy.
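
A sketch of the suggested fix (rte_string_fns.h supplies strlcpy where
the C library lacks it):

	#include <rte_string_fns.h>

	strlcpy(mp_req.name, EAL_VFIO_MP, sizeof(mp_req.name));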


^ permalink raw reply	[flat|nested] 229+ messages in thread

* Re: [v3 14/43] bus/fslmc: enhance MC VFIO multiprocess support
  2024-10-14 12:00       ` [v3 14/43] bus/fslmc: enhance MC VFIO multiprocess support vanshika.shukla
@ 2024-10-15  2:29         ` Stephen Hemminger
  0 siblings, 0 replies; 229+ messages in thread
From: Stephen Hemminger @ 2024-10-15  2:29 UTC (permalink / raw)
  To: vanshika.shukla
  Cc: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov, Jun Yang

On Mon, 14 Oct 2024 17:30:57 +0530
vanshika.shukla@nxp.com wrote:

> +#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
> +	if (vaddr != iovaddr) {
> +		DPAA2_BUS_WARN("vaddr(0x%lx) != iovaddr(0x%lx)",
> +			vaddr, iovaddr);
> +	}
>  #endif

Checkpatch complains here.
Warning in drivers/bus/fslmc/fslmc_vfio.c:
Using %l format, prefer %PRI*64 if type is [u]int64_t
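
A sketch of the portable form, assuming vaddr and iovaddr are uint64_t:

	#include <inttypes.h>

	DPAA2_BUS_WARN("vaddr(0x%" PRIx64 ") != iovaddr(0x%" PRIx64 ")",
		vaddr, iovaddr);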


^ permalink raw reply	[flat|nested] 229+ messages in thread

* Re: [v3 16/43] bus/fslmc: dynamic IOVA mode configuration
  2024-10-14 12:00       ` [v3 16/43] bus/fslmc: dynamic IOVA mode configuration vanshika.shukla
@ 2024-10-15  2:31         ` Stephen Hemminger
  0 siblings, 0 replies; 229+ messages in thread
From: Stephen Hemminger @ 2024-10-15  2:31 UTC (permalink / raw)
  To: vanshika.shukla
  Cc: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh, Jun Yang

On Mon, 14 Oct 2024 17:30:59 +0530
vanshika.shukla@nxp.com wrote:

> diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
> index 1695b6c078..408b35680d 100644
> --- a/drivers/bus/fslmc/fslmc_vfio.h
> +++ b/drivers/bus/fslmc/fslmc_vfio.h
> @@ -11,6 +11,10 @@
>  #include <rte_compat.h>
>  #include <rte_vfio.h>
>  
> +#ifndef __hot
> +#define __hot __attribute__((hot))
> +#endif
> +

DPDK already has __rte_hot (in rte_common.h); use that instead
to fix this warning:

Warning in drivers/bus/fslmc/fslmc_vfio.h:
Using compiler attribute directly
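
A sketch of the suggested replacement (the function name is purely
illustrative):

	#include <rte_common.h>	/* provides __rte_hot */

	__rte_hot int
	fslmc_fast_path_fn(void);	/* hypothetical hot-path declaration */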


^ permalink raw reply	[flat|nested] 229+ messages in thread

* Re: [v3 43/43] net/dpaa2: dpdmux single flow/multiple rules support
  2024-10-14 12:01       ` [v3 43/43] net/dpaa2: dpdmux single flow/multiple rules support vanshika.shukla
@ 2024-10-15  2:32         ` Stephen Hemminger
  0 siblings, 0 replies; 229+ messages in thread
From: Stephen Hemminger @ 2024-10-15  2:32 UTC (permalink / raw)
  To: vanshika.shukla; +Cc: dev, Hemant Agrawal, Sachin Saxena, Jun Yang

On Mon, 14 Oct 2024 17:31:26 +0530
vanshika.shukla@nxp.com wrote:

> From: Jun Yang <jun.yang@nxp.com>
> 
> Support multiple extractions as well as hardware descriptions
> instead of hard code.
> 
> Signed-off-by: Jun Yang <jun.yang@nxp.com>
> ---
>  drivers/net/dpaa2/dpaa2_ethdev.h     |   1 +
>  drivers/net/dpaa2/dpaa2_flow.c       |  22 --
>  drivers/net/dpaa2/dpaa2_mux.c        | 395 ++++++++++++++++-----------
>  drivers/net/dpaa2/dpaa2_parse_dump.h |   2 +
>  drivers/net/dpaa2/rte_pmd_dpaa2.h    |   8 +-
>  5 files changed, 247 insertions(+), 181 deletions(-)

Fix this spelling error in next version please.

### [PATCH] net/dpaa2: dpdmux single flow/multiple rules support

WARNING:TYPO_SPELLING: 'exended' may be misspelled - perhaps 'extended'?
#173: FILE: drivers/net/dpaa2/dpaa2_mux.c:124:
+	 * This can be exended to other fields using pattern->type.
 	               ^^^^^^^

^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 00/42] DPAA2 specific patches
  2024-10-14 12:00     ` [v3 00/43] DPAA2 specific patches vanshika.shukla
                         ` (42 preceding siblings ...)
  2024-10-14 12:01       ` [v3 43/43] net/dpaa2: dpdmux single flow/multiple rules support vanshika.shukla
@ 2024-10-22 19:12       ` vanshika.shukla
  2024-10-22 19:12         ` [v4 01/42] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
                           ` (42 more replies)
  43 siblings, 43 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This series includes:
-> Fixes and enhancements for NXP DPAA2 drivers.
-> Upgrade with MC version 10.37
-> Enhancements in DPDMUX code
-> Fixes for coverity issues reported

V2 changes:
Fixed the broken compilation for clang in:
        "net/dpaa2: dpdmux single flow/multiple rules support" patch.
Fixed checkpatch warnings in the below patches:
        "net/dpaa2: protocol inside tunnel distribution"
        "net/dpaa2: add VXLAN distribution support"
        "bus/fslmc: dynamic IOVA mode configuration"
        "bus/fslmc: enhance MC VFIO multiprocess support"

V3 changes:
Rebased to the latest commit.

V4 changes:
Fixed the checkpatch warnings in:
	"bus/fslmc: get MC VFIO group FD directly"
	"bus/fslmc: dynamic IOVA mode configuration"
	"net/dpaa2: add GTP flow support"
	"net/dpaa2: add flow support for IPsec AH and ESP
	"bus/fslmc: enhance MC VFIO multiprocess support"
Resolved comments by the reviewer.
	
Apeksha Gupta (2):
  net/dpaa2: add proper MTU debugging print
  net/dpaa2: store drop priority in mbuf

Brick Yang (1):
  net/dpaa2: update DPNI link status method

Gagandeep Singh (3):
  bus/fslmc: upgrade with MC version 10.37
  net/dpaa2: fix memory corruption in TM
  net/dpaa2: support software taildrop

Hemant Agrawal (2):
  net/dpaa2: add support to dump dpdmux counters
  bus/fslmc: change dpcon close as internal symbol

Jun Yang (23):
  net/dpaa2: enhance Tx scatter-gather mempool
  net/dpaa2: add new PMD API to check dpaa platform version
  bus/fslmc: improve BMAN buffer acquire
  bus/fslmc: get MC VFIO group FD directly
  bus/fslmc: enhance MC VFIO multiprocess support
  bus/fslmc: dynamic IOVA mode configuration
  bus/fslmc: remove VFIO IRQ mapping
  bus/fslmc: create dpaa2 device with it's object
  bus/fslmc: introduce VFIO DMA mapping API for fslmc
  net/dpaa2: flow API refactor
  net/dpaa2: dump Rx parser result
  net/dpaa2: enhancement of raw flow extract
  net/dpaa2: frame attribute flags parser
  net/dpaa2: add VXLAN distribution support
  net/dpaa2: protocol inside tunnel distribution
  net/dpaa2: eCPRI support by parser result
  net/dpaa2: add GTP flow support
  net/dpaa2: check if Soft parser is loaded
  net/dpaa2: soft parser flow verification
  net/dpaa2: add flow support for IPsec AH and ESP
  net/dpaa2: check IOVA before sending MC command
  net/dpaa2: add API to get endpoint name
  net/dpaa2: dpdmux single flow/multiple rules support

Rohit Raj (6):
  bus/fslmc: add close API to close DPAA2 device
  net/dpaa2: support link state for eth interfaces
  bus/fslmc: free VFIO group FD in case of add group failure
  bus/fslmc: fix coverity issue
  bus/fslmc: change qbman eq desc from d to desc
  net/dpaa2: change miss flow ID macro name

Sachin Saxena (1):
  net/dpaa2: improve DPDMUX error behavior settings

Vanshika Shukla (4):
  net/dpaa2: support PTP packet one-step timestamp
  net/dpaa2: dpdmux: add support for CVLAN
  net/dpaa2: support VLAN traffic splitting
  net/dpaa2: add support for C-VLAN and MAC

 doc/guides/platform/dpaa2.rst                 |    4 +-
 drivers/bus/fslmc/bus_fslmc_driver.h          |   72 +-
 drivers/bus/fslmc/fslmc_bus.c                 |   62 +-
 drivers/bus/fslmc/fslmc_vfio.c                | 1628 +++-
 drivers/bus/fslmc/fslmc_vfio.h                |   35 +-
 drivers/bus/fslmc/mc/dpio.c                   |   94 +-
 drivers/bus/fslmc/mc/fsl_dpcon.h              |    6 +-
 drivers/bus/fslmc/mc/fsl_dpio.h               |   21 +-
 drivers/bus/fslmc/mc/fsl_dpio_cmd.h           |   13 +-
 drivers/bus/fslmc/mc/fsl_dpmng.h              |    4 +-
 drivers/bus/fslmc/mc/fsl_dprc_cmd.h           |    8 +-
 drivers/bus/fslmc/meson.build                 |    3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |   38 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |   38 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |   50 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h      |    3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dprc.c      |    8 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |  114 +-
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |   12 +-
 drivers/bus/fslmc/qbman/qbman_debug.c         |   49 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        |   30 +-
 drivers/bus/fslmc/version.map                 |   16 +-
 drivers/crypto/dpaa2_sec/mc/dpseci.c          |   91 +-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h      |   47 +-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h  |   19 +-
 drivers/dma/dpaa2/dpaa2_qdma.c                |    1 +
 drivers/event/dpaa2/dpaa2_hw_dpcon.c          |   38 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |    2 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |   63 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  597 +-
 drivers/net/dpaa2/dpaa2_ethdev.h              |  225 +-
 drivers/net/dpaa2/dpaa2_flow.c                | 7066 ++++++++++-------
 drivers/net/dpaa2/dpaa2_mux.c                 |  541 +-
 drivers/net/dpaa2/dpaa2_parse_dump.h          |  250 +
 drivers/net/dpaa2/dpaa2_ptp.c                 |    8 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |   32 +-
 drivers/net/dpaa2/dpaa2_sparser.c             |   25 +-
 drivers/net/dpaa2/dpaa2_tm.c                  |   72 +-
 drivers/net/dpaa2/mc/dpdmux.c                 |  205 +-
 drivers/net/dpaa2/mc/dpkg.c                   |   12 +-
 drivers/net/dpaa2/mc/dpni.c                   |  383 +-
 drivers/net/dpaa2/mc/fsl_dpdmux.h             |   99 +-
 drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h         |   83 +-
 drivers/net/dpaa2/mc/fsl_dpkg.h               |    7 +-
 drivers/net/dpaa2/mc/fsl_dpni.h               |  176 +-
 drivers/net/dpaa2/mc/fsl_dpni_cmd.h           |  125 +-
 drivers/net/dpaa2/rte_pmd_dpaa2.h             |   51 +-
 drivers/net/dpaa2/version.map                 |    6 +
 48 files changed, 8278 insertions(+), 4254 deletions(-)
 create mode 100644 drivers/net/dpaa2/dpaa2_parse_dump.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 01/42] net/dpaa2: enhance Tx scatter-gather mempool
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 02/42] net/dpaa2: support PTP packet one-step timestamp vanshika.shukla
                           ` (41 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Create the TX SG pool only in the primary process and look up
this pool by name in secondary processes, since the pool lives in
hugepage memory shared with the primary.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 46 +++++++++++++++++++++++---------
 1 file changed, 33 insertions(+), 13 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 7b3e587a8d..4b93606de1 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2870,6 +2870,35 @@ int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+static int dpaa2_tx_sg_pool_init(void)
+{
+	char name[RTE_MEMZONE_NAMESIZE];
+
+	if (dpaa2_tx_sg_pool)
+		return 0;
+
+	sprintf(name, "dpaa2_mbuf_tx_sg_pool");
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		dpaa2_tx_sg_pool = rte_pktmbuf_pool_create(name,
+			DPAA2_POOL_SIZE,
+			DPAA2_POOL_CACHE_SIZE, 0,
+			DPAA2_MAX_SGS * sizeof(struct qbman_sge),
+			rte_socket_id());
+		if (!dpaa2_tx_sg_pool) {
+			DPAA2_PMD_ERR("SG pool creation failed");
+			return -ENOMEM;
+		}
+	} else {
+		dpaa2_tx_sg_pool = rte_mempool_lookup(name);
+		if (!dpaa2_tx_sg_pool) {
+			DPAA2_PMD_ERR("SG pool lookup failed");
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
 static int
 rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 		struct rte_dpaa2_device *dpaa2_dev)
@@ -2924,19 +2953,10 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 
 	/* Invoke PMD device initialization function */
 	diag = dpaa2_dev_init(eth_dev);
-	if (diag == 0) {
-		if (!dpaa2_tx_sg_pool) {
-			dpaa2_tx_sg_pool =
-				rte_pktmbuf_pool_create("dpaa2_mbuf_tx_sg_pool",
-				DPAA2_POOL_SIZE,
-				DPAA2_POOL_CACHE_SIZE, 0,
-				DPAA2_MAX_SGS * sizeof(struct qbman_sge),
-				rte_socket_id());
-			if (dpaa2_tx_sg_pool == NULL) {
-				DPAA2_PMD_ERR("SG pool creation failed");
-				return -ENOMEM;
-			}
-		}
+	if (!diag) {
+		diag = dpaa2_tx_sg_pool_init();
+		if (diag)
+			return diag;
 		rte_eth_dev_probing_finish(eth_dev);
 		dpaa2_valid_dev++;
 		return 0;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 02/42] net/dpaa2: support PTP packet one-step timestamp
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
  2024-10-22 19:12         ` [v4 01/42] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 03/42] net/dpaa2: add proper MTU debugging print vanshika.shukla
                           ` (40 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds PTP one-step timestamping support.
The dpni_set_single_step_cfg() MC API is used with the provided
offset to insert the correction time into the frame.
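As an illustration (not part of this patch), an application built with
RTE_LIBRTE_IEEE1588 might drive the new PMD API roughly as below; the
offset value assumes an untagged Ethernet SYNC frame, with the
correctionField 8 bytes into the PTP header:

#include <stdbool.h>
#include <rte_ether.h>
#include <rte_pmd_dpaa2.h>

static int
setup_one_step_ts(uint16_t port_id)
{
	/* correctionField starts 8 bytes into the PTP header */
	uint16_t offset = RTE_ETHER_HDR_LEN + 8;
	int ret;

	ret = rte_pmd_dpaa2_set_one_step_ts(port_id, offset, 0);
	if (ret)
		return ret;

	/* read the offset back, querying the MC directly */
	ret = rte_pmd_dpaa2_get_one_step_ts(port_id, true);
	if (ret < 0)
		return ret;

	return 0;
}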

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 61 +++++++++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h  |  3 ++
 drivers/net/dpaa2/rte_pmd_dpaa2.h | 10 +++++
 drivers/net/dpaa2/version.map     |  3 ++
 4 files changed, 77 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 4b93606de1..051ebd9d8e 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -548,6 +548,9 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	int tx_l4_csum_offload = false;
 	int ret, tc_index;
 	uint32_t max_rx_pktlen;
+#if defined(RTE_LIBRTE_IEEE1588)
+	uint16_t ptp_correction_offset;
+#endif
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -632,6 +635,11 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		dpaa2_enable_ts[dev->data->port_id] = true;
 	}
 
+#if defined(RTE_LIBRTE_IEEE1588)
+	/* By default setting ptp correction offset for Ethernet SYNC packets */
+	ptp_correction_offset = RTE_ETHER_HDR_LEN + 8;
+	rte_pmd_dpaa2_set_one_step_ts(dev->data->port_id, ptp_correction_offset, 0);
+#endif
 	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		tx_l3_csum_offload = true;
 
@@ -2870,6 +2878,59 @@ int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+#if defined(RTE_LIBRTE_IEEE1588)
+int
+rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpni = priv->eth_dev->process_private;
+	struct dpni_single_step_cfg ptp_cfg;
+	int err;
+
+	if (!mc_query)
+		return priv->ptp_correction_offset;
+
+	err = dpni_get_single_step_cfg(dpni, CMD_PRI_LOW, priv->token, &ptp_cfg);
+	if (err) {
+		DPAA2_PMD_ERR("Failed to retrieve onestep configuration");
+		return err;
+	}
+
+	if (!ptp_cfg.ptp_onestep_reg_base) {
+		DPAA2_PMD_ERR("1588 onestep reg not available");
+		return -1;
+	}
+
+	priv->ptp_correction_offset = ptp_cfg.offset;
+
+	return priv->ptp_correction_offset;
+}
+
+int
+rte_pmd_dpaa2_set_one_step_ts(uint16_t port_id, uint16_t offset, uint8_t ch_update)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpni = dev->process_private;
+	struct dpni_single_step_cfg cfg;
+	int err;
+
+	cfg.en = 1;
+	cfg.ch_update = ch_update;
+	cfg.offset = offset;
+	cfg.peer_delay = 0;
+
+	err = dpni_set_single_step_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
+	if (err)
+		return err;
+
+	priv->ptp_correction_offset = offset;
+
+	return 0;
+}
+#endif
+
 static int dpaa2_tx_sg_pool_init(void)
 {
 	char name[RTE_MEMZONE_NAMESIZE];
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 9feb631d5f..6625afaba3 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -230,6 +230,9 @@ struct dpaa2_dev_priv {
 	rte_spinlock_t lpbk_qp_lock;
 
 	uint8_t channel_inuse;
+	/* Stores correction offset for one step timestamping */
+	uint16_t ptp_correction_offset;
+
 	LIST_HEAD(, rte_flow) flows; /**< Configured flow rule handles. */
 	LIST_HEAD(nodes, dpaa2_tm_node) nodes;
 	LIST_HEAD(shaper_profiles, dpaa2_tm_shaper_profile) shaper_profiles;
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index a1152eb717..aea9bae905 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -102,4 +102,14 @@ rte_pmd_dpaa2_thread_init(void);
 __rte_experimental
 uint32_t
 rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
+
+#if defined(RTE_LIBRTE_IEEE1588)
+__rte_experimental
+int
+rte_pmd_dpaa2_set_one_step_ts(uint16_t port_id, uint16_t offset, uint8_t ch_update);
+
+__rte_experimental
+int
+rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query);
+#endif
 #endif /* _RTE_PMD_DPAA2_H */
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index ba756d26bd..2d95303e27 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -16,6 +16,9 @@ EXPERIMENTAL {
 	rte_pmd_dpaa2_thread_init;
 	# added in 21.11
 	rte_pmd_dpaa2_get_tlu_hash;
+	# added in 24.11
+	rte_pmd_dpaa2_set_one_step_ts;
+	rte_pmd_dpaa2_get_one_step_ts;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 03/42] net/dpaa2: add proper MTU debugging print
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
  2024-10-22 19:12         ` [v4 01/42] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
  2024-10-22 19:12         ` [v4 02/42] net/dpaa2: support PTP packet one-step timestamp vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 04/42] net/dpaa2: add support to dump dpdmux counters vanshika.shukla
                           ` (39 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Apeksha Gupta, Jun Yang

From: Apeksha Gupta <apeksha.gupta@nxp.com>

This patch adds proper debug info to check the max-pkt-len and
configured parameters.

It also stores the MTU in the device data.

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 051ebd9d8e..ab64df6a59 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -579,9 +579,11 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 			DPAA2_PMD_ERR("Unable to set mtu. check config");
 			return ret;
 		}
-		DPAA2_PMD_INFO("MTU configured for the device: %d",
+		DPAA2_PMD_DEBUG("MTU configured for the device: %d",
 				dev->data->mtu);
 	} else {
+		DPAA2_PMD_ERR("Configured mtu %d and calculated max-pkt-len is %d which should be <= %d",
+			eth_conf->rxmode.mtu, max_rx_pktlen, DPAA2_MAX_RX_PKT_LEN);
 		return -1;
 	}
 
@@ -1537,6 +1539,7 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 		DPAA2_PMD_ERR("Setting the max frame length failed");
 		return -1;
 	}
+	dev->data->mtu = mtu;
 	DPAA2_PMD_INFO("MTU configured for the device: %d", mtu);
 	return 0;
 }
@@ -2839,6 +2842,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		DPAA2_PMD_ERR("Unable to set mtu. check config");
 		goto init_err;
 	}
+	eth_dev->data->mtu = RTE_ETHER_MTU;
 
 	/*TODO To enable soft parser support DPAA2 driver needs to integrate
 	 * with external entity to receive byte code for software sequence
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 04/42] net/dpaa2: add support to dump dpdmux counters
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (2 preceding siblings ...)
  2024-10-22 19:12         ` [v4 03/42] net/dpaa2: add proper MTU debugging print vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 05/42] bus/fslmc: change dpcon close as internal symbol vanshika.shukla
                           ` (38 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Hemant Agrawal <hemant.agrawal@nxp.com>

This patch adds support for dumping dpdmux counters, as they are
required to identify the reasons for packet drops in dpdmux.
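For illustration only, an application could dump the counters to stdout as
below; the dpdmux object ID and interface count are assumptions:

#include <stdio.h>
#include <rte_pmd_dpaa2.h>

static void
dump_mux_stats(void)
{
	uint32_t dpdmux_id = 0;	/* hypothetical DPDMUX object ID */
	int num_if = 2;		/* hypothetical interface count */

	rte_pmd_dpaa2_mux_dump_counter(stdout, dpdmux_id, num_if);
}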

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     | 84 +++++++++++++++++++++++++++++++
 drivers/net/dpaa2/rte_pmd_dpaa2.h | 18 +++++++
 drivers/net/dpaa2/version.map     |  1 +
 3 files changed, 103 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 7dd5a60966..b2ec5337b1 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -259,6 +259,90 @@ rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 	return ret;
 }
 
+/* dump the status of the dpaa2_mux counters on the console */
+void
+rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if)
+{
+	struct dpaa2_dpdmux_dev *dpdmux;
+	uint64_t counter;
+	int ret;
+	int if_id;
+
+	/* Find the DPDMUX from dpdmux_id in our list */
+	dpdmux = get_dpdmux_from_id(dpdmux_id);
+	if (!dpdmux) {
+		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
+		return;
+	}
+
+	for (if_id = 0; if_id < num_if; if_id++) {
+		fprintf(f, "dpdmux.%d\n", if_id);
+
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FRAME, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BYTE, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FLTR_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FLTR_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FRAME_DISCARD,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FRAME_DISCARD %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_MCAST_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_MCAST_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_MCAST_BYTE,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_MCAST_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BCAST_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BCAST_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BCAST_BYTES,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BCAST_BYTES %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_FRAME, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_BYTE, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_FRAME_DISCARD,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_FRAME_DISCARD %" PRIu64 "\n",
+				counter);
+	}
+}
+
 static int
 dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 			   struct vfio_device_info *obj_info __rte_unused,
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index aea9bae905..fd9acd841b 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -33,6 +33,24 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 			      struct rte_flow_item *pattern[],
 			      struct rte_flow_action *actions[]);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Dump demultiplex ethernet traffic counters
+ *
+ * @param f
+ *    output stream
+ * @param dpdmux_id
+ *    ID of the DPDMUX MC object.
+ * @param num_if
+ *    number of interfaces in the dpdmux object
+ *
+ */
+__rte_experimental
+void
+rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 2d95303e27..7323fc8869 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	# added in 24.11
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
+	rte_pmd_dpaa2_mux_dump_counter;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 05/42] bus/fslmc: change dpcon close as internal symbol
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (3 preceding siblings ...)
  2024-10-22 19:12         ` [v4 04/42] net/dpaa2: add support to dump dpdmux counters vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 06/42] bus/fslmc: add close API to close DPAA2 device vanshika.shukla
                           ` (37 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena

From: Hemant Agrawal <hemant.agrawal@nxp.com>

This patch marks the dpcon_close API as an internal symbol and
also adds it to the version map file.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/mc/fsl_dpcon.h | 3 ++-
 drivers/bus/fslmc/version.map    | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index db72477c8a..34b30d15c2 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2021 NXP
  *
  */
 #ifndef __FSL_DPCON_H
@@ -28,6 +28,7 @@ int dpcon_open(struct fsl_mc_io *mc_io,
 	       int dpcon_id,
 	       uint16_t *token);
 
+__rte_internal
 int dpcon_close(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index e19b8d1f6b..01e28c6625 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -36,6 +36,7 @@ INTERNAL {
 	dpci_set_rx_queue;
 	dpcon_get_attributes;
 	dpcon_open;
+	dpcon_close;
 	dpdmai_close;
 	dpdmai_disable;
 	dpdmai_enable;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 06/42] bus/fslmc: add close API to close DPAA2 device
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (4 preceding siblings ...)
  2024-10-22 19:12         ` [v4 05/42] bus/fslmc: change dpcon close as internal symbol vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 07/42] net/dpaa2: dpdmux: add support for CVLAN vanshika.shukla
                           ` (36 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Add the rte_fslmc_close API to close all the DPAA2 devices when
the DPDK application shuts down.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     |  3 +
 drivers/bus/fslmc/fslmc_bus.c            | 13 ++++
 drivers/bus/fslmc/fslmc_vfio.c           | 87 ++++++++++++++++++++++++
 drivers/bus/fslmc/fslmc_vfio.h           |  3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c | 31 ++++++++-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c | 32 ++++++++-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 34 +++++++++
 drivers/event/dpaa2/dpaa2_hw_dpcon.c     | 32 ++++++++-
 drivers/net/dpaa2/dpaa2_mux.c            | 18 ++++-
 drivers/net/dpaa2/rte_pmd_dpaa2.h        |  5 +-
 10 files changed, 252 insertions(+), 6 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index 3095458133..a3428fe28b 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -98,6 +98,8 @@ typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
 				      struct vfio_device_info *obj_info,
 				      int object_id);
 
+typedef void (*rte_dpaa2_obj_close_t)(int object_id);
+
 /**
  * A structure describing a DPAA2 object.
  */
@@ -106,6 +108,7 @@ struct rte_dpaa2_object {
 	const char *name;                   /**< Name of Object. */
 	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
 	rte_dpaa2_obj_create_t create;
+	rte_dpaa2_obj_close_t close;
 };
 
 /**
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 097d6dca08..97473c278f 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -384,6 +384,18 @@ rte_fslmc_match(struct rte_dpaa2_driver *dpaa2_drv,
 	return 1;
 }
 
+static int
+rte_fslmc_close(void)
+{
+	int ret = 0;
+
+	ret = fslmc_vfio_close_group();
+	if (ret)
+		DPAA2_BUS_ERR("Unable to close devices %d", ret);
+
+	return 0;
+}
+
 static int
 rte_fslmc_probe(void)
 {
@@ -664,6 +676,7 @@ struct rte_fslmc_bus rte_fslmc_bus = {
 	.bus = {
 		.scan = rte_fslmc_scan,
 		.probe = rte_fslmc_probe,
+		.cleanup = rte_fslmc_close,
 		.parse = rte_fslmc_parse,
 		.find_device = rte_fslmc_find_device,
 		.get_iommu_class = rte_dpaa2_get_iommu_class,
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 6981679a2d..ecca593c34 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -702,6 +702,54 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 	return -1;
 }
 
+static void
+fslmc_close_iodevices(struct rte_dpaa2_device *dev)
+{
+	struct rte_dpaa2_object *object = NULL;
+	struct rte_dpaa2_driver *drv;
+	int ret, probe_all;
+
+	switch (dev->dev_type) {
+	case DPAA2_IO:
+	case DPAA2_CON:
+	case DPAA2_CI:
+	case DPAA2_BPOOL:
+	case DPAA2_MUX:
+		TAILQ_FOREACH(object, &dpaa2_obj_list, next) {
+			if (dev->dev_type == object->dev_type)
+				object->close(dev->object_id);
+			else
+				continue;
+		}
+		break;
+	case DPAA2_ETH:
+	case DPAA2_CRYPTO:
+	case DPAA2_QDMA:
+		probe_all = rte_fslmc_bus.bus.conf.scan_mode !=
+			    RTE_BUS_SCAN_ALLOWLIST;
+		TAILQ_FOREACH(drv, &rte_fslmc_bus.driver_list, next) {
+			if (drv->drv_type != dev->dev_type)
+				continue;
+			if (rte_dev_is_probed(&dev->device))
+				continue;
+			if (probe_all ||
+			    (dev->device.devargs &&
+			     dev->device.devargs->policy ==
+			     RTE_DEV_ALLOWED)) {
+				ret = drv->remove(dev);
+				if (ret)
+					DPAA2_BUS_ERR("Unable to remove");
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	DPAA2_BUS_LOG(DEBUG, "Device (%s) Closed",
+		      dev->device.name);
+}
+
 /*
  * fslmc_process_iodevices for processing only IO (ETH, CRYPTO, and possibly
  * EVENT) devices.
@@ -807,6 +855,45 @@ fslmc_process_mcp(struct rte_dpaa2_device *dev)
 	return ret;
 }
 
+int
+fslmc_vfio_close_group(void)
+{
+	struct rte_dpaa2_device *dev, *dev_temp;
+
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+		if (dev->device.devargs &&
+		    dev->device.devargs->policy == RTE_DEV_BLOCKED) {
+			DPAA2_BUS_LOG(DEBUG, "%s Blacklisted, skipping",
+				      dev->device.name);
+			TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+				continue;
+		}
+		switch (dev->dev_type) {
+		case DPAA2_ETH:
+		case DPAA2_CRYPTO:
+		case DPAA2_QDMA:
+		case DPAA2_IO:
+			fslmc_close_iodevices(dev);
+			break;
+		case DPAA2_CON:
+		case DPAA2_CI:
+		case DPAA2_BPOOL:
+		case DPAA2_MUX:
+			if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+				continue;
+
+			fslmc_close_iodevices(dev);
+			break;
+		case DPAA2_DPRTC:
+		default:
+			DPAA2_BUS_DEBUG("Device cannot be closed: Not supported (%s)",
+					dev->device.name);
+		}
+	}
+
+	return 0;
+}
+
 int
 fslmc_vfio_process_group(void)
 {
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 133606a9fd..b6677bdd18 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2019 NXP
+ *   Copyright 2016,2019-2020 NXP
  *
  */
 
@@ -55,6 +55,7 @@ int rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 
 int fslmc_vfio_setup_group(void);
 int fslmc_vfio_process_group(void);
+int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
 int fslmc_get_container_group(int *gropuid);
 int rte_fslmc_vfio_dmamap(void);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index d7f6e45b7d..bc36607e64 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016 NXP
+ *   Copyright 2016,2020 NXP
  *
  */
 
@@ -33,6 +33,19 @@ TAILQ_HEAD(dpbp_dev_list, dpaa2_dpbp_dev);
 static struct dpbp_dev_list dpbp_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpbp_dev_list); /*!< DPBP device list */
 
+static struct dpaa2_dpbp_dev *get_dpbp_from_id(uint32_t dpbp_id)
+{
+	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
+
+	/* Get DPBP dev handle from list using index */
+	TAILQ_FOREACH(dpbp_dev, &dpbp_dev_list, next) {
+		if (dpbp_dev->dpbp_id == dpbp_id)
+			break;
+	}
+
+	return dpbp_dev;
+}
+
 static int
 dpaa2_create_dpbp_device(int vdev_fd __rte_unused,
 			 struct vfio_device_info *obj_info __rte_unused,
@@ -116,9 +129,25 @@ int dpaa2_dpbp_supported(void)
 	return 0;
 }
 
+static void
+dpaa2_close_dpbp_device(int object_id)
+{
+	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
+
+	dpbp_dev = get_dpbp_from_id((uint32_t)object_id);
+
+	if (dpbp_dev) {
+		dpaa2_free_dpbp_dev(dpbp_dev);
+		dpbp_close(&dpbp_dev->dpbp, CMD_PRI_LOW, dpbp_dev->token);
+		TAILQ_REMOVE(&dpbp_dev_list, dpbp_dev, next);
+		rte_free(dpbp_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpbp_obj = {
 	.dev_type = DPAA2_BPOOL,
 	.create = dpaa2_create_dpbp_device,
+	.close = dpaa2_close_dpbp_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpbp, rte_dpaa2_dpbp_obj);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index 7e858a113f..99f2147ccb 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017 NXP
+ *   Copyright 2017,2020 NXP
  *
  */
 
@@ -30,6 +30,19 @@ TAILQ_HEAD(dpci_dev_list, dpaa2_dpci_dev);
 static struct dpci_dev_list dpci_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpci_dev_list); /*!< DPCI device list */
 
+static struct dpaa2_dpci_dev *get_dpci_from_id(uint32_t dpci_id)
+{
+	struct dpaa2_dpci_dev *dpci_dev = NULL;
+
+	/* Get DPCI dev handle from list using index */
+	TAILQ_FOREACH(dpci_dev, &dpci_dev_list, next) {
+		if (dpci_dev->dpci_id == dpci_id)
+			break;
+	}
+
+	return dpci_dev;
+}
+
 static int
 rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
 			     struct vfio_device_info *obj_info __rte_unused,
@@ -179,9 +192,26 @@ void rte_dpaa2_free_dpci_dev(struct dpaa2_dpci_dev *dpci)
 	}
 }
 
+
+static void
+rte_dpaa2_close_dpci_device(int object_id)
+{
+	struct dpaa2_dpci_dev *dpci_dev = NULL;
+
+	dpci_dev = get_dpci_from_id((uint32_t)object_id);
+
+	if (dpci_dev) {
+		rte_dpaa2_free_dpci_dev(dpci_dev);
+		dpci_close(&dpci_dev->dpci, CMD_PRI_LOW, dpci_dev->token);
+		TAILQ_REMOVE(&dpci_dev_list, dpci_dev, next);
+		rte_free(dpci_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpci_obj = {
 	.dev_type = DPAA2_CI,
 	.create = rte_dpaa2_create_dpci_device,
+	.close = rte_dpaa2_close_dpci_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpci, rte_dpaa2_dpci_obj);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index d8a98326d9..c3f6e24139 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -86,6 +86,19 @@ static int dpaa2_cluster_sz = 2;
  * Cluster 4 (ID = x07) : CPU14, CPU15;
  */
 
+static struct dpaa2_dpio_dev *get_dpio_dev_from_id(int32_t dpio_id)
+{
+	struct dpaa2_dpio_dev *dpio_dev = NULL;
+
+	/* Get DPIO dev handle from list using index */
+	TAILQ_FOREACH(dpio_dev, &dpio_dev_list, next) {
+		if (dpio_dev->hw_id == dpio_id)
+			break;
+	}
+
+	return dpio_dev;
+}
+
 static int
 dpaa2_get_core_id(void)
 {
@@ -366,6 +379,26 @@ static void dpaa2_portal_finish(void *arg)
 	pthread_setspecific(dpaa2_portal_key, NULL);
 }
 
+static void
+dpaa2_close_dpio_device(int object_id)
+{
+	struct dpaa2_dpio_dev *dpio_dev = NULL;
+
+	dpio_dev = get_dpio_dev_from_id((int32_t)object_id);
+
+	if (dpio_dev) {
+		if (dpio_dev->dpio) {
+			dpio_disable(dpio_dev->dpio, CMD_PRI_LOW,
+				     dpio_dev->token);
+			dpio_close(dpio_dev->dpio, CMD_PRI_LOW,
+				   dpio_dev->token);
+			rte_free(dpio_dev->dpio);
+		}
+		TAILQ_REMOVE(&dpio_dev_list, dpio_dev, next);
+		rte_free(dpio_dev);
+	}
+}
+
 static int
 dpaa2_create_dpio_device(int vdev_fd,
 			 struct vfio_device_info *obj_info,
@@ -643,6 +676,7 @@ dpaa2_free_eq_descriptors(void)
 static struct rte_dpaa2_object rte_dpaa2_dpio_obj = {
 	.dev_type = DPAA2_IO,
 	.create = dpaa2_create_dpio_device,
+	.close = dpaa2_close_dpio_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpio, rte_dpaa2_dpio_obj);
diff --git a/drivers/event/dpaa2/dpaa2_hw_dpcon.c b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
index a68d3ac154..64b0136e24 100644
--- a/drivers/event/dpaa2/dpaa2_hw_dpcon.c
+++ b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017 NXP
+ *   Copyright 2017,2020 NXP
  *
  */
 
@@ -30,6 +30,19 @@ TAILQ_HEAD(dpcon_dev_list, dpaa2_dpcon_dev);
 static struct dpcon_dev_list dpcon_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpcon_dev_list); /*!< DPCON device list */
 
+static struct dpaa2_dpcon_dev *get_dpcon_from_id(uint32_t dpcon_id)
+{
+	struct dpaa2_dpcon_dev *dpcon_dev = NULL;
+
+	/* Get DPCON dev handle from list using index */
+	TAILQ_FOREACH(dpcon_dev, &dpcon_dev_list, next) {
+		if (dpcon_dev->dpcon_id == dpcon_id)
+			break;
+	}
+
+	return dpcon_dev;
+}
+
 static int
 rte_dpaa2_create_dpcon_device(int dev_fd __rte_unused,
 			      struct vfio_device_info *obj_info __rte_unused,
@@ -105,9 +118,26 @@ void rte_dpaa2_free_dpcon_dev(struct dpaa2_dpcon_dev *dpcon)
 	}
 }
 
+
+static void
+rte_dpaa2_close_dpcon_device(int object_id)
+{
+	struct dpaa2_dpcon_dev *dpcon_dev = NULL;
+
+	dpcon_dev = get_dpcon_from_id((uint32_t)object_id);
+
+	if (dpcon_dev) {
+		rte_dpaa2_free_dpcon_dev(dpcon_dev);
+		dpcon_close(&dpcon_dev->dpcon, CMD_PRI_LOW, dpcon_dev->token);
+		TAILQ_REMOVE(&dpcon_dev_list, dpcon_dev, next);
+		rte_free(dpcon_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpcon_obj = {
 	.dev_type = DPAA2_CON,
 	.create = rte_dpaa2_create_dpcon_device,
+	.close = rte_dpaa2_close_dpcon_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpcon, rte_dpaa2_dpcon_obj);
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index b2ec5337b1..489beb6f27 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -44,7 +44,7 @@ static struct dpaa2_dpdmux_dev *get_dpdmux_from_id(uint32_t dpdmux_id)
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev = NULL;
 
-	/* Get DPBP dev handle from list using index */
+	/* Get DPDMUX dev handle from list using index */
 	TAILQ_FOREACH(dpdmux_dev, &dpdmux_dev_list, next) {
 		if (dpdmux_dev->dpdmux_id == dpdmux_id)
 			break;
@@ -442,9 +442,25 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	return -1;
 }
 
+static void
+dpaa2_close_dpdmux_device(int object_id)
+{
+	struct dpaa2_dpdmux_dev *dpdmux_dev;
+
+	dpdmux_dev = get_dpdmux_from_id((uint32_t)object_id);
+
+	if (dpdmux_dev) {
+		dpdmux_close(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+			     dpdmux_dev->token);
+		TAILQ_REMOVE(&dpdmux_dev_list, dpdmux_dev, next);
+		rte_free(dpdmux_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpdmux_obj = {
 	.dev_type = DPAA2_MUX,
 	.create = dpaa2_create_dpdmux_device,
+	.close = dpaa2_close_dpdmux_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpdmux, rte_dpaa2_dpdmux_obj);
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index fd9acd841b..80e5e3298b 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2024 NXP
  */
 
 #ifndef _RTE_PMD_DPAA2_H
@@ -32,6 +32,9 @@ struct rte_flow *
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 			      struct rte_flow_item *pattern[],
 			      struct rte_flow_action *actions[]);
+int
+rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
+	uint16_t entry_index);
 
 /**
  * @warning
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 07/42] net/dpaa2: dpdmux: add support for CVLAN
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (5 preceding siblings ...)
  2024-10-22 19:12         ` [v4 06/42] bus/fslmc: add close API to close DPAA2 device vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 08/42] bus/fslmc: upgrade with MC version 10.37 vanshika.shukla
                           ` (35 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for the DPDMUX_METHOD_C_VLAN_MAC method,
which implements DPDMUX classification based on C-VLAN and MAC
address.
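A hedged usage sketch (all values below are illustrative assumptions) for
steering one C-VLAN/MAC pair to a dpdmux interface with the new API:

#include <stdint.h>
#include <rte_pmd_dpaa2.h>

static int
add_cvlan_mac_rule(void)
{
	uint8_t mac[6] = { 0x00, 0x04, 0x9f, 0x01, 0x02, 0x03 };
	uint16_t vlan_id = 100;	/* example C-VLAN ID */
	int dest_if = 1;	/* example dpdmux interface */

	/* dpdmux object ID 0 is an assumption */
	return rte_pmd_dpaa2_mux_flow_l2(0, mac, vlan_id, dest_if);
}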

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     | 59 +++++++++++++++++++++++++------
 drivers/net/dpaa2/mc/fsl_dpdmux.h | 18 +++++++++-
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  3 ++
 3 files changed, 68 insertions(+), 12 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 489beb6f27..3693f4b62e 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -233,6 +233,35 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	return NULL;
 }
 
+int
+rte_pmd_dpaa2_mux_flow_l2(uint32_t dpdmux_id,
+	uint8_t mac_addr[6], uint16_t vlan_id, int dest_if)
+{
+	struct dpaa2_dpdmux_dev *dpdmux_dev;
+	struct dpdmux_l2_rule rule;
+	int ret, i;
+
+	/* Find the DPDMUX from dpdmux_id in our list */
+	dpdmux_dev = get_dpdmux_from_id(dpdmux_id);
+	if (!dpdmux_dev) {
+		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
+		return -ENODEV;
+	}
+
+	for (i = 0; i < 6; i++)
+		rule.mac_addr[i] = mac_addr[i];
+	rule.vlan_id = vlan_id;
+
+	ret = dpdmux_if_add_l2_rule(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+			dpdmux_dev->token, dest_if, &rule);
+	if (ret) {
+		DPAA2_PMD_ERR("dpdmux_if_add_l2_rule failed:err(%d)", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
 int
 rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 {
@@ -353,6 +382,7 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	int ret;
 	uint16_t maj_ver;
 	uint16_t min_ver;
+	uint8_t skip_reset_flags;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -379,12 +409,18 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 		goto init_err;
 	}
 
-	ret = dpdmux_if_set_default(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-				    dpdmux_dev->token, attr.default_if);
-	if (ret) {
-		DPAA2_PMD_ERR("setting default interface failed in %s",
-			      __func__);
-		goto init_err;
+	if (attr.method != DPDMUX_METHOD_C_VLAN_MAC) {
+		ret = dpdmux_if_set_default(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+				dpdmux_dev->token, attr.default_if);
+		if (ret) {
+			DPAA2_PMD_ERR("setting default interface failed in %s",
+				      __func__);
+			goto init_err;
+		}
+		skip_reset_flags = DPDMUX_SKIP_DEFAULT_INTERFACE
+			| DPDMUX_SKIP_UNICAST_RULES | DPDMUX_SKIP_MULTICAST_RULES;
+	} else {
+		skip_reset_flags = DPDMUX_SKIP_DEFAULT_INTERFACE;
 	}
 
 	ret = dpdmux_get_api_version(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
@@ -400,10 +436,7 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	 */
 	if (maj_ver >= 6 && min_ver >= 6) {
 		ret = dpdmux_set_resetable(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-				dpdmux_dev->token,
-				DPDMUX_SKIP_DEFAULT_INTERFACE |
-				DPDMUX_SKIP_UNICAST_RULES |
-				DPDMUX_SKIP_MULTICAST_RULES);
+				dpdmux_dev->token, skip_reset_flags);
 		if (ret) {
 			DPAA2_PMD_ERR("setting default interface failed in %s",
 				      __func__);
@@ -416,7 +449,11 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 
 		memset(&mux_err_cfg, 0, sizeof(mux_err_cfg));
 		mux_err_cfg.error_action = DPDMUX_ERROR_ACTION_CONTINUE;
-		mux_err_cfg.errors = DPDMUX_ERROR_DISC;
+
+		if (attr.method != DPDMUX_METHOD_C_VLAN_MAC)
+			mux_err_cfg.errors = DPDMUX_ERROR_DISC;
+		else
+			mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
 
 		ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
 				CMD_PRI_LOW,
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 4600ea94d4..9bbac44219 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2022 NXP
  *
  */
 #ifndef __FSL_DPDMUX_H
@@ -549,6 +549,22 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
  */
 #define DPDMUX__ERROR_L4CE			0x00000001
 
+#define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
+				 DPDMUX__ERROR_L4CV | \
+				 DPDMUX__ERROR_L3CE | \
+				 DPDMUX__ERROR_L3CV | \
+				 DPDMUX_ERROR_BLE | \
+				 DPDMUX_ERROR_PHE | \
+				 DPDMUX_ERROR_ISP | \
+				 DPDMUX_ERROR_PTE | \
+				 DPDMUX_ERROR_FPE | \
+				 DPDMUX_ERROR_FLE | \
+				 DPDMUX_ERROR_PIEE | \
+				 DPDMUX_ERROR_TIDE | \
+				 DPDMUX_ERROR_MNLE | \
+				 DPDMUX_ERROR_EOFHE | \
+				 DPDMUX_ERROR_KSE)
+
 enum dpdmux_error_action {
 	DPDMUX_ERROR_ACTION_DISCARD = 0,
 	DPDMUX_ERROR_ACTION_CONTINUE = 1
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index 80e5e3298b..bebebcacdc 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -35,6 +35,9 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 int
 rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
 	uint16_t entry_index);
+int
+rte_pmd_dpaa2_mux_flow_l2(uint32_t dpdmux_id,
+	uint8_t mac_addr[6], uint16_t vlan_id, int dest_if);
 
 /**
  * @warning
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 08/42] bus/fslmc: upgrade with MC version 10.37
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (6 preceding siblings ...)
  2024-10-22 19:12         ` [v4 07/42] net/dpaa2: dpdmux: add support for CVLAN vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 09/42] net/dpaa2: support link state for eth interfaces vanshika.shukla
                           ` (34 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh; +Cc: Apeksha Gupta

From: Gagandeep Singh <g.singh@nxp.com>

This patch upgrades the MC version compatibility to 10.37.
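Among the MC API additions in this upgrade is per-core stashing
configuration for DPIO. A hedged call-site sketch follows (bus-internal
code only, since the symbol is __rte_internal; the fsl_mc_io handle and
token come from an already-opened DPIO object):

#include <fsl_mc_cmd.h>	/* CMD_PRI_LOW */
#include <fsl_dpio.h>
#include <rte_lcore.h>

static int
set_stashing_for_this_core(struct fsl_mc_io *mc_io, uint16_t token)
{
	return dpio_set_stashing_destination_by_core_id(mc_io,
			CMD_PRI_LOW, token, (uint8_t)rte_lcore_id());
}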

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 doc/guides/platform/dpaa2.rst                 |   4 +-
 drivers/bus/fslmc/mc/dpio.c                   |  94 ++++-
 drivers/bus/fslmc/mc/fsl_dpcon.h              |   5 +-
 drivers/bus/fslmc/mc/fsl_dpio.h               |  21 +-
 drivers/bus/fslmc/mc/fsl_dpio_cmd.h           |  13 +-
 drivers/bus/fslmc/mc/fsl_dpmng.h              |   4 +-
 drivers/bus/fslmc/mc/fsl_dprc_cmd.h           |   8 +-
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |  12 +-
 drivers/bus/fslmc/version.map                 |   7 +
 drivers/crypto/dpaa2_sec/mc/dpseci.c          |  91 ++++-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h      |  47 ++-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h  |  19 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  36 +-
 drivers/net/dpaa2/mc/dpdmux.c                 | 205 +++++++++-
 drivers/net/dpaa2/mc/dpkg.c                   |  12 +-
 drivers/net/dpaa2/mc/dpni.c                   | 383 +++++++++++++++++-
 drivers/net/dpaa2/mc/fsl_dpdmux.h             |  67 ++-
 drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h         |  83 +++-
 drivers/net/dpaa2/mc/fsl_dpkg.h               |   7 +-
 drivers/net/dpaa2/mc/fsl_dpni.h               | 176 +++++---
 drivers/net/dpaa2/mc/fsl_dpni_cmd.h           | 125 ++++--
 21 files changed, 1267 insertions(+), 152 deletions(-)

diff --git a/doc/guides/platform/dpaa2.rst b/doc/guides/platform/dpaa2.rst
index 2b0d93a976..c9ec21334f 100644
--- a/doc/guides/platform/dpaa2.rst
+++ b/doc/guides/platform/dpaa2.rst
@@ -105,8 +105,8 @@ separately:
 
 Currently supported by DPDK:
 
-- NXP SDK **LSDK 19.09++**.
-- MC Firmware version **10.18.0** and higher.
+- NXP SDK **LSDK 21.08++**.
+- MC Firmware version **10.37.0** and higher.
 - Supported architectures:  **arm64 LE**.
 
 - Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
diff --git a/drivers/bus/fslmc/mc/dpio.c b/drivers/bus/fslmc/mc/dpio.c
index a3382ed142..97c08fa713 100644
--- a/drivers/bus/fslmc/mc/dpio.c
+++ b/drivers/bus/fslmc/mc/dpio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -376,6 +376,98 @@ int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpio_set_stashing_destination_by_core_id() - Set the stashing destination source
+ * using the core id.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @core_id:	Core id stashing destination
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
+					uint32_t cmd_flags,
+					uint16_t token,
+					uint8_t core_id)
+{
+	struct dpio_stashing_dest_by_core_id *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_STASHING_DEST_BY_CORE_ID,
+										cmd_flags,
+										token);
+	cmd_params = (struct dpio_stashing_dest_by_core_id  *)cmd.params;
+	cmd_params->core_id = core_id;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpio_set_stashing_destination_source() - Set the stashing destination source.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @ss:		Stashing destination source (0 manual/1 automatic)
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ss)
+{
+	struct dpio_stashing_dest_source *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_STASHING_DEST_SOURCE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpio_stashing_dest_source *)cmd.params;
+	cmd_params->ss = ss;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpio_get_stashing_destination_source() - Get the stashing destination source.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @ss:		Returns the stashing destination source (0 manual/1 automatic)
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t *ss)
+{
+	struct dpio_stashing_dest_source *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_STASHING_DEST_SOURCE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpio_stashing_dest_source *)cmd.params;
+	*ss = rsp_params->ss;
+
+	return 0;
+}
+
 /**
  * dpio_add_static_dequeue_channel() - Add a static dequeue channel.
  * @mc_io:		Pointer to MC portal's I/O object
diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index 34b30d15c2..e3a626077e 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2021, 2024 NXP
  *
  */
 #ifndef __FSL_DPCON_H
@@ -52,10 +52,12 @@ int dpcon_destroy(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint32_t obj_id);
 
+__rte_internal
 int dpcon_enable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
 
+__rte_internal
 int dpcon_disable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token);
@@ -65,6 +67,7 @@ int dpcon_is_enabled(struct fsl_mc_io *mc_io,
 		     uint16_t token,
 		     int *en);
 
+__rte_internal
 int dpcon_reset(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
diff --git a/drivers/bus/fslmc/mc/fsl_dpio.h b/drivers/bus/fslmc/mc/fsl_dpio.h
index c2db76bdf8..eddce58a5f 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPIO_H
@@ -87,11 +87,30 @@ int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint16_t token,
 				  uint8_t sdest);
 
+__rte_internal
 int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
 				  uint8_t *sdest);
 
+__rte_internal
+int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t core_id);
+
+__rte_internal
+int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ss);
+
+__rte_internal
+int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t *ss);
+
 __rte_internal
 int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				    uint32_t cmd_flags,
diff --git a/drivers/bus/fslmc/mc/fsl_dpio_cmd.h b/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
index 45ed01f809..360c68eaa5 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2019 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPIO_CMD_H
@@ -40,6 +40,9 @@
 #define DPIO_CMDID_GET_STASHING_DEST			DPIO_CMD(0x121)
 #define DPIO_CMDID_ADD_STATIC_DEQUEUE_CHANNEL		DPIO_CMD(0x122)
 #define DPIO_CMDID_REMOVE_STATIC_DEQUEUE_CHANNEL	DPIO_CMD(0x123)
+#define DPIO_CMDID_SET_STASHING_DEST_SOURCE		DPIO_CMD(0x124)
+#define DPIO_CMDID_GET_STASHING_DEST_SOURCE		DPIO_CMD(0x125)
+#define DPIO_CMDID_SET_STASHING_DEST_BY_CORE_ID		DPIO_CMD(0x126)
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPIO_MASK(field)        \
@@ -98,6 +101,14 @@ struct dpio_stashing_dest {
 	uint8_t sdest;
 };
 
+struct dpio_stashing_dest_source {
+	uint8_t ss;
+};
+
+struct dpio_stashing_dest_by_core_id {
+	uint8_t core_id;
+};
+
 struct dpio_cmd_static_dequeue_channel {
 	uint32_t dpcon_id;
 };
diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
index c6ea220df7..dfa51b3a86 100644
--- a/drivers/bus/fslmc/mc/fsl_dpmng.h
+++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2017-2022 NXP
+ * Copyright 2017-2023 NXP
  *
  */
 #ifndef __FSL_DPMNG_H
@@ -20,7 +20,7 @@ struct fsl_mc_io;
  * Management Complex firmware version information
  */
 #define MC_VER_MAJOR 10
-#define MC_VER_MINOR 32
+#define MC_VER_MINOR 37
 
 /**
  * struct mc_version
diff --git a/drivers/bus/fslmc/mc/fsl_dprc_cmd.h b/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
index 6efa5634d2..d5ba35b5f0 100644
--- a/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 
@@ -10,13 +10,17 @@
 
 /* Minimal supported DPRC Version */
 #define DPRC_VER_MAJOR			6
-#define DPRC_VER_MINOR			6
+#define DPRC_VER_MINOR			7
 
 /* Command versioning */
 #define DPRC_CMD_BASE_VERSION			1
+#define DPRC_CMD_VERSION_2			2
+#define DPRC_CMD_VERSION_3			3
 #define DPRC_CMD_ID_OFFSET			4
 
 #define DPRC_CMD(id)	((id << DPRC_CMD_ID_OFFSET) | DPRC_CMD_BASE_VERSION)
+#define DPRC_CMD_V2(id)	(((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_VERSION_2)
+#define DPRC_CMD_V3(id)	(((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_VERSION_3)
 
 /* Command IDs */
 #define DPRC_CMDID_CLOSE                        DPRC_CMD(0x800)
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 18b6a3c2e4..297d4ed4fc 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2023 NXP
  */
 #ifndef _FSL_QBMAN_DEBUG_H
 #define _FSL_QBMAN_DEBUG_H
@@ -105,16 +105,6 @@ uint32_t qbman_fq_attr_get_vfqid(struct qbman_fq_query_rslt *r);
 uint32_t qbman_fq_attr_get_erfqid(struct qbman_fq_query_rslt *r);
 uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r);
 
-/* FQ query command for non-programmable fields*/
-enum qbman_fq_schedstate_e {
-	qbman_fq_schedstate_oos = 0,
-	qbman_fq_schedstate_retired,
-	qbman_fq_schedstate_tentatively_scheduled,
-	qbman_fq_schedstate_truly_scheduled,
-	qbman_fq_schedstate_parked,
-	qbman_fq_schedstate_held_active,
-};
-
 struct qbman_fq_query_np_rslt {
 uint8_t verb;
 	uint8_t rslt;
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index 01e28c6625..df1143733d 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -37,6 +37,9 @@ INTERNAL {
 	dpcon_get_attributes;
 	dpcon_open;
 	dpcon_close;
+	dpcon_reset;
+	dpcon_enable;
+	dpcon_disable;
 	dpdmai_close;
 	dpdmai_disable;
 	dpdmai_enable;
@@ -53,7 +56,11 @@ INTERNAL {
 	dpio_open;
 	dpio_remove_static_dequeue_channel;
 	dpio_reset;
+	dpio_get_stashing_destination;
+	dpio_get_stashing_destination_source;
 	dpio_set_stashing_destination;
+	dpio_set_stashing_destination_by_core_id;
+	dpio_set_stashing_destination_source;
 	mc_get_soc_version;
 	mc_get_version;
 	mc_send_command;
diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c b/drivers/crypto/dpaa2_sec/mc/dpseci.c
index 87e0defdc6..773b4648e0 100644
--- a/drivers/crypto/dpaa2_sec/mc/dpseci.c
+++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -763,3 +763,92 @@ int dpseci_get_congestion_notification(
 
 	return 0;
 }
+
+
+/**
+ * dpseci_get_rx_queue_status() - Get queue status attributes
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue_index:	Index of the queue to query
+ * @attr:	Returned queue status attributes
+ *
+ * Return:	'0' on success, error code otherwise
+ */
+int dpseci_get_rx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr)
+{
+	struct dpseci_rsp_get_queue_status *rsp_params;
+	struct dpseci_cmd_get_queue_status *cmd_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_RX_QUEUE_STATUS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpseci_cmd_get_queue_status *)cmd.params;
+	cmd_params->queue_index = cpu_to_le32(queue_index);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpseci_rsp_get_queue_status *)cmd.params;
+	attr->fqid = le32_to_cpu(rsp_params->fqid);
+	attr->schedstate = (enum qbman_fq_schedstate_e)(le16_to_cpu(rsp_params->schedstate));
+	attr->state_flags = le16_to_cpu(rsp_params->state_flags);
+	attr->frame_count = le32_to_cpu(rsp_params->frame_count);
+	attr->byte_count = le32_to_cpu(rsp_params->byte_count);
+
+	return 0;
+}
+
+/**
+ * dpseci_get_tx_queue_status() - Get queue status attributes
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue_index:	Index of the queue to query
+ * @attr:	Returned queue status attributes
+ *
+ * Return:	'0' on success, error code otherwise
+ */
+int dpseci_get_tx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr)
+{
+	struct dpseci_rsp_get_queue_status *rsp_params;
+	struct dpseci_cmd_get_queue_status *cmd_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_TX_QUEUE_STATUS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpseci_cmd_get_queue_status *)cmd.params;
+	cmd_params->queue_index = cpu_to_le32(queue_index);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpseci_rsp_get_queue_status *)cmd.params;
+	attr->fqid = le32_to_cpu(rsp_params->fqid);
+	attr->schedstate = (enum qbman_fq_schedstate_e)(le16_to_cpu(rsp_params->schedstate));
+	attr->state_flags = le16_to_cpu(rsp_params->state_flags);
+	attr->frame_count = le32_to_cpu(rsp_params->frame_count);
+	attr->byte_count = le32_to_cpu(rsp_params->byte_count);
+
+	return 0;
+}
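
For reference, a minimal usage sketch of the new status query (not part
of the patch). It assumes mc_io and token were obtained through the
usual dpseci_open() path, that Rx queue 0 exists, and that CMD_PRI_LOW
is visible from fsl_mc_cmd.h:

#include <stdio.h>

#include <fsl_mc_sys.h>
#include <fsl_mc_cmd.h>
#include <fsl_dpseci.h>

/* Hypothetical helper: report the backlog on DPSECI Rx queue 0. */
static int report_rx_queue0(struct fsl_mc_io *mc_io, uint16_t token)
{
	struct dpseci_queue_status status = { 0 };
	int err;

	err = dpseci_get_rx_queue_status(mc_io, CMD_PRI_LOW, token,
					 0, &status);
	if (err)
		return err;

	printf("FQID 0x%x: %u frames, %u bytes pending%s\n",
	       status.fqid, status.frame_count, status.byte_count,
	       (status.state_flags & DPSECI_FQ_STATE_XOFF) ? " [XOFF]" : "");
	return 0;
}
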
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
index c295c04f24..e371abdd64 100644
--- a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPSECI_H
@@ -429,4 +429,49 @@ int dpseci_get_congestion_notification(
 			uint16_t token,
 			struct dpseci_congestion_notification_cfg *cfg);
 
+/* Available FQ's scheduling states */
+enum qbman_fq_schedstate_e {
+	qbman_fq_schedstate_oos = 0,
+	qbman_fq_schedstate_retired,
+	qbman_fq_schedstate_tentatively_scheduled,
+	qbman_fq_schedstate_truly_scheduled,
+	qbman_fq_schedstate_parked,
+	qbman_fq_schedstate_held_active,
+};
+
+/* FQ's force eligible pending bit */
+#define DPSECI_FQ_STATE_FORCE_ELIGIBLE			0x00000001
+/* FQ's XON/XOFF state, 0: XON, 1: XOFF */
+#define DPSECI_FQ_STATE_XOFF					0x00000002
+/* FQ's retirement pending bit */
+#define DPSECI_FQ_STATE_RETIREMENT_PENDING		0x00000004
+/* FQ's overflow error bit */
+#define DPSECI_FQ_STATE_OVERFLOW_ERROR			0x00000008
+
+struct dpseci_queue_status {
+	uint32_t fqid;
+	/* FQ's scheduling states
+	 * (available scheduling states are defined in qbman_fq_schedstate_e)
+	 */
+	enum qbman_fq_schedstate_e schedstate;
+	/* FQ's state flags (available flags are defined above) */
+	uint16_t state_flags;
+	/* FQ's frame count */
+	uint32_t frame_count;
+	/* FQ's byte count */
+	uint32_t byte_count;
+};
+
+int dpseci_get_rx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr);
+
+int dpseci_get_tx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr);
+
 #endif /* __FSL_DPSECI_H */
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
index af3518a0f3..065464b701 100644
--- a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPSECI_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPSECI Version */
 #define DPSECI_VER_MAJOR		5
-#define DPSECI_VER_MINOR		3
+#define DPSECI_VER_MINOR		4
 
 /* Command versioning */
 #define DPSECI_CMD_BASE_VERSION		1
@@ -46,6 +46,9 @@
 #define DPSECI_CMDID_GET_OPR		DPSECI_CMD_V1(0x19B)
 #define DPSECI_CMDID_SET_CONGESTION_NOTIFICATION	DPSECI_CMD_V1(0x170)
 #define DPSECI_CMDID_GET_CONGESTION_NOTIFICATION	DPSECI_CMD_V1(0x171)
+#define DPSECI_CMDID_GET_RX_QUEUE_STATUS	DPSECI_CMD_V1(0x172)
+#define DPSECI_CMDID_GET_TX_QUEUE_STATUS	DPSECI_CMD_V1(0x173)
+
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPSECI_MASK(field)        \
@@ -251,5 +254,17 @@ struct dpseci_cmd_set_congestion_notification {
 	uint32_t threshold_exit;
 };
 
+struct dpseci_cmd_get_queue_status {
+	uint32_t queue_index;
+};
+
+struct dpseci_rsp_get_queue_status {
+	uint32_t fqid;
+	uint16_t schedstate;
+	uint16_t state_flags;
+	uint32_t frame_count;
+	uint32_t byte_count;
+};
+
 #pragma pack(pop)
 #endif /* _FSL_DPSECI_CMD_H */
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index ab64df6a59..439b8f97a4 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -899,6 +899,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	struct dpni_queue tx_conf_cfg;
 	struct dpni_queue tx_flow_cfg;
 	uint8_t options = 0, flow_id;
+	uint8_t ceetm_ch_idx;
 	uint16_t channel_id;
 	struct dpni_queue_id qid;
 	uint32_t tc_id;
@@ -925,20 +926,27 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	memset(&tx_conf_cfg, 0, sizeof(struct dpni_queue));
 	memset(&tx_flow_cfg, 0, sizeof(struct dpni_queue));
 
-	if (tx_queue_id == 0) {
-		/*Set tx-conf and error configuration*/
-		if (priv->flags & DPAA2_TX_CONF_ENABLE)
-			ret = dpni_set_tx_confirmation_mode(dpni, CMD_PRI_LOW,
-							    priv->token,
-							    DPNI_CONF_AFFINE);
-		else
-			ret = dpni_set_tx_confirmation_mode(dpni, CMD_PRI_LOW,
-							    priv->token,
-							    DPNI_CONF_DISABLE);
-		if (ret) {
-			DPAA2_PMD_ERR("Error in set tx conf mode settings: "
-				      "err=%d", ret);
-			return -1;
+	if (!tx_queue_id) {
+		for (ceetm_ch_idx = 0;
+			ceetm_ch_idx <= (priv->num_channels - 1);
+			ceetm_ch_idx++) {
+			/*Set tx-conf and error configuration*/
+			if (priv->flags & DPAA2_TX_CONF_ENABLE) {
+				ret = dpni_set_tx_confirmation_mode(dpni,
+						CMD_PRI_LOW, priv->token,
+						ceetm_ch_idx,
+						DPNI_CONF_AFFINE);
+			} else {
+				ret = dpni_set_tx_confirmation_mode(dpni,
+						CMD_PRI_LOW, priv->token,
+						ceetm_ch_idx,
+						DPNI_CONF_DISABLE);
+			}
+			if (ret) {
+				DPAA2_PMD_ERR("Error(%d) in tx conf setting",
+					ret);
+				return ret;
+			}
 		}
 	}
 
diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
index 1bb153cad7..f4feef3840 100644
--- a/drivers/net/dpaa2/mc/dpdmux.c
+++ b/drivers/net/dpaa2/mc/dpdmux.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -287,15 +287,19 @@ int dpdmux_reset(struct fsl_mc_io *mc_io,
  * @token:	Token of DPDMUX object
  * @skip_reset_flags:	By default all are 0.
  *			By setting 1 will deactivate the reset.
- *	The flags are:
- *			DPDMUX_SKIP_DEFAULT_INTERFACE  0x01
- *			DPDMUX_SKIP_UNICAST_RULES      0x02
- *			DPDMUX_SKIP_MULTICAST_RULES    0x04
+ * The flags are:
+ *			DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE  0x01
+ *			DPDMUX_SKIP_UNICAST_RULES             0x02
+ *			DPDMUX_SKIP_MULTICAST_RULES           0x04
+ *			DPDMUX_SKIP_RESET_DEFAULT_INTERFACE   0x08
  *
  * For example, by default, through DPDMUX_RESET the default
  * interface will be restored with the one from create.
- * By setting DPDMUX_SKIP_DEFAULT_INTERFACE flag,
- * through DPDMUX_RESET the default interface will not be modified.
+ * By setting the DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE flag,
+ * through DPDMUX_RESET the default interface will not be modified after reset.
+ * By setting the DPDMUX_SKIP_RESET_DEFAULT_INTERFACE flag,
+ * through DPDMUX_RESET the default interface will not be reset
+ * and will continue to be functional during the reset procedure.
  *
  * Return:	'0' on Success; Error code otherwise.
  */
@@ -327,10 +331,11 @@ int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
  * @token:	Token of DPDMUX object
  * @skip_reset_flags:	Get the reset flags.
  *
- *	The flags are:
- *			DPDMUX_SKIP_DEFAULT_INTERFACE  0x01
- *			DPDMUX_SKIP_UNICAST_RULES      0x02
- *			DPDMUX_SKIP_MULTICAST_RULES    0x04
+ * The flags are:
+ *			DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE  0x01
+ *			DPDMUX_SKIP_UNICAST_RULES             0x02
+ *			DPDMUX_SKIP_MULTICAST_RULES           0x04
+ *			DPDMUX_SKIP_RESET_DEFAULT_INTERFACE   0x08
  *
  * Return:	'0' on Success; Error code otherwise.
  */
@@ -1064,6 +1069,127 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpdmux_if_set_taildrop() - enable taildrop for egress interface queues.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id:	Interface Identifier
+ * @cfg: Taildrop configuration
+ */
+int dpdmux_if_set_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg)
+{
+	struct mc_command cmd = { 0 };
+	struct dpdmux_cmd_set_taildrop *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_SET_TAILDROP,
+			cmd_flags,
+			token);
+	cmd_params = (struct dpdmux_cmd_set_taildrop *)cmd.params;
+	cmd_params->if_id		= cpu_to_le16(if_id);
+	cmd_params->units		= cfg->units;
+	cmd_params->threshold	= cpu_to_le32(cfg->threshold);
+	dpdmux_set_field(cmd_params->oal_en, ENABLE, (!!cfg->enable));
+
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpdmux_if_get_taildrop() - get current taildrop configuration.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id:	Interface Identifier
+ * @cfg: Taildrop configuration
+ */
+int dpdmux_if_get_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg)
+{
+	struct mc_command cmd = {0};
+	struct dpdmux_cmd_get_taildrop *cmd_params;
+	struct dpdmux_rsp_get_taildrop *rsp_params;
+	int err = 0;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_GET_TAILDROP,
+			cmd_flags,
+			token);
+	cmd_params = (struct dpdmux_cmd_get_taildrop *)cmd.params;
+	cmd_params->if_id	= cpu_to_le16(if_id);
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpdmux_rsp_get_taildrop *)cmd.params;
+	cfg->threshold = le32_to_cpu(rsp_params->threshold);
+	cfg->units = rsp_params->units;
+	cfg->enable = dpdmux_get_field(rsp_params->oal_en, ENABLE);
+
+	return err;
+}
+
+/**
+ * dpdmux_dump_table() - Dump the content of table_type table into memory.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPSW object
+ * @table_type: The type of the table to dump
+ *	- DPDMUX_DMAT_TABLE
+ *	- DPDMUX_MISS_TABLE
+ *	- DPDMUX_PRUNE_TABLE
+ * @table_index: The index of the table to dump in case of more than one table
+ *	if table_type == DPDMUX_DMAT_TABLE
+ *		- DPDMUX_HMAP_UNICAST
+ *		- DPDMUX_HMAP_MULTICAST
+ *	else 0
+ * @iova_addr: The snapshot will be stored at this address as a header of struct dump_table_header
+ *             followed by an array of struct dump_table_entry
+ * @iova_size: Memory size allocated for iova_addr
+ * @num_entries: Number of entries written in iova_addr
+ *
+ * Return: Completion status. '0' on Success; Error code otherwise.
+ *
+ * The memory allocated at iova_addr must be zeroed before command execution.
+ * If the table content exceeds the memory size provided the dump will be truncated.
+ */
+int dpdmux_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	struct dpdmux_cmd_dump_table *cmd_params;
+	struct dpdmux_rsp_dump_table *rsp_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_DUMP_TABLE, cmd_flags, token);
+	cmd_params = (struct dpdmux_cmd_dump_table *)cmd.params;
+	cmd_params->table_type = cpu_to_le16(table_type);
+	cmd_params->table_index = cpu_to_le16(table_index);
+	cmd_params->iova_addr = cpu_to_le64(iova_addr);
+	cmd_params->iova_size = cpu_to_le32(iova_size);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpdmux_rsp_dump_table *)cmd.params;
+	*num_entries = le16_to_cpu(rsp_params->num_entries);
+
+	return 0;
+}
+
+
 /**
  * dpdmux_if_set_errors_behavior() - Set errors behavior
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
@@ -1100,3 +1226,60 @@ int dpdmux_if_set_errors_behavior(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 	/* send command to mc*/
 	return mc_send_command(mc_io, &cmd);
 }
+
+/* Sets up a Soft Parser Profile on this DPDMUX
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @sp_profile: Soft Parser Profile name (must be a valid name for a defined profile)
+ *			Maximum allowed length for this string is 8 characters
+ *			If this parameter is an empty string (all zeros)
+ *			then the Default SP Profile is set on this dpdmux
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ */
+int dpdmux_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type)
+{
+	struct dpdmux_cmd_set_sp_profile *cmd_params;
+	struct mc_command cmd = { 0 };
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SET_SP_PROFILE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpdmux_cmd_set_sp_profile *)cmd.params;
+	for (i = 0; i < MAX_SP_PROFILE_ID_SIZE && sp_profile[i]; i++)
+		cmd_params->sp_profile[i] = sp_profile[i];
+	cmd_params->type = type;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/* Enable/Disable Soft Parser on this DPDMUX interface
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id: interface id
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ * @en: 1 to enable or 0 to disable
+ */
+int dpdmux_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t if_id, uint8_t type, uint8_t en)
+{
+	struct dpdmux_cmd_sp_enable *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SP_ENABLE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpdmux_cmd_sp_enable *)cmd.params;
+	cmd_params->if_id = if_id;
+	cmd_params->type = type;
+	cmd_params->en = en;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
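
A sketch of how the two taildrop hooks pair up (illustrative only;
mc_io, token and the interface id are assumed to come from the usual
dpdmux_open() path). The byte-unit enumerator spelling follows the
header exactly as added by this patch:

#include <fsl_mc_sys.h>
#include <fsl_dpdmux.h>

/* Hypothetical helper: cap egress interface 'if_id' at 64 KiB. */
static int cap_if_backlog(struct fsl_mc_io *mc_io, uint16_t token,
			  uint16_t if_id)
{
	struct dpdmux_taildrop_cfg td = {
		.enable = 1,
		/* enumerator spelling as defined in fsl_dpdmux.h */
		.units = DPDMUX_TAIDLROP_DROP_UNIT_BYTE,
		.threshold = 64 * 1024,
	};

	return dpdmux_if_set_taildrop(mc_io, CMD_PRI_LOW, token, if_id, &td);
}
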
diff --git a/drivers/net/dpaa2/mc/dpkg.c b/drivers/net/dpaa2/mc/dpkg.c
index 4789976b7d..5db3d092c1 100644
--- a/drivers/net/dpaa2/mc/dpkg.c
+++ b/drivers/net/dpaa2/mc/dpkg.c
@@ -1,16 +1,18 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2021, 2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
 #include <fsl_mc_cmd.h>
 #include <fsl_dpkg.h>
+#include <string.h>
 
 /**
  * dpkg_prepare_key_cfg() - function prepare extract parameters
  * @cfg: defining a full Key Generation profile (rule)
- * @key_cfg_buf: Zeroed 256 bytes of memory before mapping it to DMA
+ * @key_cfg_buf: Zeroed memory large enough to hold a
+ *		"struct dpni_ext_set_rx_tc_dist" before mapping it to DMA
  *
  * This function has to be called before the following functions:
  *	- dpni_set_rx_tc_dist()
@@ -18,7 +20,8 @@
  *	- dpkg_prepare_key_cfg()
  */
 int
-dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg, uint8_t *key_cfg_buf)
+dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
+	void *key_cfg_buf)
 {
 	int i, j;
 	struct dpni_ext_set_rx_tc_dist *dpni_ext;
@@ -27,11 +30,12 @@ dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg, uint8_t *key_cfg_buf)
 	if (cfg->num_extracts > DPKG_MAX_NUM_OF_EXTRACTS)
 		return -EINVAL;
 
-	dpni_ext = (struct dpni_ext_set_rx_tc_dist *)key_cfg_buf;
+	dpni_ext = key_cfg_buf;
 	dpni_ext->num_extracts = cfg->num_extracts;
 
 	for (i = 0; i < cfg->num_extracts; i++) {
 		extr = &dpni_ext->extracts[i];
+		memset(extr, 0, sizeof(struct dpni_dist_extract));
 
 		switch (cfg->extracts[i].type) {
 		case DPKG_EXTRACT_FROM_HDR:
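
The revised contract is easiest to see from the caller's side (a
sketch, not taken from the patch): the buffer must be a zeroed
struct dpni_ext_set_rx_tc_dist rather than a bare 256-byte array:

#include <string.h>

#include <fsl_mc_sys.h>
#include <fsl_dpkg.h>

/* Illustrative wrapper: 'kg_cfg' is assumed to be a populated profile. */
static int prepare_dist_key(const struct dpkg_profile_cfg *kg_cfg,
			    struct dpni_ext_set_rx_tc_dist *ext_buf)
{
	/* Zero the full extension structure before it is DMA mapped. */
	memset(ext_buf, 0, sizeof(*ext_buf));

	return dpkg_prepare_key_cfg(kg_cfg, ext_buf);
}
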
diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c
index 4d97b98939..558f08dc69 100644
--- a/drivers/net/dpaa2/mc/dpni.c
+++ b/drivers/net/dpaa2/mc/dpni.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2022 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -852,6 +852,92 @@ int dpni_get_qdid(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpni_get_qdid_ex() - Extension for the function to get the Queuing Destination ID (QDID)
+ *			that should be used for enqueue operations.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @qtype:	Type of queue to receive QDID for
+ * @qdid:	Array of virtual QDID values that should be used as an argument
+ *			in all enqueue operations.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * This function must be used when dpni is created using multiple Tx channels to return one
+ * qdid for each channel.
+ */
+int dpni_get_qdid_ex(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  enum dpni_queue_type qtype,
+		  uint16_t *qdid)
+{
+	struct mc_command cmd = { 0 };
+	struct dpni_cmd_get_qdid *cmd_params;
+	struct dpni_rsp_get_qdid_ex *rsp_params;
+	int i;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QDID_EX,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_qdid *)cmd.params;
+	cmd_params->qtype = qtype;
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_qdid_ex *)cmd.params;
+	for (i = 0; i < DPNI_MAX_CHANNELS; i++)
+		qdid[i] = le16_to_cpu(rsp_params->qdid[i]);
+
+	return 0;
+}
+
+/**
+ * dpni_get_sp_info() - Get the AIOP storage profile IDs associated
+ *			with the DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @sp_info:	Returned AIOP storage-profile information
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * @warning	Only relevant for a DPNI that belongs to an AIOP container.
+ */
+int dpni_get_sp_info(struct fsl_mc_io *mc_io,
+		     uint32_t cmd_flags,
+		     uint16_t token,
+		     struct dpni_sp_info *sp_info)
+{
+	struct dpni_rsp_get_sp_info *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err, i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SP_INFO,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_sp_info *)cmd.params;
+	for (i = 0; i < DPNI_MAX_SP; i++)
+		sp_info->spids[i] = le16_to_cpu(rsp_params->spids[i]);
+
+	return 0;
+}
+
 /**
  * dpni_get_tx_data_offset() - Get the Tx data offset (from start of buffer)
  * @mc_io:	Pointer to MC portal's I/O object
@@ -1684,6 +1770,7 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
  * @mc_io:	Pointer to MC portal's I/O object
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
  * @mode:	Tx confirmation mode
  *
  * This function is useful only when 'DPNI_OPT_TX_CONF_DISABLED' is not
@@ -1701,6 +1788,7 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
 int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode mode)
 {
 	struct dpni_tx_confirmation_mode *cmd_params;
@@ -1711,6 +1799,7 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 					  cmd_flags,
 					  token);
 	cmd_params = (struct dpni_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
 	cmd_params->confirmation_mode = mode;
 
 	/* send command to mc*/
@@ -1722,6 +1811,7 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
  * @mc_io:	Pointer to MC portal's I/O object
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
  * @mode:	Tx confirmation mode
  *
  * Return:  '0' on Success; Error code otherwise.
@@ -1729,8 +1819,10 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode *mode)
 {
+	struct dpni_tx_confirmation_mode *cmd_params;
 	struct dpni_tx_confirmation_mode *rsp_params;
 	struct mc_command cmd = { 0 };
 	int err;
@@ -1738,6 +1830,8 @@ int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_CONFIRMATION_MODE,
 					cmd_flags,
 					token);
+	cmd_params = (struct dpni_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
 
 	err = mc_send_command(mc_io, &cmd);
 	if (err)
@@ -1749,6 +1843,78 @@ int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpni_set_queue_tx_confirmation_mode() - Set Tx confirmation mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
+ * @index:	queue index
+ * @mode:	Tx confirmation mode
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode mode)
+{
+	struct dpni_queue_tx_confirmation_mode *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_QUEUE_TX_CONFIRMATION_MODE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
+	cmd_params->index = index;
+	cmd_params->confirmation_mode = mode;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_queue_tx_confirmation_mode() - Get Tx confirmation mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
+ * @index:	queue index
+ * @mode:	Tx confirmation mode
+ *
+ * Return:  '0' on Success; Error code otherwise.
+ */
+int dpni_get_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode *mode)
+{
+	struct dpni_queue_tx_confirmation_mode *cmd_params;
+	struct dpni_queue_tx_confirmation_mode *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QUEUE_TX_CONFIRMATION_MODE,
+					cmd_flags,
+					token);
+	cmd_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
+	cmd_params->index = index;
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	*mode =  rsp_params->confirmation_mode;
+
+	return 0;
+}
+
 /**
  * dpni_set_qos_table() - Set QoS mapping table
  * @mc_io:	Pointer to MC portal's I/O object
@@ -2291,8 +2457,7 @@ int dpni_set_congestion_notification(struct fsl_mc_io *mc_io,
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
  * @qtype:	Type of queue - Rx, Tx and Tx confirm types are supported
- * @param:	Traffic class and channel. Bits[0-7] contain traaffic class,
- *		byte[8-15] contains channel id
+ * @tc_id:	Traffic class selection (0-7)
  * @cfg:	congestion notification configuration
  *
  * Return:	'0' on Success; error code otherwise.
@@ -3114,8 +3279,216 @@ int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 
 	cmd_params = (struct dpni_cmd_set_port_cfg *)cmd.params;
 	cmd_params->flags = cpu_to_le32(flags);
-	dpni_set_field(cmd_params->bit_params,	PORT_LOOPBACK_EN,
-			!!port_cfg->loopback_en);
+	dpni_set_field(cmd_params->bit_params, PORT_LOOPBACK_EN, !!port_cfg->loopback_en);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_single_step_cfg() - return current configuration for single step PTP
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @ptp_cfg: ptp single step configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ */
+int dpni_get_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_single_step_cfg *ptp_cfg)
+{
+	struct dpni_rsp_single_step_cfg *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SINGLE_STEP_CFG,
+						cmd_flags,
+						token);
+	/* send command to mc*/
+	err =  mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* read command response */
+	rsp_params = (struct dpni_rsp_single_step_cfg *)cmd.params;
+	ptp_cfg->offset = le16_to_cpu(rsp_params->offset);
+	ptp_cfg->en = dpni_get_field(rsp_params->flags, PTP_ENABLE);
+	ptp_cfg->ch_update = dpni_get_field(rsp_params->flags, PTP_CH_UPDATE);
+	ptp_cfg->peer_delay = le32_to_cpu(rsp_params->peer_delay);
+	ptp_cfg->ptp_onestep_reg_base =
+				  le32_to_cpu(rsp_params->ptp_onestep_reg_base);
+
+	return err;
+}
+
+/**
+ * dpni_get_port_cfg() - return configuration from physical port. The command has effect only if
+ *			dpni is connected to a mac object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @port_cfg: Configuration data
+ * The command can be called only when the dpni is connected to a dpmac object.
+ * If the dpni is unconnected or the endpoint is not a dpmac it will return an error.
+ */
+int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_port_cfg *port_cfg)
+{
+	struct dpni_rsp_get_port_cfg *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_PORT_CFG,
+			cmd_flags, token);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* read command response */
+	rsp_params = (struct dpni_rsp_get_port_cfg *)cmd.params;
+	port_cfg->loopback_en = dpni_get_field(rsp_params->bit_params, PORT_LOOPBACK_EN);
+
+	return 0;
+}
+
+/**
+ * dpni_set_single_step_cfg() - enable/disable and configure single step PTP
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @ptp_cfg: ptp single step configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * The function has effect only when the dpni object is connected to a dpmac object. If the
+ * dpni is not connected to a dpmac, the configuration is stored internally and applied
+ * when the connection is made.
+ */
+int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_single_step_cfg *ptp_cfg)
+{
+	struct dpni_cmd_single_step_cfg *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_SINGLE_STEP_CFG,
+						cmd_flags,
+						token);
+	cmd_params = (struct dpni_cmd_single_step_cfg *)cmd.params;
+	cmd_params->offset = cpu_to_le16(ptp_cfg->offset);
+	cmd_params->peer_delay = cpu_to_le32(ptp_cfg->peer_delay);
+	dpni_set_field(cmd_params->flags, PTP_ENABLE, !!ptp_cfg->en);
+	dpni_set_field(cmd_params->flags, PTP_CH_UPDATE, !!ptp_cfg->ch_update);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_dump_table() - Dump the content of table_type table into memory.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPSW object
+ * @table_type: The type of the table to dump
+ * @table_index: The index of the table to dump in case of more than one table
+ * @iova_addr: The snapshot will be stored at this address as a header of struct dump_table_header
+ *             followed by an array of struct dump_table_entry
+ * @iova_size: Memory size allocated for iova_addr
+ * @num_entries: Number of entries written in iova_addr
+ *
+ * Return: Completion status. '0' on Success; Error code otherwise.
+ *
+ * The memory allocated at iova_addr must be zeroed before command execution.
+ * If the table content exceeds the memory size provided the dump will be truncated.
+ */
+int dpni_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	struct dpni_cmd_dump_table *cmd_params;
+	struct dpni_rsp_dump_table *rsp_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_DUMP_TABLE, cmd_flags, token);
+	cmd_params = (struct dpni_cmd_dump_table *)cmd.params;
+	cmd_params->table_type = cpu_to_le16(table_type);
+	cmd_params->table_index = cpu_to_le16(table_index);
+	cmd_params->iova_addr = cpu_to_le64(iova_addr);
+	cmd_params->iova_size = cpu_to_le32(iova_size);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpni_rsp_dump_table *)cmd.params;
+	*num_entries = le16_to_cpu(rsp_params->num_entries);
+
+	return 0;
+}
+
+/* Sets up a Soft Parser Profile on this DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @sp_profile: Soft Parser Profile name (must be a valid name for a defined profile)
+ *			Maximum allowed length for this string is 8 characters
+ *			If this parameter is an empty string (all zeros)
+ *			then the Default SP Profile is set on this dpni
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ */
+int dpni_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type)
+{
+	struct dpni_cmd_set_sp_profile *cmd_params;
+	struct mc_command cmd = { 0 };
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_SP_PROFILE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpni_cmd_set_sp_profile *)cmd.params;
+	for (i = 0; i < MAX_SP_PROFILE_ID_SIZE && sp_profile[i]; i++)
+		cmd_params->sp_profile[i] = sp_profile[i];
+	cmd_params->type = type;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/* Enable/Disable Soft Parser on this DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ * @en: 1 to enable or 0 to disable
+ */
+int dpni_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t type, uint8_t en)
+{
+	struct dpni_cmd_sp_enable *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SP_ENABLE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpni_cmd_sp_enable *)cmd.params;
+	cmd_params->type = type;
+	cmd_params->en = en;
 
 	/* send command to MC */
 	return mc_send_command(mc_io, &cmd);
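
A sketch of the multi-channel QDID query (assumptions: dpni and token
come from dpni_open(), and DPNI_MAX_CHANNELS is visible from
fsl_dpni.h, matching the 16-entry response layout):

#include <stdio.h>

#include <fsl_mc_sys.h>
#include <fsl_dpni.h>

/* Hypothetical helper: print one Tx QDID per CEETM channel. */
static int print_tx_qdids(struct fsl_mc_io *dpni, uint16_t token)
{
	uint16_t qdid[DPNI_MAX_CHANNELS];
	int err, i;

	err = dpni_get_qdid_ex(dpni, CMD_PRI_LOW, token, DPNI_QUEUE_TX, qdid);
	if (err)
		return err;

	for (i = 0; i < DPNI_MAX_CHANNELS; i++)
		printf("channel %d: qdid %u\n", i, qdid[i]);
	return 0;
}
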
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 9bbac44219..97b09e59f9 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2022 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #ifndef __FSL_DPDMUX_H
@@ -154,6 +154,10 @@ int dpdmux_reset(struct fsl_mc_io *mc_io,
  *Setting 1 DPDMUX_RESET will not reset multicast rules
  */
 #define DPDMUX_SKIP_MULTICAST_RULES	0x04
+/**
+ *Setting 1 DPDMUX_RESET will not reset the default interface
+ */
+#define DPDMUX_SKIP_RESET_DEFAULT_INTERFACE	0x08
 
 int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
@@ -464,10 +468,50 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
 			   uint16_t *major_ver,
 			   uint16_t *minor_ver);
 
+enum dpdmux_congestion_unit {
+	DPDMUX_TAIDLROP_DROP_UNIT_BYTE = 0,
+	DPDMUX_TAILDROP_DROP_UNIT_FRAMES,
+	DPDMUX_TAILDROP_DROP_UNIT_BUFFERS
+};
+
 /**
- * Discard bit. This bit must be used together with other bits in
- * DPDMUX_ERROR_ACTION_CONTINUE to disable discarding of frames containing
- * errors
+ * struct dpdmux_taildrop_cfg - interface taildrop configuration
+ * @enable - enable (1) or disable (0) taildrop
+ * @units - taildrop units
+ * @threshold - taildrop threshold
+ */
+struct dpdmux_taildrop_cfg {
+	char enable;
+	enum dpdmux_congestion_unit units;
+	uint32_t threshold;
+};
+
+int dpdmux_if_set_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg);
+
+int dpdmux_if_get_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg);
+
+#define DPDMUX_MAX_KEY_SIZE 56
+
+enum dpdmux_table_type {
+	DPDMUX_DMAT_TABLE = 1,
+	DPDMUX_MISS_TABLE = 2,
+	DPDMUX_PRUNE_TABLE = 3,
+};
+
+int dpdmux_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries);
+
+/**
+ * Discard bit. This bit must be used together with other bits in DPDMUX_ERROR_ACTION_CONTINUE
+ * to disable discarding of frames containing errors
  */
 #define DPDMUX_ERROR_DISC		0x80000000
 /**
@@ -583,4 +627,19 @@ struct dpdmux_error_cfg {
 int dpdmux_if_set_errors_behavior(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 		uint16_t token, uint16_t if_id, struct dpdmux_error_cfg *cfg);
 
+/**
+ * SP Profile on Ingress DPDMUX
+ */
+#define DPDMUX_SP_PROFILE_INGRESS 0x1
+/**
+ * SP Profile on Egress DPDMUX
+ */
+#define DPDMUX_SP_PROFILE_EGRESS	0x2
+
+int dpdmux_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type);
+
+int dpdmux_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t if_id, uint8_t type, uint8_t en);
+
 #endif /* __FSL_DPDMUX_H */
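
How the two soft-parser calls are meant to compose (illustrative only;
the profile name "sp_test" is made up and the 8-byte array mirrors
MAX_SP_PROFILE_ID_SIZE):

#include <fsl_mc_sys.h>
#include <fsl_dpdmux.h>

/* Hypothetical setup: bind and enable an ingress SP profile on if 0. */
static int enable_ingress_sp(struct fsl_mc_io *mc_io, uint16_t token)
{
	uint8_t profile[8] = "sp_test";	/* hypothetical profile name */
	int err;

	err = dpdmux_set_sp_profile(mc_io, CMD_PRI_LOW, token, profile,
				    DPDMUX_SP_PROFILE_INGRESS);
	if (err)
		return err;

	return dpdmux_sp_enable(mc_io, CMD_PRI_LOW, token, 0 /* if_id */,
				DPDMUX_SP_PROFILE_INGRESS, 1);
}
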
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
index bf6b8a20d1..a94f1bf91a 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #ifndef _FSL_DPDMUX_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPDMUX Version */
 #define DPDMUX_VER_MAJOR		6
-#define DPDMUX_VER_MINOR		9
+#define DPDMUX_VER_MINOR		10
 
 #define DPDMUX_CMD_BASE_VERSION		1
 #define DPDMUX_CMD_VERSION_2		2
@@ -63,8 +63,17 @@
 
 #define DPDMUX_CMDID_SET_RESETABLE		DPDMUX_CMD(0x0ba)
 #define DPDMUX_CMDID_GET_RESETABLE		DPDMUX_CMD(0x0bb)
+
+#define DPDMUX_CMDID_IF_SET_TAILDROP		DPDMUX_CMD(0x0bc)
+#define DPDMUX_CMDID_IF_GET_TAILDROP		DPDMUX_CMD(0x0bd)
+
+#define DPDMUX_CMDID_DUMP_TABLE           DPDMUX_CMD(0x0be)
+
 #define DPDMUX_CMDID_SET_ERRORS_BEHAVIOR	DPDMUX_CMD(0x0bf)
 
+#define DPDMUX_CMDID_SET_SP_PROFILE			DPDMUX_CMD(0x0c0)
+#define DPDMUX_CMDID_SP_ENABLE				DPDMUX_CMD(0x0c1)
+
 #define DPDMUX_MASK(field)        \
 	GENMASK(DPDMUX_##field##_SHIFT + DPDMUX_##field##_SIZE - 1, \
 		DPDMUX_##field##_SHIFT)
@@ -241,7 +250,7 @@ struct dpdmux_cmd_remove_custom_cls_entry {
 };
 
 #define DPDMUX_SKIP_RESET_FLAGS_SHIFT    0
-#define DPDMUX_SKIP_RESET_FLAGS_SIZE     3
+#define DPDMUX_SKIP_RESET_FLAGS_SIZE     4
 
 struct dpdmux_cmd_set_skip_reset_flags {
 	uint8_t skip_reset_flags;
@@ -251,6 +260,61 @@ struct dpdmux_rsp_get_skip_reset_flags {
 	uint8_t skip_reset_flags;
 };
 
+struct dpdmux_cmd_set_taildrop {
+	uint32_t	pad1;
+	uint16_t	if_id;
+	uint16_t	pad2;
+	uint16_t	oal_en;
+	uint8_t		units;
+	uint8_t		pad3;
+	uint32_t	threshold;
+};
+
+struct dpdmux_cmd_get_taildrop {
+	uint32_t	pad1;
+	uint16_t	if_id;
+};
+
+struct dpdmux_rsp_get_taildrop {
+	uint16_t	pad1;
+	uint16_t	pad2;
+	uint16_t	if_id;
+	uint16_t	pad3;
+	uint16_t	oal_en;
+	uint8_t		units;
+	uint8_t		pad4;
+	uint32_t	threshold;
+};
+
+struct dpdmux_cmd_dump_table {
+	uint16_t table_type;
+	uint16_t table_index;
+	uint32_t pad0;
+	uint64_t iova_addr;
+	uint32_t iova_size;
+};
+
+struct dpdmux_rsp_dump_table {
+	uint16_t num_entries;
+};
+
+struct dpdmux_dump_table_header {
+	uint16_t table_type;
+	uint16_t table_num_entries;
+	uint16_t table_max_entries;
+	uint8_t default_action;
+	uint8_t match_type;
+	uint8_t reserved[24];
+};
+
+struct dpdmux_dump_table_entry {
+	uint8_t key[DPDMUX_MAX_KEY_SIZE];
+	uint8_t mask[DPDMUX_MAX_KEY_SIZE];
+	uint8_t key_action;
+	uint16_t result[3];
+	uint8_t reserved[21];
+};
+
 #define DPDMUX_ERROR_ACTION_SHIFT		0
 #define DPDMUX_ERROR_ACTION_SIZE		4
 
@@ -260,5 +324,18 @@ struct dpdmux_cmd_set_errors_behavior {
 	uint16_t if_id;
 };
 
+#define MAX_SP_PROFILE_ID_SIZE	8
+
+struct dpdmux_cmd_set_sp_profile {
+	uint8_t sp_profile[MAX_SP_PROFILE_ID_SIZE];
+	uint8_t type;
+};
+
+struct dpdmux_cmd_sp_enable {
+	uint16_t if_id;
+	uint8_t type;
+	uint8_t en;
+};
+
 #pragma pack(pop)
 #endif /* _FSL_DPDMUX_CMD_H */
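
The snapshot format these structures describe is a single header
followed by packed entries. A walking sketch (assumptions: buf points
at the zeroed, DMA-mapped region passed as iova_addr, num_entries came
back from dpdmux_dump_table(), and the structures above are in scope
via fsl_dpdmux_cmd.h):

#include <stdio.h>
#include <stdint.h>

/* Illustrative parse of a completed dump snapshot. */
static void walk_dump(const void *buf, uint16_t num_entries)
{
	const struct dpdmux_dump_table_header *hdr = buf;
	const struct dpdmux_dump_table_entry *entry =
		(const struct dpdmux_dump_table_entry *)(hdr + 1);
	uint16_t i;

	printf("table type %u: %u/%u entries\n", hdr->table_type,
	       hdr->table_num_entries, hdr->table_max_entries);
	for (i = 0; i < num_entries; i++)
		printf("entry %u: action %u\n", i, entry[i].key_action);
}
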
diff --git a/drivers/net/dpaa2/mc/fsl_dpkg.h b/drivers/net/dpaa2/mc/fsl_dpkg.h
index 70f2339ea5..834c765513 100644
--- a/drivers/net/dpaa2/mc/fsl_dpkg.h
+++ b/drivers/net/dpaa2/mc/fsl_dpkg.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  * Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPKG_H_
@@ -180,7 +180,8 @@ struct dpni_ext_set_rx_tc_dist {
 	struct dpni_dist_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
 };
 
-int dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
-			 uint8_t *key_cfg_buf);
+int
+dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
+	void *key_cfg_buf);
 
 #endif /* __FSL_DPKG_H_ */
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index ce84f4265e..3a5fcfa8a5 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPNI_H
@@ -116,6 +116,11 @@ struct fsl_mc_io;
  * Flow steering table is shared between all traffic classes
  */
 #define DPNI_OPT_SHARED_FS				0x001000
+/*
+ * Disable FQ frame data, context and annotations stashing.
+ * Stashing is enabled by default.
+ */
+#define DPNI_OPT_STASHING_DIS			0x002000
 /**
  * Software sequence maximum layout size
  */
@@ -147,6 +152,7 @@ int dpni_close(struct fsl_mc_io *mc_io,
  *		DPNI_OPT_HAS_KEY_MASKING
  *		DPNI_OPT_NO_FS
  *		DPNI_OPT_SINGLE_SENDER
+ *		DPNI_OPT_STASHING_DIS
  * @fs_entries: Number of entries in the flow steering table.
  *		This table is used to select the ingress queue for
  *		ingress traffic, targeting a GPP core or another.
@@ -335,6 +341,7 @@ int dpni_clear_irq_status(struct fsl_mc_io *mc_io,
  *		DPNI_OPT_SHARED_CONGESTION
  *		DPNI_OPT_HAS_KEY_MASKING
  *		DPNI_OPT_NO_FS
+ *		DPNI_OPT_STASHING_DIS
  * @num_queues: Number of Tx and Rx queues used for traffic distribution.
  * @num_rx_tcs: Number of RX traffic classes (TCs), reserved for the DPNI.
  * @num_tx_tcs: Number of TX traffic classes (TCs), reserved for the DPNI.
@@ -394,7 +401,7 @@ int dpni_get_attributes(struct fsl_mc_io *mc_io,
  * error queue. To be used in dpni_set_errors_behavior() only if error_action
  * parameter is set to DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE.
  */
-#define DPNI_ERROR_DISC		0x80000000
+#define DPNI_ERROR_DISC			0x80000000
 
 /**
  * Extract out of frame header error
@@ -576,6 +583,8 @@ enum dpni_offload {
 	DPNI_OFF_TX_L3_CSUM,
 	DPNI_OFF_TX_L4_CSUM,
 	DPNI_FLCTYPE_HASH,
+	DPNI_HEADER_STASHING,
+	DPNI_PAYLOAD_STASHING,
 };
 
 int dpni_set_offload(struct fsl_mc_io *mc_io,
@@ -596,6 +605,26 @@ int dpni_get_qdid(struct fsl_mc_io *mc_io,
 		  enum dpni_queue_type qtype,
 		  uint16_t *qdid);
 
+int dpni_get_qdid_ex(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  enum dpni_queue_type qtype,
+		  uint16_t *qdid);
+
+/**
+ * struct dpni_sp_info - Structure representing DPNI storage-profile information
+ * (relevant only for DPNI owned by AIOP)
+ * @spids: array of storage-profiles
+ */
+struct dpni_sp_info {
+	uint16_t spids[DPNI_MAX_SP];
+};
+
+int dpni_get_sp_info(struct fsl_mc_io *mc_io,
+		     uint32_t cmd_flags,
+		     uint16_t token,
+		     struct dpni_sp_info *sp_info);
+
 int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
 			    uint32_t cmd_flags,
 			    uint16_t token,
@@ -1443,11 +1472,25 @@ enum dpni_confirmation_mode {
 int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode mode);
 
 int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
+				  enum dpni_confirmation_mode *mode);
+
+int dpni_set_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode mode);
+
+int dpni_get_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
 				  enum dpni_confirmation_mode *mode);
 
 /**
@@ -1841,6 +1884,60 @@ void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
 				     const uint8_t *sw_sequence_layout_buf);
 
 /**
+ * When used as queue_idx in dpni_set_rx_dist_default_queue(), signals the dpni
+ * to drop all unclassified frames
+ */
+#define DPNI_FS_MISS_DROP		((uint16_t)-1)
+
+/**
+ * struct dpni_rx_dist_cfg - distribution configuration
+ * @dist_size:	distribution size; supported values: 1,2,3,4,6,7,8,
+ *		12,14,16,24,28,32,48,56,64,96,112,128,192,224,256,384,448,
+ *		512,768,896,1024
+ * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
+ *		the extractions to be used for the distribution key by calling
+ *		dpkg_prepare_key_cfg(); relevant only when enable != 0, otherwise it can be '0'
+ * @enable: enable/disable the distribution.
+ * @tc: TC id for which distribution is set
+ * @fs_miss_flow_id: when a packet misses all rules from the flow steering table and hash is
+ *		disabled it will be put into this queue id; use DPNI_FS_MISS_DROP to drop
+ *		frames. The value of this field is used only when flow steering distribution
+ *		is enabled and hash distribution is disabled
+ */
+struct dpni_rx_dist_cfg {
+	uint16_t dist_size;
+	uint64_t key_cfg_iova;
+	uint8_t enable;
+	uint8_t tc;
+	uint16_t fs_miss_flow_id;
+};
+
+int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		const struct dpni_rx_dist_cfg *cfg);
+
+int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		const struct dpni_rx_dist_cfg *cfg);
+
+int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t tpid);
+
+int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t tpid);
+
+/**
+ * struct dpni_custom_tpid_cfg - custom TPID configuration. Contains custom TPID values
+ *		used in current dpni object to detect 802.1q frames.
+ *	@tpid1: first tag. Not used if zero.
+ *	@tpid2: second tag. Not used if zero.
+ */
+struct dpni_custom_tpid_cfg {
+	uint16_t tpid1;
+	uint16_t tpid2;
+};
+
+int dpni_get_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_custom_tpid_cfg *tpid);
+/**
  * struct dpni_ptp_cfg - configure single step PTP (IEEE 1588)
  *	@en: enable single step PTP. When enabled the PTPv1 functionality will
  *		not work. If the field is zero, offset and ch_update parameters
@@ -1858,6 +1955,7 @@ struct dpni_single_step_cfg {
 	uint8_t ch_update;
 	uint16_t offset;
 	uint32_t peer_delay;
+	uint32_t ptp_onestep_reg_base;
 };
 
 int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
@@ -1885,61 +1983,35 @@ int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 		uint16_t token, struct dpni_port_cfg *port_cfg);
 
-/**
- * When used for queue_idx in function dpni_set_rx_dist_default_queue will
- * signal to dpni to drop all unclassified frames
- */
-#define DPNI_FS_MISS_DROP		((uint16_t)-1)
-
-/**
- * struct dpni_rx_dist_cfg - distribution configuration
- * @dist_size:	distribution size; supported values: 1,2,3,4,6,7,8,
- *		12,14,16,24,28,32,48,56,64,96,112,128,192,224,256,384,448,
- *		512,768,896,1024
- * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
- *		the extractions to be used for the distribution key by calling
- *		dpkg_prepare_key_cfg() relevant only when enable!=0 otherwise
- *		it can be '0'
- * @enable: enable/disable the distribution.
- * @tc: TC id for which distribution is set
- * @fs_miss_flow_id: when packet misses all rules from flow steering table and
- *		hash is disabled it will be put into this queue id; use
- *		DPNI_FS_MISS_DROP to drop frames. The value of this field is
- *		used only when flow steering distribution is enabled and hash
- *		distribution is disabled
- */
-struct dpni_rx_dist_cfg {
-	uint16_t dist_size;
-	uint64_t key_cfg_iova;
-	uint8_t enable;
-	uint8_t tc;
-	uint16_t fs_miss_flow_id;
+enum dpni_table_type {
+	DPNI_FS_TABLE = 1,
+	DPNI_MAC_TABLE = 2,
+	DPNI_QOS_TABLE = 3,
+	DPNI_VLAN_TABLE = 4,
 };
 
-int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, const struct dpni_rx_dist_cfg *cfg);
-
-int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, const struct dpni_rx_dist_cfg *cfg);
-
-int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, uint16_t tpid);
-
-int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, uint16_t tpid);
+int dpni_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries);
 
 /**
- * struct dpni_custom_tpid_cfg - custom TPID configuration. Contains custom TPID
- *	values used in current dpni object to detect 802.1q frames.
- *	@tpid1: first tag. Not used if zero.
- *	@tpid2: second tag. Not used if zero.
+ * SP Profile on Ingress DPNI
  */
-struct dpni_custom_tpid_cfg {
-	uint16_t tpid1;
-	uint16_t tpid2;
-};
+#define DPNI_SP_PROFILE_INGRESS 0x1
+/**
+ * SP Profile on Egress DPNI
+ */
+#define DPNI_SP_PROFILE_EGRESS	0x2
+
+int dpni_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type);
 
-int dpni_get_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, struct dpni_custom_tpid_cfg *tpid);
+int dpni_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t type, uint8_t en);
 
 #endif /* __FSL_DPNI_H */
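
A sketch of driving the relocated flow-steering distribution API
(values illustrative; key_iova is assumed to be the IOVA of a buffer
already filled by dpkg_prepare_key_cfg()):

#include <fsl_mc_sys.h>
#include <fsl_dpni.h>

/* Hypothetical setup: 8-queue FS distribution on TC 0, dropping misses. */
static int set_fs_dist(struct fsl_mc_io *dpni, uint16_t token,
		       uint64_t key_iova)
{
	struct dpni_rx_dist_cfg dist = {
		.dist_size = 8,
		.key_cfg_iova = key_iova,
		.enable = 1,
		.tc = 0,
		.fs_miss_flow_id = DPNI_FS_MISS_DROP,
	};

	return dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW, token, &dist);
}
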
diff --git a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
index 781f936add..1152182e34 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2022 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPNI_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPNI Version */
 #define DPNI_VER_MAJOR				8
-#define DPNI_VER_MINOR				2
+#define DPNI_VER_MINOR				4
 
 #define DPNI_CMD_BASE_VERSION			1
 #define DPNI_CMD_VERSION_2			2
@@ -108,8 +108,8 @@
 #define DPNI_CMDID_GET_EARLY_DROP		DPNI_CMD_V3(0x26A)
 #define DPNI_CMDID_GET_OFFLOAD			DPNI_CMD_V2(0x26B)
 #define DPNI_CMDID_SET_OFFLOAD			DPNI_CMD_V2(0x26C)
-#define DPNI_CMDID_SET_TX_CONFIRMATION_MODE	DPNI_CMD(0x266)
-#define DPNI_CMDID_GET_TX_CONFIRMATION_MODE	DPNI_CMD(0x26D)
+#define DPNI_CMDID_SET_TX_CONFIRMATION_MODE	DPNI_CMD_V2(0x266)
+#define DPNI_CMDID_GET_TX_CONFIRMATION_MODE	DPNI_CMD_V2(0x26D)
 #define DPNI_CMDID_SET_OPR			DPNI_CMD_V2(0x26e)
 #define DPNI_CMDID_GET_OPR			DPNI_CMD_V2(0x26f)
 #define DPNI_CMDID_LOAD_SW_SEQUENCE		DPNI_CMD(0x270)
@@ -121,7 +121,16 @@
 #define DPNI_CMDID_REMOVE_CUSTOM_TPID		DPNI_CMD(0x276)
 #define DPNI_CMDID_GET_CUSTOM_TPID		DPNI_CMD(0x277)
 #define DPNI_CMDID_GET_LINK_CFG			DPNI_CMD(0x278)
+#define DPNI_CMDID_SET_SINGLE_STEP_CFG			DPNI_CMD(0x279)
+#define DPNI_CMDID_GET_SINGLE_STEP_CFG		DPNI_CMD_V2(0x27a)
 #define DPNI_CMDID_SET_PORT_CFG			DPNI_CMD(0x27B)
+#define DPNI_CMDID_GET_PORT_CFG			DPNI_CMD(0x27C)
+#define DPNI_CMDID_DUMP_TABLE           DPNI_CMD(0x27D)
+#define DPNI_CMDID_SET_SP_PROFILE		DPNI_CMD(0x27E)
+#define DPNI_CMDID_GET_QDID_EX			DPNI_CMD(0x27F)
+#define DPNI_CMDID_SP_ENABLE		    DPNI_CMD(0x280)
+#define DPNI_CMDID_SET_QUEUE_TX_CONFIRMATION_MODE	DPNI_CMD(0x281)
+#define DPNI_CMDID_GET_QUEUE_TX_CONFIRMATION_MODE	DPNI_CMD(0x282)
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPNI_MASK(field)	\
@@ -329,6 +338,10 @@ struct dpni_rsp_get_qdid {
 	uint16_t qdid;
 };
 
+struct dpni_rsp_get_qdid_ex {
+	uint16_t qdid[16];
+};
+
 struct dpni_rsp_get_sp_info {
 	uint16_t spids[2];
 };
@@ -748,7 +761,16 @@ struct dpni_cmd_set_taildrop {
 };
 
 struct dpni_tx_confirmation_mode {
-	uint32_t pad;
+	uint8_t ceetm_ch_idx;
+	uint8_t pad1;
+	uint16_t pad2;
+	uint8_t confirmation_mode;
+};
+
+struct dpni_queue_tx_confirmation_mode {
+	uint8_t ceetm_ch_idx;
+	uint8_t index;
+	uint16_t pad;
 	uint8_t confirmation_mode;
 };
 
@@ -894,6 +916,42 @@ struct dpni_sw_sequence_layout_entry {
 	uint16_t pad;
 };
 
+#define DPNI_RX_FS_DIST_ENABLE_SHIFT	0
+#define DPNI_RX_FS_DIST_ENABLE_SIZE		1
+struct dpni_cmd_set_rx_fs_dist {
+	uint16_t	dist_size;
+	uint8_t		enable;
+	uint8_t		tc;
+	uint16_t	miss_flow_id;
+	uint16_t	pad1;
+	uint64_t	key_cfg_iova;
+};
+
+#define DPNI_RX_HASH_DIST_ENABLE_SHIFT	0
+#define DPNI_RX_HASH_DIST_ENABLE_SIZE		1
+struct dpni_cmd_set_rx_hash_dist {
+	uint16_t	dist_size;
+	uint8_t		enable;
+	uint8_t		tc_id;
+	uint32_t	pad;
+	uint64_t	key_cfg_iova;
+};
+
+struct dpni_cmd_add_custom_tpid {
+	uint16_t	pad;
+	uint16_t	tpid;
+};
+
+struct dpni_cmd_remove_custom_tpid {
+	uint16_t	pad;
+	uint16_t	tpid;
+};
+
+struct dpni_rsp_get_custom_tpid {
+	uint16_t	tpid1;
+	uint16_t	tpid2;
+};
+
 #define DPNI_PTP_ENABLE_SHIFT			0
 #define DPNI_PTP_ENABLE_SIZE			1
 #define DPNI_PTP_CH_UPDATE_SHIFT		1
@@ -925,40 +983,45 @@ struct dpni_rsp_get_port_cfg {
 	uint32_t	bit_params;
 };
 
-#define DPNI_RX_FS_DIST_ENABLE_SHIFT	0
-#define DPNI_RX_FS_DIST_ENABLE_SIZE		1
-struct dpni_cmd_set_rx_fs_dist {
-	uint16_t	dist_size;
-	uint8_t		enable;
-	uint8_t		tc;
-	uint16_t	miss_flow_id;
-	uint16_t	pad1;
-	uint64_t	key_cfg_iova;
+struct dpni_cmd_dump_table {
+	uint16_t table_type;
+	uint16_t table_index;
+	uint32_t pad0;
+	uint64_t iova_addr;
+	uint32_t iova_size;
 };
 
-#define DPNI_RX_HASH_DIST_ENABLE_SHIFT	0
-#define DPNI_RX_HASH_DIST_ENABLE_SIZE		1
-struct dpni_cmd_set_rx_hash_dist {
-	uint16_t	dist_size;
-	uint8_t		enable;
-	uint8_t		tc_id;
-	uint32_t	pad;
-	uint64_t	key_cfg_iova;
+struct dpni_rsp_dump_table {
+	uint16_t num_entries;
 };
 
-struct dpni_cmd_add_custom_tpid {
-	uint16_t	pad;
-	uint16_t	tpid;
+struct dump_table_header {
+	uint16_t table_type;
+	uint16_t table_num_entries;
+	uint16_t table_max_entries;
+	uint8_t default_action;
+	uint8_t match_type;
+	uint8_t reserved[24];
 };
 
-struct dpni_cmd_remove_custom_tpid {
-	uint16_t	pad;
-	uint16_t	tpid;
+struct dump_table_entry {
+	uint8_t key[DPNI_MAX_KEY_SIZE];
+	uint8_t mask[DPNI_MAX_KEY_SIZE];
+	uint8_t key_action;
+	uint16_t result[3];
+	uint8_t reserved[21];
 };
 
-struct dpni_rsp_get_custom_tpid {
-	uint16_t	tpid1;
-	uint16_t	tpid2;
+#define MAX_SP_PROFILE_ID_SIZE	8
+
+struct dpni_cmd_set_sp_profile {
+	uint8_t sp_profile[MAX_SP_PROFILE_ID_SIZE];
+	uint8_t type;
+};
+
+struct dpni_cmd_sp_enable {
+	uint8_t type;
+	uint8_t en;
 };
 
 #pragma pack(pop)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 09/42] net/dpaa2: support link state for eth interfaces
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (7 preceding siblings ...)
  2024-10-22 19:12         ` [v4 08/42] bus/fslmc: upgrade with MC version 10.37 vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 10/42] net/dpaa2: update DPNI link status method vanshika.shukla
                           ` (33 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

This patch adds support to update the duplex value along with
the link status and link speed after setting the link UP.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 439b8f97a4..b120e2c815 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1988,7 +1988,7 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	if (ret) {
 		/* Unable to obtain dpni status; Not continuing */
 		DPAA2_PMD_ERR("Interface Link UP failed (%d)", ret);
-		return -EINVAL;
+		return ret;
 	}
 
 	/* Enable link if not already enabled */
@@ -1996,13 +1996,13 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 		ret = dpni_enable(dpni, CMD_PRI_LOW, priv->token);
 		if (ret) {
 			DPAA2_PMD_ERR("Interface Link UP failed (%d)", ret);
-			return -EINVAL;
+			return ret;
 		}
 	}
 	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
 	if (ret < 0) {
 		DPAA2_PMD_DEBUG("Unable to get link state (%d)", ret);
-		return -1;
+		return ret;
 	}
 
 	/* changing tx burst function to start enqueues */
@@ -2010,10 +2010,15 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	dev->data->dev_link.link_status = state.up;
 	dev->data->dev_link.link_speed = state.rate;
 
+	if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
+		dev->data->dev_link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	else
+		dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+
 	if (state.up)
-		DPAA2_PMD_INFO("Port %d Link is Up", dev->data->port_id);
+		DPAA2_PMD_DEBUG("Port %d Link is Up", dev->data->port_id);
 	else
-		DPAA2_PMD_INFO("Port %d Link is Down", dev->data->port_id);
+		DPAA2_PMD_DEBUG("Port %d Link is Down", dev->data->port_id);
 	return ret;
 }
 
-- 
2.25.1
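
For illustration (not part of the patch), a minimal sketch of how an
application can observe the duplex value the PMD now reports; port_id
is assumed to be a configured and started DPAA2 port:

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_link(uint16_t port_id)
{
	struct rte_eth_link link;

	/* Bring the link up; the PMD fills status, speed and duplex */
	if (rte_eth_dev_set_link_up(port_id) < 0)
		return;

	if (rte_eth_link_get_nowait(port_id, &link) == 0) {
		printf("port %u: %s, %u Mbps, %s duplex\n", port_id,
		       link.link_status ? "up" : "down",
		       link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
		       "full" : "half");
	}
}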


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 10/42] net/dpaa2: update DPNI link status method
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (8 preceding siblings ...)
  2024-10-22 19:12         ` [v4 09/42] net/dpaa2: support link state for eth interfaces vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 11/42] net/dpaa2: add new PMD API to check dpaa platform version vanshika.shukla
                           ` (32 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Brick Yang, Rohit Raj

From: Brick Yang <brick.yang@nxp.com>

If an SFP module is not connected to the port and flow control is
configured using the flow control API, the link will show DOWN even
after connecting the SFP module and fiber cable.

This issue cannot be reproduced if only the SFP module is connected
and the fiber cable is disconnected before configuring flow control,
even though the link is down in this case too.

This patch improves the behavior by reading the configuration from
the dpni_get_link_cfg API, which provides static configuration data,
instead of the dpni_get_link_state API.

Signed-off-by: Brick Yang <brick.yang@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index b120e2c815..0adebc0bf1 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2087,7 +2087,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	int ret = -EINVAL;
 	struct dpaa2_dev_priv *priv;
 	struct fsl_mc_io *dpni;
-	struct dpni_link_state state = {0};
+	struct dpni_link_cfg cfg = {0};
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -2099,14 +2099,14 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
+	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("error: dpni_get_link_state %d", ret);
+		DPAA2_PMD_ERR("error: dpni_get_link_cfg %d", ret);
 		return ret;
 	}
 
 	memset(fc_conf, 0, sizeof(struct rte_eth_fc_conf));
-	if (state.options & DPNI_LINK_OPT_PAUSE) {
+	if (cfg.options & DPNI_LINK_OPT_PAUSE) {
 		/* DPNI_LINK_OPT_PAUSE set
 		 *  if ASYM_PAUSE not set,
 		 *	RX Side flow control (handle received Pause frame)
@@ -2115,7 +2115,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	RX Side flow control (handle received Pause frame)
 		 *	No TX side flow control (send Pause frame disabled)
 		 */
-		if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
+		if (!(cfg.options & DPNI_LINK_OPT_ASYM_PAUSE))
 			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
 			fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
@@ -2127,7 +2127,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *  if ASYM_PAUSE not set,
 		 *	Flow control disabled
 		 */
-		if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
+		if (cfg.options & DPNI_LINK_OPT_ASYM_PAUSE)
 			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
 			fc_conf->mode = RTE_ETH_FC_NONE;
@@ -2142,7 +2142,6 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	int ret = -EINVAL;
 	struct dpaa2_dev_priv *priv;
 	struct fsl_mc_io *dpni;
-	struct dpni_link_state state = {0};
 	struct dpni_link_cfg cfg = {0};
 
 	PMD_INIT_FUNC_TRACE();
@@ -2155,23 +2154,19 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	/* It is necessary to obtain the current state before setting fc_conf
+	/* It is necessary to obtain the current cfg before setting fc_conf
 	 * as MC would return error in case rate, autoneg or duplex values are
 	 * different.
 	 */
-	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
+	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("Unable to get link state (err=%d)", ret);
+		DPAA2_PMD_ERR("Unable to get link cfg (err=%d)", ret);
 		return -1;
 	}
 
 	/* Disable link before setting configuration */
 	dpaa2_dev_set_link_down(dev);
 
-	/* Based on fc_conf, update cfg */
-	cfg.rate = state.rate;
-	cfg.options = state.options;
-
 	/* update cfg with fc_conf */
 	switch (fc_conf->mode) {
 	case RTE_ETH_FC_FULL:
-- 
2.25.1
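
For context (illustrative only), a sketch of the ethdev-level calls
that exercise this path; the PMD's flow_ctrl_get/set ops shown above
run underneath:

#include <rte_ethdev.h>

static int
enable_rx_pause(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;
	int ret;

	/* Read back the current (static) configuration first */
	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret < 0)
		return ret;

	/* Honor received pause frames, do not send any */
	fc_conf.mode = RTE_ETH_FC_RX_PAUSE;
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}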


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 11/42] net/dpaa2: add new PMD API to check dpaa platform version
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (9 preceding siblings ...)
  2024-10-22 19:12         ` [v4 10/42] net/dpaa2: update DPNI link status method vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 12/42] bus/fslmc: improve BMAN buffer acquire vanshika.shukla
                           ` (31 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

This patch adds support to check the DPAA platform type from
applications.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 16 +++++++++++++---
 drivers/net/dpaa2/dpaa2_flow.c    |  5 ++---
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  4 ++++
 drivers/net/dpaa2/version.map     |  1 +
 4 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 0adebc0bf1..bd6a578e30 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2161,7 +2161,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
 		DPAA2_PMD_ERR("Unable to get link cfg (err=%d)", ret);
-		return -1;
+		return ret;
 	}
 
 	/* Disable link before setting configuration */
@@ -2203,7 +2203,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	default:
 		DPAA2_PMD_ERR("Incorrect Flow control flag (%d)",
 			      fc_conf->mode);
-		return -1;
+		return -EINVAL;
 	}
 
 	ret = dpni_set_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
@@ -2885,8 +2885,18 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
-int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
+int
+rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 {
+	struct rte_eth_dev *dev;
+
+	if (eth_id >= RTE_MAX_ETHPORTS)
+		return false;
+
+	dev = &rte_eth_devices[eth_id];
+	if (!dev->device)
+		return false;
+
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 54b17e97c0..77367aa392 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -3296,14 +3296,13 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 	if (idx >= 0) {
 		if (!rte_eth_dev_is_valid_port(idx))
 			return NULL;
+		if (!rte_pmd_dpaa2_dev_is_dpaa2(idx))
+			return NULL;
 		dest_dev = &rte_eth_devices[idx];
 	} else {
 		dest_dev = priv->eth_dev;
 	}
 
-	if (!dpaa2_dev_is_dpaa2(dest_dev))
-		return NULL;
-
 	return dest_dev;
 }
 
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index bebebcacdc..fc52a9218e 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -127,6 +127,10 @@ __rte_experimental
 uint32_t
 rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
 
+__rte_experimental
+int
+rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id);
+
 #if defined(RTE_LIBRTE_IEEE1588)
 __rte_experimental
 int
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 7323fc8869..233c6e6b2c 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -17,6 +17,7 @@ EXPERIMENTAL {
 	# added in 21.11
 	rte_pmd_dpaa2_get_tlu_hash;
 	# added in 24.11
+	rte_pmd_dpaa2_dev_is_dpaa2;
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
 	rte_pmd_dpaa2_mux_dump_counter;
-- 
2.25.1
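
A small usage sketch (illustrative only) of the new API, restricting
DPAA2-specific handling to matching ports:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_pmd_dpaa2.h>

static void
list_dpaa2_ports(void)
{
	uint16_t port_id;

	RTE_ETH_FOREACH_DEV(port_id) {
		/* Only DPAA2 ports may use the dpaa2 PMD-specific APIs */
		if (rte_pmd_dpaa2_dev_is_dpaa2(port_id))
			printf("port %u is a DPAA2 port\n", port_id);
	}
}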


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 12/42] bus/fslmc: improve BMAN buffer acquire
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (10 preceding siblings ...)
  2024-10-22 19:12         ` [v4 11/42] net/dpaa2: add new PMD API to check dpaa platform version vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 13/42] bus/fslmc: get MC VFIO group FD directly vanshika.shukla
                           ` (30 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Ignore the reserved bits of the BMan acquire response number: only
the low three bits encode the count of buffers actually acquired.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 1f24cdce7e..3fdca9761d 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2020,2023-2024 NXP
  *
  */
 
@@ -42,6 +42,8 @@
 /* opaque token for static dequeues */
 #define QMAN_SDQCR_TOKEN    0xbb
 
+#define BMAN_VALID_RSLT_NUM_MASK 0x7
+
 enum qbman_sdqcr_dct {
 	qbman_sdqcr_dct_null = 0,
 	qbman_sdqcr_dct_prio_ics,
@@ -2628,7 +2630,7 @@ struct qbman_acquire_rslt {
 	uint16_t reserved;
 	uint8_t num;
 	uint8_t reserved2[3];
-	uint64_t buf[7];
+	uint64_t buf[BMAN_VALID_RSLT_NUM_MASK];
 };
 
 static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
@@ -2636,8 +2638,9 @@ static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
 {
 	struct qbman_acquire_desc *p;
 	struct qbman_acquire_rslt *r;
+	int num;
 
-	if (!num_buffers || (num_buffers > 7))
+	if (!num_buffers || (num_buffers > BMAN_VALID_RSLT_NUM_MASK))
 		return -EINVAL;
 
 	/* Start the management command */
@@ -2668,12 +2671,13 @@ static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
 		return -EIO;
 	}
 
-	QBMAN_BUG_ON(r->num > num_buffers);
+	num = r->num & BMAN_VALID_RSLT_NUM_MASK;
+	QBMAN_BUG_ON(num > num_buffers);
 
 	/* Copy the acquired buffers to the caller's array */
-	u64_from_le32_copy(buffers, &r->buf[0], r->num);
+	u64_from_le32_copy(buffers, &r->buf[0], num);
 
-	return (int)r->num;
+	return num;
 }
 
 static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
@@ -2681,8 +2685,9 @@ static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
 {
 	struct qbman_acquire_desc *p;
 	struct qbman_acquire_rslt *r;
+	int num;
 
-	if (!num_buffers || (num_buffers > 7))
+	if (!num_buffers || (num_buffers > BMAN_VALID_RSLT_NUM_MASK))
 		return -EINVAL;
 
 	/* Start the management command */
@@ -2713,12 +2718,13 @@ static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
 		return -EIO;
 	}
 
-	QBMAN_BUG_ON(r->num > num_buffers);
+	num = r->num & BMAN_VALID_RSLT_NUM_MASK;
+	QBMAN_BUG_ON(num > num_buffers);
 
 	/* Copy the acquired buffers to the caller's array */
-	u64_from_le32_copy(buffers, &r->buf[0], r->num);
+	u64_from_le32_copy(buffers, &r->buf[0], num);
 
-	return (int)r->num;
+	return num;
 }
 
 int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
-- 
2.25.1
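
To illustrate the fix (the response value below is hypothetical):
masking with BMAN_VALID_RSLT_NUM_MASK keeps only the low three bits,
so reserved bits set by hardware can no longer inflate the count:

#include <stdint.h>
#include <assert.h>

#define BMAN_VALID_RSLT_NUM_MASK 0x7

int main(void)
{
	uint8_t raw = 0xa3;	/* hypothetical response, reserved bits set */
	int num = raw & BMAN_VALID_RSLT_NUM_MASK;

	assert(num == 3);	/* only bits [2:0] encode the count */
	return 0;
}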


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 13/42] bus/fslmc: get MC VFIO group FD directly
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (11 preceding siblings ...)
  2024-10-22 19:12         ` [v4 12/42] bus/fslmc: improve BMAN buffer acquire vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 14/42] bus/fslmc: enhance MC VFIO multiprocess support vanshika.shukla
                           ` (29 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Get the VFIO group fd directly from the file system instead of
from the RTE API, to avoid conflicting with PCIe VFIO.
FSL MC VFIO should have its own logic which does NOT depend on
RTE VFIO.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 88 ++++++++++++++++++++++++++--------
 drivers/bus/fslmc/meson.build  |  3 +-
 2 files changed, 71 insertions(+), 20 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index ecca593c34..54398c4643 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2021 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -30,6 +30,7 @@
 #include <rte_kvargs.h>
 #include <dev_driver.h>
 #include <rte_eal_memconfig.h>
+#include <eal_vfio.h>
 
 #include "private.h"
 #include "fslmc_vfio.h"
@@ -440,6 +441,59 @@ int rte_fslmc_vfio_dmamap(void)
 	return 0;
 }
 
+static int
+fslmc_vfio_open_group_fd(int iommu_group_num)
+{
+	int vfio_group_fd;
+	char filename[PATH_MAX];
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
+
+	/* if primary, try to open the group */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		/* try regular group format */
+		snprintf(filename, sizeof(filename),
+			VFIO_GROUP_FMT, iommu_group_num);
+		vfio_group_fd = open(filename, O_RDWR);
+		if (vfio_group_fd <= 0) {
+			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
+				filename, vfio_group_fd);
+		}
+
+		return vfio_group_fd;
+	}
+	/* if we're in a secondary process, request group fd from the primary
+	 * process via mp channel.
+	 */
+	p->req = SOCKET_REQ_GROUP;
+	p->group_num = iommu_group_num;
+	rte_strscpy(mp_req.name, EAL_VFIO_MP, sizeof(mp_req.name));
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_group_fd = -1;
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+	    mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		p = (struct vfio_mp_param *)mp_rep->param;
+		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+			vfio_group_fd = mp_rep->fds[0];
+		} else if (p->result == SOCKET_NO_FD) {
+			DPAA2_BUS_ERR("Bad VFIO group fd");
+			vfio_group_fd = 0;
+		}
+	}
+
+	free(mp_reply.msgs);
+	if (vfio_group_fd < 0) {
+		DPAA2_BUS_ERR("Cannot request group fd(%d)",
+			vfio_group_fd);
+	}
+	return vfio_group_fd;
+}
+
 static int
 fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		int *vfio_dev_fd, struct vfio_device_info *device_info)
@@ -455,7 +509,7 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		return -1;
 
 	/* get the actual group fd */
-	vfio_group_fd = rte_vfio_get_group_fd(iommu_group_no);
+	vfio_group_fd = vfio_group.fd;
 	if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT)
 		return -1;
 
@@ -891,6 +945,11 @@ fslmc_vfio_close_group(void)
 		}
 	}
 
+	if (vfio_group.fd > 0) {
+		close(vfio_group.fd);
+		vfio_group.fd = 0;
+	}
+
 	return 0;
 }
 
@@ -1081,7 +1140,6 @@ fslmc_vfio_setup_group(void)
 {
 	int groupid;
 	int ret;
-	int vfio_container_fd;
 	struct vfio_group_status status = { .argsz = sizeof(status) };
 
 	/* if already done once */
@@ -1100,16 +1158,9 @@ fslmc_vfio_setup_group(void)
 		return 0;
 	}
 
-	ret = rte_vfio_container_create();
-	if (ret < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return ret;
-	}
-	vfio_container_fd = ret;
-
 	/* Get the actual group fd */
-	ret = rte_vfio_container_group_bind(vfio_container_fd, groupid);
-	if (ret < 0)
+	ret = fslmc_vfio_open_group_fd(groupid);
+	if (ret <= 0)
 		return ret;
 	vfio_group.fd = ret;
 
@@ -1118,14 +1169,14 @@ fslmc_vfio_setup_group(void)
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO error getting group status");
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return ret;
 	}
 
 	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
 		DPAA2_BUS_ERR("VFIO group not viable");
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return -EPERM;
 	}
 	/* Since Group is VIABLE, Store the groupid */
@@ -1136,11 +1187,10 @@ fslmc_vfio_setup_group(void)
 		/* Now connect this IOMMU group to given container */
 		ret = vfio_connect_container();
 		if (ret) {
-			DPAA2_BUS_ERR(
-				"Error connecting container with groupid %d",
-				groupid);
+			DPAA2_BUS_ERR("vfio group(%d) connect failed(%d)",
+				groupid, ret);
 			close(vfio_group.fd);
-			rte_vfio_clear_group(vfio_group.fd);
+			vfio_group.fd = 0;
 			return ret;
 		}
 	}
@@ -1151,7 +1201,7 @@ fslmc_vfio_setup_group(void)
 		DPAA2_BUS_ERR("Error getting device %s fd from group %d",
 			      fslmc_container, vfio_group.groupid);
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return ret;
 	}
 	container_device_fd = ret;
diff --git a/drivers/bus/fslmc/meson.build b/drivers/bus/fslmc/meson.build
index 162ca286fe..70098ad778 100644
--- a/drivers/bus/fslmc/meson.build
+++ b/drivers/bus/fslmc/meson.build
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2018,2021 NXP
+# Copyright 2018-2023 NXP
 
 if not is_linux
     build = false
@@ -27,3 +27,4 @@ sources = files(
 )
 
 includes += include_directories('mc', 'qbman/include', 'portal')
+includes += include_directories('../../../lib/eal/linux')
-- 
2.25.1
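
For reference, a minimal sketch of the direct open path this patch
takes in the primary process (VFIO_GROUP_FMT in EAL's eal_vfio.h is
the "/dev/vfio/%u" node format used below):

#include <stdio.h>
#include <fcntl.h>
#include <limits.h>

static int
open_vfio_group(int iommu_group_num)
{
	char path[PATH_MAX];
	int fd;

	/* Open the group node directly, bypassing the RTE VFIO layer */
	snprintf(path, sizeof(path), "/dev/vfio/%d", iommu_group_num);
	fd = open(path, O_RDWR);
	if (fd < 0)
		perror(path);
	return fd;
}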


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 14/42] bus/fslmc: enhance MC VFIO multiprocess support
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (12 preceding siblings ...)
  2024-10-22 19:12         ` [v4 13/42] bus/fslmc: get MC VFIO group FD directly vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 15/42] bus/fslmc: free VFIO group FD in case of add group failure vanshika.shukla
                           ` (28 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

MC VFIO is not registered with RTE VFIO. The primary process registers
an MC VFIO mp action for secondary processes to request; the VFIO
container/group handlers are provided via CMSG (fd passing over the
mp channel). The primary process is responsible for connecting the MC
VFIO group to the container.

In addition, the MC VFIO code is refactored around the container/group
logic. In general, a VFIO container can hold multiple groups per
process. Currently only a single MC group (dprc.x) per process is
supported, but the logic to connect multiple MC groups to a container
is already in place.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/fslmc_bus.c  |  14 +-
 drivers/bus/fslmc/fslmc_vfio.c | 997 ++++++++++++++++++++++-----------
 drivers/bus/fslmc/fslmc_vfio.h |  35 +-
 drivers/bus/fslmc/version.map  |   1 +
 4 files changed, 693 insertions(+), 354 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 97473c278f..a966df1598 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -318,6 +318,7 @@ rte_fslmc_scan(void)
 	struct dirent *entry;
 	static int process_once;
 	int groupid;
+	char *group_name;
 
 	if (process_once) {
 		DPAA2_BUS_DEBUG("Fslmc bus already scanned. Not rescanning");
@@ -325,12 +326,19 @@ rte_fslmc_scan(void)
 	}
 	process_once = 1;
 
-	ret = fslmc_get_container_group(&groupid);
+	/* Now we only support single group per process.*/
+	group_name = getenv("DPRC");
+	if (!group_name) {
+		DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
+		return -EINVAL;
+	}
+
+	ret = fslmc_get_container_group(group_name, &groupid);
 	if (ret != 0)
 		goto scan_fail;
 
 	/* Scan devices on the group */
-	sprintf(fslmc_dirpath, "%s/%s", SYSFS_FSL_MC_DEVICES, fslmc_container);
+	sprintf(fslmc_dirpath, "%s/%s", SYSFS_FSL_MC_DEVICES, group_name);
 	dir = opendir(fslmc_dirpath);
 	if (!dir) {
 		DPAA2_BUS_ERR("Unable to open VFIO group directory");
@@ -338,7 +346,7 @@ rte_fslmc_scan(void)
 	}
 
 	/* Scan the DPRC container object */
-	ret = scan_one_fslmc_device(fslmc_container);
+	ret = scan_one_fslmc_device(group_name);
 	if (ret != 0) {
 		/* Error in parsing directory - exit gracefully */
 		goto scan_fail_cleanup;
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 54398c4643..63e84cb4d8 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2023 NXP
+ *   Copyright 2016-2024 NXP
  *
  */
 
@@ -40,14 +40,14 @@
 #include "portal/dpaa2_hw_pvt.h"
 #include "portal/dpaa2_hw_dpio.h"
 
-#define FSLMC_CONTAINER_MAX_LEN 8 /**< Of the format dprc.XX */
+#define FSLMC_VFIO_MP "fslmc_vfio_mp_sync"
 
-/* Number of VFIO containers & groups with in */
-static struct fslmc_vfio_group vfio_group;
-static struct fslmc_vfio_container vfio_container;
-static int container_device_fd;
-char *fslmc_container;
-static int fslmc_iommu_type;
+/* Container is composed by multiple groups, however,
+ * now each process only supports single group with in container.
+ */
+static struct fslmc_vfio_container s_vfio_container;
+/* Currently we only support single group/process. */
+const char *fslmc_group; /* dprc.x*/
 static uint32_t *msi_intr_vaddr;
 void *(*rte_mcp_ptr_list);
 
@@ -72,108 +72,545 @@ rte_fslmc_object_register(struct rte_dpaa2_object *object)
 	TAILQ_INSERT_TAIL(&dpaa2_obj_list, object, next);
 }
 
-int
-fslmc_get_container_group(int *groupid)
+static const char *
+fslmc_vfio_get_group_name(void)
 {
-	int ret;
-	char *container;
+	return fslmc_group;
+}
+
+static void
+fslmc_vfio_set_group_name(const char *group_name)
+{
+	fslmc_group = group_name;
+}
+
+static int
+fslmc_vfio_add_group(int vfio_group_fd,
+	int iommu_group_num, const char *group_name)
+{
+	struct fslmc_vfio_group *group;
+
+	group = rte_zmalloc(NULL, sizeof(struct fslmc_vfio_group), 0);
+	if (!group)
+		return -ENOMEM;
+	group->fd = vfio_group_fd;
+	group->groupid = iommu_group_num;
+	rte_strscpy(group->group_name, group_name, sizeof(group->group_name));
+	if (rte_vfio_noiommu_is_enabled() > 0)
+		group->iommu_type = RTE_VFIO_NOIOMMU;
+	else
+		group->iommu_type = VFIO_TYPE1_IOMMU;
+	LIST_INSERT_HEAD(&s_vfio_container.groups, group, next);
 
-	if (!fslmc_container) {
-		container = getenv("DPRC");
-		if (container == NULL) {
-			DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
-			return -EINVAL;
+	return 0;
+}
+
+static int
+fslmc_vfio_clear_group(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+	struct fslmc_vfio_device *dev;
+	int clear = 0;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			LIST_FOREACH(dev, &group->vfio_devices, next)
+				LIST_REMOVE(dev, next);
+
+			close(vfio_group_fd);
+			LIST_REMOVE(group, next);
+			rte_free(group);
+			clear = 1;
+
+			break;
 		}
+	}
 
-		if (strlen(container) >= FSLMC_CONTAINER_MAX_LEN) {
-			DPAA2_BUS_ERR("Invalid container name: %s", container);
-			return -1;
+	if (LIST_EMPTY(&s_vfio_container.groups)) {
+		if (s_vfio_container.fd > 0)
+			close(s_vfio_container.fd);
+
+		s_vfio_container.fd = -1;
+	}
+	if (clear)
+		return 0;
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_connect_container(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			group->connected = 1;
+
+			return 0;
+		}
+	}
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_container_connected(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			if (group->connected)
+				return 1;
+		}
+	}
+	return 0;
+}
+
+static int
+fslmc_vfio_iommu_type(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd)
+			return group->iommu_type;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_fd_by_name(const char *group_name)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (!strcmp(group->group_name, group_name))
+			return group->fd;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_fd_by_id(int group_id)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->groupid == group_id)
+			return group->fd;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_add_dev(int vfio_group_fd,
+	int dev_fd, const char *name)
+{
+	struct fslmc_vfio_group *group;
+	struct fslmc_vfio_device *dev;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			dev = rte_zmalloc(NULL,
+				sizeof(struct fslmc_vfio_device), 0);
+			dev->fd = dev_fd;
+			rte_strscpy(dev->dev_name, name, sizeof(dev->dev_name));
+			LIST_INSERT_HEAD(&group->vfio_devices, dev, next);
+			return 0;
 		}
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_remove_dev(int vfio_group_fd,
+	const char *name)
+{
+	struct fslmc_vfio_group *group = NULL;
+	struct fslmc_vfio_device *dev;
+	int removed = 0;
 
-		fslmc_container = strdup(container);
-		if (!fslmc_container) {
-			DPAA2_BUS_ERR("Mem alloc failure; Container name");
-			return -ENOMEM;
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd)
+			break;
+	}
+
+	if (group) {
+		LIST_FOREACH(dev, &group->vfio_devices, next) {
+			if (!strcmp(dev->dev_name, name)) {
+				LIST_REMOVE(dev, next);
+				removed = 1;
+				break;
+			}
 		}
 	}
 
-	fslmc_iommu_type = (rte_vfio_noiommu_is_enabled() == 1) ?
-		RTE_VFIO_NOIOMMU : VFIO_TYPE1_IOMMU;
+	if (removed)
+		return 0;
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_container_fd(void)
+{
+	return s_vfio_container.fd;
+}
+
+static int
+fslmc_get_group_id(const char *group_name,
+	int *groupid)
+{
+	int ret;
 
 	/* get group number */
 	ret = rte_vfio_get_group_num(SYSFS_FSL_MC_DEVICES,
-				     fslmc_container, groupid);
+			group_name, groupid);
 	if (ret <= 0) {
-		DPAA2_BUS_ERR("Unable to find %s IOMMU group", fslmc_container);
-		return -1;
+		DPAA2_BUS_ERR("Unable to find %s IOMMU group", group_name);
+		if (ret < 0)
+			return ret;
+
+		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("Container: %s has VFIO iommu group id = %d",
-			fslmc_container, *groupid);
+	DPAA2_BUS_DEBUG("GROUP(%s) has VFIO iommu group id = %d",
+		group_name, *groupid);
 
 	return 0;
 }
 
 static int
-vfio_connect_container(void)
+fslmc_vfio_open_group_fd(const char *group_name)
 {
-	int fd, ret;
+	int vfio_group_fd;
+	char filename[PATH_MAX];
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
+	int iommu_group_num, ret;
 
-	if (vfio_container.used) {
-		DPAA2_BUS_DEBUG("No container available");
-		return -1;
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd > 0)
+		return vfio_group_fd;
+
+	ret = fslmc_get_group_id(group_name, &iommu_group_num);
+	if (ret)
+		return ret;
+	/* if primary, try to open the group */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		/* try regular group format */
+		snprintf(filename, sizeof(filename),
+			VFIO_GROUP_FMT, iommu_group_num);
+		vfio_group_fd = open(filename, O_RDWR);
+
+		goto add_vfio_group;
+	}
+	/* if we're in a secondary process, request group fd from the primary
+	 * process via mp channel.
+	 */
+	p->req = SOCKET_REQ_GROUP;
+	p->group_num = iommu_group_num;
+	rte_strscpy(mp_req.name, FSLMC_VFIO_MP, sizeof(mp_req.name));
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_group_fd = -1;
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+	    mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		p = (struct vfio_mp_param *)mp_rep->param;
+		if (p->result == SOCKET_OK && mp_rep->num_fds == 1)
+			vfio_group_fd = mp_rep->fds[0];
+		else if (p->result == SOCKET_NO_FD)
+			DPAA2_BUS_ERR("Bad VFIO group fd");
+	}
+
+	free(mp_reply.msgs);
+
+add_vfio_group:
+	if (vfio_group_fd < 0) {
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
+				filename, vfio_group_fd);
+		} else {
+			DPAA2_BUS_ERR("Cannot request group fd(%d)",
+				vfio_group_fd);
+		}
+	} else {
+		ret = fslmc_vfio_add_group(vfio_group_fd, iommu_group_num,
+			group_name);
+		if (ret)
+			return ret;
 	}
 
-	/* Try connecting to vfio container if already created */
-	if (!ioctl(vfio_group.fd, VFIO_GROUP_SET_CONTAINER,
-		&vfio_container.fd)) {
-		DPAA2_BUS_DEBUG(
-		    "Container pre-exists with FD[0x%x] for this group",
-		    vfio_container.fd);
-		vfio_group.container = &vfio_container;
+	return vfio_group_fd;
+}
+
+static int
+fslmc_vfio_check_extensions(int vfio_container_fd)
+{
+	int ret;
+	uint32_t idx, n_extensions = 0;
+	static const int type_id[] = {RTE_VFIO_TYPE1, RTE_VFIO_SPAPR,
+		RTE_VFIO_NOIOMMU};
+	static const char * const type_id_nm[] = {"Type 1",
+		"sPAPR", "No-IOMMU"};
+
+	for (idx = 0; idx < RTE_DIM(type_id); idx++) {
+		ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION,
+			type_id[idx]);
+		if (ret < 0) {
+			DPAA2_BUS_ERR("Could not get IOMMU type, error %i (%s)",
+				errno, strerror(errno));
+			close(vfio_container_fd);
+			return -errno;
+		} else if (ret == 1) {
+			/* we found a supported extension */
+			n_extensions++;
+		}
+		DPAA2_BUS_DEBUG("IOMMU type %d (%s) is %s",
+			type_id[idx], type_id_nm[idx],
+			ret ? "supported" : "not supported");
+	}
+
+	/* if we didn't find any supported IOMMU types, fail */
+	if (!n_extensions) {
+		close(vfio_container_fd);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static int
+fslmc_vfio_open_container_fd(void)
+{
+	int ret, vfio_container_fd;
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (void *)mp_req.param;
+
+	if (fslmc_vfio_container_fd() > 0)
+		return fslmc_vfio_container_fd();
+
+	/* if we're in a primary process, try to open the container */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR);
+		if (vfio_container_fd < 0) {
+			DPAA2_BUS_ERR("Cannot open VFIO container(%s), err(%d)",
+				VFIO_CONTAINER_PATH, vfio_container_fd);
+			ret = vfio_container_fd;
+			goto err_exit;
+		}
+
+		/* check VFIO API version */
+		ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION);
+		if (ret < 0) {
+			DPAA2_BUS_ERR("Could not get VFIO API version(%d)",
+				ret);
+		} else if (ret != VFIO_API_VERSION) {
+			DPAA2_BUS_ERR("Unsupported VFIO API version(%d)",
+				ret);
+			ret = -ENOTSUP;
+		}
+		if (ret < 0) {
+			close(vfio_container_fd);
+			goto err_exit;
+		}
+
+		ret = fslmc_vfio_check_extensions(vfio_container_fd);
+		if (ret) {
+			DPAA2_BUS_ERR("No supported IOMMU extensions found(%d)",
+				ret);
+			close(vfio_container_fd);
+			goto err_exit;
+		}
+
+		goto success_exit;
+	}
+	/*
+	 * if we're in a secondary process, request container fd from the
+	 * primary process via mp channel
+	 */
+	p->req = SOCKET_REQ_CONTAINER;
+	rte_strscpy(mp_req.name, FSLMC_VFIO_MP, sizeof(mp_req.name));
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_container_fd = -1;
+	ret = rte_mp_request_sync(&mp_req, &mp_reply, &ts);
+	if (ret)
+		goto err_exit;
+
+	if (mp_reply.nb_received != 1) {
+		ret = -EIO;
+		goto err_exit;
+	}
+
+	mp_rep = &mp_reply.msgs[0];
+	p = (void *)mp_rep->param;
+	if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+		vfio_container_fd = mp_rep->fds[0];
+		free(mp_reply.msgs);
+	}
+
+success_exit:
+	s_vfio_container.fd = vfio_container_fd;
+
+	return vfio_container_fd;
+
+err_exit:
+	if (mp_reply.msgs)
+		free(mp_reply.msgs);
+	DPAA2_BUS_ERR("Cannot request container fd err(%d)", ret);
+	return ret;
+}
+
+int
+fslmc_get_container_group(const char *group_name,
+	int *groupid)
+{
+	int ret;
+
+	if (!group_name) {
+		DPAA2_BUS_ERR("No group name provided!");
+
+		return -EINVAL;
+	}
+	ret = fslmc_get_group_id(group_name, groupid);
+	if (ret)
+		return ret;
+
+	fslmc_vfio_set_group_name(group_name);
+
+	return 0;
+}
+
+static int
+fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
+	const void *peer)
+{
+	int fd = -1;
+	int ret;
+	struct rte_mp_msg reply;
+	struct vfio_mp_param *r = (void *)reply.param;
+	const struct vfio_mp_param *m = (const void *)msg->param;
+
+	if (msg->len_param != sizeof(*m)) {
+		DPAA2_BUS_ERR("fslmc vfio received invalid message!");
+		return -EINVAL;
+	}
+
+	memset(&reply, 0, sizeof(reply));
+
+	switch (m->req) {
+	case SOCKET_REQ_GROUP:
+		r->req = SOCKET_REQ_GROUP;
+		r->group_num = m->group_num;
+		fd = fslmc_vfio_group_fd_by_id(m->group_num);
+		if (fd < 0) {
+			r->result = SOCKET_ERR;
+		} else if (!fd) {
+			/* if group exists but isn't bound to VFIO driver */
+			r->result = SOCKET_NO_FD;
+		} else {
+			/* if group exists and is bound to VFIO driver */
+			r->result = SOCKET_OK;
+			reply.num_fds = 1;
+			reply.fds[0] = fd;
+		}
+		break;
+	case SOCKET_REQ_CONTAINER:
+		r->req = SOCKET_REQ_CONTAINER;
+		fd = fslmc_vfio_container_fd();
+		if (fd <= 0) {
+			r->result = SOCKET_ERR;
+		} else {
+			r->result = SOCKET_OK;
+			reply.num_fds = 1;
+			reply.fds[0] = fd;
+		}
+		break;
+	default:
+		DPAA2_BUS_ERR("fslmc vfio received invalid message(%08x)",
+			m->req);
+		return -ENOTSUP;
+	}
+
+	rte_strscpy(reply.name, FSLMC_VFIO_MP, sizeof(reply.name));
+	reply.len_param = sizeof(*r);
+	ret = rte_mp_reply(&reply, peer);
+
+	return ret;
+}
+
+static int
+fslmc_vfio_mp_sync_setup(void)
+{
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		ret = rte_mp_action_register(FSLMC_VFIO_MP,
+			fslmc_vfio_mp_primary);
+		if (ret && rte_errno != ENOTSUP)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int
+vfio_connect_container(int vfio_container_fd,
+	int vfio_group_fd)
+{
+	int ret;
+	int iommu_type;
+
+	if (fslmc_vfio_container_connected(vfio_group_fd)) {
+		DPAA2_BUS_WARN("VFIO FD(%d) has connected to container",
+			vfio_group_fd);
 		return 0;
 	}
 
-	/* Opens main vfio file descriptor which represents the "container" */
-	fd = rte_vfio_get_container_fd();
-	if (fd < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return -errno;
+	iommu_type = fslmc_vfio_iommu_type(vfio_group_fd);
+	if (iommu_type < 0) {
+		DPAA2_BUS_ERR("Failed to get iommu type(%d)",
+			iommu_type);
+
+		return iommu_type;
 	}
 
 	/* Check whether support for SMMU type IOMMU present or not */
-	if (ioctl(fd, VFIO_CHECK_EXTENSION, fslmc_iommu_type)) {
+	if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type)) {
 		/* Connect group to container */
-		ret = ioctl(vfio_group.fd, VFIO_GROUP_SET_CONTAINER, &fd);
+		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
+			&vfio_container_fd);
 		if (ret) {
 			DPAA2_BUS_ERR("Failed to setup group container");
-			close(fd);
 			return -errno;
 		}
 
-		ret = ioctl(fd, VFIO_SET_IOMMU, fslmc_iommu_type);
+		ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
 		if (ret) {
 			DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-			close(fd);
 			return -errno;
 		}
 	} else {
 		DPAA2_BUS_ERR("No supported IOMMU available");
-		close(fd);
 		return -EINVAL;
 	}
 
-	vfio_container.used = 1;
-	vfio_container.fd = fd;
-	vfio_container.group = &vfio_group;
-	vfio_group.container = &vfio_container;
-
-	return 0;
+	return fslmc_vfio_connect_container(vfio_group_fd);
 }
 
-static int vfio_map_irq_region(struct fslmc_vfio_group *group)
+static int vfio_map_irq_region(void)
 {
-	int ret;
+	int ret, fd;
 	unsigned long *vaddr = NULL;
 	struct vfio_iommu_type1_dma_map map = {
 		.argsz = sizeof(map),
@@ -182,9 +619,23 @@ static int vfio_map_irq_region(struct fslmc_vfio_group *group)
 		.iova = 0x6030000,
 		.size = 0x1000,
 	};
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (!fslmc_vfio_container_connected(fd)) {
+		DPAA2_BUS_ERR("Container is not connected");
+		return -EIO;
+	}
 
 	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
-		PROT_READ, MAP_SHARED, container_device_fd, 0x6030000);
+		PROT_READ, MAP_SHARED, fd, 0x6030000);
 	if (vaddr == MAP_FAILED) {
 		DPAA2_BUS_INFO("Unable to map region (errno = %d)", errno);
 		return -errno;
@@ -192,8 +643,8 @@ static int vfio_map_irq_region(struct fslmc_vfio_group *group)
 
 	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
 	map.vaddr = (unsigned long)vaddr;
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA, &map);
-	if (ret == 0)
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA, &map);
+	if (!ret)
 		return 0;
 
 	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
@@ -204,8 +655,8 @@ static int fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
 static int fslmc_unmap_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
 
 static void
-fslmc_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
-		void *arg __rte_unused)
+fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
+	size_t len, void *arg __rte_unused)
 {
 	struct rte_memseg_list *msl;
 	struct rte_memseg *ms;
@@ -262,44 +713,54 @@ fslmc_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
 }
 
 static int
-fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr __rte_unused, size_t len)
+fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr,
+	size_t len)
 {
-	struct fslmc_vfio_group *group;
 	struct vfio_iommu_type1_dma_map dma_map = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
 		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
 	};
-	int ret;
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
+	int ret, fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
 		return 0;
 	}
 
 	dma_map.size = len;
 	dma_map.vaddr = vaddr;
-
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
 	dma_map.iova = iovaddr;
-#else
-	dma_map.iova = dma_map.vaddr;
+
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+	if (vaddr != iovaddr) {
+		DPAA2_BUS_WARN("vaddr(0x%"PRIx64") != iovaddr(0x%"PRIx64")",
+			vaddr, iovaddr);
+	}
 #endif
 
 	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-
-	if (!group->container) {
+	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected ");
-		return -1;
+		return -EIO;
 	}
 
 	DPAA2_BUS_DEBUG("--> Map address: 0x%"PRIx64", size: %"PRIu64"",
 			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA, &dma_map);
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA,
+		&dma_map);
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO_IOMMU_MAP_DMA API(errno = %d)",
 				errno);
-		return -1;
+		return ret;
 	}
 
 	return 0;
@@ -308,14 +769,22 @@ fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr __rte_unused, size_t len)
 static int
 fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 {
-	struct fslmc_vfio_group *group;
 	struct vfio_iommu_type1_dma_unmap dma_unmap = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_unmap),
 		.flags = 0,
 	};
-	int ret;
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
+	int ret, fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
 		return 0;
 	}
@@ -324,16 +793,15 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 	dma_unmap.iova = vaddr;
 
 	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-
-	if (!group->container) {
+	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected ");
-		return -1;
+		return -EIO;
 	}
 
 	DPAA2_BUS_DEBUG("--> Unmap address: 0x%"PRIx64", size: %"PRIu64"",
 			(uint64_t)dma_unmap.iova, (uint64_t)dma_unmap.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap);
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_UNMAP_DMA,
+		&dma_unmap);
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO_IOMMU_UNMAP_DMA API(errno = %d)",
 				errno);
@@ -367,41 +835,13 @@ fslmc_dmamap_seg(const struct rte_memseg_list *msl __rte_unused,
 int
 rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
 {
-	int ret;
-	struct fslmc_vfio_group *group;
-	struct vfio_iommu_type1_dma_map dma_map = {
-		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
-		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
-	};
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
-		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
-		return 0;
-	}
-
-	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-	if (!group->container) {
-		DPAA2_BUS_ERR("Container is not connected");
-		return -1;
-	}
-
-	dma_map.size = size;
-	dma_map.vaddr = vaddr;
-	dma_map.iova = iova;
-
-	DPAA2_BUS_DEBUG("VFIOdmamap 0x%"PRIx64":0x%"PRIx64",size 0x%"PRIx64,
-			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.iova,
-			(uint64_t)dma_map.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA,
-		    &dma_map);
-	if (ret) {
-		DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)",
-			errno);
-		return ret;
-	}
+	return fslmc_map_dma(vaddr, iova, size);
+}
 
-	return 0;
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
+{
+	return fslmc_unmap_dma(iova, 0, size);
 }
 
 int rte_fslmc_vfio_dmamap(void)
@@ -431,7 +871,7 @@ int rte_fslmc_vfio_dmamap(void)
 	 * the interrupt region to SMMU. This should be removed once the
 	 * support is added in the Kernel.
 	 */
-	vfio_map_irq_region(&vfio_group);
+	vfio_map_irq_region();
 
 	/* Existing segments have been mapped and memory callback for hotplug
 	 * has been installed.
@@ -442,149 +882,19 @@ int rte_fslmc_vfio_dmamap(void)
 }
 
 static int
-fslmc_vfio_open_group_fd(int iommu_group_num)
-{
-	int vfio_group_fd;
-	char filename[PATH_MAX];
-	struct rte_mp_msg mp_req, *mp_rep;
-	struct rte_mp_reply mp_reply = {0};
-	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
-	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
-
-	/* if primary, try to open the group */
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		/* try regular group format */
-		snprintf(filename, sizeof(filename),
-			VFIO_GROUP_FMT, iommu_group_num);
-		vfio_group_fd = open(filename, O_RDWR);
-		if (vfio_group_fd <= 0) {
-			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
-				filename, vfio_group_fd);
-		}
-
-		return vfio_group_fd;
-	}
-	/* if we're in a secondary process, request group fd from the primary
-	 * process via mp channel.
-	 */
-	p->req = SOCKET_REQ_GROUP;
-	p->group_num = iommu_group_num;
-	rte_strscpy(mp_req.name, EAL_VFIO_MP, sizeof(mp_req.name));
-	mp_req.len_param = sizeof(*p);
-	mp_req.num_fds = 0;
-
-	vfio_group_fd = -1;
-	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
-	    mp_reply.nb_received == 1) {
-		mp_rep = &mp_reply.msgs[0];
-		p = (struct vfio_mp_param *)mp_rep->param;
-		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
-			vfio_group_fd = mp_rep->fds[0];
-		} else if (p->result == SOCKET_NO_FD) {
-			DPAA2_BUS_ERR("Bad VFIO group fd");
-			vfio_group_fd = 0;
-		}
-	}
-
-	free(mp_reply.msgs);
-	if (vfio_group_fd < 0) {
-		DPAA2_BUS_ERR("Cannot request group fd(%d)",
-			vfio_group_fd);
-	}
-	return vfio_group_fd;
-}
-
-static int
-fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
-		int *vfio_dev_fd, struct vfio_device_info *device_info)
+fslmc_vfio_setup_device(const char *dev_addr,
+	int *vfio_dev_fd, struct vfio_device_info *device_info)
 {
 	struct vfio_group_status group_status = {
 			.argsz = sizeof(group_status)
 	};
-	int vfio_group_fd, vfio_container_fd, iommu_group_no, ret;
+	int vfio_group_fd, ret;
+	const char *group_name = fslmc_vfio_get_group_name();
 
-	/* get group number */
-	ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_no);
-	if (ret < 0)
-		return -1;
-
-	/* get the actual group fd */
-	vfio_group_fd = vfio_group.fd;
-	if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT)
-		return -1;
-
-	/*
-	 * if vfio_group_fd == -ENOENT, that means the device
-	 * isn't managed by VFIO
-	 */
-	if (vfio_group_fd == -ENOENT) {
-		DPAA2_BUS_WARN(" %s not managed by VFIO driver, skipping",
-				dev_addr);
-		return 1;
-	}
-
-	/* Opens main vfio file descriptor which represents the "container" */
-	vfio_container_fd = rte_vfio_get_container_fd();
-	if (vfio_container_fd < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return -errno;
-	}
-
-	/* check if the group is viable */
-	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status);
-	if (ret) {
-		DPAA2_BUS_ERR("  %s cannot get group status, "
-				"error %i (%s)", dev_addr,
-				errno, strerror(errno));
-		close(vfio_group_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
-	} else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
-		DPAA2_BUS_ERR("  %s VFIO group is not viable!", dev_addr);
-		close(vfio_group_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
-	}
-	/* At this point, we know that this group is viable (meaning,
-	 * all devices are either bound to VFIO or not bound to anything)
-	 */
-
-	/* check if group does not have a container yet */
-	if (!(group_status.flags & VFIO_GROUP_FLAGS_CONTAINER_SET)) {
-
-		/* add group to a container */
-		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
-				&vfio_container_fd);
-		if (ret) {
-			DPAA2_BUS_ERR("  %s cannot add VFIO group to container, "
-					"error %i (%s)", dev_addr,
-					errno, strerror(errno));
-			close(vfio_group_fd);
-			close(vfio_container_fd);
-			rte_vfio_clear_group(vfio_group_fd);
-			return -1;
-		}
-
-		/*
-		 * set an IOMMU type for container
-		 *
-		 */
-		if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION,
-			  fslmc_iommu_type)) {
-			ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU,
-				    fslmc_iommu_type);
-			if (ret) {
-				DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-				close(vfio_group_fd);
-				close(vfio_container_fd);
-				return -errno;
-			}
-		} else {
-			DPAA2_BUS_ERR("No supported IOMMU available");
-			close(vfio_group_fd);
-			close(vfio_container_fd);
-			return -EINVAL;
-		}
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (!fslmc_vfio_container_connected(vfio_group_fd)) {
+		DPAA2_BUS_ERR("Container is not connected");
+		return -EIO;
 	}
 
 	/* get a file descriptor for the device */
@@ -594,26 +904,21 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		 * the VFIO group or the container not having IOMMU configured.
 		 */
 
-		DPAA2_BUS_WARN("Getting a vfio_dev_fd for %s failed", dev_addr);
-		close(vfio_group_fd);
-		close(vfio_container_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
+		DPAA2_BUS_ERR("Getting a vfio_dev_fd for %s from %s failed",
+			dev_addr, group_name);
+		return -EIO;
 	}
 
 	/* test and setup the device */
 	ret = ioctl(*vfio_dev_fd, VFIO_DEVICE_GET_INFO, device_info);
 	if (ret) {
-		DPAA2_BUS_ERR("  %s cannot get device info, error %i (%s)",
-				dev_addr, errno, strerror(errno));
-		close(*vfio_dev_fd);
-		close(vfio_group_fd);
-		close(vfio_container_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
+		DPAA2_BUS_ERR("%s cannot get device info err(%d)(%s)",
+			dev_addr, errno, strerror(errno));
+		return ret;
 	}
 
-	return 0;
+	return fslmc_vfio_group_add_dev(vfio_group_fd, *vfio_dev_fd,
+			dev_addr);
 }
 
 static intptr_t vfio_map_mcp_obj(const char *mcp_obj)
@@ -625,8 +930,7 @@ static intptr_t vfio_map_mcp_obj(const char *mcp_obj)
 	struct vfio_device_info d_info = { .argsz = sizeof(d_info) };
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info) };
 
-	fslmc_vfio_setup_device(SYSFS_FSL_MC_DEVICES, mcp_obj,
-			&mc_fd, &d_info);
+	fslmc_vfio_setup_device(mcp_obj, &mc_fd, &d_info);
 
 	/* getting device region info*/
 	ret = ioctl(mc_fd, VFIO_DEVICE_GET_REGION_INFO, &reg_info);
@@ -757,7 +1061,8 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 }
 
 static void
-fslmc_close_iodevices(struct rte_dpaa2_device *dev)
+fslmc_close_iodevices(struct rte_dpaa2_device *dev,
+	int vfio_fd)
 {
 	struct rte_dpaa2_object *object = NULL;
 	struct rte_dpaa2_driver *drv;
@@ -800,6 +1105,11 @@ fslmc_close_iodevices(struct rte_dpaa2_device *dev)
 		break;
 	}
 
+	ret = fslmc_vfio_group_remove_dev(vfio_fd, dev->device.name);
+	if (ret) {
+		DPAA2_BUS_ERR("Failed to remove %s from vfio",
+			dev->device.name);
+	}
 	DPAA2_BUS_LOG(DEBUG, "Device (%s) Closed",
 		      dev->device.name);
 }
@@ -811,17 +1121,21 @@ fslmc_close_iodevices(struct rte_dpaa2_device *dev)
 static int
 fslmc_process_iodevices(struct rte_dpaa2_device *dev)
 {
-	int dev_fd;
+	int dev_fd, ret;
 	struct vfio_device_info device_info = { .argsz = sizeof(device_info) };
 	struct rte_dpaa2_object *object = NULL;
 
-	fslmc_vfio_setup_device(SYSFS_FSL_MC_DEVICES, dev->device.name,
-			&dev_fd, &device_info);
+	ret = fslmc_vfio_setup_device(dev->device.name, &dev_fd,
+			&device_info);
+	if (ret)
+		return ret;
 
 	switch (dev->dev_type) {
 	case DPAA2_ETH:
-		rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
-					  device_info.num_irqs);
+		ret = rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
+				device_info.num_irqs);
+		if (ret)
+			return ret;
 		break;
 	case DPAA2_CON:
 	case DPAA2_IO:
@@ -913,6 +1227,10 @@ int
 fslmc_vfio_close_group(void)
 {
 	struct rte_dpaa2_device *dev, *dev_temp;
+	int vfio_group_fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
 		if (dev->device.devargs &&
@@ -927,7 +1245,7 @@ fslmc_vfio_close_group(void)
 		case DPAA2_CRYPTO:
 		case DPAA2_QDMA:
 		case DPAA2_IO:
-			fslmc_close_iodevices(dev);
+			fslmc_close_iodevices(dev, vfio_group_fd);
 			break;
 		case DPAA2_CON:
 		case DPAA2_CI:
@@ -936,7 +1254,7 @@ fslmc_vfio_close_group(void)
 			if (rte_eal_process_type() == RTE_PROC_SECONDARY)
 				continue;
 
-			fslmc_close_iodevices(dev);
+			fslmc_close_iodevices(dev, vfio_group_fd);
 			break;
 		case DPAA2_DPRTC:
 		default:
@@ -945,10 +1263,7 @@ fslmc_vfio_close_group(void)
 		}
 	}
 
-	if (vfio_group.fd > 0) {
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
-	}
+	fslmc_vfio_clear_group(vfio_group_fd);
 
 	return 0;
 }
@@ -1138,75 +1453,85 @@ fslmc_vfio_process_group(void)
 int
 fslmc_vfio_setup_group(void)
 {
-	int groupid;
-	int ret;
+	int vfio_group_fd, vfio_container_fd, ret;
 	struct vfio_group_status status = { .argsz = sizeof(status) };
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	/* MC VFIO setup entry */
+	vfio_container_fd = fslmc_vfio_container_fd();
+	if (vfio_container_fd <= 0) {
+		vfio_container_fd = fslmc_vfio_open_container_fd();
+		if (vfio_container_fd < 0) {
+			DPAA2_BUS_ERR("Failed to create MC VFIO container");
+			return vfio_container_fd;
+		}
+	}
 
-	/* if already done once */
-	if (container_device_fd)
-		return 0;
-
-	ret = fslmc_get_container_group(&groupid);
-	if (ret)
-		return ret;
-
-	/* In case this group was already opened, continue without any
-	 * processing.
-	 */
-	if (vfio_group.groupid == groupid) {
-		DPAA2_BUS_ERR("groupid already exists %d", groupid);
-		return 0;
+	if (!group_name) {
+		DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
+		return -EINVAL;
 	}
 
-	/* Get the actual group fd */
-	ret = fslmc_vfio_open_group_fd(groupid);
-	if (ret <= 0)
-		return ret;
-	vfio_group.fd = ret;
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd < 0) {
+		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
+		if (vfio_group_fd < 0) {
+			DPAA2_BUS_ERR("open group name(%s) failed(%d)",
+				group_name, vfio_group_fd);
+			return -rte_errno;
+		}
+	}
 
 	/* Check group viability */
-	ret = ioctl(vfio_group.fd, VFIO_GROUP_GET_STATUS, &status);
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &status);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO error getting group status");
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		DPAA2_BUS_ERR("VFIO(%s:fd=%d) error getting group status(%d)",
+			group_name, vfio_group_fd, ret);
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return ret;
 	}
 
 	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
 		DPAA2_BUS_ERR("VFIO group not viable");
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return -EPERM;
 	}
-	/* Since Group is VIABLE, Store the groupid */
-	vfio_group.groupid = groupid;
 
 	/* check if group does not have a container yet */
 	if (!(status.flags & VFIO_GROUP_FLAGS_CONTAINER_SET)) {
 		/* Now connect this IOMMU group to given container */
-		ret = vfio_connect_container();
-		if (ret) {
-			DPAA2_BUS_ERR("vfio group(%d) connect failed(%d)",
-				groupid, ret);
-			close(vfio_group.fd);
-			vfio_group.fd = 0;
-			return ret;
-		}
+		ret = vfio_connect_container(vfio_container_fd,
+			vfio_group_fd);
+	} else {
+		/* Here is supposed in secondary process,
+		 * group has been set to container in primary process.
+		 */
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+			DPAA2_BUS_WARN("This group has been set container?");
+		ret = fslmc_vfio_connect_container(vfio_group_fd);
+	}
+	if (ret) {
+		DPAA2_BUS_ERR("vfio group connect failed(%d)", ret);
+		fslmc_vfio_clear_group(vfio_group_fd);
+		return ret;
 	}
 
 	/* Get Device information */
-	ret = ioctl(vfio_group.fd, VFIO_GROUP_GET_DEVICE_FD, fslmc_container);
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_DEVICE_FD, group_name);
 	if (ret < 0) {
-		DPAA2_BUS_ERR("Error getting device %s fd from group %d",
-			      fslmc_container, vfio_group.groupid);
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		DPAA2_BUS_ERR("Error getting device %s fd", group_name);
+		fslmc_vfio_clear_group(vfio_group_fd);
+		return ret;
+	}
+
+	ret = fslmc_vfio_mp_sync_setup();
+	if (ret) {
+		DPAA2_BUS_ERR("VFIO MP sync setup failed!");
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return ret;
 	}
-	container_device_fd = ret;
-	DPAA2_BUS_DEBUG("VFIO Container FD is [0x%X]",
-			container_device_fd);
+
+	DPAA2_BUS_DEBUG("VFIO GROUP FD is %d", vfio_group_fd);
 
 	return 0;
 }
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index b6677bdd18..1695b6c078 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2019-2020 NXP
+ *   Copyright 2016,2019-2023 NXP
  *
  */
 
@@ -20,26 +20,28 @@
 #define DPAA2_MC_DPBP_DEVID	10
 #define DPAA2_MC_DPCI_DEVID	11
 
-typedef struct fslmc_vfio_device {
+struct fslmc_vfio_device {
+	LIST_ENTRY(fslmc_vfio_device) next;
 	int fd; /* fslmc root container device ?? */
 	int index; /*index of child object */
+	char dev_name[64];
 	struct fslmc_vfio_device *child; /* Child object */
-} fslmc_vfio_device;
+};
 
-typedef struct fslmc_vfio_group {
+struct fslmc_vfio_group {
+	LIST_ENTRY(fslmc_vfio_group) next;
 	int fd; /* /dev/vfio/"groupid" */
 	int groupid;
-	struct fslmc_vfio_container *container;
-	int object_index;
-	struct fslmc_vfio_device *vfio_device;
-} fslmc_vfio_group;
+	int connected;
+	char group_name[64]; /* dprc.x*/
+	int iommu_type;
+	LIST_HEAD(, fslmc_vfio_device) vfio_devices;
+};
 
-typedef struct fslmc_vfio_container {
+struct fslmc_vfio_container {
 	int fd; /* /dev/vfio/vfio */
-	int used;
-	int index; /* index in group list */
-	struct fslmc_vfio_group *group;
-} fslmc_vfio_container;
+	LIST_HEAD(, fslmc_vfio_group) groups;
+};
 
 extern char *fslmc_container;
 
@@ -57,8 +59,11 @@ int fslmc_vfio_setup_group(void);
 int fslmc_vfio_process_group(void);
 int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
-int fslmc_get_container_group(int *gropuid);
+int fslmc_get_container_group(const char *group_name, int *groupid);
 int rte_fslmc_vfio_dmamap(void);
-int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova,
+		uint64_t size);
+int rte_fslmc_vfio_mem_dmaunmap(uint64_t iova,
+		uint64_t size);
 
 #endif /* _FSLMC_VFIO_H_ */
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index df1143733d..b49bc0a62c 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -118,6 +118,7 @@ INTERNAL {
 	rte_fslmc_get_device_count;
 	rte_fslmc_object_register;
 	rte_global_active_dqs_list;
+	rte_fslmc_vfio_mem_dmaunmap;
 
 	local: *;
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 15/42] bus/fslmc: free VFIO group FD in case of add group failure
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (13 preceding siblings ...)
  2024-10-22 19:12         ` [v4 14/42] bus/fslmc: enhance MC VFIO multiprocess support vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 16/42] bus/fslmc: dynamic IOVA mode configuration vanshika.shukla
                           ` (27 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Free vfio_group_fd if adding the group fails, to avoid leaking the
file descriptor. A minimal sketch of the idiom follows below.
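
A minimal sketch of the cleanup-on-error idiom, using the names from
the diff below:

	ret = fslmc_vfio_add_group(vfio_group_fd, iommu_group_num,
		group_name);
	if (ret) {
		/* Release the fd on the error path, else it leaks. */
		close(vfio_group_fd);
		return ret;
	}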

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 63e84cb4d8..3d466d3f1f 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -343,8 +343,10 @@ fslmc_vfio_open_group_fd(const char *group_name)
 	} else {
 		ret = fslmc_vfio_add_group(vfio_group_fd, iommu_group_num,
 			group_name);
-		if (ret)
+		if (ret) {
+			close(vfio_group_fd);
 			return ret;
+		}
 	}
 
 	return vfio_group_fd;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 16/42] bus/fslmc: dynamic IOVA mode configuration
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (14 preceding siblings ...)
  2024-10-22 19:12         ` [v4 15/42] bus/fslmc: free VFIO group FD in case of add group failure vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-23  1:02           ` Stephen Hemminger
  2024-10-22 19:12         ` [v4 17/42] bus/fslmc: remove VFIO IRQ mapping vanshika.shukla
                           ` (26 subsequent siblings)
  42 siblings, 1 reply; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh
  Cc: Jun Yang, Vanshika Shukla

From: Jun Yang <jun.yang@nxp.com>

IOVA mode should not be fixed at build time with CFLAGS because:
1) The user can pass "--iova-mode" to select the IOVA mode at runtime.
2) IOVA mode is negotiated across all devices; EAL runs in VA mode
   only when every device supports VA mode.

Hence:
1) Remove the RTE_LIBRTE_DPAA2_USE_PHYS_IOVA cflag.
   Instead, use the rte_eal_iova_mode API to identify VA or PA mode,
   as sketched below.
2) Support both memory IOMMU mapping and I/O IOMMU mapping (PCI space).
3) For memory IOMMU, IOVA:VA = 1:1 in VA mode; in PA mode,
   IOVA:VA = PA:VA. The mapping policy is determined by the EAL
   memory driver.
4) For I/O IOMMU, IOVA:VA is up to the I/O driver configuration;
   in general it is aligned with the memory IOMMU mapping.
5) Memory and I/O IOVA tables are created and updated when DMA
   mappings are set up, replacing the dpaax IOVA table.
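
As a hedged illustration of points 1) and 3) above (the EAL calls are
real; the helper name is hypothetical and not part of this patch):

#include <rte_eal.h>
#include <rte_memory.h>

/* Resolve the IOVA of a DPDK heap address at runtime, replacing the
 * old compile-time RTE_LIBRTE_DPAA2_USE_PHYS_IOVA switch.
 */
static inline rte_iova_t
example_va_to_iova(const void *va)
{
	/* VA mode: EAL guarantees IOVA == VA for heap memory. */
	if (rte_eal_iova_mode() == RTE_IOVA_VA)
		return (rte_iova_t)(uintptr_t)va;

	/* PA mode: query the EAL memory subsystem (IOVA == PA). */
	return rte_mem_virt2iova(va);
}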

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     |  29 +-
 drivers/bus/fslmc/fslmc_bus.c            |  33 +-
 drivers/bus/fslmc/fslmc_vfio.c           | 662 ++++++++++++++++++-----
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c |   3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c |  10 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h |   3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h  | 111 ++--
 drivers/bus/fslmc/version.map            |   7 +-
 drivers/dma/dpaa2/dpaa2_qdma.c           |   1 +
 9 files changed, 608 insertions(+), 251 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index a3428fe28b..ba3774823b 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -33,9 +33,6 @@
 
 #include <fslmc_vfio.h>
 
-#include "portal/dpaa2_hw_pvt.h"
-#include "portal/dpaa2_hw_dpio.h"
-
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -149,6 +146,32 @@ struct rte_dpaa2_driver {
 	rte_dpaa2_remove_t remove;
 };
 
+int
+rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+__rte_internal
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size);
+__rte_internal
+uint64_t
+rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
+	uint64_t size);
+__rte_internal
+void *
+rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
+	uint64_t size);
+__rte_internal
+__rte_hot uint64_t
+rte_fslmc_mem_vaddr_to_iova(void *vaddr);
+__rte_internal
+__rte_hot void *
+rte_fslmc_mem_iova_to_vaddr(uint64_t iova);
+__rte_internal
+uint64_t
+rte_fslmc_io_vaddr_to_iova(void *vaddr);
+__rte_internal
+void *
+rte_fslmc_io_iova_to_vaddr(uint64_t iova);
+
 /**
  * Register a DPAA2 driver.
  *
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index a966df1598..107cc70833 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -27,7 +27,6 @@
 #define FSLMC_BUS_NAME	fslmc
 
 struct rte_fslmc_bus rte_fslmc_bus;
-uint8_t dpaa2_virt_mode;
 
 #define DPAA2_SEQN_DYNFIELD_NAME "dpaa2_seqn_dynfield"
 int dpaa2_seqn_dynfield_offset = -1;
@@ -457,22 +456,6 @@ rte_fslmc_probe(void)
 
 	probe_all = rte_fslmc_bus.bus.conf.scan_mode != RTE_BUS_SCAN_ALLOWLIST;
 
-	/* In case of PA, the FD addresses returned by qbman APIs are physical
-	 * addresses, which need conversion into equivalent VA address for
-	 * rte_mbuf. For that, a table (a serial array, in memory) is used to
-	 * increase translation efficiency.
-	 * This has to be done before probe as some device initialization
-	 * (during) probe allocate memory (dpaa2_sec) which needs to be pinned
-	 * to this table.
-	 *
-	 * Error is ignored as relevant logs are handled within dpaax and
-	 * handling for unavailable dpaax table too is transparent to caller.
-	 *
-	 * And, the IOVA table is only applicable in case of PA mode.
-	 */
-	if (rte_eal_iova_mode() == RTE_IOVA_PA)
-		dpaax_iova_table_populate();
-
 	TAILQ_FOREACH(dev, &rte_fslmc_bus.device_list, next) {
 		TAILQ_FOREACH(drv, &rte_fslmc_bus.driver_list, next) {
 			ret = rte_fslmc_match(drv, dev);
@@ -507,9 +490,6 @@ rte_fslmc_probe(void)
 		}
 	}
 
-	if (rte_eal_iova_mode() == RTE_IOVA_VA)
-		dpaa2_virt_mode = 1;
-
 	return 0;
 }
 
@@ -558,12 +538,6 @@ rte_fslmc_driver_register(struct rte_dpaa2_driver *driver)
 void
 rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver)
 {
-	/* Cleanup the PA->VA Translation table; From wherever this function
-	 * is called from.
-	 */
-	if (rte_eal_iova_mode() == RTE_IOVA_PA)
-		dpaax_iova_table_depopulate();
-
 	TAILQ_REMOVE(&rte_fslmc_bus.driver_list, driver, next);
 }
 
@@ -599,13 +573,12 @@ rte_dpaa2_get_iommu_class(void)
 	bool is_vfio_noiommu_enabled = 1;
 	bool has_iova_va;
 
+	if (rte_eal_iova_mode() == RTE_IOVA_PA)
+		return RTE_IOVA_PA;
+
 	if (TAILQ_EMPTY(&rte_fslmc_bus.device_list))
 		return RTE_IOVA_DC;
 
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-	return RTE_IOVA_PA;
-#endif
-
 	/* check if all devices on the bus support Virtual addressing or not */
 	has_iova_va = fslmc_all_device_support_iova();
 
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 3d466d3f1f..b0e7299bda 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -19,6 +19,7 @@
 #include <libgen.h>
 #include <dirent.h>
 #include <sys/eventfd.h>
+#include <ctype.h>
 
 #include <eal_filesystem.h>
 #include <rte_mbuf.h>
@@ -47,9 +48,41 @@
  */
 static struct fslmc_vfio_container s_vfio_container;
 /* Currently we only support single group/process. */
-const char *fslmc_group; /* dprc.x*/
+static const char *fslmc_group; /* dprc.x*/
 static uint32_t *msi_intr_vaddr;
-void *(*rte_mcp_ptr_list);
+static void *(*rte_mcp_ptr_list);
+
+struct fslmc_dmaseg {
+	uint64_t vaddr;
+	uint64_t iova;
+	uint64_t size;
+
+	TAILQ_ENTRY(fslmc_dmaseg) next;
+};
+
+TAILQ_HEAD(fslmc_dmaseg_list, fslmc_dmaseg);
+
+struct fslmc_dmaseg_list fslmc_memsegs =
+		TAILQ_HEAD_INITIALIZER(fslmc_memsegs);
+struct fslmc_dmaseg_list fslmc_iosegs =
+		TAILQ_HEAD_INITIALIZER(fslmc_iosegs);
+
+static uint64_t fslmc_mem_va2iova = RTE_BAD_IOVA;
+static int fslmc_mem_map_num;
+
+struct fslmc_mem_param {
+	struct vfio_mp_param mp_param;
+	struct fslmc_dmaseg_list memsegs;
+	struct fslmc_dmaseg_list iosegs;
+	uint64_t mem_va2iova;
+	int mem_map_num;
+};
+
+enum {
+	FSLMC_VFIO_SOCKET_REQ_CONTAINER = 0x100,
+	FSLMC_VFIO_SOCKET_REQ_GROUP,
+	FSLMC_VFIO_SOCKET_REQ_MEM
+};
 
 void *
 dpaa2_get_mcp_ptr(int portal_idx)
@@ -63,6 +96,64 @@ dpaa2_get_mcp_ptr(int portal_idx)
 static struct rte_dpaa2_object_list dpaa2_obj_list =
 	TAILQ_HEAD_INITIALIZER(dpaa2_obj_list);
 
+static uint64_t
+fslmc_io_virt2phy(const void *virtaddr)
+{
+	FILE *fp = fopen("/proc/self/maps", "r");
+	char *line = NULL;
+	size_t linesz;
+	uint64_t start, end, phy;
+	const uint64_t va = (uint64_t)(uintptr_t)virtaddr;
+	char tmp[1024];
+	int ret;
+
+	if (!fp)
+		return RTE_BAD_IOVA;
+	while (getdelim(&line, &linesz, '\n', fp) > 0) {
+		char *ptr = line;
+		int n;
+
+		/** Parse virtual address range.*/
+		n = 0;
+		while (*ptr && !isspace(*ptr)) {
+			tmp[n] = *ptr;
+			ptr++;
+			n++;
+		}
+		tmp[n] = 0;
+		ret = sscanf(tmp, "%" SCNx64 "-%" SCNx64, &start, &end);
+		if (ret != 2)
+			continue;
+		if (va < start || va >= end)
+			continue;
+
+		/** This virtual address is in this segment.*/
+		while (*ptr == ' ' || *ptr == 'r' ||
+			*ptr == 'w' || *ptr == 's' ||
+			*ptr == 'p' || *ptr == 'x' ||
+			*ptr == '-')
+			ptr++;
+
+		/** Extract phy address*/
+		n = 0;
+		while (*ptr && !isspace(*ptr)) {
+			tmp[n] = *ptr;
+			ptr++;
+			n++;
+		}
+		tmp[n] = 0;
+		phy = strtoul(tmp, NULL, 16);
+		if (!phy)
+			continue;
+
+		fclose(fp);
+		return phy + va - start;
+	}
+
+	fclose(fp);
+	return RTE_BAD_IOVA;
+}
+
 /*register a fslmc bus based dpaa2 driver */
 void
 rte_fslmc_object_register(struct rte_dpaa2_object *object)
@@ -269,7 +360,7 @@ fslmc_get_group_id(const char *group_name,
 	ret = rte_vfio_get_group_num(SYSFS_FSL_MC_DEVICES,
 			group_name, groupid);
 	if (ret <= 0) {
-		DPAA2_BUS_ERR("Unable to find %s IOMMU group", group_name);
+		DPAA2_BUS_ERR("Find %s IOMMU group", group_name);
 		if (ret < 0)
 			return ret;
 
@@ -312,7 +403,7 @@ fslmc_vfio_open_group_fd(const char *group_name)
 	/* if we're in a secondary process, request group fd from the primary
 	 * process via mp channel.
 	 */
-	p->req = SOCKET_REQ_GROUP;
+	p->req = FSLMC_VFIO_SOCKET_REQ_GROUP;
 	p->group_num = iommu_group_num;
 	rte_strscpy(mp_req.name, FSLMC_VFIO_MP, sizeof(mp_req.name));
 	mp_req.len_param = sizeof(*p);
@@ -404,7 +495,7 @@ fslmc_vfio_open_container_fd(void)
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR);
 		if (vfio_container_fd < 0) {
-			DPAA2_BUS_ERR("Cannot open VFIO container(%s), err(%d)",
+			DPAA2_BUS_ERR("Open VFIO container(%s), err(%d)",
 				VFIO_CONTAINER_PATH, vfio_container_fd);
 			ret = vfio_container_fd;
 			goto err_exit;
@@ -413,7 +504,7 @@ fslmc_vfio_open_container_fd(void)
 		/* check VFIO API version */
 		ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION);
 		if (ret < 0) {
-			DPAA2_BUS_ERR("Could not get VFIO API version(%d)",
+			DPAA2_BUS_ERR("Get VFIO API version(%d)",
 				ret);
 		} else if (ret != VFIO_API_VERSION) {
 			DPAA2_BUS_ERR("Unsupported VFIO API version(%d)",
@@ -427,7 +518,7 @@ fslmc_vfio_open_container_fd(void)
 
 		ret = fslmc_vfio_check_extensions(vfio_container_fd);
 		if (ret) {
-			DPAA2_BUS_ERR("No supported IOMMU extensions found(%d)",
+			DPAA2_BUS_ERR("Unsupported IOMMU extensions found(%d)",
 				ret);
 			close(vfio_container_fd);
 			goto err_exit;
@@ -439,7 +530,7 @@ fslmc_vfio_open_container_fd(void)
 	 * if we're in a secondary process, request container fd from the
 	 * primary process via mp channel
 	 */
-	p->req = SOCKET_REQ_CONTAINER;
+	p->req = FSLMC_VFIO_SOCKET_REQ_CONTAINER;
 	rte_strscpy(mp_req.name, FSLMC_VFIO_MP, sizeof(mp_req.name));
 	mp_req.len_param = sizeof(*p);
 	mp_req.num_fds = 0;
@@ -469,7 +560,7 @@ fslmc_vfio_open_container_fd(void)
 err_exit:
 	if (mp_reply.msgs)
 		free(mp_reply.msgs);
-	DPAA2_BUS_ERR("Cannot request container fd err(%d)", ret);
+	DPAA2_BUS_ERR("Open container fd err(%d)", ret);
 	return ret;
 }
 
@@ -502,17 +593,19 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 	struct rte_mp_msg reply;
 	struct vfio_mp_param *r = (void *)reply.param;
 	const struct vfio_mp_param *m = (const void *)msg->param;
+	struct fslmc_mem_param *map;
 
 	if (msg->len_param != sizeof(*m)) {
-		DPAA2_BUS_ERR("fslmc vfio received invalid message!");
+		DPAA2_BUS_ERR("Invalid msg size(%d) for req(%d)",
+			msg->len_param, m->req);
 		return -EINVAL;
 	}
 
 	memset(&reply, 0, sizeof(reply));
 
 	switch (m->req) {
-	case SOCKET_REQ_GROUP:
-		r->req = SOCKET_REQ_GROUP;
+	case FSLMC_VFIO_SOCKET_REQ_GROUP:
+		r->req = FSLMC_VFIO_SOCKET_REQ_GROUP;
 		r->group_num = m->group_num;
 		fd = fslmc_vfio_group_fd_by_id(m->group_num);
 		if (fd < 0) {
@@ -526,9 +619,10 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 			reply.num_fds = 1;
 			reply.fds[0] = fd;
 		}
+		reply.len_param = sizeof(*r);
 		break;
-	case SOCKET_REQ_CONTAINER:
-		r->req = SOCKET_REQ_CONTAINER;
+	case FSLMC_VFIO_SOCKET_REQ_CONTAINER:
+		r->req = FSLMC_VFIO_SOCKET_REQ_CONTAINER;
 		fd = fslmc_vfio_container_fd();
 		if (fd <= 0) {
 			r->result = SOCKET_ERR;
@@ -537,20 +631,73 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 			reply.num_fds = 1;
 			reply.fds[0] = fd;
 		}
+		reply.len_param = sizeof(*r);
+		break;
+	case FSLMC_VFIO_SOCKET_REQ_MEM:
+		map = (void *)reply.param;
+		r = &map->mp_param;
+		r->req = FSLMC_VFIO_SOCKET_REQ_MEM;
+		r->result = SOCKET_OK;
+		rte_memcpy(&map->memsegs, &fslmc_memsegs,
+			sizeof(struct fslmc_dmaseg_list));
+		rte_memcpy(&map->iosegs, &fslmc_iosegs,
+			sizeof(struct fslmc_dmaseg_list));
+		map->mem_va2iova = fslmc_mem_va2iova;
+		map->mem_map_num = fslmc_mem_map_num;
+		reply.len_param = sizeof(struct fslmc_mem_param);
 		break;
 	default:
-		DPAA2_BUS_ERR("fslmc vfio received invalid message(%08x)",
+		DPAA2_BUS_ERR("VFIO received invalid message(%08x)",
 			m->req);
 		return -ENOTSUP;
 	}
 
 	rte_strscpy(reply.name, FSLMC_VFIO_MP, sizeof(reply.name));
-	reply.len_param = sizeof(*r);
 	ret = rte_mp_reply(&reply, peer);
 
 	return ret;
 }
 
+static int
+fslmc_vfio_mp_sync_mem_req(void)
+{
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	int ret = 0;
+	struct vfio_mp_param *mp_param;
+	struct fslmc_mem_param *mem_rsp;
+
+	mp_param = (void *)mp_req.param;
+	memset(&mp_req, 0, sizeof(struct rte_mp_msg));
+	mp_param->req = FSLMC_VFIO_SOCKET_REQ_MEM;
+	rte_strscpy(mp_req.name, FSLMC_VFIO_MP, sizeof(mp_req.name));
+	mp_req.len_param = sizeof(struct vfio_mp_param);
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+		mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		mem_rsp = (struct fslmc_mem_param *)mp_rep->param;
+		if (mem_rsp->mp_param.result == SOCKET_OK) {
+			rte_memcpy(&fslmc_memsegs,
+				&mem_rsp->memsegs,
+				sizeof(struct fslmc_dmaseg_list));
+			rte_memcpy(&fslmc_iosegs,
+				&mem_rsp->iosegs,
+				sizeof(struct fslmc_dmaseg_list));
+			fslmc_mem_va2iova = mem_rsp->mem_va2iova;
+			fslmc_mem_map_num = mem_rsp->mem_map_num;
+		} else {
+			DPAA2_BUS_ERR("Bad MEM SEG");
+			ret = -EINVAL;
+		}
+	} else {
+		ret = -EINVAL;
+	}
+	free(mp_reply.msgs);
+
+	return ret;
+}
+
 static int
 fslmc_vfio_mp_sync_setup(void)
 {
@@ -561,6 +708,10 @@ fslmc_vfio_mp_sync_setup(void)
 			fslmc_vfio_mp_primary);
 		if (ret && rte_errno != ENOTSUP)
 			return ret;
+	} else {
+		ret = fslmc_vfio_mp_sync_mem_req();
+		if (ret)
+			return ret;
 	}
 
 	return 0;
@@ -581,30 +732,34 @@ vfio_connect_container(int vfio_container_fd,
 
 	iommu_type = fslmc_vfio_iommu_type(vfio_group_fd);
 	if (iommu_type < 0) {
-		DPAA2_BUS_ERR("Failed to get iommu type(%d)",
-			iommu_type);
+		DPAA2_BUS_ERR("Get iommu type(%d)", iommu_type);
 
 		return iommu_type;
 	}
 
 	/* Check whether support for SMMU type IOMMU present or not */
-	if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type)) {
-		/* Connect group to container */
-		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
+	ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type);
+	if (ret <= 0) {
+		DPAA2_BUS_ERR("Unsupported IOMMU type(%d) ret(%d), err(%d)",
+			iommu_type, ret, -errno);
+		return -EINVAL;
+	}
+
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
 			&vfio_container_fd);
-		if (ret) {
-			DPAA2_BUS_ERR("Failed to setup group container");
-			return -errno;
-		}
+	if (ret) {
+		DPAA2_BUS_ERR("Set group container ret(%d), err(%d)",
+			ret, -errno);
 
-		ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
-		if (ret) {
-			DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-			return -errno;
-		}
-	} else {
-		DPAA2_BUS_ERR("No supported IOMMU available");
-		return -EINVAL;
+		return ret;
+	}
+
+	ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
+	if (ret) {
+		DPAA2_BUS_ERR("Set iommu ret(%d), err(%d)",
+			ret, -errno);
+
+		return ret;
 	}
 
 	return fslmc_vfio_connect_container(vfio_group_fd);
@@ -625,11 +780,11 @@ static int vfio_map_irq_region(void)
 
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected");
@@ -639,8 +794,8 @@ static int vfio_map_irq_region(void)
 	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
 		PROT_READ, MAP_SHARED, fd, 0x6030000);
 	if (vaddr == MAP_FAILED) {
-		DPAA2_BUS_INFO("Unable to map region (errno = %d)", errno);
-		return -errno;
+		DPAA2_BUS_ERR("Unable to map region (errno = %d)", errno);
+		return -ENOMEM;
 	}
 
 	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
@@ -650,141 +805,200 @@ static int vfio_map_irq_region(void)
 		return 0;
 
 	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
-	return -errno;
-}
-
-static int fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
-static int fslmc_unmap_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
-
-static void
-fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
-	size_t len, void *arg __rte_unused)
-{
-	struct rte_memseg_list *msl;
-	struct rte_memseg *ms;
-	size_t cur_len = 0, map_len = 0;
-	uint64_t virt_addr;
-	rte_iova_t iova_addr;
-	int ret;
-
-	msl = rte_mem_virt2memseg_list(addr);
-
-	while (cur_len < len) {
-		const void *va = RTE_PTR_ADD(addr, cur_len);
-
-		ms = rte_mem_virt2memseg(va, msl);
-		iova_addr = ms->iova;
-		virt_addr = ms->addr_64;
-		map_len = ms->len;
-
-		DPAA2_BUS_DEBUG("Request for %s, va=%p, "
-				"virt_addr=0x%" PRIx64 ", "
-				"iova=0x%" PRIx64 ", map_len=%zu",
-				type == RTE_MEM_EVENT_ALLOC ?
-					"alloc" : "dealloc",
-				va, virt_addr, iova_addr, map_len);
-
-		/* iova_addr may be set to RTE_BAD_IOVA */
-		if (iova_addr == RTE_BAD_IOVA) {
-			DPAA2_BUS_DEBUG("Segment has invalid iova, skipping");
-			cur_len += map_len;
-			continue;
-		}
-
-		if (type == RTE_MEM_EVENT_ALLOC)
-			ret = fslmc_map_dma(virt_addr, iova_addr, map_len);
-		else
-			ret = fslmc_unmap_dma(virt_addr, iova_addr, map_len);
-
-		if (ret != 0) {
-			DPAA2_BUS_ERR("DMA Mapping/Unmapping failed. "
-					"Map=%d, addr=%p, len=%zu, err:(%d)",
-					type, va, map_len, ret);
-			return;
-		}
-
-		cur_len += map_len;
-	}
-
-	if (type == RTE_MEM_EVENT_ALLOC)
-		DPAA2_BUS_DEBUG("Total Mapped: addr=%p, len=%zu",
-				addr, len);
-	else
-		DPAA2_BUS_DEBUG("Total Unmapped: addr=%p, len=%zu",
-				addr, len);
+	return ret;
 }
 
 static int
-fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr,
-	size_t len)
+fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len)
 {
 	struct vfio_iommu_type1_dma_map dma_map = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
 		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
 	};
-	int ret, fd;
+	int ret, fd, is_io = 0;
 	const char *group_name = fslmc_vfio_get_group_name();
+	struct fslmc_dmaseg *dmaseg = NULL;
+	uint64_t phy = 0;
+
+	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
+		if (vaddr != iovaddr) {
+			DPAA2_BUS_ERR("IOVA:VA(%" PRIx64 " : %" PRIx64 ") %s",
+				iovaddr, vaddr,
+				"should be 1:1 for VA mode");
 
+			return -EINVAL;
+		}
+	}
+
+	phy = rte_mem_virt2phy((const void *)(uintptr_t)vaddr);
+	if (phy == RTE_BAD_IOVA) {
+		phy = fslmc_io_virt2phy((const void *)(uintptr_t)vaddr);
+		if (phy == RTE_BAD_IOVA)
+			return -ENOMEM;
+		is_io = 1;
+	} else if (fslmc_mem_va2iova != RTE_BAD_IOVA &&
+		fslmc_mem_va2iova != (iovaddr - vaddr)) {
+		DPAA2_BUS_WARN("Multiple MEM PA<->VA conversions.");
+	}
+	DPAA2_BUS_DEBUG("%s(%zu): VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+		is_io ? "DMA IO map size" : "DMA MEM map size",
+		len, vaddr, iovaddr, phy);
+
+	if (is_io)
+		goto io_mapping_check;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (!((vaddr + len) <= dmaseg->vaddr ||
+			(dmaseg->vaddr + dmaseg->size) <= vaddr)) {
+			DPAA2_BUS_ERR("MEM: New VA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				vaddr, vaddr + len);
+			DPAA2_BUS_ERR("MEM: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->vaddr,
+				dmaseg->vaddr + dmaseg->size);
+			return -EEXIST;
+		}
+		if (!((iovaddr + len) <= dmaseg->iova ||
+			(dmaseg->iova + dmaseg->size) <= iovaddr)) {
+			DPAA2_BUS_ERR("MEM: New IOVA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				iovaddr, iovaddr + len);
+			DPAA2_BUS_ERR("MEM: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->iova,
+				dmaseg->iova + dmaseg->size);
+			return -EEXIST;
+		}
+	}
+	goto start_mapping;
+
+io_mapping_check:
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if (!((vaddr + len) <= dmaseg->vaddr ||
+			(dmaseg->vaddr + dmaseg->size) <= vaddr)) {
+			DPAA2_BUS_ERR("IO: New VA Range (%" PRIx64 " ~ %" PRIx64 ")",
+				vaddr, vaddr + len);
+			DPAA2_BUS_ERR("IO: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->vaddr,
+				dmaseg->vaddr + dmaseg->size);
+			return -EEXIST;
+		}
+		if (!((iovaddr + len) <= dmaseg->iova ||
+			(dmaseg->iova + dmaseg->size) <= iovaddr)) {
+			DPAA2_BUS_ERR("IO: New IOVA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				iovaddr, iovaddr + len);
+			DPAA2_BUS_ERR("IO: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->iova,
+				dmaseg->iova + dmaseg->size);
+			return -EEXIST;
+		}
+	}
+
+start_mapping:
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
-		return 0;
+		if (phy != iovaddr) {
+			DPAA2_BUS_ERR("IOVA should support with IOMMU");
+			return -EIO;
+		}
+		goto end_mapping;
 	}
 
 	dma_map.size = len;
 	dma_map.vaddr = vaddr;
 	dma_map.iova = iovaddr;
 
-#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-	if (vaddr != iovaddr) {
-		DPAA2_BUS_WARN("vaddr(0x%"PRIx64") != iovaddr(0x%"PRIx64")",
-			vaddr, iovaddr);
-	}
-#endif
-
 	/* SET DMA MAP for IOMMU */
 	if (!fslmc_vfio_container_connected(fd)) {
-		DPAA2_BUS_ERR("Container is not connected ");
+		DPAA2_BUS_ERR("Container is not connected");
 		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("--> Map address: 0x%"PRIx64", size: %"PRIu64"",
-			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.size);
 	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA,
 		&dma_map);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO_IOMMU_MAP_DMA API(errno = %d)",
-				errno);
+		DPAA2_BUS_ERR("%s(%d) VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+			is_io ? "DMA IO map err" : "DMA MEM map err",
+			errno, vaddr, iovaddr, phy);
 		return ret;
 	}
 
+end_mapping:
+	dmaseg = malloc(sizeof(struct fslmc_dmaseg));
+	if (!dmaseg) {
+		DPAA2_BUS_ERR("DMA segment malloc failed!");
+		return -ENOMEM;
+	}
+	dmaseg->vaddr = vaddr;
+	dmaseg->iova = iovaddr;
+	dmaseg->size = len;
+	if (is_io) {
+		TAILQ_INSERT_TAIL(&fslmc_iosegs, dmaseg, next);
+	} else {
+		fslmc_mem_map_num++;
+		if (fslmc_mem_map_num == 1)
+			fslmc_mem_va2iova = iovaddr - vaddr;
+		else
+			fslmc_mem_va2iova = RTE_BAD_IOVA;
+		TAILQ_INSERT_TAIL(&fslmc_memsegs, dmaseg, next);
+	}
+	DPAA2_BUS_LOG(NOTICE,
+		"%s(%zx): VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+		is_io ? "DMA I/O map size" : "DMA MEM map size",
+		len, vaddr, iovaddr, phy);
+
 	return 0;
 }
 
 static int
-fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
+fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr, size_t len)
 {
 	struct vfio_iommu_type1_dma_unmap dma_unmap = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_unmap),
 		.flags = 0,
 	};
-	int ret, fd;
+	int ret, fd, is_io = 0;
 	const char *group_name = fslmc_vfio_get_group_name();
+	struct fslmc_dmaseg *dmaseg = NULL;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (((vaddr && dmaseg->vaddr == vaddr) || !vaddr) &&
+			dmaseg->iova == iovaddr &&
+			dmaseg->size == len) {
+			is_io = 0;
+			break;
+		}
+	}
+
+	if (!dmaseg) {
+		TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+			if (((vaddr && dmaseg->vaddr == vaddr) || !vaddr) &&
+				dmaseg->iova == iovaddr &&
+				dmaseg->size == len) {
+				is_io = 1;
+				break;
+			}
+		}
+	}
+
+	if (!dmaseg) {
+		DPAA2_BUS_ERR("IOVA(%" PRIx64 ") with length(%zx) not mapped",
+			iovaddr, len);
+		return 0;
+	}
 
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
@@ -792,7 +1006,7 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 	}
 
 	dma_unmap.size = len;
-	dma_unmap.iova = vaddr;
+	dma_unmap.iova = iovaddr;
 
 	/* SET DMA MAP for IOMMU */
 	if (!fslmc_vfio_container_connected(fd)) {
@@ -800,19 +1014,164 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("--> Unmap address: 0x%"PRIx64", size: %"PRIu64"",
-			(uint64_t)dma_unmap.iova, (uint64_t)dma_unmap.size);
 	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_UNMAP_DMA,
 		&dma_unmap);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO_IOMMU_UNMAP_DMA API(errno = %d)",
-				errno);
-		return -1;
+		DPAA2_BUS_ERR("DMA un-map IOVA(%" PRIx64 " ~ %" PRIx64 ") err(%d)",
+			iovaddr, iovaddr + len, errno);
+		return ret;
 	}
 
+	if (is_io) {
+		TAILQ_REMOVE(&fslmc_iosegs, dmaseg, next);
+	} else {
+		TAILQ_REMOVE(&fslmc_memsegs, dmaseg, next);
+		fslmc_mem_map_num--;
+		if (TAILQ_EMPTY(&fslmc_memsegs))
+			fslmc_mem_va2iova = RTE_BAD_IOVA;
+	}
+
+	free(dmaseg);
+
 	return 0;
 }
 
+uint64_t
+rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
+	uint64_t size)
+{
+	struct fslmc_dmaseg *dmaseg;
+	uint64_t va;
+
+	va = (uint64_t)vaddr;
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (va >= dmaseg->vaddr &&
+			(va + size) < (dmaseg->vaddr + dmaseg->size)) {
+			return dmaseg->iova + va - dmaseg->vaddr;
+		}
+	}
+
+	return RTE_BAD_IOVA;
+}
+
+void *
+rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
+	uint64_t size)
+{
+	struct fslmc_dmaseg *dmaseg;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (iova >= dmaseg->iova &&
+			(iova + size) < (dmaseg->iova + dmaseg->size))
+			return (void *)((uintptr_t)dmaseg->vaddr
+				+ (uintptr_t)(iova - dmaseg->iova));
+	}
+
+	return NULL;
+}
+
+__rte_hot uint64_t
+rte_fslmc_mem_vaddr_to_iova(void *vaddr)
+{
+	if (likely(fslmc_mem_va2iova != RTE_BAD_IOVA))
+		return (uint64_t)vaddr + fslmc_mem_va2iova;
+
+	return rte_fslmc_cold_mem_vaddr_to_iova(vaddr, 0);
+}
+
+__rte_hot void *
+rte_fslmc_mem_iova_to_vaddr(uint64_t iova)
+{
+	if (likely(fslmc_mem_va2iova != RTE_BAD_IOVA))
+		return (void *)((uintptr_t)iova - (uintptr_t)fslmc_mem_va2iova);
+
+	return rte_fslmc_cold_mem_iova_to_vaddr(iova, 0);
+}
+
+uint64_t
+rte_fslmc_io_vaddr_to_iova(void *vaddr)
+{
+	struct fslmc_dmaseg *dmaseg = NULL;
+	uint64_t va = (uint64_t)vaddr;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if ((va >= dmaseg->vaddr) &&
+			va < dmaseg->vaddr + dmaseg->size)
+			return dmaseg->iova + va - dmaseg->vaddr;
+	}
+
+	return RTE_BAD_IOVA;
+}
+
+void *
+rte_fslmc_io_iova_to_vaddr(uint64_t iova)
+{
+	struct fslmc_dmaseg *dmaseg = NULL;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if ((iova >= dmaseg->iova) &&
+			iova < dmaseg->iova + dmaseg->size)
+			return (void *)((uintptr_t)dmaseg->vaddr
+				+ (uintptr_t)(iova - dmaseg->iova));
+	}
+
+	return NULL;
+}
+
+static void
+fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
+	size_t len, void *arg __rte_unused)
+{
+	struct rte_memseg_list *msl;
+	struct rte_memseg *ms;
+	size_t cur_len = 0, map_len = 0;
+	uint64_t virt_addr;
+	rte_iova_t iova_addr;
+	int ret;
+
+	msl = rte_mem_virt2memseg_list(addr);
+
+	while (cur_len < len) {
+		const void *va = RTE_PTR_ADD(addr, cur_len);
+
+		ms = rte_mem_virt2memseg(va, msl);
+		iova_addr = ms->iova;
+		virt_addr = ms->addr_64;
+		map_len = ms->len;
+
+		DPAA2_BUS_DEBUG("%s, va=%p, virt=%" PRIx64 ", iova=%" PRIx64 ", len=%zu",
+			type == RTE_MEM_EVENT_ALLOC ? "alloc" : "dealloc",
+			va, virt_addr, iova_addr, map_len);
+
+		/* iova_addr may be set to RTE_BAD_IOVA */
+		if (iova_addr == RTE_BAD_IOVA) {
+			DPAA2_BUS_DEBUG("Segment has invalid iova, skipping");
+			cur_len += map_len;
+			continue;
+		}
+
+		if (type == RTE_MEM_EVENT_ALLOC)
+			ret = fslmc_map_dma(virt_addr, iova_addr, map_len);
+		else
+			ret = fslmc_unmap_dma(virt_addr, iova_addr, map_len);
+
+		if (ret != 0) {
+			DPAA2_BUS_ERR("%s: Map=%d, addr=%p, len=%zu, err:(%d)",
+				type == RTE_MEM_EVENT_ALLOC ?
+				"DMA Mapping failed. " :
+				"DMA Unmapping failed. ",
+				type, va, map_len, ret);
+			return;
+		}
+
+		cur_len += map_len;
+	}
+
+	DPAA2_BUS_DEBUG("Total %s: addr=%p, len=%zu",
+		type == RTE_MEM_EVENT_ALLOC ? "Mapped" : "Unmapped",
+		addr, len);
+}
+
 static int
 fslmc_dmamap_seg(const struct rte_memseg_list *msl __rte_unused,
 		const struct rte_memseg *ms, void *arg)
@@ -843,7 +1202,7 @@ rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
 int
 rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
 {
-	return fslmc_unmap_dma(iova, 0, size);
+	return fslmc_unmap_dma(0, iova, size);
 }
 
 int rte_fslmc_vfio_dmamap(void)
@@ -853,9 +1212,10 @@ int rte_fslmc_vfio_dmamap(void)
 	/* Lock before parsing and registering callback to memory subsystem */
 	rte_mcfg_mem_read_lock();
 
-	if (rte_memseg_walk(fslmc_dmamap_seg, &i) < 0) {
+	ret = rte_memseg_walk(fslmc_dmamap_seg, &i);
+	if (ret) {
 		rte_mcfg_mem_read_unlock();
-		return -1;
+		return ret;
 	}
 
 	ret = rte_mem_event_callback_register("fslmc_memevent_clb",
@@ -894,6 +1254,14 @@ fslmc_vfio_setup_device(const char *dev_addr,
 	const char *group_name = fslmc_vfio_get_group_name();
 
 	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd <= 0) {
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, vfio_group_fd);
+		if (vfio_group_fd < 0)
+			return vfio_group_fd;
+		return -EIO;
+	}
+
 	if (!fslmc_vfio_container_connected(vfio_group_fd)) {
 		DPAA2_BUS_ERR("Container is not connected");
 		return -EIO;
@@ -1002,8 +1370,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
 	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
 	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 	if (ret)
-		DPAA2_BUS_ERR(
-			"Error disabling dpaa2 interrupts for fd %d",
+		DPAA2_BUS_ERR("Error disabling dpaa2 interrupts for fd %d",
 			rte_intr_fd_get(intr_handle));
 
 	return ret;
@@ -1028,7 +1395,7 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 		if (ret < 0) {
 			DPAA2_BUS_ERR("Cannot get IRQ(%d) info, error %i (%s)",
 				      i, errno, strerror(errno));
-			return -1;
+			return ret;
 		}
 
 		/* if this vector cannot be used with eventfd,
@@ -1042,8 +1409,8 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 		fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
 		if (fd < 0) {
 			DPAA2_BUS_ERR("Cannot set up eventfd, error %i (%s)",
-				      errno, strerror(errno));
-			return -1;
+				errno, strerror(errno));
+			return fd;
 		}
 
 		if (rte_intr_fd_set(intr_handle, fd))
@@ -1059,7 +1426,7 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 	}
 
 	/* if we're here, we haven't found a suitable interrupt vector */
-	return -1;
+	return -EIO;
 }
 
 static void
@@ -1233,6 +1600,13 @@ fslmc_vfio_close_group(void)
 	const char *group_name = fslmc_vfio_get_group_name();
 
 	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd <= 0) {
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, vfio_group_fd);
+		if (vfio_group_fd < 0)
+			return vfio_group_fd;
+		return -EIO;
+	}
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
 		if (dev->device.devargs &&
@@ -1324,7 +1698,7 @@ fslmc_vfio_process_group(void)
 				ret = fslmc_process_mcp(dev);
 				if (ret) {
 					DPAA2_BUS_ERR("Unable to map MC Portal");
-					return -1;
+					return ret;
 				}
 				found_mportal = 1;
 			}
@@ -1341,7 +1715,7 @@ fslmc_vfio_process_group(void)
 	/* Cannot continue if there is not even a single mportal */
 	if (!found_mportal) {
 		DPAA2_BUS_ERR("No MC Portal device found. Not continuing");
-		return -1;
+		return -EIO;
 	}
 
 	/* Search for DPRC device next as it updates endpoint of
@@ -1353,7 +1727,7 @@ fslmc_vfio_process_group(void)
 			ret = fslmc_process_iodevices(dev);
 			if (ret) {
 				DPAA2_BUS_ERR("Unable to process dprc");
-				return -1;
+				return ret;
 			}
 			TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
 		}
@@ -1410,7 +1784,7 @@ fslmc_vfio_process_group(void)
 			if (ret) {
 				DPAA2_BUS_DEBUG("Dev (%s) init failed",
 						dev->device.name);
-				return -1;
+				return ret;
 			}
 
 			break;
@@ -1434,7 +1808,7 @@ fslmc_vfio_process_group(void)
 			if (ret) {
 				DPAA2_BUS_DEBUG("Dev (%s) init failed",
 						dev->device.name);
-				return -1;
+				return ret;
 			}
 
 			break;
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index bc36607e64..85e4c16c03 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2020 NXP
+ *   Copyright 2016,2020-2023 NXP
  *
  */
 
@@ -28,7 +28,6 @@
 #include "portal/dpaa2_hw_pvt.h"
 #include "portal/dpaa2_hw_dpio.h"
 
-
 TAILQ_HEAD(dpbp_dev_list, dpaa2_dpbp_dev);
 static struct dpbp_dev_list dpbp_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpbp_dev_list); /*!< DPBP device list */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index c3f6e24139..954d59d123 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -340,9 +340,8 @@ dpaa2_affine_qbman_swp(void)
 		}
 		RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
 
-		DPAA2_BUS_INFO(
-			"DPAA Portal=%p (%d) is affined to thread %" PRIu64,
-			dpio_dev, dpio_dev->index, tid);
+		DPAA2_BUS_DEBUG("Portal[%d] is affined to thread %" PRIu64,
+			dpio_dev->index, tid);
 	}
 	return 0;
 }
@@ -362,9 +361,8 @@ dpaa2_affine_qbman_ethrx_swp(void)
 		}
 		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
 
-		DPAA2_BUS_INFO(
-			"DPAA Portal=%p (%d) is affined for eth rx to thread %"
-			PRIu64, dpio_dev, dpio_dev->index, tid);
+		DPAA2_BUS_DEBUG("Portal_eth_rx[%d] is affined to thread %" PRIu64,
+			dpio_dev->index, tid);
 	}
 	return 0;
 }
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 7407f8d38d..328e1e788a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -12,6 +12,7 @@
 #include <mc/fsl_mc_sys.h>
 
 #include <rte_compat.h>
+#include <dpaa2_hw_pvt.h>
 
 struct dpaa2_io_portal_t {
 	struct dpaa2_dpio_dev *dpio_dev;
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 4c30e6db18..74a1a8b2fa 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -14,6 +14,7 @@
 
 #include <mc/fsl_mc_sys.h>
 #include <fsl_qbman_portal.h>
+#include <bus_fslmc_driver.h>
 
 #ifndef false
 #define false      0
@@ -80,6 +81,8 @@
 #define DPAA2_PACKET_LAYOUT_ALIGN	64 /*changing from 256 */
 
 #define DPAA2_DPCI_MAX_QUEUES 2
+#define DPAA2_INVALID_FLOW_ID 0xffff
+#define DPAA2_INVALID_CGID 0xff
 
 struct dpaa2_queue;
 
@@ -366,83 +369,63 @@ enum qbman_fd_format {
  */
 #define DPAA2_EQ_RESP_ALWAYS		1
 
-/* Various structures representing contiguous memory maps */
-struct dpaa2_memseg {
-	TAILQ_ENTRY(dpaa2_memseg) next;
-	char *vaddr;
-	rte_iova_t iova;
-	size_t len;
-};
-
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-extern uint8_t dpaa2_virt_mode;
-static void *dpaa2_mem_ptov(phys_addr_t paddr) __rte_unused;
-
-static void *dpaa2_mem_ptov(phys_addr_t paddr)
+static inline uint64_t
+dpaa2_mem_va_to_iova(void *va)
 {
-	void *va;
-
-	if (dpaa2_virt_mode)
-		return (void *)(size_t)paddr;
-
-	va = (void *)dpaax_iova_table_get_va(paddr);
-	if (likely(va != NULL))
-		return va;
-
-	/* If not, Fallback to full memseg list searching */
-	va = rte_mem_iova2virt(paddr);
+	if (likely(rte_eal_iova_mode() == RTE_IOVA_VA))
+		return (uint64_t)va;
 
-	return va;
+	return rte_fslmc_mem_vaddr_to_iova(va);
 }
 
-static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr) __rte_unused;
-
-static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
+static inline void *
+dpaa2_mem_iova_to_va(uint64_t iova)
 {
-	const struct rte_memseg *memseg;
-
-	if (dpaa2_virt_mode)
-		return vaddr;
+	if (likely(rte_eal_iova_mode() == RTE_IOVA_VA))
+		return (void *)(uintptr_t)iova;
 
-	memseg = rte_mem_virt2memseg((void *)(uintptr_t)vaddr, NULL);
-	if (memseg)
-		return memseg->iova + RTE_PTR_DIFF(vaddr, memseg->addr);
-	return (size_t)NULL;
+	return rte_fslmc_mem_iova_to_vaddr(iova);
 }
 
-/**
- * When we are using Physical addresses as IO Virtual Addresses,
- * Need to call conversion routines dpaa2_mem_vtop & dpaa2_mem_ptov
- * wherever required.
- * These routines are called with help of below MACRO's
- */
-
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_iova)
-
-/**
- * macro to convert Virtual address to IOVA
- */
-#define DPAA2_VADDR_TO_IOVA(_vaddr) dpaa2_mem_vtop((size_t)(_vaddr))
-
-/**
- * macro to convert IOVA to Virtual address
- */
-#define DPAA2_IOVA_TO_VADDR(_iova) dpaa2_mem_ptov((size_t)(_iova))
-
-/**
- * macro to convert modify the memory containing IOVA to Virtual address
- */
+#define DPAA2_VADDR_TO_IOVA(_vaddr) \
+	dpaa2_mem_va_to_iova((void *)(uintptr_t)_vaddr)
+#define DPAA2_IOVA_TO_VADDR(_iova) \
+	dpaa2_mem_iova_to_va((uint64_t)_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type) \
-	{_mem = (_type)(dpaa2_mem_ptov((size_t)(_mem))); }
+	{_mem = (_type)DPAA2_IOVA_TO_VADDR(_mem); }
+
+#define DPAA2_VAMODE_VADDR_TO_IOVA(_vaddr) ((uint64_t)_vaddr)
+#define DPAA2_VAMODE_IOVA_TO_VADDR(_iova) ((void *)_iova)
+#define DPAA2_VAMODE_MODIFY_IOVA_TO_VADDR(_mem, _type) \
+	{_mem = (_type)(_mem); }
+
+#define DPAA2_PAMODE_VADDR_TO_IOVA(_vaddr) \
+	rte_fslmc_mem_vaddr_to_iova((void *)_vaddr)
+#define DPAA2_PAMODE_IOVA_TO_VADDR(_iova) \
+	rte_fslmc_mem_iova_to_vaddr((uint64_t)_iova)
+#define DPAA2_PAMODE_MODIFY_IOVA_TO_VADDR(_mem, _type) \
+	{_mem = (_type)rte_fslmc_mem_iova_to_vaddr(_mem); }
+
+static inline uint64_t
+dpaa2_mem_va_to_iova_check(void *va, uint64_t size)
+{
+	uint64_t iova = rte_fslmc_cold_mem_vaddr_to_iova(va, size);
 
-#else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
+	if (iova == RTE_BAD_IOVA)
+		return RTE_BAD_IOVA;
 
-#define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
-#define DPAA2_VADDR_TO_IOVA(_vaddr) (phys_addr_t)(_vaddr)
-#define DPAA2_IOVA_TO_VADDR(_iova) (void *)(_iova)
-#define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
+	/** Double check the iova is valid.*/
+	if (iova != rte_mem_virt2iova(va))
+		return RTE_BAD_IOVA;
+
+	return iova;
+}
 
-#endif /* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
+#define DPAA2_VADDR_TO_IOVA_AND_CHECK(_vaddr, size) \
+	dpaa2_mem_va_to_iova_check(_vaddr, size)
+#define DPAA2_IOVA_TO_VADDR_AND_CHECK(_iova, size) \
+	rte_fslmc_cold_mem_iova_to_vaddr(_iova, size)
 
 static inline
 int check_swp_active_dqs(uint16_t dpio_index)
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index b49bc0a62c..2c36895285 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -24,7 +24,6 @@ INTERNAL {
 	dpaa2_seqn_dynfield_offset;
 	dpaa2_seqn;
 	dpaa2_svr_family;
-	dpaa2_virt_mode;
 	dpbp_disable;
 	dpbp_enable;
 	dpbp_get_attributes;
@@ -119,6 +118,12 @@ INTERNAL {
 	rte_fslmc_object_register;
 	rte_global_active_dqs_list;
 	rte_fslmc_vfio_mem_dmaunmap;
+	rte_fslmc_cold_mem_vaddr_to_iova;
+	rte_fslmc_cold_mem_iova_to_vaddr;
+	rte_fslmc_mem_vaddr_to_iova;
+	rte_fslmc_mem_iova_to_vaddr;
+	rte_fslmc_io_vaddr_to_iova;
+	rte_fslmc_io_iova_to_vaddr;
 
 	local: *;
 };
diff --git a/drivers/dma/dpaa2/dpaa2_qdma.c b/drivers/dma/dpaa2/dpaa2_qdma.c
index 5780e49297..b2cf074c7d 100644
--- a/drivers/dma/dpaa2/dpaa2_qdma.c
+++ b/drivers/dma/dpaa2/dpaa2_qdma.c
@@ -10,6 +10,7 @@
 
 #include <mc/fsl_dpdmai.h>
 
+#include <dpaa2_hw_dpio.h>
 #include "rte_pmd_dpaa2_qdma.h"
 #include "dpaa2_qdma.h"
 #include "dpaa2_qdma_logs.h"
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 17/42] bus/fslmc: remove VFIO IRQ mapping
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (15 preceding siblings ...)
  2024-10-22 19:12         ` [v4 16/42] bus/fslmc: dynamic IOVA mode configuration vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 18/42] bus/fslmc: create dpaa2 device with its object vanshika.shukla
                           ` (25 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Remove unused GITS translator VFIO mapping.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 50 ----------------------------------
 1 file changed, 50 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index b0e7299bda..b48c7843d5 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -49,7 +49,6 @@
 static struct fslmc_vfio_container s_vfio_container;
 /* Currently we only support single group/process. */
 static const char *fslmc_group; /* dprc.x*/
-static uint32_t *msi_intr_vaddr;
 static void *(*rte_mcp_ptr_list);
 
 struct fslmc_dmaseg {
@@ -765,49 +764,6 @@ vfio_connect_container(int vfio_container_fd,
 	return fslmc_vfio_connect_container(vfio_group_fd);
 }
 
-static int vfio_map_irq_region(void)
-{
-	int ret, fd;
-	unsigned long *vaddr = NULL;
-	struct vfio_iommu_type1_dma_map map = {
-		.argsz = sizeof(map),
-		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
-		.vaddr = 0x6030000,
-		.iova = 0x6030000,
-		.size = 0x1000,
-	};
-	const char *group_name = fslmc_vfio_get_group_name();
-
-	fd = fslmc_vfio_group_fd_by_name(group_name);
-	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
-			__func__, group_name, fd);
-		if (fd < 0)
-			return fd;
-		return -EIO;
-	}
-	if (!fslmc_vfio_container_connected(fd)) {
-		DPAA2_BUS_ERR("Container is not connected");
-		return -EIO;
-	}
-
-	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
-		PROT_READ, MAP_SHARED, fd, 0x6030000);
-	if (vaddr == MAP_FAILED) {
-		DPAA2_BUS_ERR("Unable to map region (errno = %d)", errno);
-		return -ENOMEM;
-	}
-
-	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
-	map.vaddr = (unsigned long)vaddr;
-	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA, &map);
-	if (!ret)
-		return 0;
-
-	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
-	return ret;
-}
-
 static int
 fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len)
 {
@@ -1229,12 +1185,6 @@ int rte_fslmc_vfio_dmamap(void)
 
 	DPAA2_BUS_DEBUG("Total %d segments found.", i);
 
-	/* TODO - This is a W.A. as VFIO currently does not add the mapping of
-	 * the interrupt region to SMMU. This should be removed once the
-	 * support is added in the Kernel.
-	 */
-	vfio_map_irq_region();
-
 	/* Existing segments have been mapped and memory callback for hotplug
 	 * has been installed.
 	 */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 18/42] bus/fslmc: create dpaa2 device with its object
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (16 preceding siblings ...)
  2024-10-22 19:12         ` [v4 17/42] bus/fslmc: remove VFIO IRQ mapping vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 19/42] bus/fslmc: fix coverity issue vanshika.shukla
                           ` (24 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Create the dpaa2 device with its object instead of only the object ID,
and assign each dpaa2 object its parent container; a sketch of the
reworked create callback follows below.
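
A hedged sketch of a create callback under the new prototype (the
driver name is illustrative; the structure fields come from this
patch):

#include <rte_common.h>
#include <bus_fslmc_driver.h>

/* Illustrative only: the object ID is now read from the device
 * itself, and the parent DPRC is reachable through obj->container.
 */
static int
example_create_device(int vdev_fd __rte_unused,
	struct vfio_device_info *obj_info __rte_unused,
	struct rte_dpaa2_device *obj)
{
	int object_id = obj->object_id;

	RTE_SET_USED(object_id);
	return 0;
}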

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     | 39 ++++++++++++------------
 drivers/bus/fslmc/fslmc_vfio.c           |  3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c |  6 ++--
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c |  8 ++---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c |  6 ++--
 drivers/bus/fslmc/portal/dpaa2_hw_dprc.c |  8 +++--
 drivers/event/dpaa2/dpaa2_hw_dpcon.c     |  8 ++---
 drivers/net/dpaa2/dpaa2_mux.c            |  6 ++--
 drivers/net/dpaa2/dpaa2_ptp.c            |  8 ++---
 9 files changed, 47 insertions(+), 45 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index ba3774823b..777ab24c10 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -89,25 +89,6 @@ enum rte_dpaa2_dev_type {
 	DPAA2_DEVTYPE_MAX,
 };
 
-TAILQ_HEAD(rte_dpaa2_object_list, rte_dpaa2_object);
-
-typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
-				      struct vfio_device_info *obj_info,
-				      int object_id);
-
-typedef void (*rte_dpaa2_obj_close_t)(int object_id);
-
-/**
- * A structure describing a DPAA2 object.
- */
-struct rte_dpaa2_object {
-	TAILQ_ENTRY(rte_dpaa2_object) next; /**< Next in list. */
-	const char *name;                   /**< Name of Object. */
-	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
-	rte_dpaa2_obj_create_t create;
-	rte_dpaa2_obj_close_t close;
-};
-
 /**
  * A structure describing a DPAA2 device.
  */
@@ -123,6 +104,7 @@ struct rte_dpaa2_device {
 	enum rte_dpaa2_dev_type dev_type;   /**< Device Type */
 	uint16_t object_id;                 /**< DPAA2 Object ID */
 	enum rte_dpaa2_dev_type ep_dev_type;   /**< Endpoint Device Type */
+	struct dpaa2_dprc_dev *container;
 	uint16_t ep_object_id;                 /**< Endpoint DPAA2 Object ID */
 	char ep_name[RTE_DEV_NAME_MAX_LEN];
 	struct rte_intr_handle *intr_handle; /**< Interrupt handle */
@@ -130,10 +112,29 @@ struct rte_dpaa2_device {
 	char name[FSLMC_OBJECT_MAX_LEN];    /**< DPAA2 Object name*/
 };
 
+typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
+				      struct vfio_device_info *obj_info,
+				      struct rte_dpaa2_device *dev);
+
+typedef void (*rte_dpaa2_obj_close_t)(int object_id);
+
 typedef int (*rte_dpaa2_probe_t)(struct rte_dpaa2_driver *dpaa2_drv,
 				 struct rte_dpaa2_device *dpaa2_dev);
 typedef int (*rte_dpaa2_remove_t)(struct rte_dpaa2_device *dpaa2_dev);
 
+TAILQ_HEAD(rte_dpaa2_object_list, rte_dpaa2_object);
+
+/**
+ * A structure describing a DPAA2 object.
+ */
+struct rte_dpaa2_object {
+	TAILQ_ENTRY(rte_dpaa2_object) next; /**< Next in list. */
+	const char *name;                   /**< Name of Object. */
+	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
+	rte_dpaa2_obj_create_t create;
+	rte_dpaa2_obj_close_t close;
+};
+
 /**
  * A structure describing a DPAA2 driver.
  */
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index b48c7843d5..9d834f293a 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1465,8 +1465,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
 	case DPAA2_DPRC:
 		TAILQ_FOREACH(object, &dpaa2_obj_list, next) {
 			if (dev->dev_type == object->dev_type)
-				object->create(dev_fd, &device_info,
-					       dev->object_id);
+				object->create(dev_fd, &device_info, dev);
 			else
 				continue;
 		}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index 85e4c16c03..0ca3b2b2e4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -47,11 +47,11 @@ static struct dpaa2_dpbp_dev *get_dpbp_from_id(uint32_t dpbp_id)
 
 static int
 dpaa2_create_dpbp_device(int vdev_fd __rte_unused,
-			 struct vfio_device_info *obj_info __rte_unused,
-			 int dpbp_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpbp_dev *dpbp_node;
-	int ret;
+	int ret, dpbp_id = obj->object_id;
 	static int register_once;
 
 	/* Allocate DPAA2 dpbp handle */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index 99f2147ccb..9d7108bfdc 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017,2020 NXP
+ *   Copyright 2017, 2020, 2023 NXP
  *
  */
 
@@ -45,15 +45,15 @@ static struct dpaa2_dpci_dev *get_dpci_from_id(uint32_t dpci_id)
 
 static int
 rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
-			     struct vfio_device_info *obj_info __rte_unused,
-			     int dpci_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpci_dev *dpci_node;
 	struct dpci_attr attr;
 	struct dpci_rx_queue_cfg rx_queue_cfg;
 	struct dpci_rx_queue_attr rx_attr;
 	struct dpci_tx_queue_attr tx_attr;
-	int ret, i;
+	int ret, i, dpci_id = obj->object_id;
 
 	/* Allocate DPAA2 dpci handle */
 	dpci_node = rte_malloc(NULL, sizeof(struct dpaa2_dpci_dev), 0);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 954d59d123..67d4c83e8c 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -399,14 +399,14 @@ dpaa2_close_dpio_device(int object_id)
 
 static int
 dpaa2_create_dpio_device(int vdev_fd,
-			 struct vfio_device_info *obj_info,
-			 int object_id)
+	struct vfio_device_info *obj_info,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info)};
 	struct qbman_swp_desc p_des;
 	struct dpio_attr attr;
-	int ret;
+	int ret, object_id = obj->object_id;
 
 	if (obj_info->num_regions < NUM_DPIO_REGIONS) {
 		DPAA2_BUS_ERR("Not sufficient number of DPIO regions");
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c b/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
index 65e2d799c3..a057cb1309 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
@@ -23,13 +23,13 @@ static struct dprc_dev_list dprc_dev_list
 
 static int
 rte_dpaa2_create_dprc_device(int vdev_fd __rte_unused,
-			     struct vfio_device_info *obj_info __rte_unused,
-			     int dprc_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dprc_dev *dprc_node;
 	struct dprc_endpoint endpoint1, endpoint2;
 	struct rte_dpaa2_device *dev, *dev_tmp;
-	int ret;
+	int ret, dprc_id = obj->object_id;
 
 	/* Allocate DPAA2 dprc handle */
 	dprc_node = rte_malloc(NULL, sizeof(struct dpaa2_dprc_dev), 0);
@@ -50,6 +50,8 @@ rte_dpaa2_create_dprc_device(int vdev_fd __rte_unused,
 	}
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_tmp) {
+		/** DPRC is always created before its children are created. */
+		dev->container = dprc_node;
 		if (dev->dev_type == DPAA2_ETH) {
 			int link_state;
 
diff --git a/drivers/event/dpaa2/dpaa2_hw_dpcon.c b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
index 64b0136e24..ea5b0d4b85 100644
--- a/drivers/event/dpaa2/dpaa2_hw_dpcon.c
+++ b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017,2020 NXP
+ *   Copyright 2017, 2020, 2023 NXP
  *
  */
 
@@ -45,12 +45,12 @@ static struct dpaa2_dpcon_dev *get_dpcon_from_id(uint32_t dpcon_id)
 
 static int
 rte_dpaa2_create_dpcon_device(int dev_fd __rte_unused,
-			      struct vfio_device_info *obj_info __rte_unused,
-			      int dpcon_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpcon_dev *dpcon_node;
 	struct dpcon_attr attr;
-	int ret;
+	int ret, dpcon_id = obj->object_id;
 
 	/* Allocate DPAA2 dpcon handle */
 	dpcon_node = rte_malloc(NULL, sizeof(struct dpaa2_dpcon_dev), 0);
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 3693f4b62e..f4b8d481af 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -374,12 +374,12 @@ rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if)
 
 static int
 dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
-			   struct vfio_device_info *obj_info __rte_unused,
-			   int dpdmux_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev;
 	struct dpdmux_attr attr;
-	int ret;
+	int ret, dpdmux_id = obj->object_id;
 	uint16_t maj_ver;
 	uint16_t min_ver;
 	uint8_t skip_reset_flags;
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index c08aa0f3bf..751e558c73 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2019 NXP
+ * Copyright 2019, 2023 NXP
  */
 
 #include <sys/queue.h>
@@ -134,11 +134,11 @@ int dpaa2_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
 #if defined(RTE_LIBRTE_IEEE1588)
 static int
 dpaa2_create_dprtc_device(int vdev_fd __rte_unused,
-			   struct vfio_device_info *obj_info __rte_unused,
-			   int dprtc_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dprtc_attr attr;
-	int ret;
+	int ret, dprtc_id = obj->object_id;
 
 	PMD_INIT_FUNC_TRACE();
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 19/42] bus/fslmc: fix coverity issue
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (17 preceding siblings ...)
  2024-10-22 19:12         ` [v4 18/42] bus/fslmc: create dpaa2 device with it's object vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 20/42] bus/fslmc: change qbman eq desc from d to desc vanshika.shukla
                           ` (23 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Youri Querry, Nipun Gupta,
	Roy Pledge
  Cc: stable, Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Fix issues reported by the NXP internal Coverity scan: the QBMAN
query helpers copied the management-command response through the
result pointer before checking for a failed (NULL) response, and a
WRED threshold computation could overflow 32-bit arithmetic.

Fixes: 64f131a82fbe ("bus/fslmc: add qbman debug")
Cc: hemant.agrawal@nxp.com
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 49 +++++++++++++++++----------
 1 file changed, 32 insertions(+), 17 deletions(-)

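For context, a minimal sketch of the main defect pattern being fixed
(illustrative names, not the driver's API): the result was copied
through an unchecked response pointer, and the NULL check then tested
the caller's pointer, so it could never fire.

#include <errno.h>

struct rslt { unsigned char verb; };

/* Before: the response is dereferenced before any check, and
 * 'if (!r)' tests the caller's pointer, which can't be NULL here -
 * a dead check that never catches a failed command.
 */
int query_broken(struct rslt *r, void *resp)
{
	*r = *(struct rslt *)resp;	/* crashes if resp is NULL */
	if (!r)				/* dead check: wrong pointer */
		return -EIO;
	return 0;
}

/* After: check the response pointer first, copy only on success. */
int query_fixed(struct rslt *r, void *resp)
{
	struct rslt *query_rslt = resp;

	if (!query_rslt)
		return -EIO;
	*r = *query_rslt;
	return 0;
}
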
diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index eea06988ff..0e471ec3fd 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2020,2022 NXP
  */
 
 #include "compat.h"
@@ -37,6 +37,7 @@ int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
 		   struct qbman_bp_query_rslt *r)
 {
 	struct qbman_bp_query_desc *p;
+	struct qbman_bp_query_rslt *bp_query_rslt;
 
 	/* Start the management command */
 	p = (struct qbman_bp_query_desc *)qbman_swp_mc_start(s);
@@ -47,14 +48,16 @@ int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
 	p->bpid = bpid;
 
 	/* Complete the management command */
-	*r = *(struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s, p,
-						 QBMAN_BP_QUERY);
-	if (!r) {
+	bp_query_rslt = (struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s,
+						p, QBMAN_BP_QUERY);
+	if (!bp_query_rslt) {
 		pr_err("qbman: Query BPID %d failed, no response\n",
 			bpid);
 		return -EIO;
 	}
 
+	*r = *bp_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_BP_QUERY);
 
@@ -202,20 +205,23 @@ int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
 		   struct qbman_fq_query_rslt *r)
 {
 	struct qbman_fq_query_desc *p;
+	struct qbman_fq_query_rslt *fq_query_rslt;
 
 	p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->fqid = fqid;
-	*r = *(struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s, p,
-					  QBMAN_FQ_QUERY);
-	if (!r) {
+	fq_query_rslt = (struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s,
+					p, QBMAN_FQ_QUERY);
+	if (!fq_query_rslt) {
 		pr_err("qbman: Query FQID %d failed, no response\n",
 			fqid);
 		return -EIO;
 	}
 
+	*r = *fq_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY);
 
@@ -398,20 +404,23 @@ int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
 		    struct qbman_cgr_query_rslt *r)
 {
 	struct qbman_cgr_query_desc *p;
+	struct qbman_cgr_query_rslt *cgr_query_rslt;
 
 	p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->cgid = cgid;
-	*r = *(struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_CGR_QUERY);
-	if (!r) {
+	cgr_query_rslt = (struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s,
+					p, QBMAN_CGR_QUERY);
+	if (!cgr_query_rslt) {
 		pr_err("qbman: Query CGID %d failed, no response\n",
 			cgid);
 		return -EIO;
 	}
 
+	*r = *cgr_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_CGR_QUERY);
 
@@ -473,20 +482,23 @@ int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
 			struct qbman_wred_query_rslt *r)
 {
 	struct qbman_cgr_query_desc *p;
+	struct qbman_wred_query_rslt *wred_query_rslt;
 
 	p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->cgid = cgid;
-	*r = *(struct qbman_wred_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_WRED_QUERY);
-	if (!r) {
+	wred_query_rslt = (struct qbman_wred_query_rslt *)qbman_swp_mc_complete(
+					s, p, QBMAN_WRED_QUERY);
+	if (!wred_query_rslt) {
 		pr_err("qbman: Query CGID WRED %d failed, no response\n",
 			cgid);
 		return -EIO;
 	}
 
+	*r = *wred_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WRED_QUERY);
 
@@ -527,7 +539,7 @@ void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
 	if (mn == 0)
 		*maxth = ma;
 	else
-		*maxth = ((ma+256) * (1<<(mn-1)));
+		*maxth = ((uint64_t)(ma+256) * (1<<(mn-1)));
 
 	if (step_s == 0)
 		*minth = *maxth - step_i;
@@ -630,6 +642,7 @@ int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
 		       struct qbman_wqchan_query_rslt *r)
 {
 	struct qbman_wqchan_query_desc *p;
+	struct qbman_wqchan_query_rslt *wqchan_query_rslt;
 
 	/* Start the management command */
 	p = (struct qbman_wqchan_query_desc *)qbman_swp_mc_start(s);
@@ -640,14 +653,16 @@ int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
 	p->chid = chanid;
 
 	/* Complete the management command */
-	*r = *(struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_WQ_QUERY);
-	if (!r) {
+	wqchan_query_rslt = (struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(
+						s, p, QBMAN_WQ_QUERY);
+	if (!wqchan_query_rslt) {
 		pr_err("qbman: Query WQ Channel %d failed, no response\n",
 			chanid);
 		return -EIO;
 	}
 
+	*r = *wqchan_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WQ_QUERY);
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 20/42] bus/fslmc: change qbman eq desc from d to desc
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (18 preceding siblings ...)
  2024-10-22 19:12         ` [v4 19/42] bus/fslmc: fix coverity issue vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 21/42] bus/fslmc: introduce VFIO DMA mapping API for fslmc vanshika.shukla
                           ` (22 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Rename the local qbman_eq_desc pointer from 'd' to 'desc' to avoid
redefining (shadowing) an existing variable of the same name.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

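The shadowing being avoided looks roughly like this (a simplified
sketch with an assumed signature; the commit message says the inner
declaration redefined an existing 'd' in the enclosing scope):

struct qbman_eq_desc { int dca; };

/* Sketch only: the inner 'd' redefined the enclosing descriptor of
 * the same name, which compilers (-Wshadow) and static analysis flag.
 */
static void enqueue_multiple(const struct qbman_eq_desc *d, void *p)
{
	if (p) {
		struct qbman_eq_desc *desc = p;	/* was 'd': shadowed */

		desc->dca = 1;
	}
	(void)d;
}
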
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 3fdca9761d..5d0cedc136 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1008,9 +1008,9 @@ static int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
 				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
 		p[0] = cl[0] | s->eqcr.pi_vb;
 		if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
-			struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+			struct qbman_eq_desc *desc = (struct qbman_eq_desc *)p;
 
-			d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+			desc->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
 				((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
 		}
 		eqcr_pi++;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 21/42] bus/fslmc: introduce VFIO DMA mapping API for fslmc
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (19 preceding siblings ...)
  2024-10-22 19:12         ` [v4 20/42] bus/fslmc: change qbman eq desc from d to desc vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 22/42] net/dpaa2: change miss flow ID macro name vanshika.shukla
                           ` (21 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Declare rte_fslmc_vfio_mem_dmamap and rte_fslmc_vfio_mem_dmaunmap
in bus_fslmc_driver.h for external usage.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     | 7 ++++++-
 drivers/bus/fslmc/fslmc_bus.c            | 2 +-
 drivers/bus/fslmc/fslmc_vfio.c           | 3 ++-
 drivers/bus/fslmc/fslmc_vfio.h           | 7 +------
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 2 +-
 5 files changed, 11 insertions(+), 10 deletions(-)

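A hedged usage sketch of the newly exported pair (signatures taken
from this patch; the buffer handling around them is illustrative):

#include <stdint.h>
#include <bus_fslmc_driver.h>

/* Map an externally allocated buffer for DMA, use it, then unmap.
 * 'vaddr', 'iova' and 'len' come from the caller's allocator.
 */
static int map_extbuf(uint64_t vaddr, uint64_t iova, uint64_t len)
{
	int ret;

	ret = rte_fslmc_vfio_mem_dmamap(vaddr, iova, len);
	if (ret)
		return ret;

	/* ... hand the buffer to hardware queues ... */

	return rte_fslmc_vfio_mem_dmaunmap(iova, len);
}
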
diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index 777ab24c10..1d4ce4785f 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2016,2021 NXP
+ *   Copyright 2016,2021-2023 NXP
  *
  */
 
@@ -135,6 +135,11 @@ struct rte_dpaa2_object {
 	rte_dpaa2_obj_close_t close;
 };
 
+int
+rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size);
+
 /**
  * A structure describing a DPAA2 driver.
  */
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 107cc70833..fda0a4206d 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -438,7 +438,7 @@ rte_fslmc_probe(void)
 	 * install callback handler.
 	 */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		ret = rte_fslmc_vfio_dmamap();
+		ret = fslmc_vfio_dmamap();
 		if (ret) {
 			DPAA2_BUS_ERR("Unable to DMA map existing VAs: (%d)",
 				      ret);
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 9d834f293a..3f75a71e46 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1161,7 +1161,8 @@ rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
 	return fslmc_unmap_dma(0, iova, size);
 }
 
-int rte_fslmc_vfio_dmamap(void)
+int
+fslmc_vfio_dmamap(void)
 {
 	int i = 0, ret;
 
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 1695b6c078..815970ec38 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -60,10 +60,5 @@ int fslmc_vfio_process_group(void);
 int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
 int fslmc_get_container_group(const char *group_name, int *gropuid);
-int rte_fslmc_vfio_dmamap(void);
-int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova,
-		uint64_t size);
-int rte_fslmc_vfio_mem_dmaunmap(uint64_t iova,
-		uint64_t size);
-
+int fslmc_vfio_dmamap(void);
 #endif /* _FSLMC_VFIO_H_ */
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 886fb7fbb0..c054988513 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -23,7 +23,7 @@
 #include <dev_driver.h>
 #include "rte_dpaa2_mempool.h"
 
-#include "fslmc_vfio.h"
+#include <bus_fslmc_driver.h>
 #include <fslmc_logs.h>
 #include <mc/fsl_dpbp.h>
 #include <portal/dpaa2_hw_pvt.h>
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 22/42] net/dpaa2: change miss flow ID macro name
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (20 preceding siblings ...)
  2024-10-22 19:12         ` [v4 21/42] bus/fslmc: introduce VFIO DMA mapping API for fslmc vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 23/42] net/dpaa2: flow API refactor vanshika.shukla
                           ` (20 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Stop initializing the miss flow ID with the DPNI_FS_MISS_DROP macro,
since that name conflicts with an enum. The default miss flow ID is
now 0.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

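In effect the override works as sketched below (a hypothetical helper
mirroring the patch; the driver itself reports an error rather than
silently falling back when the ID is out of range):

#include <stdlib.h>
#include <stdint.h>

/* Read the miss flow ID from DPAA2_FLOW_CONTROL_MISS_FLOW,
 * defaulting to 0, and bound it by the distribution queue count.
 */
static uint16_t miss_flow_id(uint16_t dist_queues)
{
	const char *env = getenv("DPAA2_FLOW_CONTROL_MISS_FLOW");
	uint16_t id;

	if (!env)
		return 0;		/* default miss flow ID */
	id = (uint16_t)atoi(env);
	return id < dist_queues ? id : 0;
}
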
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 77367aa392..b7f1f974c6 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -30,8 +30,7 @@
 int mc_l4_port_identification;
 
 static char *dpaa2_flow_control_log;
-static uint16_t dpaa2_flow_miss_flow_id =
-	DPNI_FS_MISS_DROP;
+static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
 #define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
 
@@ -3990,7 +3989,7 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
 		struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 		dpaa2_flow_miss_flow_id =
-			atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
+			(uint16_t)atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
 		if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
 			DPAA2_PMD_ERR(
 				"The missed flow ID %d exceeds the max flow ID %d",
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 23/42] net/dpaa2: flow API refactor
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (21 preceding siblings ...)
  2024-10-22 19:12         ` [v4 22/42] net/dpaa2: change miss flow ID macro name vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-23  0:52           ` Stephen Hemminger
  2024-10-22 19:12         ` [v4 24/42] net/dpaa2: dump Rx parser result vanshika.shukla
                           ` (19 subsequent siblings)
  42 siblings, 1 reply; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

1) Gather redundant code with the same logic from the various
   protocol handlers into common functions.
2) struct dpaa2_key_profile describes each extract's offset and size
   within the rule, which makes it easy to insert a new extract
   before the IP address extract (see the sketch below).
3) The IP address profile describes the IPv4/IPv6 address extracts
   located at the end of the rule.
4) The L4 ports profile describes the positions and offsets of the
   port fields within the rule.
5) Once the extracts of a QoS/FS table are updated, go through all
   the existing flows of this table and update their rule data.
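
The rule-data shuffle behind point 2 boils down to the following
(a standalone sketch of dpaa2_flow_rule_insert_hole() from this
patch, with the separate key and mask handling collapsed into one
buffer):

#include <stdint.h>
#include <string.h>

/* Open a 'size'-byte hole at 'offset' in a rule buffer holding
 * '*rule_size' valid bytes, shifting the trailing IP address
 * extract(s) right and zeroing the new bytes.
 */
static void rule_insert_hole(uint8_t *rule, uint16_t *rule_size,
			     int offset, int size)
{
	int end = *rule_size;

	if (end > offset) {
		memmove(rule + offset + size, rule + offset, end - offset);
		memset(rule + offset, 0, size);
	}
	*rule_size += size;
}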

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c |   27 +-
 drivers/net/dpaa2/dpaa2_ethdev.h |   90 +-
 drivers/net/dpaa2/dpaa2_flow.c   | 4839 ++++++++++++------------------
 3 files changed, 2030 insertions(+), 2926 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index bd6a578e30..e55de5b614 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2808,39 +2808,20 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	/* Init fields w.r.t. classification */
 	memset(&priv->extract.qos_key_extract, 0,
 		sizeof(struct dpaa2_key_extract));
-	priv->extract.qos_extract_param = (size_t)rte_malloc(NULL, 256, 64);
+	priv->extract.qos_extract_param = rte_malloc(NULL, 256, 64);
 	if (!priv->extract.qos_extract_param) {
-		DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow "
-			    " classification ", ret);
+		DPAA2_PMD_ERR("Memory alloc failed");
 		goto init_err;
 	}
-	priv->extract.qos_key_extract.key_info.ipv4_src_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv4_dst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv6_src_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv6_dst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
 
 	for (i = 0; i < MAX_TCS; i++) {
 		memset(&priv->extract.tc_key_extract[i], 0,
 			sizeof(struct dpaa2_key_extract));
-		priv->extract.tc_extract_param[i] =
-			(size_t)rte_malloc(NULL, 256, 64);
+		priv->extract.tc_extract_param[i] = rte_malloc(NULL, 256, 64);
 		if (!priv->extract.tc_extract_param[i]) {
-			DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow classification",
-				     ret);
+			DPAA2_PMD_ERR("Memory alloc failed");
 			goto init_err;
 		}
-		priv->extract.tc_key_extract[i].key_info.ipv4_src_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv4_dst_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv6_src_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv6_dst_offset =
-			IP_ADDRESS_OFFSET_INVALID;
 	}
 
 	ret = dpni_set_max_frame_length(dpni_dev, CMD_PRI_LOW, priv->token,
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 6625afaba3..ea1c1b5117 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -145,14 +145,6 @@ extern bool dpaa2_enable_ts[];
 extern uint64_t dpaa2_timestamp_rx_dynflag;
 extern int dpaa2_timestamp_dynfield_offset;
 
-#define DPAA2_QOS_TABLE_RECONFIGURE	1
-#define DPAA2_FS_TABLE_RECONFIGURE	2
-
-#define DPAA2_QOS_TABLE_IPADDR_EXTRACT 4
-#define DPAA2_FS_TABLE_IPADDR_EXTRACT 8
-
-#define DPAA2_FLOW_MAX_KEY_SIZE		16
-
 /* Externally defined */
 extern const struct rte_flow_ops dpaa2_flow_ops;
 
@@ -160,29 +152,85 @@ extern const struct rte_tm_ops dpaa2_tm_ops;
 
 extern bool dpaa2_enable_err_queue;
 
-#define IP_ADDRESS_OFFSET_INVALID (-1)
+struct ipv4_sd_addr_extract_rule {
+	uint32_t ipv4_src;
+	uint32_t ipv4_dst;
+};
+
+struct ipv6_sd_addr_extract_rule {
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+};
 
-struct dpaa2_key_info {
+struct ipv4_ds_addr_extract_rule {
+	uint32_t ipv4_dst;
+	uint32_t ipv4_src;
+};
+
+struct ipv6_ds_addr_extract_rule {
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+union ip_addr_extract_rule {
+	struct ipv4_sd_addr_extract_rule ipv4_sd_addr;
+	struct ipv6_sd_addr_extract_rule ipv6_sd_addr;
+	struct ipv4_ds_addr_extract_rule ipv4_ds_addr;
+	struct ipv6_ds_addr_extract_rule ipv6_ds_addr;
+};
+
+union ip_src_addr_extract_rule {
+	uint32_t ipv4_src;
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+union ip_dst_addr_extract_rule {
+	uint32_t ipv4_dst;
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+enum ip_addr_extract_type {
+	IP_NONE_ADDR_EXTRACT,
+	IP_SRC_EXTRACT,
+	IP_DST_EXTRACT,
+	IP_SRC_DST_EXTRACT,
+	IP_DST_SRC_EXTRACT
+};
+
+struct key_prot_field {
+	enum net_prot prot;
+	uint32_t key_field;
+};
+
+struct dpaa2_key_profile {
+	uint8_t num;
 	uint8_t key_offset[DPKG_MAX_NUM_OF_EXTRACTS];
 	uint8_t key_size[DPKG_MAX_NUM_OF_EXTRACTS];
-	/* Special for IP address. */
-	int ipv4_src_offset;
-	int ipv4_dst_offset;
-	int ipv6_src_offset;
-	int ipv6_dst_offset;
-	uint8_t key_total_size;
+
+	enum ip_addr_extract_type ip_addr_type;
+	uint8_t ip_addr_extract_pos;
+	uint8_t ip_addr_extract_off;
+
+	uint8_t l4_src_port_present;
+	uint8_t l4_src_port_pos;
+	uint8_t l4_src_port_offset;
+	uint8_t l4_dst_port_present;
+	uint8_t l4_dst_port_pos;
+	uint8_t l4_dst_port_offset;
+	struct key_prot_field prot_field[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint16_t key_max_size;
 };
 
 struct dpaa2_key_extract {
 	struct dpkg_profile_cfg dpkg;
-	struct dpaa2_key_info key_info;
+	struct dpaa2_key_profile key_profile;
 };
 
 struct extract_s {
 	struct dpaa2_key_extract qos_key_extract;
 	struct dpaa2_key_extract tc_key_extract[MAX_TCS];
-	uint64_t qos_extract_param;
-	uint64_t tc_extract_param[MAX_TCS];
+	uint8_t *qos_extract_param;
+	uint8_t *tc_extract_param[MAX_TCS];
 };
 
 struct dpaa2_dev_priv {
@@ -233,7 +281,8 @@ struct dpaa2_dev_priv {
 	/* Stores correction offset for one step timestamping */
 	uint16_t ptp_correction_offset;
 
-	LIST_HEAD(, rte_flow) flows; /**< Configured flow rule handles. */
+	struct dpaa2_dev_flow *curr;
+	LIST_HEAD(, dpaa2_dev_flow) flows;
 	LIST_HEAD(nodes, dpaa2_tm_node) nodes;
 	LIST_HEAD(shaper_profiles, dpaa2_tm_shaper_profile) shaper_profiles;
 };
@@ -292,7 +341,6 @@ uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
 void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci, struct dpaa2_queue *dpaa2_q);
 void dpaa2_flow_clean(struct rte_eth_dev *dev);
 uint16_t dpaa2_dev_tx_conf(void *queue)  __rte_unused;
-int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev);
 
 int dpaa2_timesync_enable(struct rte_eth_dev *dev);
 int dpaa2_timesync_disable(struct rte_eth_dev *dev);
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index b7f1f974c6..9e03ad5401 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2022 NXP
  */
 
 #include <sys/queue.h>
@@ -27,41 +27,40 @@
  * MC/WRIOP are not able to identify
  * the l4 protocol with l4 ports.
  */
-int mc_l4_port_identification;
+static int mc_l4_port_identification;
 
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
-#define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
-
-enum flow_rule_ipaddr_type {
-	FLOW_NONE_IPADDR,
-	FLOW_IPV4_ADDR,
-	FLOW_IPV6_ADDR
+enum dpaa2_flow_entry_size {
+	DPAA2_FLOW_ENTRY_MIN_SIZE = (DPNI_MAX_KEY_SIZE / 2),
+	DPAA2_FLOW_ENTRY_MAX_SIZE = DPNI_MAX_KEY_SIZE
 };
 
-struct flow_rule_ipaddr {
-	enum flow_rule_ipaddr_type ipaddr_type;
-	int qos_ipsrc_offset;
-	int qos_ipdst_offset;
-	int fs_ipsrc_offset;
-	int fs_ipdst_offset;
+enum dpaa2_flow_dist_type {
+	DPAA2_FLOW_QOS_TYPE = 1 << 0,
+	DPAA2_FLOW_FS_TYPE = 1 << 1
 };
 
-struct rte_flow {
-	LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
+#define DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT	16
+#define DPAA2_FLOW_MAX_KEY_SIZE			16
+
+struct dpaa2_dev_flow {
+	LIST_ENTRY(dpaa2_dev_flow) next;
 	struct dpni_rule_cfg qos_rule;
+	uint8_t *qos_key_addr;
+	uint8_t *qos_mask_addr;
+	uint16_t qos_rule_size;
 	struct dpni_rule_cfg fs_rule;
 	uint8_t qos_real_key_size;
 	uint8_t fs_real_key_size;
+	uint8_t *fs_key_addr;
+	uint8_t *fs_mask_addr;
+	uint16_t fs_rule_size;
 	uint8_t tc_id; /** Traffic Class ID. */
 	uint8_t tc_index; /** index within this Traffic Class. */
-	enum rte_flow_action_type action;
-	/* Special for IP address to specify the offset
-	 * in key/mask.
-	 */
-	struct flow_rule_ipaddr ipaddr_rule;
-	struct dpni_fs_action_cfg action_cfg;
+	enum rte_flow_action_type action_type;
+	struct dpni_fs_action_cfg fs_action_cfg;
 };
 
 static const
@@ -94,9 +93,6 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
 	RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
 };
 
-/* Max of enum rte_flow_item_type + 1, for both IPv4 and IPv6*/
-#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1)
-
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
 	.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
@@ -151,11 +147,12 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
-
 #endif
 
-static inline void dpaa2_prot_field_string(
-	enum net_prot prot, uint32_t field,
+#define DPAA2_FLOW_DUMP printf
+
+static inline void
+dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 	char *string)
 {
 	if (!dpaa2_flow_control_log)
@@ -230,60 +227,84 @@ static inline void dpaa2_prot_field_string(
 	}
 }
 
-static inline void dpaa2_flow_qos_table_extracts_log(
-	const struct dpaa2_dev_priv *priv, FILE *f)
+static inline void
+dpaa2_flow_qos_extracts_log(const struct dpaa2_dev_priv *priv)
 {
 	int idx;
 	char string[32];
+	const struct dpkg_profile_cfg *dpkg =
+		&priv->extract.qos_key_extract.dpkg;
+	const struct dpkg_extract *extract;
+	enum dpkg_extract_type type;
+	enum net_prot prot;
+	uint32_t field;
 
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "Setup QoS table: number of extracts: %d\r\n",
-			priv->extract.qos_key_extract.dpkg.num_extracts);
-	for (idx = 0; idx < priv->extract.qos_key_extract.dpkg.num_extracts;
-		idx++) {
-		dpaa2_prot_field_string(priv->extract.qos_key_extract.dpkg
-			.extracts[idx].extract.from_hdr.prot,
-			priv->extract.qos_key_extract.dpkg.extracts[idx]
-			.extract.from_hdr.field,
-			string);
-		fprintf(f, "%s", string);
-		if ((idx + 1) < priv->extract.qos_key_extract.dpkg.num_extracts)
-			fprintf(f, " / ");
-	}
-	fprintf(f, "\r\n");
+	DPAA2_FLOW_DUMP("QoS table: %d extracts\r\n",
+		dpkg->num_extracts);
+	for (idx = 0; idx < dpkg->num_extracts; idx++) {
+		extract = &dpkg->extracts[idx];
+		type = extract->type;
+		if (type == DPKG_EXTRACT_FROM_HDR) {
+			prot = extract->extract.from_hdr.prot;
+			field = extract->extract.from_hdr.field;
+			dpaa2_prot_field_string(prot, field,
+				string);
+		} else if (type == DPKG_EXTRACT_FROM_DATA) {
+			sprintf(string, "raw offset/len: %d/%d",
+				extract->extract.from_data.offset,
+				extract->extract.from_data.size);
+		}
+		DPAA2_FLOW_DUMP("%s", string);
+		if ((idx + 1) < dpkg->num_extracts)
+			DPAA2_FLOW_DUMP(" / ");
+	}
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_fs_table_extracts_log(
-	const struct dpaa2_dev_priv *priv, int tc_id, FILE *f)
+static inline void
+dpaa2_flow_fs_extracts_log(const struct dpaa2_dev_priv *priv,
+	int tc_id)
 {
 	int idx;
 	char string[32];
+	const struct dpkg_profile_cfg *dpkg =
+		&priv->extract.tc_key_extract[tc_id].dpkg;
+	const struct dpkg_extract *extract;
+	enum dpkg_extract_type type;
+	enum net_prot prot;
+	uint32_t field;
 
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "Setup FS table: number of extracts of TC[%d]: %d\r\n",
-			tc_id, priv->extract.tc_key_extract[tc_id]
-			.dpkg.num_extracts);
-	for (idx = 0; idx < priv->extract.tc_key_extract[tc_id]
-		.dpkg.num_extracts; idx++) {
-		dpaa2_prot_field_string(priv->extract.tc_key_extract[tc_id]
-			.dpkg.extracts[idx].extract.from_hdr.prot,
-			priv->extract.tc_key_extract[tc_id].dpkg.extracts[idx]
-			.extract.from_hdr.field,
-			string);
-		fprintf(f, "%s", string);
-		if ((idx + 1) < priv->extract.tc_key_extract[tc_id]
-			.dpkg.num_extracts)
-			fprintf(f, " / ");
-	}
-	fprintf(f, "\r\n");
+	DPAA2_FLOW_DUMP("FS table: %d extracts in TC[%d]\r\n",
+		dpkg->num_extracts, tc_id);
+	for (idx = 0; idx < dpkg->num_extracts; idx++) {
+		extract = &dpkg->extracts[idx];
+		type = extract->type;
+		if (type == DPKG_EXTRACT_FROM_HDR) {
+			prot = extract->extract.from_hdr.prot;
+			field = extract->extract.from_hdr.field;
+			dpaa2_prot_field_string(prot, field,
+				string);
+		} else if (type == DPKG_EXTRACT_FROM_DATA) {
+			sprintf(string, "raw offset/len: %d/%d",
+				extract->extract.from_data.offset,
+				extract->extract.from_data.size);
+		}
+		DPAA2_FLOW_DUMP("%s", string);
+		if ((idx + 1) < dpkg->num_extracts)
+			DPAA2_FLOW_DUMP(" / ");
+	}
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_qos_entry_log(
-	const char *log_info, const struct rte_flow *flow, int qos_index, FILE *f)
+static inline void
+dpaa2_flow_qos_entry_log(const char *log_info,
+	const struct dpaa2_dev_flow *flow, int qos_index)
 {
 	int idx;
 	uint8_t *key, *mask;
@@ -291,27 +312,34 @@ static inline void dpaa2_flow_qos_entry_log(
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "\r\n%s QoS entry[%d] for TC[%d], extracts size is %d\r\n",
-		log_info, qos_index, flow->tc_id, flow->qos_real_key_size);
-
-	key = (uint8_t *)(size_t)flow->qos_rule.key_iova;
-	mask = (uint8_t *)(size_t)flow->qos_rule.mask_iova;
+	if (qos_index >= 0) {
+		DPAA2_FLOW_DUMP("%s QoS entry[%d](size %d/%d) for TC[%d]\r\n",
+			log_info, qos_index, flow->qos_rule_size,
+			flow->qos_rule.key_size,
+			flow->tc_id);
+	} else {
+		DPAA2_FLOW_DUMP("%s QoS entry(size %d/%d) for TC[%d]\r\n",
+			log_info, flow->qos_rule_size,
+			flow->qos_rule.key_size,
+			flow->tc_id);
+	}
 
-	fprintf(f, "key:\r\n");
-	for (idx = 0; idx < flow->qos_real_key_size; idx++)
-		fprintf(f, "%02x ", key[idx]);
+	key = flow->qos_key_addr;
+	mask = flow->qos_mask_addr;
 
-	fprintf(f, "\r\nmask:\r\n");
-	for (idx = 0; idx < flow->qos_real_key_size; idx++)
-		fprintf(f, "%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("key:\r\n");
+	for (idx = 0; idx < flow->qos_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", key[idx]);
 
-	fprintf(f, "\r\n%s QoS ipsrc: %d, ipdst: %d\r\n", log_info,
-		flow->ipaddr_rule.qos_ipsrc_offset,
-		flow->ipaddr_rule.qos_ipdst_offset);
+	DPAA2_FLOW_DUMP("\r\nmask:\r\n");
+	for (idx = 0; idx < flow->qos_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_fs_entry_log(
-	const char *log_info, const struct rte_flow *flow, FILE *f)
+static inline void
+dpaa2_flow_fs_entry_log(const char *log_info,
+	const struct dpaa2_dev_flow *flow)
 {
 	int idx;
 	uint8_t *key, *mask;
@@ -319,187 +347,432 @@ static inline void dpaa2_flow_fs_entry_log(
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "\r\n%s FS/TC entry[%d] of TC[%d], extracts size is %d\r\n",
-		log_info, flow->tc_index, flow->tc_id, flow->fs_real_key_size);
+	DPAA2_FLOW_DUMP("%s FS/TC entry[%d](size %d/%d) of TC[%d]\r\n",
+		log_info, flow->tc_index,
+		flow->fs_rule_size, flow->fs_rule.key_size,
+		flow->tc_id);
+
+	key = flow->fs_key_addr;
+	mask = flow->fs_mask_addr;
+
+	DPAA2_FLOW_DUMP("key:\r\n");
+	for (idx = 0; idx < flow->fs_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", key[idx]);
+
+	DPAA2_FLOW_DUMP("\r\nmask:\r\n");
+	for (idx = 0; idx < flow->fs_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("\r\n");
+}
 
-	key = (uint8_t *)(size_t)flow->fs_rule.key_iova;
-	mask = (uint8_t *)(size_t)flow->fs_rule.mask_iova;
+static int
+dpaa2_flow_ip_address_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_IPV4 &&
+		(field == NH_FLD_IPV4_SRC_IP ||
+		field == NH_FLD_IPV4_DST_IP))
+		return true;
+	else if (prot == NET_PROT_IPV6 &&
+		(field == NH_FLD_IPV6_SRC_IP ||
+		field == NH_FLD_IPV6_DST_IP))
+		return true;
+	else if (prot == NET_PROT_IP &&
+		(field == NH_FLD_IP_SRC ||
+		field == NH_FLD_IP_DST))
+		return true;
 
-	fprintf(f, "key:\r\n");
-	for (idx = 0; idx < flow->fs_real_key_size; idx++)
-		fprintf(f, "%02x ", key[idx]);
+	return false;
+}
 
-	fprintf(f, "\r\nmask:\r\n");
-	for (idx = 0; idx < flow->fs_real_key_size; idx++)
-		fprintf(f, "%02x ", mask[idx]);
+static int
+dpaa2_flow_l4_src_port_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_TCP &&
+		field == NH_FLD_TCP_PORT_SRC)
+		return true;
+	else if (prot == NET_PROT_UDP &&
+		field == NH_FLD_UDP_PORT_SRC)
+		return true;
+	else if (prot == NET_PROT_SCTP &&
+		field == NH_FLD_SCTP_PORT_SRC)
+		return true;
+
+	return false;
+}
 
-	fprintf(f, "\r\n%s FS ipsrc: %d, ipdst: %d\r\n", log_info,
-		flow->ipaddr_rule.fs_ipsrc_offset,
-		flow->ipaddr_rule.fs_ipdst_offset);
+static int
+dpaa2_flow_l4_dst_port_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_TCP &&
+		field == NH_FLD_TCP_PORT_DST)
+		return true;
+	else if (prot == NET_PROT_UDP &&
+		field == NH_FLD_UDP_PORT_DST)
+		return true;
+	else if (prot == NET_PROT_SCTP &&
+		field == NH_FLD_SCTP_PORT_DST)
+		return true;
+
+	return false;
 }
 
-static inline void dpaa2_flow_extract_key_set(
-	struct dpaa2_key_info *key_info, int index, uint8_t size)
+static int
+dpaa2_flow_add_qos_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow)
 {
-	key_info->key_size[index] = size;
-	if (index > 0) {
-		key_info->key_offset[index] =
-			key_info->key_offset[index - 1] +
-			key_info->key_size[index - 1];
-	} else {
-		key_info->key_offset[index] = 0;
+	uint16_t qos_index;
+	int ret;
+	struct fsl_mc_io *dpni = priv->hw;
+
+	if (priv->num_rx_tc <= 1 &&
+		flow->action_type != RTE_FLOW_ACTION_TYPE_RSS) {
+		DPAA2_PMD_WARN("No QoS Table for FS");
+		return -EINVAL;
 	}
-	key_info->key_total_size += size;
+
+	/* QoS entry added is only effective for multiple TCs.*/
+	qos_index = flow->tc_id * priv->fs_entries + flow->tc_index;
+	if (qos_index >= priv->qos_entries) {
+		DPAA2_PMD_ERR("QoS table full(%d >= %d)",
+			qos_index, priv->qos_entries);
+		return -EINVAL;
+	}
+
+	dpaa2_flow_qos_entry_log("Start add", flow, qos_index);
+
+	ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
+			priv->token, &flow->qos_rule,
+			flow->tc_id, qos_index,
+			0, 0);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("Add entry(%d) to table(%d) failed",
+			qos_index, flow->tc_id);
+		return ret;
+	}
+
+	return 0;
 }
 
-static int dpaa2_flow_extract_add(
-	struct dpaa2_key_extract *key_extract,
-	enum net_prot prot,
-	uint32_t field, uint8_t field_size)
+static int
+dpaa2_flow_add_fs_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow)
 {
-	int index, ip_src = -1, ip_dst = -1;
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
+	int ret;
+	struct fsl_mc_io *dpni = priv->hw;
 
-	if (dpkg->num_extracts >=
-		DPKG_MAX_NUM_OF_EXTRACTS) {
-		DPAA2_PMD_WARN("Number of extracts overflows");
-		return -1;
+	if (flow->tc_index >= priv->fs_entries) {
+		DPAA2_PMD_ERR("FS table full(%d >= %d)",
+			flow->tc_index, priv->fs_entries);
+		return -EINVAL;
 	}
-	/* Before reorder, the IP SRC and IP DST are already last
-	 * extract(s).
-	 */
-	for (index = 0; index < dpkg->num_extracts; index++) {
-		if (dpkg->extracts[index].extract.from_hdr.prot ==
-			NET_PROT_IP) {
-			if (dpkg->extracts[index].extract.from_hdr.field ==
-				NH_FLD_IP_SRC) {
-				ip_src = index;
-			}
-			if (dpkg->extracts[index].extract.from_hdr.field ==
-				NH_FLD_IP_DST) {
-				ip_dst = index;
+
+	dpaa2_flow_fs_entry_log("Start add", flow);
+
+	ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
+			priv->token, flow->tc_id,
+			flow->tc_index, &flow->fs_rule,
+			&flow->fs_action_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("Add rule(%d) to FS table(%d) failed",
+			flow->tc_index, flow->tc_id);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_flow_rule_insert_hole(struct dpaa2_dev_flow *flow,
+	int offset, int size,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int end;
+
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		end = flow->qos_rule_size;
+		if (end > offset) {
+			memmove(flow->qos_key_addr + offset + size,
+					flow->qos_key_addr + offset,
+					end - offset);
+			memset(flow->qos_key_addr + offset,
+					0, size);
+
+			memmove(flow->qos_mask_addr + offset + size,
+					flow->qos_mask_addr + offset,
+					end - offset);
+			memset(flow->qos_mask_addr + offset,
+					0, size);
+		}
+		flow->qos_rule_size += size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		end = flow->fs_rule_size;
+		if (end > offset) {
+			memmove(flow->fs_key_addr + offset + size,
+					flow->fs_key_addr + offset,
+					end - offset);
+			memset(flow->fs_key_addr + offset,
+					0, size);
+
+			memmove(flow->fs_mask_addr + offset + size,
+					flow->fs_mask_addr + offset,
+					end - offset);
+			memset(flow->fs_mask_addr + offset,
+					0, size);
+		}
+		flow->fs_rule_size += size;
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_flow_rule_add_all(struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type,
+	uint16_t entry_size, uint8_t tc_id)
+{
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	int ret;
+
+	while (curr) {
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			if (priv->num_rx_tc > 1 ||
+				curr->action_type ==
+				RTE_FLOW_ACTION_TYPE_RSS) {
+				curr->qos_rule.key_size = entry_size;
+				ret = dpaa2_flow_add_qos_rule(priv, curr);
+				if (ret)
+					return ret;
 			}
 		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE &&
+			curr->tc_id == tc_id) {
+			curr->fs_rule.key_size = entry_size;
+			ret = dpaa2_flow_add_fs_rule(priv, curr);
+			if (ret)
+				return ret;
+		}
+		curr = LIST_NEXT(curr, next);
 	}
 
-	if (ip_src >= 0)
-		RTE_ASSERT((ip_src + 2) >= dpkg->num_extracts);
+	return 0;
+}
 
-	if (ip_dst >= 0)
-		RTE_ASSERT((ip_dst + 2) >= dpkg->num_extracts);
+static int
+dpaa2_flow_qos_rule_insert_hole(struct dpaa2_dev_priv *priv,
+	int offset, int size)
+{
+	struct dpaa2_dev_flow *curr;
+	int ret;
 
-	if (prot == NET_PROT_IP &&
-		(field == NH_FLD_IP_SRC ||
-		field == NH_FLD_IP_DST)) {
-		index = dpkg->num_extracts;
+	curr = priv->curr;
+	if (!curr) {
+		DPAA2_PMD_ERR("Current qos flow insert hole failed.");
+		return -EINVAL;
 	} else {
-		if (ip_src >= 0 && ip_dst >= 0)
-			index = dpkg->num_extracts - 2;
-		else if (ip_src >= 0 || ip_dst >= 0)
-			index = dpkg->num_extracts - 1;
-		else
-			index = dpkg->num_extracts;
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	dpkg->extracts[index].type =	DPKG_EXTRACT_FROM_HDR;
-	dpkg->extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-	dpkg->extracts[index].extract.from_hdr.prot = prot;
-	dpkg->extracts[index].extract.from_hdr.field = field;
-	if (prot == NET_PROT_IP &&
-		(field == NH_FLD_IP_SRC ||
-		field == NH_FLD_IP_DST)) {
-		dpaa2_flow_extract_key_set(key_info, index, 0);
+	curr = LIST_FIRST(&priv->flows);
+	while (curr) {
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		curr = LIST_NEXT(curr, next);
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_flow_fs_rule_insert_hole(struct dpaa2_dev_priv *priv,
+	int offset, int size, int tc_id)
+{
+	struct dpaa2_dev_flow *curr;
+	int ret;
+
+	curr = priv->curr;
+	if (!curr || curr->tc_id != tc_id) {
+		DPAA2_PMD_ERR("Current flow insert hole failed.");
+		return -EINVAL;
 	} else {
-		dpaa2_flow_extract_key_set(key_info, index, field_size);
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	if (prot == NET_PROT_IP) {
-		if (field == NH_FLD_IP_SRC) {
-			if (key_info->ipv4_dst_offset >= 0) {
-				key_info->ipv4_src_offset =
-					key_info->ipv4_dst_offset +
-					NH_FLD_IPV4_ADDR_SIZE;
-			} else {
-				key_info->ipv4_src_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-			if (key_info->ipv6_dst_offset >= 0) {
-				key_info->ipv6_src_offset =
-					key_info->ipv6_dst_offset +
-					NH_FLD_IPV6_ADDR_SIZE;
-			} else {
-				key_info->ipv6_src_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-		} else if (field == NH_FLD_IP_DST) {
-			if (key_info->ipv4_src_offset >= 0) {
-				key_info->ipv4_dst_offset =
-					key_info->ipv4_src_offset +
-					NH_FLD_IPV4_ADDR_SIZE;
-			} else {
-				key_info->ipv4_dst_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-			if (key_info->ipv6_src_offset >= 0) {
-				key_info->ipv6_dst_offset =
-					key_info->ipv6_src_offset +
-					NH_FLD_IPV6_ADDR_SIZE;
-			} else {
-				key_info->ipv6_dst_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
+	curr = LIST_FIRST(&priv->flows);
+
+	while (curr) {
+		if (curr->tc_id != tc_id) {
+			curr = LIST_NEXT(curr, next);
+			continue;
 		}
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		curr = LIST_NEXT(curr, next);
 	}
 
-	if (index == dpkg->num_extracts) {
-		dpkg->num_extracts++;
-		return 0;
+	return 0;
+}
+
+/* Move IPv4/IPv6 address extracts to make room for a new extract
+ * inserted before them. Current MC/WRIOP only supports a generic,
+ * non-fixed-size IP address extract, so IP addresses must be kept
+ * at the end of the extracts; otherwise the positions of extracts
+ * following them could not be identified.
+ */
+static int
+dpaa2_flow_key_profile_advance(enum net_prot prot,
+	uint32_t field, uint8_t field_size,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s only for none IP address extract",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off += field_size;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, field_size);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, field_size, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = field_size;
+	key_profile->prot_field[pos].prot = prot;
+	key_profile->prot_field[pos].key_field = field;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	if (dpaa2_flow_l4_src_port_extract(prot, field)) {
+		key_profile->l4_src_port_present = 1;
+		key_profile->l4_src_port_pos = pos;
+		key_profile->l4_src_port_offset =
+			key_profile->key_offset[pos];
+	} else if (dpaa2_flow_l4_dst_port_extract(prot, field)) {
+		key_profile->l4_dst_port_present = 1;
+		key_profile->l4_dst_port_pos = pos;
+		key_profile->l4_dst_port_offset =
+			key_profile->key_offset[pos];
+	}
+	key_profile->key_max_size += field_size;
+
+	return pos;
+}
+
+static int
+dpaa2_flow_extract_add_hdr(enum net_prot prot,
+	uint32_t field, uint8_t field_size,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s only for none IP address extract",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
 	}
 
-	if (ip_src >= 0) {
-		ip_src++;
-		dpkg->extracts[ip_src].type =
-			DPKG_EXTRACT_FROM_HDR;
-		dpkg->extracts[ip_src].extract.from_hdr.type =
-			DPKG_FULL_FIELD;
-		dpkg->extracts[ip_src].extract.from_hdr.prot =
-			NET_PROT_IP;
-		dpkg->extracts[ip_src].extract.from_hdr.field =
-			NH_FLD_IP_SRC;
-		dpaa2_flow_extract_key_set(key_info, ip_src, 0);
-		key_info->ipv4_src_offset += field_size;
-		key_info->ipv6_src_offset += field_size;
-	}
-	if (ip_dst >= 0) {
-		ip_dst++;
-		dpkg->extracts[ip_dst].type =
-			DPKG_EXTRACT_FROM_HDR;
-		dpkg->extracts[ip_dst].extract.from_hdr.type =
-			DPKG_FULL_FIELD;
-		dpkg->extracts[ip_dst].extract.from_hdr.prot =
-			NET_PROT_IP;
-		dpkg->extracts[ip_dst].extract.from_hdr.field =
-			NH_FLD_IP_DST;
-		dpaa2_flow_extract_key_set(key_info, ip_dst, 0);
-		key_info->ipv4_dst_offset += field_size;
-		key_info->ipv6_dst_offset += field_size;
+	pos = dpaa2_flow_key_profile_advance(prot,
+			field, field_size, priv,
+			dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last pos, must have IP address extract.*/
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
 	}
 
+	extracts[pos].type = DPKG_EXTRACT_FROM_HDR;
+	extracts[pos].extract.from_hdr.prot = prot;
+	extracts[pos].extract.from_hdr.type = DPKG_FULL_FIELD;
+	extracts[pos].extract.from_hdr.field = field;
+
 	dpkg->num_extracts++;
 
 	return 0;
 }
 
-static int dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
-				      int size)
+static int
+dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
+	int size)
 {
 	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
+	struct dpaa2_key_profile *key_info = &key_extract->key_profile;
 	int last_extract_size, index;
 
 	if (dpkg->num_extracts != 0 && dpkg->extracts[0].type !=
@@ -527,83 +800,58 @@ static int dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
 			DPAA2_FLOW_MAX_KEY_SIZE * index;
 	}
 
-	key_info->key_total_size = size;
+	key_info->key_max_size = size;
 	return 0;
 }
 
-/* Protocol discrimination.
- * Discriminate IPv4/IPv6/vLan by Eth type.
- * Discriminate UDP/TCP/ICMP by next proto of IP.
- */
 static inline int
-dpaa2_flow_proto_discrimination_extract(
-	struct dpaa2_key_extract *key_extract,
-	enum rte_flow_item_type type)
+dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t key_field)
 {
-	if (type == RTE_FLOW_ITEM_TYPE_ETH) {
-		return dpaa2_flow_extract_add(
-				key_extract, NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				sizeof(rte_be16_t));
-	} else if (type == (enum rte_flow_item_type)
-		DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
-		return dpaa2_flow_extract_add(
-				key_extract, NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				NH_FLD_IP_PROTO_SIZE);
-	}
-
-	return -1;
-}
+	int pos;
+	struct key_prot_field *prot_field;
 
-static inline int dpaa2_flow_extract_search(
-	struct dpkg_profile_cfg *dpkg,
-	enum net_prot prot, uint32_t field)
-{
-	int i;
+	if (dpaa2_flow_ip_address_extract(prot, key_field)) {
+		DPAA2_PMD_ERR("%s only for none IP address extract",
+			__func__);
+		return -EINVAL;
+	}
 
-	for (i = 0; i < dpkg->num_extracts; i++) {
-		if (dpkg->extracts[i].extract.from_hdr.prot == prot &&
-			dpkg->extracts[i].extract.from_hdr.field == field) {
-			return i;
+	prot_field = key_profile->prot_field;
+	for (pos = 0; pos < key_profile->num; pos++) {
+		if (prot_field[pos].prot == prot &&
+			prot_field[pos].key_field == key_field) {
+			return pos;
 		}
 	}
 
-	return -1;
+	if (dpaa2_flow_l4_src_port_extract(prot, key_field)) {
+		if (key_profile->l4_src_port_present)
+			return key_profile->l4_src_port_pos;
+	} else if (dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
+		if (key_profile->l4_dst_port_present)
+			return key_profile->l4_dst_port_pos;
+	}
+
+	return -ENXIO;
 }
 
-static inline int dpaa2_flow_extract_key_offset(
-	struct dpaa2_key_extract *key_extract,
-	enum net_prot prot, uint32_t field)
+static inline int
+dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t key_field)
 {
 	int i;
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
 
-	if (prot == NET_PROT_IPV4 ||
-		prot == NET_PROT_IPV6)
-		i = dpaa2_flow_extract_search(dpkg, NET_PROT_IP, field);
+	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
+
+	if (i >= 0)
+		return key_profile->key_offset[i];
 	else
-		i = dpaa2_flow_extract_search(dpkg, prot, field);
-
-	if (i >= 0) {
-		if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_SRC)
-			return key_info->ipv4_src_offset;
-		else if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_DST)
-			return key_info->ipv4_dst_offset;
-		else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_SRC)
-			return key_info->ipv6_src_offset;
-		else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_DST)
-			return key_info->ipv6_dst_offset;
-		else
-			return key_info->key_offset[i];
-	} else {
-		return -1;
-	}
+		return i;
 }
 
-struct proto_discrimination {
-	enum rte_flow_item_type type;
+struct prev_proto_field_id {
+	enum net_prot prot;
 	union {
 		rte_be16_t eth_type;
 		uint8_t ip_proto;
@@ -611,103 +859,134 @@ struct proto_discrimination {
 };
 
 static int
-dpaa2_flow_proto_discrimination_rule(
-	struct dpaa2_dev_priv *priv, struct rte_flow *flow,
-	struct proto_discrimination proto, int group)
+dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct prev_proto_field_id *prev_proto,
+	int group,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	enum net_prot prot;
-	uint32_t field;
 	int offset;
-	size_t key_iova;
-	size_t mask_iova;
+	uint8_t *key_addr;
+	uint8_t *mask_addr;
+	uint32_t field = 0;
 	rte_be16_t eth_type;
 	uint8_t ip_proto;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
 
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		prot = NET_PROT_ETH;
+	if (prev_proto->prot == NET_PROT_ETH) {
 		field = NH_FLD_ETH_TYPE;
-	} else if (proto.type == DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
-		prot = NET_PROT_IP;
+	} else if (prev_proto->prot == NET_PROT_IP) {
 		field = NH_FLD_IP_PROTO;
 	} else {
-		DPAA2_PMD_ERR(
-			"Only Eth and IP support to discriminate next proto.");
-		return -1;
-	}
-
-	offset = dpaa2_flow_extract_key_offset(&priv->extract.qos_key_extract,
-			prot, field);
-	if (offset < 0) {
-		DPAA2_PMD_ERR("QoS prot %d field %d extract failed",
-				prot, field);
-		return -1;
-	}
-	key_iova = flow->qos_rule.key_iova + offset;
-	mask_iova = flow->qos_rule.mask_iova + offset;
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		eth_type = proto.eth_type;
-		memcpy((void *)key_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-		eth_type = 0xffff;
-		memcpy((void *)mask_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-	} else {
-		ip_proto = proto.ip_proto;
-		memcpy((void *)key_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-		ip_proto = 0xff;
-		memcpy((void *)mask_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-	}
-
-	offset = dpaa2_flow_extract_key_offset(
-			&priv->extract.tc_key_extract[group],
-			prot, field);
-	if (offset < 0) {
-		DPAA2_PMD_ERR("FS prot %d field %d extract failed",
-				prot, field);
-		return -1;
+		DPAA2_PMD_ERR("Prev proto(%d) not support!",
+			prev_proto->prot);
+		return -EINVAL;
 	}
-	key_iova = flow->fs_rule.key_iova + offset;
-	mask_iova = flow->fs_rule.mask_iova + offset;
 
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		eth_type = proto.eth_type;
-		memcpy((void *)key_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-		eth_type = 0xffff;
-		memcpy((void *)mask_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-	} else {
-		ip_proto = proto.ip_proto;
-		memcpy((void *)key_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-		ip_proto = 0xff;
-		memcpy((void *)mask_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		key_extract = &priv->extract.qos_key_extract;
+		key_profile = &key_extract->key_profile;
+
+		offset = dpaa2_flow_extract_key_offset(key_profile,
+				prev_proto->prot, field);
+		if (offset < 0) {
+			DPAA2_PMD_ERR("%s QoS key extract failed", __func__);
+			return -EINVAL;
+		}
+		key_addr = flow->qos_key_addr + offset;
+		mask_addr = flow->qos_mask_addr + offset;
+		if (prev_proto->prot == NET_PROT_ETH) {
+			eth_type = prev_proto->eth_type;
+			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
+			eth_type = 0xffff;
+			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
+			flow->qos_rule_size += sizeof(rte_be16_t);
+		} else if (prev_proto->prot == NET_PROT_IP) {
+			ip_proto = prev_proto->ip_proto;
+			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
+			ip_proto = 0xff;
+			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
+			flow->qos_rule_size += sizeof(uint8_t);
+		} else {
+			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
+				prev_proto->prot);
+			return -EINVAL;
+		}
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		key_extract = &priv->extract.tc_key_extract[group];
+		key_profile = &key_extract->key_profile;
+
+		offset = dpaa2_flow_extract_key_offset(key_profile,
+				prev_proto->prot, field);
+		if (offset < 0) {
+			DPAA2_PMD_ERR("%s TC[%d] key extract failed",
+				__func__, group);
+			return -EINVAL;
+		}
+		key_addr = flow->fs_key_addr + offset;
+		mask_addr = flow->fs_mask_addr + offset;
+
+		if (prev_proto->prot == NET_PROT_ETH) {
+			eth_type = prev_proto->eth_type;
+			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
+			eth_type = 0xffff;
+			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
+			flow->fs_rule_size += sizeof(rte_be16_t);
+		} else if (prev_proto->prot == NET_PROT_IP) {
+			ip_proto = prev_proto->ip_proto;
+			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
+			ip_proto = 0xff;
+			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
+			flow->fs_rule_size += sizeof(uint8_t);
+		} else {
+			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
+				prev_proto->prot);
+			return -EINVAL;
+		}
 	}
 
 	return 0;
 }
 
 static inline int
-dpaa2_flow_rule_data_set(
-	struct dpaa2_key_extract *key_extract,
-	struct dpni_rule_cfg *rule,
-	enum net_prot prot, uint32_t field,
-	const void *key, const void *mask, int size)
+dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t field, int size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int offset = dpaa2_flow_extract_key_offset(key_extract,
-				prot, field);
+	int offset;
 
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s only for none IP address extract",
+			__func__);
+		return -EINVAL;
+	}
+
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			prot, field);
 	if (offset < 0) {
-		DPAA2_PMD_ERR("prot %d, field %d extract failed",
+		DPAA2_PMD_ERR("P(%d)/F(%d) does not exist!",
 			prot, field);
-		return -1;
+		return -EINVAL;
 	}
 
-	memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
-	memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, size);
+		memcpy((flow->qos_mask_addr + offset), mask, size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size = offset + size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, size);
+		memcpy((flow->fs_mask_addr + offset), mask, size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size = offset + size;
+	}
 
 	return 0;
 }
@@ -724,145 +1003,13 @@ dpaa2_flow_rule_data_set_raw(struct dpni_rule_cfg *rule,
 	return 0;
 }
 
-static inline int
-_dpaa2_flow_rule_move_ipaddr_tail(
-	struct dpaa2_key_extract *key_extract,
-	struct dpni_rule_cfg *rule, int src_offset,
-	uint32_t field, bool ipv4)
-{
-	size_t key_src;
-	size_t mask_src;
-	size_t key_dst;
-	size_t mask_dst;
-	int dst_offset, len;
-	enum net_prot prot;
-	char tmp[NH_FLD_IPV6_ADDR_SIZE];
-
-	if (field != NH_FLD_IP_SRC &&
-		field != NH_FLD_IP_DST) {
-		DPAA2_PMD_ERR("Field of IP addr reorder must be IP SRC/DST");
-		return -1;
-	}
-	if (ipv4)
-		prot = NET_PROT_IPV4;
-	else
-		prot = NET_PROT_IPV6;
-	dst_offset = dpaa2_flow_extract_key_offset(key_extract,
-				prot, field);
-	if (dst_offset < 0) {
-		DPAA2_PMD_ERR("Field %d reorder extract failed", field);
-		return -1;
-	}
-	key_src = rule->key_iova + src_offset;
-	mask_src = rule->mask_iova + src_offset;
-	key_dst = rule->key_iova + dst_offset;
-	mask_dst = rule->mask_iova + dst_offset;
-	if (ipv4)
-		len = sizeof(rte_be32_t);
-	else
-		len = NH_FLD_IPV6_ADDR_SIZE;
-
-	memcpy(tmp, (char *)key_src, len);
-	memset((char *)key_src, 0, len);
-	memcpy((char *)key_dst, tmp, len);
-
-	memcpy(tmp, (char *)mask_src, len);
-	memset((char *)mask_src, 0, len);
-	memcpy((char *)mask_dst, tmp, len);
-
-	return 0;
-}
-
-static inline int
-dpaa2_flow_rule_move_ipaddr_tail(
-	struct rte_flow *flow, struct dpaa2_dev_priv *priv,
-	int fs_group)
+static int
+dpaa2_flow_extract_support(const uint8_t *mask_src,
+	enum rte_flow_item_type type)
 {
-	int ret;
-	enum net_prot prot;
-
-	if (flow->ipaddr_rule.ipaddr_type == FLOW_NONE_IPADDR)
-		return 0;
-
-	if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR)
-		prot = NET_PROT_IPV4;
-	else
-		prot = NET_PROT_IPV6;
-
-	if (flow->ipaddr_rule.qos_ipsrc_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				flow->ipaddr_rule.qos_ipsrc_offset,
-				NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS src address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_SRC);
-	}
-
-	if (flow->ipaddr_rule.qos_ipdst_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				flow->ipaddr_rule.qos_ipdst_offset,
-				NH_FLD_IP_DST, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS dst address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_DST);
-	}
-
-	if (flow->ipaddr_rule.fs_ipsrc_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.tc_key_extract[fs_group],
-				&flow->fs_rule,
-				flow->ipaddr_rule.fs_ipsrc_offset,
-				NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("FS src address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.fs_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[fs_group],
-				prot, NH_FLD_IP_SRC);
-	}
-	if (flow->ipaddr_rule.fs_ipdst_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.tc_key_extract[fs_group],
-				&flow->fs_rule,
-				flow->ipaddr_rule.fs_ipdst_offset,
-				NH_FLD_IP_DST, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("FS dst address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.fs_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[fs_group],
-				prot, NH_FLD_IP_DST);
-	}
-
-	return 0;
-}
-
-static int
-dpaa2_flow_extract_support(
-	const uint8_t *mask_src,
-	enum rte_flow_item_type type)
-{
-	char mask[64];
-	int i, size = 0;
-	const char *mask_support = 0;
+	char mask[64];
+	int i, size = 0;
+	const char *mask_support = 0;
 
 	switch (type) {
 	case RTE_FLOW_ITEM_TYPE_ETH:
@@ -902,7 +1049,7 @@ dpaa2_flow_extract_support(
 		size = sizeof(struct rte_flow_item_gre);
 		break;
 	default:
-		return -1;
+		return -EINVAL;
 	}
 
 	memcpy(mask, mask_support, size);
@@ -917,491 +1064,444 @@ dpaa2_flow_extract_support(
 }
 
 static int
-dpaa2_configure_flow_eth(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct prev_proto_field_id *prev_prot,
+	enum dpaa2_flow_dist_type dist_type,
+	int group, int *recfg)
 {
-	int index, ret;
-	int local_cfg = 0;
-	uint32_t group;
-	const struct rte_flow_item_eth *spec, *mask;
-
-	/* TODO: Currently upper bound of range parameter is not implemented */
-	const struct rte_flow_item_eth *last __rte_unused;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
-
-	group = attr->group;
-
-	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_eth *)pattern->spec;
-	last    = (const struct rte_flow_item_eth *)pattern->last;
-	mask    = (const struct rte_flow_item_eth *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_eth_mask);
-	if (!spec) {
-		/* Don't care any field of eth header,
-		 * only care eth protocol.
-		 */
-		DPAA2_PMD_WARN("No pattern spec for Eth flow, just skip");
-		return 0;
-	}
-
-	/* Get traffic class index and flow id to be configured */
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
-
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
-		DPAA2_PMD_WARN("Extract field(s) of ethernet not support.");
-
-		return -1;
-	}
-
-	if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_SA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_SA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_SA failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_SA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_SA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_SA failed.");
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	int ret, index, local_cfg = 0, size = 0;
+	struct dpaa2_key_extract *extract;
+	struct dpaa2_key_profile *key_profile;
+	enum net_prot prot = prev_prot->prot;
+	uint32_t key_field = 0;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH_SA rule set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_SA,
-				&spec->hdr.src_addr.addr_bytes,
-				&mask->hdr.src_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_SA,
-				&spec->hdr.src_addr.addr_bytes,
-				&mask->hdr.src_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
-			return -1;
-		}
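+	/* Identify the flow by a field of the preceding protocol:
+	 * the EtherType selects L3, the IP protocol number selects L4.
+	 */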
+	if (prot == NET_PROT_ETH) {
+		key_field = NH_FLD_ETH_TYPE;
+		size = sizeof(rte_be16_t);
+	} else if (prot == NET_PROT_IP) {
+		key_field = NH_FLD_IP_PROTO;
+		size = sizeof(uint8_t);
+	} else if (prot == NET_PROT_IPV4) {
+		prot = NET_PROT_IP;
+		key_field = NH_FLD_IP_PROTO;
+		size = sizeof(uint8_t);
+	} else if (prot == NET_PROT_IPV6) {
+		prot = NET_PROT_IP;
+		key_field = NH_FLD_IP_PROTO;
+		size = sizeof(uint8_t);
+	} else {
+		DPAA2_PMD_ERR("Invalid previous protocol(%d)", prot);
+		return -EINVAL;
 	}
 
-	if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_DA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_DA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_DA failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		extract = &priv->extract.qos_key_extract;
+		key_profile = &extract->key_profile;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_DA);
+		index = dpaa2_flow_extract_search(key_profile,
+				prot, key_field);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_DA,
-					RTE_ETHER_ADDR_LEN);
+			ret = dpaa2_flow_extract_add_hdr(prot,
+					key_field, size, priv,
+					DPAA2_FLOW_QOS_TYPE, group,
+					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_DA failed.");
+				DPAA2_PMD_ERR("QoS prev extract add failed");
 
-				return -1;
+				return -EINVAL;
 			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH DA rule set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_DA,
-				&spec->hdr.dst_addr.addr_bytes,
-				&mask->hdr.dst_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
-			return -1;
+			local_cfg |= DPAA2_FLOW_QOS_TYPE;
 		}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_DA,
-				&spec->hdr.dst_addr.addr_bytes,
-				&mask->hdr.dst_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
+		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
-			return -1;
+			DPAA2_PMD_ERR("QoS prev rule set failed");
+			return -EINVAL;
 		}
 	}
 
-	if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_TYPE,
-					RTE_ETHER_TYPE_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_TYPE failed.");
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		extract = &priv->extract.tc_key_extract[group];
+		key_profile = &extract->key_profile;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
+		index = dpaa2_flow_extract_search(key_profile,
+				prot, key_field);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_TYPE,
-					RTE_ETHER_TYPE_LEN);
+			ret = dpaa2_flow_extract_add_hdr(prot,
+					key_field, size, priv,
+					DPAA2_FLOW_FS_TYPE, group,
+					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_TYPE failed.");
+				DPAA2_PMD_ERR("FS[%d] prev extract add failed",
+					group);
 
-				return -1;
+				return -EINVAL;
 			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH TYPE rule set failed");
-				return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				&spec->hdr.ether_type,
-				&mask->hdr.ether_type,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
-			return -1;
+			local_cfg |= DPAA2_FLOW_FS_TYPE;
 		}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				&spec->hdr.ether_type,
-				&mask->hdr.ether_type,
-				sizeof(rte_be16_t));
+		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
-			return -1;
+			DPAA2_PMD_ERR("FS[%d] prev rule set failed",
+				group);
+			return -EINVAL;
 		}
 	}
 
-	(*device_configured) |= local_cfg;
+	if (recfg)
+		*recfg = local_cfg;
 
 	return 0;
 }
 
 static int
-dpaa2_configure_flow_vlan(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
+	enum net_prot prot, uint32_t field,
+	const void *key, const void *mask, int size,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int index, ret;
-	int local_cfg = 0;
-	uint32_t group;
-	const struct rte_flow_item_vlan *spec, *mask;
-
-	const struct rte_flow_item_vlan *last __rte_unused;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-
-	group = attr->group;
+	int index, ret, local_cfg = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
 
-	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_vlan *)pattern->spec;
-	last    = (const struct rte_flow_item_vlan *)pattern->last;
-	mask    = (const struct rte_flow_item_vlan *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_vlan_mask);
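+	/* IP src/dst extracts are handled by the dedicated IP-address
+	 * path (dpaa2_flow_add_ipaddr_extract_rule) and rejected here.
+	 */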
+	if (dpaa2_flow_ip_address_extract(prot, field))
+		return -EINVAL;
 
-	/* Get traffic class index and flow id to be configured */
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
 
-	if (!spec) {
-		/* Don't care any field of vlan header,
-		 * only care vlan protocol.
-		 */
-		/* Eth type is actually used for vLan classification.
-		 */
-		struct proto_discrimination proto;
+	key_profile = &key_extract->key_profile;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-						&priv->extract.qos_key_extract,
-						RTE_FLOW_ITEM_TYPE_ETH);
-			if (ret) {
-				DPAA2_PMD_ERR(
-				"QoS Ext ETH_TYPE to discriminate vLan failed");
+	index = dpaa2_flow_extract_search(key_profile,
+			prot, field);
+	if (index < 0) {
+		ret = dpaa2_flow_extract_add_hdr(prot,
+				field, size, priv,
+				dist_type, tc_id, NULL);
+		if (ret) {
+			DPAA2_PMD_ERR("Add extract P(%d)/F(%d) failed",
+				prot, field);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+			return ret;
 		}
+		local_cfg |= dist_type;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					RTE_FLOW_ITEM_TYPE_ETH);
-			if (ret) {
-				DPAA2_PMD_ERR(
-				"FS Ext ETH_TYPE to discriminate vLan failed.");
+	ret = dpaa2_flow_hdr_rule_data_set(flow, key_profile,
+			prot, field, size, key, mask, dist_type);
+	if (ret) {
+		DPAA2_PMD_ERR("P(%d)/F(%d) rule data set failed",
+			prot, field);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		return ret;
+	}
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"Move ipaddr before vLan discrimination set failed");
-			return -1;
-		}
+	if (recfg)
+		*recfg |= local_cfg;
 
-		proto.type = RTE_FLOW_ITEM_TYPE_ETH;
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("vLan discrimination rule set failed");
-			return -1;
-		}
+	return 0;
+}
 
-		(*device_configured) |= local_cfg;
+static int
+dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
+	enum net_prot prot, uint32_t field,
+	const void *key, const void *mask, int size,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int local_cfg = 0, num, ipaddr_extract_len = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
+	struct dpkg_profile_cfg *dpkg;
+	uint8_t *key_addr, *mask_addr;
+	union ip_addr_extract_rule *ip_addr_data;
+	union ip_addr_extract_rule *ip_addr_mask;
+	enum net_prot orig_prot;
+	uint32_t orig_field;
+
+	if (prot != NET_PROT_IPV4 && prot != NET_PROT_IPV6)
+		return -EINVAL;
 
-		return 0;
+	if (prot == NET_PROT_IPV4 && field != NH_FLD_IPV4_SRC_IP &&
+		field != NH_FLD_IPV4_DST_IP) {
+		return -EINVAL;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VLAN)) {
-		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
-
-		return -1;
+	if (prot == NET_PROT_IPV6 && field != NH_FLD_IPV6_SRC_IP &&
+		field != NH_FLD_IPV6_DST_IP) {
+		return -EINVAL;
 	}
 
-	if (!mask->hdr.vlan_tci)
-		return 0;
-
-	index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_VLAN, NH_FLD_VLAN_TCI);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-						&priv->extract.qos_key_extract,
-						NET_PROT_VLAN,
-						NH_FLD_VLAN_TCI,
-						sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract add VLAN_TCI failed.");
+	orig_prot = prot;
+	orig_field = field;
 
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+	if (prot == NET_PROT_IPV4 &&
+		field == NH_FLD_IPV4_SRC_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_SRC;
+	} else if (prot == NET_PROT_IPV4 &&
+		field == NH_FLD_IPV4_DST_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_DST;
+	} else if (prot == NET_PROT_IPV6 &&
+		field == NH_FLD_IPV6_SRC_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_SRC;
+	} else if (prot == NET_PROT_IPV6 &&
+		field == NH_FLD_IPV6_DST_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_DST;
+	} else {
+		DPAA2_PMD_ERR("Invalid P(%d)/F(%d) to extract IP address",
+			prot, field);
+		return -EINVAL;
 	}
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_VLAN, NH_FLD_VLAN_TCI);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.tc_key_extract[group],
-				NET_PROT_VLAN,
-				NH_FLD_VLAN_TCI,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract add VLAN_TCI failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		key_extract = &priv->extract.qos_key_extract;
+		key_profile = &key_extract->key_profile;
+		dpkg = &key_extract->dpkg;
+		num = key_profile->num;
+		key_addr = flow->qos_key_addr;
+		mask_addr = flow->qos_mask_addr;
+	} else {
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+		key_profile = &key_extract->key_profile;
+		dpkg = &key_extract->dpkg;
+		num = key_profile->num;
+		key_addr = flow->fs_key_addr;
+		mask_addr = flow->fs_mask_addr;
 	}
 
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before VLAN TCI rule set failed");
-		return -1;
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts exceeds the maximum");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_VLAN,
-				NH_FLD_VLAN_TCI,
-				&spec->hdr.vlan_tci,
-				&mask->hdr.vlan_tci,
-				sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
-		return -1;
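+	/* The first IP address extract reserves an IPv6-sized slot at the
+	 * tail of the key so IPv4 and IPv6 rules share one key layout.
+	 */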
+	if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT) {
+		if (field == NH_FLD_IP_SRC)
+			key_profile->ip_addr_type = IP_SRC_EXTRACT;
+		else
+			key_profile->ip_addr_type = IP_DST_EXTRACT;
+		ipaddr_extract_len = size;
+
+		key_profile->ip_addr_extract_pos = num;
+		if (num > 0) {
+			key_profile->ip_addr_extract_off =
+				key_profile->key_offset[num - 1] +
+				key_profile->key_size[num - 1];
+		} else {
+			key_profile->ip_addr_extract_off = 0;
+		}
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	} else if (key_profile->ip_addr_type == IP_SRC_EXTRACT) {
+		if (field == NH_FLD_IP_SRC) {
+			ipaddr_extract_len = size;
+			goto rule_configure;
+		}
+		key_profile->ip_addr_type = IP_SRC_DST_EXTRACT;
+		ipaddr_extract_len = size * 2;
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	} else if (key_profile->ip_addr_type == IP_DST_EXTRACT) {
+		if (field == NH_FLD_IP_DST) {
+			ipaddr_extract_len = size;
+			goto rule_configure;
+		}
+		key_profile->ip_addr_type = IP_DST_SRC_EXTRACT;
+		ipaddr_extract_len = size * 2;
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	}
+	key_profile->num++;
+
+	dpkg->extracts[num].extract.from_hdr.prot = prot;
+	dpkg->extracts[num].extract.from_hdr.field = field;
+	dpkg->extracts[num].extract.from_hdr.type = DPKG_FULL_FIELD;
+	dpkg->num_extracts++;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		local_cfg = DPAA2_FLOW_QOS_TYPE;
+	else
+		local_cfg = DPAA2_FLOW_FS_TYPE;
+
+rule_configure:
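+	/* Copy the address key/mask into the src/dst union at the key
+	 * tail; the member used depends on the extract order recorded
+	 * in ip_addr_type.
+	 */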
+	key_addr += key_profile->ip_addr_extract_off;
+	ip_addr_data = (union ip_addr_extract_rule *)key_addr;
+	mask_addr += key_profile->ip_addr_extract_off;
+	ip_addr_mask = (union ip_addr_extract_rule *)mask_addr;
+
+	if (orig_prot == NET_PROT_IPV4 &&
+		orig_field == NH_FLD_IPV4_SRC_IP) {
+		if (key_profile->ip_addr_type == IP_SRC_EXTRACT ||
+			key_profile->ip_addr_type == IP_SRC_DST_EXTRACT) {
+			memcpy(&ip_addr_data->ipv4_sd_addr.ipv4_src,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_sd_addr.ipv4_src,
+				mask, size);
+		} else {
+			memcpy(&ip_addr_data->ipv4_ds_addr.ipv4_src,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_ds_addr.ipv4_src,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV4 &&
+		orig_field == NH_FLD_IPV4_DST_IP) {
+		if (key_profile->ip_addr_type == IP_DST_EXTRACT ||
+			key_profile->ip_addr_type == IP_DST_SRC_EXTRACT) {
+			memcpy(&ip_addr_data->ipv4_ds_addr.ipv4_dst,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_ds_addr.ipv4_dst,
+				mask, size);
+		} else {
+			memcpy(&ip_addr_data->ipv4_sd_addr.ipv4_dst,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_sd_addr.ipv4_dst,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV6 &&
+		orig_field == NH_FLD_IPV6_SRC_IP) {
+		if (key_profile->ip_addr_type == IP_SRC_EXTRACT ||
+			key_profile->ip_addr_type == IP_SRC_DST_EXTRACT) {
+			memcpy(ip_addr_data->ipv6_sd_addr.ipv6_src,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_sd_addr.ipv6_src,
+				mask, size);
+		} else {
+			memcpy(ip_addr_data->ipv6_ds_addr.ipv6_src,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_ds_addr.ipv6_src,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV6 &&
+		orig_field == NH_FLD_IPV6_DST_IP) {
+		if (key_profile->ip_addr_type == IP_DST_EXTRACT ||
+			key_profile->ip_addr_type == IP_DST_SRC_EXTRACT) {
+			memcpy(ip_addr_data->ipv6_ds_addr.ipv6_dst,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_ds_addr.ipv6_dst,
+				mask, size);
+		} else {
+			memcpy(ip_addr_data->ipv6_sd_addr.ipv6_dst,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_sd_addr.ipv6_dst,
+				mask, size);
+		}
 	}
 
-	ret = dpaa2_flow_rule_data_set(
-			&priv->extract.tc_key_extract[group],
-			&flow->fs_rule,
-			NET_PROT_VLAN,
-			NH_FLD_VLAN_TCI,
-			&spec->hdr.vlan_tci,
-			&mask->hdr.vlan_tci,
-			sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
-		return -1;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		flow->qos_rule_size =
+			key_profile->ip_addr_extract_off + ipaddr_extract_len;
+	} else {
+		flow->fs_rule_size =
+			key_profile->ip_addr_extract_off + ipaddr_extract_len;
 	}
 
-	(*device_configured) |= local_cfg;
+	if (recfg)
+		*recfg |= local_cfg;
 
 	return 0;
 }
 
 static int
-dpaa2_configure_flow_ip_discrimation(
-	struct dpaa2_dev_priv *priv, struct rte_flow *flow,
+dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
-	int *local_cfg,	int *device_configured,
-	uint32_t group)
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	struct proto_discrimination proto;
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_eth *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.qos_key_extract.dpkg,
-			NET_PROT_ETH, NH_FLD_ETH_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.qos_key_extract,
-				RTE_FLOW_ITEM_TYPE_ETH);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"QoS Extract ETH_TYPE to discriminate IP failed.");
-			return -1;
-		}
-		(*local_cfg) |= DPAA2_QOS_TABLE_RECONFIGURE;
-	}
+	group = attr->group;
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_ETH, NH_FLD_ETH_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				RTE_FLOW_ITEM_TYPE_ETH);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"FS Extract ETH_TYPE to discriminate IP failed.");
-			return -1;
-		}
-		(*local_cfg) |= DPAA2_FS_TABLE_RECONFIGURE;
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+			pattern->mask : &dpaa2_flow_item_eth_mask;
+	if (!spec) {
+		DPAA2_PMD_WARN("No pattern spec for Eth flow");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before IP discrimination set failed");
-		return -1;
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH)) {
+		DPAA2_PMD_WARN("Extract field(s) of Ethernet not supported");
+
+		return -EINVAL;
 	}
 
-	proto.type = RTE_FLOW_ITEM_TYPE_ETH;
-	if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4)
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
-	else
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	ret = dpaa2_flow_proto_discrimination_rule(priv, flow, proto, group);
-	if (ret) {
-		DPAA2_PMD_ERR("IP discrimination rule set failed");
-		return -1;
+	if (memcmp((const char *)&mask->src,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_SA, &spec->src.addr_bytes,
+			&mask->src.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_SA, &spec->src.addr_bytes,
+			&mask->src.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->dst,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_DA, &spec->dst.addr_bytes,
+			&mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_DA, &spec->dst.addr_bytes,
+			&mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->type,
+		zero_cmp, sizeof(rte_be16_t))) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_TYPE, &spec->type,
+			&mask->type, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_TYPE, &spec->type,
+			&mask->type, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	(*device_configured) |= (*local_cfg);
+	(*device_configured) |= local_cfg;
 
 	return 0;
 }
 
-
 static int
-dpaa2_configure_flow_generic_ip(
-	struct rte_flow *flow,
+dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
@@ -1409,419 +1509,338 @@ dpaa2_configure_flow_generic_ip(
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
-	const struct rte_flow_item_ipv4 *spec_ipv4 = 0,
-		*mask_ipv4 = 0;
-	const struct rte_flow_item_ipv6 *spec_ipv6 = 0,
-		*mask_ipv6 = 0;
-	const void *key, *mask;
-	enum net_prot prot;
-
+	const struct rte_flow_item_vlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
-	int size;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
-		spec_ipv4 = (const struct rte_flow_item_ipv4 *)pattern->spec;
-		mask_ipv4 = (const struct rte_flow_item_ipv4 *)
-			(pattern->mask ? pattern->mask :
-					&dpaa2_flow_item_ipv4_mask);
-	} else {
-		spec_ipv6 = (const struct rte_flow_item_ipv6 *)pattern->spec;
-		mask_ipv6 = (const struct rte_flow_item_ipv6 *)
-			(pattern->mask ? pattern->mask :
-					&dpaa2_flow_item_ipv6_mask);
-	}
+	spec = pattern->spec;
+	mask = pattern->mask ? pattern->mask : &dpaa2_flow_item_vlan_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	ret = dpaa2_configure_flow_ip_discrimation(priv,
-			flow, pattern, &local_cfg,
-			device_configured, group);
-	if (ret) {
-		DPAA2_PMD_ERR("IP discrimination failed!");
-		return -1;
+	if (!spec) {
+		struct prev_proto_field_id prev_proto;
+
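+		/* No spec: classify VLAN presence by the EtherType of the
+		 * preceding Ethernet header.
+		 */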
+		prev_proto.prot = NET_PROT_ETH;
+		prev_proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_proto,
+				DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+				       RTE_FLOW_ITEM_TYPE_VLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of VLAN not supported.");
+		return -EINVAL;
 	}
 
-	if (!spec_ipv4 && !spec_ipv6)
+	if (!mask->tci)
 		return 0;
 
-	if (mask_ipv4) {
-		if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
-			RTE_FLOW_ITEM_TYPE_IPV4)) {
-			DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
+					      NH_FLD_VLAN_TCI, &spec->tci,
+					      &mask->tci, sizeof(rte_be16_t),
+					      priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
 
-			return -1;
-		}
-	}
-
-	if (mask_ipv6) {
-		if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
-			RTE_FLOW_ITEM_TYPE_IPV6)) {
-			DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
-
-			return -1;
-		}
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
+					      NH_FLD_VLAN_TCI, &spec->tci,
+					      &mask->tci, sizeof(rte_be16_t),
+					      priv, group, &local_cfg,
+					      DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
 
-	if (mask_ipv4 && (mask_ipv4->hdr.src_addr ||
-		mask_ipv4->hdr.dst_addr)) {
-		flow->ipaddr_rule.ipaddr_type = FLOW_IPV4_ADDR;
-	} else if (mask_ipv6 &&
-		(memcmp(&mask_ipv6->hdr.src_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE) ||
-		memcmp(&mask_ipv6->hdr.dst_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		flow->ipaddr_rule.ipaddr_type = FLOW_IPV6_ADDR;
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.src_addr) ||
-		(mask_ipv6 &&
-			memcmp(&mask_ipv6->hdr.src_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_IP,
-					NH_FLD_IP_SRC,
-					0);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_SRC failed.");
+	(*device_configured) |= local_cfg;
+	return 0;
+}
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+static int
+dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
+			  const struct rte_flow_attr *attr,
+			  const struct rte_flow_item *pattern,
+			  const struct rte_flow_action actions[] __rte_unused,
+			  struct rte_flow_error *error __rte_unused,
+			  int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ipv4 *spec_ipv4 = 0, *mask_ipv4 = 0;
+	const void *key, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	int size;
+	struct prev_proto_field_id prev_prot;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_SRC,
-					0);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_SRC failed.");
+	group = attr->group;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	/* Parse pattern list to get the matching parameters */
+	spec_ipv4 = pattern->spec;
+	mask_ipv4 = pattern->mask ?
+		    pattern->mask : &dpaa2_flow_item_ipv4_mask;
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.src_addr;
-		else
-			key = &spec_ipv6->hdr.src_addr;
-		if (mask_ipv4) {
-			mask = &mask_ipv4->hdr.src_addr;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-			prot = NET_PROT_IPV4;
-		} else {
-			mask = &mask_ipv6->hdr.src_addr;
-			size = NH_FLD_IPV6_ADDR_SIZE;
-			prot = NET_PROT_IPV6;
-		}
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				prot, NH_FLD_IP_SRC,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_SRC rule data set failed");
-			return -1;
-		}
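+	/* IPv4 is identified by EtherType before matching header fields */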
+	prev_prot.prot = NET_PROT_ETH;
+	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				prot, NH_FLD_IP_SRC,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_SRC rule data set failed");
-			return -1;
-		}
+	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE, group,
+			&local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("IPv4 identification failed!");
+		return ret;
+	}
 
-		flow->ipaddr_rule.qos_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_SRC);
-		flow->ipaddr_rule.fs_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[group],
-				prot, NH_FLD_IP_SRC);
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.dst_addr) ||
-		(mask_ipv6 &&
-			memcmp(&mask_ipv6->hdr.dst_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_DST);
-		if (index < 0) {
-			if (mask_ipv4)
-				size = NH_FLD_IPV4_ADDR_SIZE;
-			else
-				size = NH_FLD_IPV6_ADDR_SIZE;
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_IP,
-					NH_FLD_IP_DST,
-					size);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
+	if (!spec_ipv4)
+		return 0;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
+				       RTE_FLOW_ITEM_TYPE_IPV4)) {
+		DPAA2_PMD_WARN("Extract field(s) of IPv4 not supported.");
+		return -EINVAL;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_DST);
-		if (index < 0) {
-			if (mask_ipv4)
-				size = NH_FLD_IPV4_ADDR_SIZE;
-			else
-				size = NH_FLD_IPV6_ADDR_SIZE;
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_DST,
-					size);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_DST failed.");
+	if (mask_ipv4->hdr.src_addr) {
+		key = &spec_ipv4->hdr.src_addr;
+		mask = &mask_ipv4->hdr.src_addr;
+		size = sizeof(rte_be32_t);
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv4->hdr.dst_addr) {
+		key = &spec_ipv4->hdr.dst_addr;
+		mask = &mask_ipv4->hdr.dst_addr;
+		size = sizeof(rte_be32_t);
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv4->hdr.next_proto_id) {
+		key = &spec_ipv4->hdr.next_proto_id;
+		mask = &mask_ipv4->hdr.next_proto_id;
+		size = sizeof(uint8_t);
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	(*device_configured) |= local_cfg;
+	return 0;
+}
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.dst_addr;
-		else
-			key = &spec_ipv6->hdr.dst_addr;
-		if (mask_ipv4) {
-			mask = &mask_ipv4->hdr.dst_addr;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-			prot = NET_PROT_IPV4;
-		} else {
-			mask = &mask_ipv6->hdr.dst_addr;
-			size = NH_FLD_IPV6_ADDR_SIZE;
-			prot = NET_PROT_IPV6;
-		}
+static int
+dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
+			  const struct rte_flow_attr *attr,
+			  const struct rte_flow_item *pattern,
+			  const struct rte_flow_action actions[] __rte_unused,
+			  struct rte_flow_error *error __rte_unused,
+			  int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ipv6 *spec_ipv6 = 0, *mask_ipv6 = 0;
+	const void *key, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
+	int size;
+	struct prev_proto_field_id prev_prot;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				prot, NH_FLD_IP_DST,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_DST rule data set failed");
-			return -1;
-		}
+	group = attr->group;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				prot, NH_FLD_IP_DST,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_DST rule data set failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_DST);
-		flow->ipaddr_rule.fs_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[group],
-				prot, NH_FLD_IP_DST);
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.next_proto_id) ||
-		(mask_ipv6 && mask_ipv6->hdr.proto)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-				&priv->extract.qos_key_extract,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				NH_FLD_IP_PROTO_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
+	/* Parse pattern list to get the matching parameters */
+	spec_ipv6 = pattern->spec;
+	mask_ipv6 = pattern->mask ? pattern->mask : &dpaa2_flow_item_ipv6_mask;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_PROTO,
-					NH_FLD_IP_PROTO_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_DST failed.");
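+	/* IPv6 is identified by EtherType before matching header fields */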
+	prev_prot.prot = NET_PROT_ETH;
+	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("IPv6 identification failed!");
+		return ret;
+	}
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr after NH_FLD_IP_PROTO rule set failed");
-			return -1;
-		}
+	if (!spec_ipv6)
+		return 0;
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.next_proto_id;
-		else
-			key = &spec_ipv6->hdr.proto;
-		if (mask_ipv4)
-			mask = &mask_ipv4->hdr.next_proto_id;
-		else
-			mask = &mask_ipv6->hdr.proto;
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				key,	mask, NH_FLD_IP_PROTO_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_PROTO rule data set failed");
-			return -1;
-		}
+	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
+				       RTE_FLOW_ITEM_TYPE_IPV6)) {
+		DPAA2_PMD_WARN("Extract field(s) of IPv6 not supported.");
+		return -EINVAL;
+	}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				key,	mask, NH_FLD_IP_PROTO_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_PROTO rule data set failed");
-			return -1;
-		}
+	if (memcmp((const char *)&mask_ipv6->hdr.src_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
+		key = &spec_ipv6->hdr.src_addr;
+		mask = &mask_ipv6->hdr.src_addr;
+		size = NH_FLD_IPV6_ADDR_SIZE;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask_ipv6->hdr.dst_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
+		key = &spec_ipv6->hdr.dst_addr;
+		mask = &mask_ipv6->hdr.dst_addr;
+		size = NH_FLD_IPV6_ADDR_SIZE;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv6->hdr.proto) {
+		key = &spec_ipv6->hdr.proto;
+		mask = &mask_ipv6->hdr.proto;
+		size = sizeof(uint8_t);
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
-
 	return 0;
 }
 
 static int
-dpaa2_configure_flow_icmp(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_icmp *spec, *mask;
-
-	const struct rte_flow_item_icmp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_icmp *)pattern->spec;
-	last    = (const struct rte_flow_item_icmp *)pattern->last;
-	mask    = (const struct rte_flow_item_icmp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_icmp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_icmp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		/* Don't care any field of ICMP header,
-		 * only care ICMP protocol.
-		 * Example: flow create 0 ingress pattern icmp /
-		 */
 		/* Next proto of generic IP is actually used
 		 * for ICMP identification.
+		 * Example: flow create 0 ingress pattern icmp
 		 */
-		struct proto_discrimination proto;
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate ICMP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate ICMP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before ICMP discrimination set failed");
-			return -1;
-		}
+		struct prev_proto_field_id prev_proto;
 
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_ICMP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("ICMP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_ICMP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
-
 		return 0;
 	}
 
@@ -1829,145 +1848,39 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_ICMP)) {
 		DPAA2_PMD_WARN("Extract field(s) of ICMP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.icmp_type) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_TYPE,
-					NH_FLD_ICMP_TYPE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ICMP_TYPE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_TYPE,
-					NH_FLD_ICMP_TYPE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ICMP_TYPE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ICMP TYPE set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_TYPE, &spec->hdr.icmp_type,
+			&mask->hdr.icmp_type, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_TYPE,
-				&spec->hdr.icmp_type,
-				&mask->hdr.icmp_type,
-				NH_FLD_ICMP_TYPE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ICMP_TYPE rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_TYPE,
-				&spec->hdr.icmp_type,
-				&mask->hdr.icmp_type,
-				NH_FLD_ICMP_TYPE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ICMP_TYPE rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_TYPE, &spec->hdr.icmp_type,
+			&mask->hdr.icmp_type, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.icmp_code) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_CODE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_CODE,
-					NH_FLD_ICMP_CODE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ICMP_CODE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_CODE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_CODE,
-					NH_FLD_ICMP_CODE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ICMP_CODE failed.");
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_CODE, &spec->hdr.icmp_code,
+			&mask->hdr.icmp_code, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr after ICMP CODE set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_CODE,
-				&spec->hdr.icmp_code,
-				&mask->hdr.icmp_code,
-				NH_FLD_ICMP_CODE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ICMP_CODE rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_CODE,
-				&spec->hdr.icmp_code,
-				&mask->hdr.icmp_code,
-				NH_FLD_ICMP_CODE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ICMP_CODE rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_CODE, &spec->hdr.icmp_code,
+			&mask->hdr.icmp_code, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -1976,84 +1889,41 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_udp(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_udp *spec, *mask;
-
-	const struct rte_flow_item_udp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_udp *)pattern->spec;
-	last    = (const struct rte_flow_item_udp *)pattern->last;
-	mask    = (const struct rte_flow_item_udp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_udp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_udp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate UDP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate UDP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before UDP discrimination set failed");
-			return -1;
-		}
+		struct prev_proto_field_id prev_proto;
 
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_UDP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("UDP discrimination rule set failed");
-			return -1;
-		}
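+		/* UDP is identified by the IP next-protocol field */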
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_UDP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2065,149 +1935,40 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_UDP)) {
 		DPAA2_PMD_WARN("Extract field(s) of UDP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add UDP_SRC failed.");
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_SRC,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add UDP_SRC failed.");
+	if (mask->hdr.dst_port) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before UDP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_UDP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_UDP_PORT_SRC rule data set failed");
-			return -1;
-		}
-	}
-
-	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_DST,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add UDP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_DST,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add UDP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before UDP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_UDP_PORT_DST rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_UDP_PORT_DST rule data set failed");
-			return -1;
-		}
-	}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
 	(*device_configured) |= local_cfg;
 
@@ -2215,84 +1976,41 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_tcp(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_tcp *spec, *mask;
-
-	const struct rte_flow_item_tcp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_tcp *)pattern->spec;
-	last    = (const struct rte_flow_item_tcp *)pattern->last;
-	mask    = (const struct rte_flow_item_tcp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_tcp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_tcp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate TCP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate TCP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before TCP discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_TCP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("TCP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_TCP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2304,149 +2022,39 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_TCP)) {
 		DPAA2_PMD_WARN("Extract field(s) of TCP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_SRC,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add TCP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_SRC,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add TCP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before TCP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_TCP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_TCP_PORT_SRC rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_DST,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add TCP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_DST,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add TCP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before TCP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_TCP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_TCP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -2455,85 +2063,41 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_sctp(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_sctp *spec, *mask;
-
-	const struct rte_flow_item_sctp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_sctp *)pattern->spec;
-	last    = (const struct rte_flow_item_sctp *)pattern->last;
-	mask    = (const struct rte_flow_item_sctp *)
-			(pattern->mask ? pattern->mask :
-				&dpaa2_flow_item_sctp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_sctp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate SCTP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate SCTP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_SCTP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("SCTP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_SCTP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2549,145 +2113,35 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_SRC,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add SCTP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_SRC,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add SCTP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_SCTP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_SCTP_PORT_SRC rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_DST,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add SCTP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_DST,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add SCTP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_SCTP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_SCTP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -2696,88 +2150,46 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_gre(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_gre *spec, *mask;
-
-	const struct rte_flow_item_gre *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_gre *)pattern->spec;
-	last    = (const struct rte_flow_item_gre *)pattern->last;
-	mask    = (const struct rte_flow_item_gre *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_gre_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_gre_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate GRE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate GRE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before GRE discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_GRE;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("GRE discrimination rule set failed");
-			return -1;
-		}
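+		/* No GRE spec to match on: classify by the IP protocol
+		 * field (IPPROTO_GRE) only.
+		 */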
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_GRE;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
-		return 0;
+		if (!spec)
+			return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2790,74 +2202,19 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 	if (!mask->protocol)
 		return 0;
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.qos_key_extract.dpkg,
-			NET_PROT_GRE, NH_FLD_GRE_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.qos_key_extract,
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract add GRE_TYPE failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-	}
-
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_GRE, NH_FLD_GRE_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.tc_key_extract[group],
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract add GRE_TYPE failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-	}
-
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before GRE_TYPE set failed");
-		return -1;
-	}
-
-	ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				&spec->protocol,
-				&mask->protocol,
-				sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"QoS NH_FLD_GRE_TYPE rule data set failed");
-		return -1;
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GRE,
+			NH_FLD_GRE_TYPE, &spec->protocol,
+			&mask->protocol, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_rule_data_set(
-			&priv->extract.tc_key_extract[group],
-			&flow->fs_rule,
-			NET_PROT_GRE,
-			NH_FLD_GRE_TYPE,
-			&spec->protocol,
-			&mask->protocol,
-			sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"FS NH_FLD_GRE_TYPE rule data set failed");
-		return -1;
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GRE,
+			NH_FLD_GRE_TYPE, &spec->protocol,
+			&mask->protocol, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
 
 	(*device_configured) |= local_cfg;
 
@@ -2865,404 +2222,109 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_raw(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_raw *spec = pattern->spec;
 	const struct rte_flow_item_raw *mask = pattern->mask;
 	int prev_key_size =
-		priv->extract.qos_key_extract.key_info.key_total_size;
+		priv->extract.qos_key_extract.key_profile.key_max_size;
 	int local_cfg = 0, ret;
 	uint32_t group;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
-		DPAA2_PMD_ERR("spec or mask not present.");
-		return -EINVAL;
-	}
-	/* Only supports non-relative with offset 0 */
-	if (spec->relative || spec->offset != 0 ||
-	    spec->search || spec->limit) {
-		DPAA2_PMD_ERR("relative and non zero offset not supported.");
-		return -EINVAL;
-	}
-	/* Spec len and mask len should be same */
-	if (spec->length != mask->length) {
-		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
-		return -EINVAL;
-	}
-
-	/* Get traffic class index and flow id to be configured */
-	group = attr->group;
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
-
-	if (prev_key_size <= spec->length) {
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
-						 spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-
-		ret = dpaa2_flow_extract_add_raw(
-					&priv->extract.tc_key_extract[group],
-					spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-	}
-
-	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
-					   mask->pattern, spec->length);
-	if (ret) {
-		DPAA2_PMD_ERR("QoS RAW rule data set failed");
-		return -1;
-	}
-
-	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
-					   mask->pattern, spec->length);
-	if (ret) {
-		DPAA2_PMD_ERR("FS RAW rule data set failed");
-		return -1;
-	}
-
-	(*device_configured) |= local_cfg;
-
-	return 0;
-}
-
-static inline int
-dpaa2_fs_action_supported(enum rte_flow_action_type action)
-{
-	int i;
-
-	for (i = 0; i < (int)(sizeof(dpaa2_supported_fs_action_type) /
-					sizeof(enum rte_flow_action_type)); i++) {
-		if (action == dpaa2_supported_fs_action_type[i])
-			return 1;
-	}
-
-	return 0;
-}
-/* The existing QoS/FS entry with IP address(es)
- * needs update after
- * new extract(s) are inserted before IP
- * address(es) extract(s).
- */
-static int
-dpaa2_flow_entry_update(
-	struct dpaa2_dev_priv *priv, uint8_t tc_id)
-{
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
-	int ret;
-	int qos_ipsrc_offset = -1, qos_ipdst_offset = -1;
-	int fs_ipsrc_offset = -1, fs_ipdst_offset = -1;
-	struct dpaa2_key_extract *qos_key_extract =
-		&priv->extract.qos_key_extract;
-	struct dpaa2_key_extract *tc_key_extract =
-		&priv->extract.tc_key_extract[tc_id];
-	char ipsrc_key[NH_FLD_IPV6_ADDR_SIZE];
-	char ipdst_key[NH_FLD_IPV6_ADDR_SIZE];
-	char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE];
-	char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE];
-	int extend = -1, extend1, size = -1;
-	uint16_t qos_index;
-
-	while (curr) {
-		if (curr->ipaddr_rule.ipaddr_type ==
-			FLOW_NONE_IPADDR) {
-			curr = LIST_NEXT(curr, next);
-			continue;
-		}
-
-		if (curr->ipaddr_rule.ipaddr_type ==
-			FLOW_IPV4_ADDR) {
-			qos_ipsrc_offset =
-				qos_key_extract->key_info.ipv4_src_offset;
-			qos_ipdst_offset =
-				qos_key_extract->key_info.ipv4_dst_offset;
-			fs_ipsrc_offset =
-				tc_key_extract->key_info.ipv4_src_offset;
-			fs_ipdst_offset =
-				tc_key_extract->key_info.ipv4_dst_offset;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-		} else {
-			qos_ipsrc_offset =
-				qos_key_extract->key_info.ipv6_src_offset;
-			qos_ipdst_offset =
-				qos_key_extract->key_info.ipv6_dst_offset;
-			fs_ipsrc_offset =
-				tc_key_extract->key_info.ipv6_src_offset;
-			fs_ipdst_offset =
-				tc_key_extract->key_info.ipv6_dst_offset;
-			size = NH_FLD_IPV6_ADDR_SIZE;
-		}
-
-		qos_index = curr->tc_id * priv->fs_entries +
-			curr->tc_index;
-
-		dpaa2_flow_qos_entry_log("Before update", curr, qos_index, stdout);
-
-		if (priv->num_rx_tc > 1) {
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
-					priv->token, &curr->qos_rule);
-			if (ret) {
-				DPAA2_PMD_ERR("Qos entry remove failed.");
-				return -1;
-			}
-		}
-
-		extend = -1;
-
-		if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
-			RTE_ASSERT(qos_ipsrc_offset >=
-				curr->ipaddr_rule.qos_ipsrc_offset);
-			extend1 = qos_ipsrc_offset -
-				curr->ipaddr_rule.qos_ipsrc_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-
-			memcpy(ipsrc_key,
-				(char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				0, size);
-
-			memcpy(ipsrc_mask,
-				(char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				0, size);
-
-			curr->ipaddr_rule.qos_ipsrc_offset = qos_ipsrc_offset;
-		}
-
-		if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
-			RTE_ASSERT(qos_ipdst_offset >=
-				curr->ipaddr_rule.qos_ipdst_offset);
-			extend1 = qos_ipdst_offset -
-				curr->ipaddr_rule.qos_ipdst_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-
-			memcpy(ipdst_key,
-				(char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				0, size);
-
-			memcpy(ipdst_mask,
-				(char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				0, size);
-
-			curr->ipaddr_rule.qos_ipdst_offset = qos_ipdst_offset;
-		}
-
-		if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-			memcpy((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				ipsrc_key,
-				size);
-			memcpy((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				ipsrc_mask,
-				size);
-		}
-		if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-			memcpy((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				ipdst_key,
-				size);
-			memcpy((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				ipdst_mask,
-				size);
-		}
-
-		if (extend >= 0)
-			curr->qos_real_key_size += extend;
-
-		curr->qos_rule.key_size = FIXED_ENTRY_SIZE;
-
-		dpaa2_flow_qos_entry_log("Start update", curr, qos_index, stdout);
-
-		if (priv->num_rx_tc > 1) {
-			ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
-					priv->token, &curr->qos_rule,
-					curr->tc_id, qos_index,
-					0, 0);
-			if (ret) {
-				DPAA2_PMD_ERR("Qos entry update failed.");
-				return -1;
-			}
-		}
-
-		if (!dpaa2_fs_action_supported(curr->action)) {
-			curr = LIST_NEXT(curr, next);
-			continue;
-		}
+		DPAA2_PMD_ERR("spec or mask not present.");
+		return -EINVAL;
+	}
+	/* Only supports non-relative with offset 0 */
+	if (spec->relative || spec->offset != 0 ||
+	    spec->search || spec->limit) {
+		DPAA2_PMD_ERR("relative and non-zero offset not supported.");
+		return -EINVAL;
+	}
+	/* Spec len and mask len should be same */
+	if (spec->length != mask->length) {
+		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
+		return -EINVAL;
+	}
 
-		dpaa2_flow_fs_entry_log("Before update", curr, stdout);
-		extend = -1;
+	/* Get traffic class index and flow id to be configured */
+	group = attr->group;
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW,
-				priv->token, curr->tc_id, &curr->fs_rule);
+	if (prev_key_size <= spec->length) {
+		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
+						 spec->length);
 		if (ret) {
-			DPAA2_PMD_ERR("FS entry remove failed.");
+			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
 			return -1;
 		}
+		local_cfg |= DPAA2_FLOW_QOS_TYPE;
 
-		if (curr->ipaddr_rule.fs_ipsrc_offset >= 0 &&
-			tc_id == curr->tc_id) {
-			RTE_ASSERT(fs_ipsrc_offset >=
-				curr->ipaddr_rule.fs_ipsrc_offset);
-			extend1 = fs_ipsrc_offset -
-				curr->ipaddr_rule.fs_ipsrc_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			memcpy(ipsrc_key,
-				(char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				0, size);
-
-			memcpy(ipsrc_mask,
-				(char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				0, size);
-
-			curr->ipaddr_rule.fs_ipsrc_offset = fs_ipsrc_offset;
+		ret = dpaa2_flow_extract_add_raw(&priv->extract.tc_key_extract[group],
+					spec->length);
+		if (ret) {
+			DPAA2_PMD_ERR("FS Extract RAW add failed.");
+			return -1;
 		}
+		local_cfg |= DPAA2_FLOW_FS_TYPE;
+	}
 
-		if (curr->ipaddr_rule.fs_ipdst_offset >= 0 &&
-			tc_id == curr->tc_id) {
-			RTE_ASSERT(fs_ipdst_offset >=
-				curr->ipaddr_rule.fs_ipdst_offset);
-			extend1 = fs_ipdst_offset -
-				curr->ipaddr_rule.fs_ipdst_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			memcpy(ipdst_key,
-				(char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				0, size);
-
-			memcpy(ipdst_mask,
-				(char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				0, size);
-
-			curr->ipaddr_rule.fs_ipdst_offset = fs_ipdst_offset;
-		}
+	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
+					   mask->pattern, spec->length);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS RAW rule data set failed");
+		return -1;
+	}
 
-		if (curr->ipaddr_rule.fs_ipsrc_offset >= 0) {
-			memcpy((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				ipsrc_key,
-				size);
-			memcpy((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				ipsrc_mask,
-				size);
-		}
-		if (curr->ipaddr_rule.fs_ipdst_offset >= 0) {
-			memcpy((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				ipdst_key,
-				size);
-			memcpy((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				ipdst_mask,
-				size);
-		}
+	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
+					   mask->pattern, spec->length);
+	if (ret) {
+		DPAA2_PMD_ERR("FS RAW rule data set failed");
+		return -1;
+	}
 
-		if (extend >= 0)
-			curr->fs_real_key_size += extend;
-		curr->fs_rule.key_size = FIXED_ENTRY_SIZE;
+	(*device_configured) |= local_cfg;
 
-		dpaa2_flow_fs_entry_log("Start update", curr, stdout);
+	return 0;
+}
 
-		ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
-				priv->token, curr->tc_id, curr->tc_index,
-				&curr->fs_rule, &curr->action_cfg);
-		if (ret) {
-			DPAA2_PMD_ERR("FS entry update failed.");
-			return -1;
-		}
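+/* Check whether a flow action type can be backed by an FS entry. */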
+static inline int
+dpaa2_fs_action_supported(enum rte_flow_action_type action)
+{
+	int i;
+	int action_num = sizeof(dpaa2_supported_fs_action_type) /
+		sizeof(enum rte_flow_action_type);
 
-		curr = LIST_NEXT(curr, next);
+	for (i = 0; i < action_num; i++) {
+		if (action == dpaa2_supported_fs_action_type[i])
+			return true;
 	}
 
-	return 0;
+	return false;
 }
 
 static inline int
-dpaa2_flow_verify_attr(
-	struct dpaa2_dev_priv *priv,
+dpaa2_flow_verify_attr(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr)
 {
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
 
 	while (curr) {
 		if (curr->tc_id == attr->group &&
 			curr->tc_index == attr->priority) {
-			DPAA2_PMD_ERR(
-				"Flow with group %d and priority %d already exists.",
+			DPAA2_PMD_ERR("Flow(TC[%d].entry[%d]) exists",
 				attr->group, attr->priority);
 
-			return -1;
+			return -EINVAL;
 		}
 		curr = LIST_NEXT(curr, next);
 	}
@@ -3275,18 +2337,16 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_action *action)
 {
 	const struct rte_flow_action_port_id *port_id;
+	const struct rte_flow_action_ethdev *ethdev;
 	int idx = -1;
 	struct rte_eth_dev *dest_dev;
 
 	if (action->type == RTE_FLOW_ACTION_TYPE_PORT_ID) {
-		port_id = (const struct rte_flow_action_port_id *)
-					action->conf;
+		port_id = action->conf;
 		if (!port_id->original)
 			idx = port_id->id;
 	} else if (action->type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT) {
-		const struct rte_flow_action_ethdev *ethdev;
-
-		ethdev = (const struct rte_flow_action_ethdev *)action->conf;
+		ethdev = action->conf;
 		idx = ethdev->port_id;
 	} else {
 		return NULL;
@@ -3306,8 +2366,7 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 }
 
 static inline int
-dpaa2_flow_verify_action(
-	struct dpaa2_dev_priv *priv,
+dpaa2_flow_verify_action(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_action actions[])
 {
@@ -3319,15 +2378,14 @@ dpaa2_flow_verify_action(
 	while (!end_of_list) {
 		switch (actions[j].type) {
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
-			dest_queue = (const struct rte_flow_action_queue *)
-					(actions[j].conf);
+			dest_queue = actions[j].conf;
 			rxq = priv->rx_vq[dest_queue->index];
 			if (attr->group != rxq->tc_index) {
-				DPAA2_PMD_ERR(
-					"RXQ[%d] does not belong to the group %d",
-					dest_queue->index, attr->group);
+				DPAA2_PMD_ERR("FSQ(%d.%d) not in TC[%d]",
+					rxq->tc_index, rxq->flow_id,
+					attr->group);
 
-				return -1;
+				return -ENOTSUP;
 			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
@@ -3341,20 +2399,17 @@ dpaa2_flow_verify_action(
 			rss_conf = (const struct rte_flow_action_rss *)
 					(actions[j].conf);
 			if (rss_conf->queue_num > priv->dist_queues) {
-				DPAA2_PMD_ERR(
-					"RSS number exceeds the distribution size");
+				DPAA2_PMD_ERR("RSS number too large");
 				return -ENOTSUP;
 			}
 			for (i = 0; i < (int)rss_conf->queue_num; i++) {
 				if (rss_conf->queue[i] >= priv->nb_rx_queues) {
-					DPAA2_PMD_ERR(
-						"RSS queue index exceeds the number of RXQs");
+					DPAA2_PMD_ERR("RSS queue not in range");
 					return -ENOTSUP;
 				}
 				rxq = priv->rx_vq[rss_conf->queue[i]];
 				if (rxq->tc_index != attr->group) {
-					DPAA2_PMD_ERR(
-						"Queue/Group combination are not supported");
+					DPAA2_PMD_ERR("RSS queue not in group");
 					return -ENOTSUP;
 				}
 			}
@@ -3374,28 +2429,248 @@ dpaa2_flow_verify_action(
 }
 
 static int
-dpaa2_generic_flow_set(struct rte_flow *flow,
-		       struct rte_eth_dev *dev,
-		       const struct rte_flow_attr *attr,
-		       const struct rte_flow_item pattern[],
-		       const struct rte_flow_action actions[],
-		       struct rte_flow_error *error)
+dpaa2_configure_flow_fs_action(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct rte_flow_action *rte_action)
 {
+	struct rte_eth_dev *dest_dev;
+	struct dpaa2_dev_priv *dest_priv;
 	const struct rte_flow_action_queue *dest_queue;
+	struct dpaa2_queue *dest_q;
+
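+	/* Translate the fate action (QUEUE, PORT_ID or REPRESENTED_PORT)
+	 * into the dpni_fs_action_cfg programmed into the FS entry.
+	 */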
+	memset(&flow->fs_action_cfg, 0,
+		sizeof(struct dpni_fs_action_cfg));
+	flow->action_type = rte_action->type;
+
+	if (flow->action_type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		dest_queue = rte_action->conf;
+		dest_q = priv->rx_vq[dest_queue->index];
+		flow->fs_action_cfg.flow_id = dest_q->flow_id;
+	} else if (flow->action_type == RTE_FLOW_ACTION_TYPE_PORT_ID ||
+		   flow->action_type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT) {
+		dest_dev = dpaa2_flow_redirect_dev(priv, rte_action);
+		if (!dest_dev) {
+			DPAA2_PMD_ERR("Invalid device to redirect");
+			return -EINVAL;
+		}
+
+		dest_priv = dest_dev->data->dev_private;
+		dest_q = dest_priv->tx_vq[0];
+		flow->fs_action_cfg.options =
+			DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
+		flow->fs_action_cfg.redirect_obj_token =
+			dest_priv->token;
+		flow->fs_action_cfg.flow_id = dest_q->flow_id;
+	}
+
+	return 0;
+}
+
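+/*
+ * Pad the extract key size up to the fixed table entry size used by
+ * current MC firmware; 0 means the key does not fit in an entry.
+ */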
+static inline uint16_t
+dpaa2_flow_entry_size(uint16_t key_max_size)
+{
+	if (key_max_size > DPAA2_FLOW_ENTRY_MAX_SIZE) {
+		DPAA2_PMD_ERR("Key size(%d) > max(%d)",
+			key_max_size,
+			DPAA2_FLOW_ENTRY_MAX_SIZE);
+
+		return 0;
+	}
+
+	if (key_max_size > DPAA2_FLOW_ENTRY_MIN_SIZE)
+		return DPAA2_FLOW_ENTRY_MAX_SIZE;
+
+	/* Current MC only supports a fixed entry size (56). */
+	return DPAA2_FLOW_ENTRY_MAX_SIZE;
+}
+
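+/*
+ * Clear the FS entries of a TC before its key layout is re-programmed;
+ * the cached rules are re-added later by dpaa2_flow_rule_add_all().
+ */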
+static inline int
+dpaa2_flow_clear_fs_table(struct dpaa2_dev_priv *priv,
+	uint8_t tc_id)
+{
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	int need_clear = 0, ret;
+	struct fsl_mc_io *dpni = priv->hw;
+
+	while (curr) {
+		if (curr->tc_id == tc_id) {
+			need_clear = 1;
+			break;
+		}
+		curr = LIST_NEXT(curr, next);
+	}
+
+	if (need_clear) {
+		ret = dpni_clear_fs_entries(dpni, CMD_PRI_LOW,
+				priv->token, tc_id);
+		if (ret) {
+			DPAA2_PMD_ERR("TC[%d] clear failed", tc_id);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
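+/*
+ * (Re)program the distribution of one TC: clear stale FS entries,
+ * prepare the extract key, enable hash (RSS) or FS distribution and,
+ * for FS, replay the cached rules with the new entry size.
+ */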
+static int
+dpaa2_configure_fs_rss_table(struct dpaa2_dev_priv *priv,
+	uint8_t tc_id, uint16_t dist_size, int rss_dist)
+{
+	struct dpaa2_key_extract *tc_extract;
+	uint8_t *key_cfg_buf;
+	uint64_t key_cfg_iova;
+	int ret;
+	struct dpni_rx_dist_cfg tc_cfg;
+	struct fsl_mc_io *dpni = priv->hw;
+	uint16_t entry_size;
+	uint16_t key_max_size;
+
+	ret = dpaa2_flow_clear_fs_table(priv, tc_id);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] FS entries clear failed", tc_id);
+		return ret;
+	}
+
+	tc_extract = &priv->extract.tc_key_extract[tc_id];
+	key_cfg_buf = priv->extract.tc_extract_param[tc_id];
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+
+	key_max_size = tc_extract->key_profile.key_max_size;
+	entry_size = dpaa2_flow_entry_size(key_max_size);
+
+	dpaa2_flow_fs_extracts_log(priv, tc_id);
+	ret = dpkg_prepare_key_cfg(&tc_extract->dpkg,
+			key_cfg_buf);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] prepare key failed", tc_id);
+		return ret;
+	}
+
+	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
+	tc_cfg.dist_size = dist_size;
+	tc_cfg.key_cfg_iova = key_cfg_iova;
+	if (rss_dist)
+		tc_cfg.enable = true;
+	else
+		tc_cfg.enable = false;
+	tc_cfg.tc = tc_id;
+	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
+			priv->token, &tc_cfg);
+	if (ret < 0) {
+		if (rss_dist) {
+			DPAA2_PMD_ERR("RSS TC[%d] set failed",
+				tc_id);
+		} else {
+			DPAA2_PMD_ERR("FS TC[%d] hash disable failed",
+				tc_id);
+		}
+
+		return ret;
+	}
+
+	if (rss_dist)
+		return 0;
+
+	tc_cfg.enable = true;
+	tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
+	ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
+			priv->token, &tc_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] FS configuration failed", tc_id);
+		return ret;
+	}
+
+	ret = dpaa2_flow_rule_add_all(priv, DPAA2_FLOW_FS_TYPE,
+			entry_size, tc_id);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
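+/*
+ * (Re)program the QoS (TC selection) table; it only takes effect with
+ * multiple TCs or RSS. With RSS, misses are discarded rather than
+ * mapped to the default TC.
+ */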
+static int
+dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
+	int rss_dist)
+{
+	struct dpaa2_key_extract *qos_extract;
+	uint8_t *key_cfg_buf;
+	uint64_t key_cfg_iova;
+	int ret;
+	struct dpni_qos_tbl_cfg qos_cfg;
+	struct fsl_mc_io *dpni = priv->hw;
+	uint16_t entry_size;
+	uint16_t key_max_size;
+
+	if (!rss_dist && priv->num_rx_tc <= 1) {
+		/* QoS table is effective only for FS with multiple TCs or RSS. */
+		return 0;
+	}
+
+	if (LIST_FIRST(&priv->flows)) {
+		ret = dpni_clear_qos_table(dpni, CMD_PRI_LOW,
+				priv->token);
+		if (ret < 0) {
+			DPAA2_PMD_ERR("QoS table clear failed");
+			return ret;
+		}
+	}
+
+	qos_extract = &priv->extract.qos_key_extract;
+	key_cfg_buf = priv->extract.qos_extract_param;
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+
+	key_max_size = qos_extract->key_profile.key_max_size;
+	entry_size = dpaa2_flow_entry_size(key_max_size);
+
+	dpaa2_flow_qos_extracts_log(priv);
+
+	ret = dpkg_prepare_key_cfg(&qos_extract->dpkg,
+			key_cfg_buf);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("QoS prepare extract failed");
+		return ret;
+	}
+	memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
+	qos_cfg.keep_entries = true;
+	qos_cfg.key_cfg_iova = key_cfg_iova;
+	if (rss_dist) {
+		qos_cfg.discard_on_miss = true;
+	} else {
+		qos_cfg.discard_on_miss = false;
+		qos_cfg.default_tc = 0;
+	}
+
+	ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
+			priv->token, &qos_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("QoS table set failed");
+		return ret;
+	}
+
+	ret = dpaa2_flow_rule_add_all(priv, DPAA2_FLOW_QOS_TYPE,
+			entry_size, 0);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item pattern[],
+	const struct rte_flow_action actions[],
+	struct rte_flow_error *error)
+{
 	const struct rte_flow_action_rss *rss_conf;
 	int is_keycfg_configured = 0, end_of_list = 0;
 	int ret = 0, i = 0, j = 0;
-	struct dpni_rx_dist_cfg tc_cfg;
-	struct dpni_qos_tbl_cfg qos_cfg;
-	struct dpni_fs_action_cfg action;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct dpaa2_queue *dest_q;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
-	size_t param;
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
-	uint16_t qos_index;
-	struct rte_eth_dev *dest_dev;
-	struct dpaa2_dev_priv *dest_priv;
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	uint16_t dist_size, key_size;
+	struct dpaa2_key_extract *qos_key_extract;
+	struct dpaa2_key_extract *tc_key_extract;
 
 	ret = dpaa2_flow_verify_attr(priv, attr);
 	if (ret)
@@ -3413,7 +2688,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("ETH flow configuration failed!");
+				DPAA2_PMD_ERR("ETH flow config failed!");
 				return ret;
 			}
 			break;
@@ -3422,17 +2697,25 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("vLan flow configuration failed!");
+				DPAA2_PMD_ERR("VLAN flow config failed!");
 				return ret;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
+			ret = dpaa2_configure_flow_ipv4(flow,
+					dev, attr, &pattern[i], actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("IPV4 flow config failed!");
+				return ret;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
-			ret = dpaa2_configure_flow_generic_ip(flow,
+			ret = dpaa2_configure_flow_ipv6(flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("IP flow configuration failed!");
+				DPAA2_PMD_ERR("IPV6 flow config failed!");
 				return ret;
 			}
 			break;
@@ -3441,7 +2724,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("ICMP flow configuration failed!");
+				DPAA2_PMD_ERR("ICMP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3450,7 +2733,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("UDP flow configuration failed!");
+				DPAA2_PMD_ERR("UDP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3459,7 +2742,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("TCP flow configuration failed!");
+				DPAA2_PMD_ERR("TCP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3468,7 +2751,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("SCTP flow configuration failed!");
+				DPAA2_PMD_ERR("SCTP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3477,17 +2760,17 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("GRE flow configuration failed!");
+				DPAA2_PMD_ERR("GRE flow config failed!");
 				return ret;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow,
-						       dev, attr, &pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					dev, attr, &pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("RAW flow configuration failed!");
+				DPAA2_PMD_ERR("RAW flow config failed!");
 				return ret;
 			}
 			break;
@@ -3502,6 +2785,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 		i++;
 	}
 
+	qos_key_extract = &priv->extract.qos_key_extract;
+	key_size = qos_key_extract->key_profile.key_max_size;
+	flow->qos_rule.key_size = dpaa2_flow_entry_size(key_size);
+
+	tc_key_extract = &priv->extract.tc_key_extract[flow->tc_id];
+	key_size = tc_key_extract->key_profile.key_max_size;
+	flow->fs_rule.key_size = dpaa2_flow_entry_size(key_size);
+
 	/* Let's parse action on matching traffic */
 	end_of_list = 0;
 	while (!end_of_list) {
@@ -3509,150 +2800,33 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
 		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
 		case RTE_FLOW_ACTION_TYPE_PORT_ID:
-			memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
-			flow->action = actions[j].type;
-
-			if (actions[j].type == RTE_FLOW_ACTION_TYPE_QUEUE) {
-				dest_queue = (const struct rte_flow_action_queue *)
-								(actions[j].conf);
-				dest_q = priv->rx_vq[dest_queue->index];
-				action.flow_id = dest_q->flow_id;
-			} else {
-				dest_dev = dpaa2_flow_redirect_dev(priv,
-								   &actions[j]);
-				if (!dest_dev) {
-					DPAA2_PMD_ERR("Invalid destination device to redirect!");
-					return -1;
-				}
-
-				dest_priv = dest_dev->data->dev_private;
-				dest_q = dest_priv->tx_vq[0];
-				action.options =
-						DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
-				action.redirect_obj_token = dest_priv->token;
-				action.flow_id = dest_q->flow_id;
-			}
+			ret = dpaa2_configure_flow_fs_action(priv, flow,
+							     &actions[j]);
+			if (ret)
+				return ret;
 
 			/* Configure FS table first*/
-			if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
-				dpaa2_flow_fs_table_extracts_log(priv,
-							flow->tc_id, stdout);
-				if (dpkg_prepare_key_cfg(
-				&priv->extract.tc_key_extract[flow->tc_id].dpkg,
-				(uint8_t *)(size_t)priv->extract
-				.tc_extract_param[flow->tc_id]) < 0) {
-					DPAA2_PMD_ERR(
-					"Unable to prepare extract parameters");
-					return -1;
-				}
-
-				memset(&tc_cfg, 0,
-					sizeof(struct dpni_rx_dist_cfg));
-				tc_cfg.dist_size = priv->nb_rx_queues / priv->num_rx_tc;
-				tc_cfg.key_cfg_iova =
-					(uint64_t)priv->extract.tc_extract_param[flow->tc_id];
-				tc_cfg.tc = flow->tc_id;
-				tc_cfg.enable = false;
-				ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
-						priv->token, &tc_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"TC hash cannot be disabled.(%d)",
-						ret);
-					return -1;
-				}
-				tc_cfg.enable = true;
-				tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
-				ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
-							 priv->token, &tc_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"TC distribution cannot be configured.(%d)",
-						ret);
-					return -1;
-				}
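+			/* FS: each TC distributes over its share of the Rx queues. */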
+			dist_size = priv->nb_rx_queues / priv->num_rx_tc;
+			if (is_keycfg_configured & DPAA2_FLOW_FS_TYPE) {
+				ret = dpaa2_configure_fs_rss_table(priv,
+								   flow->tc_id,
+								   dist_size,
+								   false);
+				if (ret)
+					return ret;
 			}
 
 			/* Configure QoS table then.*/
-			if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
-				dpaa2_flow_qos_table_extracts_log(priv, stdout);
-				if (dpkg_prepare_key_cfg(
-					&priv->extract.qos_key_extract.dpkg,
-					(uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
-					DPAA2_PMD_ERR(
-						"Unable to prepare extract parameters");
-					return -1;
-				}
-
-				memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
-				qos_cfg.discard_on_miss = false;
-				qos_cfg.default_tc = 0;
-				qos_cfg.keep_entries = true;
-				qos_cfg.key_cfg_iova =
-					(size_t)priv->extract.qos_extract_param;
-				/* QoS table is effective for multiple TCs. */
-				if (priv->num_rx_tc > 1) {
-					ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
-						priv->token, &qos_cfg);
-					if (ret < 0) {
-						DPAA2_PMD_ERR(
-						"RSS QoS table can not be configured(%d)",
-							ret);
-						return -1;
-					}
-				}
-			}
-
-			flow->qos_real_key_size = priv->extract
-				.qos_key_extract.key_info.key_total_size;
-			if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) {
-				if (flow->ipaddr_rule.qos_ipdst_offset >=
-					flow->ipaddr_rule.qos_ipsrc_offset) {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipdst_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				} else {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipsrc_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				}
-			} else if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV6_ADDR) {
-				if (flow->ipaddr_rule.qos_ipdst_offset >=
-					flow->ipaddr_rule.qos_ipsrc_offset) {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipdst_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				} else {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipsrc_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				}
+			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
+				ret = dpaa2_configure_qos_table(priv, false);
+				if (ret)
+					return ret;
 			}
 
-			/* QoS entry added is only effective for multiple TCs.*/
 			if (priv->num_rx_tc > 1) {
-				qos_index = flow->tc_id * priv->fs_entries +
-					flow->tc_index;
-				if (qos_index >= priv->qos_entries) {
-					DPAA2_PMD_ERR("QoS table with %d entries full",
-						priv->qos_entries);
-					return -1;
-				}
-				flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
-
-				dpaa2_flow_qos_entry_log("Start add", flow,
-							qos_index, stdout);
-
-				ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
-						priv->token, &flow->qos_rule,
-						flow->tc_id, qos_index,
-						0, 0);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"Error in adding entry to QoS table(%d)", ret);
+				ret = dpaa2_flow_add_qos_rule(priv, flow);
+				if (ret)
 					return ret;
-				}
 			}
 
 			if (flow->tc_index >= priv->fs_entries) {
@@ -3661,140 +2835,47 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 				return -1;
 			}
 
-			flow->fs_real_key_size =
-				priv->extract.tc_key_extract[flow->tc_id]
-				.key_info.key_total_size;
-
-			if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV4_ADDR) {
-				if (flow->ipaddr_rule.fs_ipdst_offset >=
-					flow->ipaddr_rule.fs_ipsrc_offset) {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipdst_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				} else {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipsrc_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				}
-			} else if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV6_ADDR) {
-				if (flow->ipaddr_rule.fs_ipdst_offset >=
-					flow->ipaddr_rule.fs_ipsrc_offset) {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipdst_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				} else {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipsrc_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				}
-			}
-
-			flow->fs_rule.key_size = FIXED_ENTRY_SIZE;
-
-			dpaa2_flow_fs_entry_log("Start add", flow, stdout);
-
-			ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token,
-						flow->tc_id, flow->tc_index,
-						&flow->fs_rule, &action);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"Error in adding entry to FS table(%d)", ret);
+			ret = dpaa2_flow_add_fs_rule(priv, flow);
+			if (ret)
 				return ret;
-			}
-			memcpy(&flow->action_cfg, &action,
-				sizeof(struct dpni_fs_action_cfg));
+
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
-			rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf);
+			rss_conf = actions[j].conf;
+			flow->action_type = RTE_FLOW_ACTION_TYPE_RSS;
 
-			flow->action = RTE_FLOW_ACTION_TYPE_RSS;
 			ret = dpaa2_distset_to_dpkg_profile_cfg(rss_conf->types,
-					&priv->extract.tc_key_extract[flow->tc_id].dpkg);
+					&tc_key_extract->dpkg);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"unable to set flow distribution.please check queue config");
+				DPAA2_PMD_ERR("TC[%d] distset RSS failed",
+					      flow->tc_id);
 				return ret;
 			}
 
-			/* Allocate DMA'ble memory to write the rules */
-			param = (size_t)rte_malloc(NULL, 256, 64);
-			if (!param) {
-				DPAA2_PMD_ERR("Memory allocation failure");
-				return -1;
-			}
-
-			if (dpkg_prepare_key_cfg(
-				&priv->extract.tc_key_extract[flow->tc_id].dpkg,
-				(uint8_t *)param) < 0) {
-				DPAA2_PMD_ERR(
-				"Unable to prepare extract parameters");
-				rte_free((void *)param);
-				return -1;
-			}
-
-			memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
-			tc_cfg.dist_size = rss_conf->queue_num;
-			tc_cfg.key_cfg_iova = (size_t)param;
-			tc_cfg.enable = true;
-			tc_cfg.tc = flow->tc_id;
-			ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
-						 priv->token, &tc_cfg);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"RSS TC table cannot be configured: %d",
-					ret);
-				rte_free((void *)param);
-				return -1;
+			dist_size = rss_conf->queue_num;
+			if (is_keycfg_configured & DPAA2_FLOW_FS_TYPE) {
+				ret = dpaa2_configure_fs_rss_table(priv,
+								   flow->tc_id,
+								   dist_size,
+								   true);
+				if (ret)
+					return ret;
 			}
 
-			rte_free((void *)param);
-			if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
-				if (dpkg_prepare_key_cfg(
-					&priv->extract.qos_key_extract.dpkg,
-					(uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
-					DPAA2_PMD_ERR(
-					"Unable to prepare extract parameters");
-					return -1;
-				}
-				memset(&qos_cfg, 0,
-					sizeof(struct dpni_qos_tbl_cfg));
-				qos_cfg.discard_on_miss = true;
-				qos_cfg.keep_entries = true;
-				qos_cfg.key_cfg_iova =
-					(size_t)priv->extract.qos_extract_param;
-				ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
-							 priv->token, &qos_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-					"RSS QoS dist can't be configured-%d",
-					ret);
-					return -1;
-				}
+			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
+				ret = dpaa2_configure_qos_table(priv, true);
+				if (ret)
+					return ret;
 			}
 
-			/* Add Rule into QoS table */
-			qos_index = flow->tc_id * priv->fs_entries +
-				flow->tc_index;
-			if (qos_index >= priv->qos_entries) {
-				DPAA2_PMD_ERR("QoS table with %d entries full",
-					priv->qos_entries);
-				return -1;
-			}
+			ret = dpaa2_flow_add_qos_rule(priv, flow);
+			if (ret)
+				return ret;
 
-			flow->qos_real_key_size =
-			  priv->extract.qos_key_extract.key_info.key_total_size;
-			flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
-			ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-						&flow->qos_rule, flow->tc_id,
-						qos_index, 0, 0);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"Error in entry addition in QoS table(%d)",
-				ret);
+			ret = dpaa2_flow_add_fs_rule(priv, flow);
+			if (ret)
 				return ret;
-			}
+
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
@@ -3808,16 +2889,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 	}
 
 	if (!ret) {
-		if (is_keycfg_configured &
-			(DPAA2_QOS_TABLE_RECONFIGURE |
-			DPAA2_FS_TABLE_RECONFIGURE)) {
-			ret = dpaa2_flow_entry_update(priv, flow->tc_id);
-			if (ret) {
-				DPAA2_PMD_ERR("Flow entry update failed.");
-
-				return -1;
-			}
-		}
 		/* New rules are inserted. */
 		if (!curr) {
 			LIST_INSERT_HEAD(&priv->flows, flow, next);
@@ -3832,7 +2903,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 
 static inline int
 dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
-		      const struct rte_flow_attr *attr)
+	const struct rte_flow_attr *attr)
 {
 	int ret = 0;
 
@@ -3906,18 +2977,18 @@ dpaa2_dev_verify_actions(const struct rte_flow_action actions[])
 	}
 	for (j = 0; actions[j].type != RTE_FLOW_ACTION_TYPE_END; j++) {
 		if (actions[j].type != RTE_FLOW_ACTION_TYPE_DROP &&
-				!actions[j].conf)
+		    !actions[j].conf)
 			ret = -EINVAL;
 	}
 	return ret;
 }
 
-static
-int dpaa2_flow_validate(struct rte_eth_dev *dev,
-			const struct rte_flow_attr *flow_attr,
-			const struct rte_flow_item pattern[],
-			const struct rte_flow_action actions[],
-			struct rte_flow_error *error)
+static int
+dpaa2_flow_validate(struct rte_eth_dev *dev,
+	const struct rte_flow_attr *flow_attr,
+	const struct rte_flow_item pattern[],
+	const struct rte_flow_action actions[],
+	struct rte_flow_error *error)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpni_attr dpni_attr;
@@ -3971,127 +3042,128 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static
-struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
-				   const struct rte_flow_attr *attr,
-				   const struct rte_flow_item pattern[],
-				   const struct rte_flow_action actions[],
-				   struct rte_flow_error *error)
+static struct rte_flow *
+dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+		  const struct rte_flow_item pattern[],
+		  const struct rte_flow_action actions[],
+		  struct rte_flow_error *error)
 {
-	struct rte_flow *flow = NULL;
-	size_t key_iova = 0, mask_iova = 0;
+	struct dpaa2_dev_flow *flow = NULL;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int ret;
 
 	dpaa2_flow_control_log =
 		getenv("DPAA2_FLOW_CONTROL_LOG");
 
 	if (getenv("DPAA2_FLOW_CONTROL_MISS_FLOW")) {
-		struct dpaa2_dev_priv *priv = dev->data->dev_private;
-
 		dpaa2_flow_miss_flow_id =
 			(uint16_t)atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
 		if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
-			DPAA2_PMD_ERR(
-				"The missed flow ID %d exceeds the max flow ID %d",
-				dpaa2_flow_miss_flow_id,
-				priv->dist_queues - 1);
+			DPAA2_PMD_ERR("Missed flow ID %d >= dist size(%d)",
+				      dpaa2_flow_miss_flow_id,
+				      priv->dist_queues);
 			return NULL;
 		}
 	}
 
-	flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
+	flow = rte_zmalloc(NULL, sizeof(struct dpaa2_dev_flow),
+			   RTE_CACHE_LINE_SIZE);
 	if (!flow) {
 		DPAA2_PMD_ERR("Failure to allocate memory for flow");
 		goto mem_failure;
 	}
-	/* Allocate DMA'ble memory to write the rules */
-	key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!key_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+
+	/* Allocate DMA'ble memory to write the qos rules */
+	flow->qos_key_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->qos_key_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!mask_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+	flow->qos_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->qos_key_addr);
+
+	flow->qos_mask_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->qos_mask_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
+	flow->qos_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->qos_mask_addr);
 
-	flow->qos_rule.key_iova = key_iova;
-	flow->qos_rule.mask_iova = mask_iova;
-
-	/* Allocate DMA'ble memory to write the rules */
-	key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!key_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+	/* Allocate DMA'ble memory to write the FS rules */
+	flow->fs_key_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->fs_key_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!mask_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+	flow->fs_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->fs_key_addr);
+
+	flow->fs_mask_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->fs_mask_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
+	flow->fs_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->fs_mask_addr);
 
-	flow->fs_rule.key_iova = key_iova;
-	flow->fs_rule.mask_iova = mask_iova;
-
-	flow->ipaddr_rule.ipaddr_type = FLOW_NONE_IPADDR;
-	flow->ipaddr_rule.qos_ipsrc_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.qos_ipdst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.fs_ipsrc_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.fs_ipdst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
+	priv->curr = flow;
 
-	ret = dpaa2_generic_flow_set(flow, dev, attr, pattern,
-			actions, error);
+	ret = dpaa2_generic_flow_set(flow, dev, attr, pattern, actions, error);
 	if (ret < 0) {
 		if (error && error->type > RTE_FLOW_ERROR_TYPE_ACTION)
 			rte_flow_error_set(error, EPERM,
-					RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					attr, "unknown");
-		DPAA2_PMD_ERR("Failure to create flow, return code (%d)", ret);
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   attr, "unknown");
+		DPAA2_PMD_ERR("Create flow failed (%d)", ret);
 		goto creation_error;
 	}
 
-	return flow;
+	priv->curr = NULL;
+	return (struct rte_flow *)flow;
+
 mem_failure:
-	rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-			   NULL, "memory alloc");
+	rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "memory alloc");
+
 creation_error:
-	rte_free((void *)flow);
-	rte_free((void *)key_iova);
-	rte_free((void *)mask_iova);
+	if (flow) {
+		if (flow->qos_key_addr)
+			rte_free(flow->qos_key_addr);
+		if (flow->qos_mask_addr)
+			rte_free(flow->qos_mask_addr);
+		if (flow->fs_key_addr)
+			rte_free(flow->fs_key_addr);
+		if (flow->fs_mask_addr)
+			rte_free(flow->fs_mask_addr);
+		rte_free(flow);
+	}
+	priv->curr = NULL;
 
 	return NULL;
 }
 
-static
-int dpaa2_flow_destroy(struct rte_eth_dev *dev,
-		       struct rte_flow *flow,
-		       struct rte_flow_error *error)
+static int
+dpaa2_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *_flow,
+		   struct rte_flow_error *error)
 {
 	int ret = 0;
+	struct dpaa2_dev_flow *flow;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
+	struct fsl_mc_io *dpni = priv->hw;
 
-	switch (flow->action) {
+	flow = (struct dpaa2_dev_flow *)_flow;
+
+	switch (flow->action_type) {
 	case RTE_FLOW_ACTION_TYPE_QUEUE:
 	case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
 	case RTE_FLOW_ACTION_TYPE_PORT_ID:
 		if (priv->num_rx_tc > 1) {
 			/* Remove entry from QoS table first */
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-					&flow->qos_rule);
+			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+						    priv->token,
+						    &flow->qos_rule);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"Error in removing entry from QoS table(%d)", ret);
+				DPAA2_PMD_ERR("Remove FS QoS entry failed");
+				dpaa2_flow_qos_entry_log("Delete failed", flow,
+							 -1);
 				goto error;
 			}
 		}
@@ -4100,34 +3172,37 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
 		ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW, priv->token,
 					   flow->tc_id, &flow->fs_rule);
 		if (ret < 0) {
-			DPAA2_PMD_ERR(
-				"Error in removing entry from FS table(%d)", ret);
+			DPAA2_PMD_ERR("Remove entry from FS[%d] failed",
+				      flow->tc_id);
 			goto error;
 		}
 		break;
 	case RTE_FLOW_ACTION_TYPE_RSS:
 		if (priv->num_rx_tc > 1) {
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-					&flow->qos_rule);
+			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+						    priv->token,
+						    &flow->qos_rule);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"Error in entry addition in QoS table(%d)", ret);
+				DPAA2_PMD_ERR("Remove RSS QoS entry failed");
 				goto error;
 			}
 		}
 		break;
 	default:
-		DPAA2_PMD_ERR(
-		"Action type (%d) is not supported", flow->action);
+		DPAA2_PMD_ERR("Action(%d) not supported", flow->action_type);
 		ret = -ENOTSUP;
 		break;
 	}
 
 	LIST_REMOVE(flow, next);
-	rte_free((void *)(size_t)flow->qos_rule.key_iova);
-	rte_free((void *)(size_t)flow->qos_rule.mask_iova);
-	rte_free((void *)(size_t)flow->fs_rule.key_iova);
-	rte_free((void *)(size_t)flow->fs_rule.mask_iova);
+	if (flow->qos_key_addr)
+		rte_free(flow->qos_key_addr);
+	if (flow->qos_mask_addr)
+		rte_free(flow->qos_mask_addr);
+	if (flow->fs_key_addr)
+		rte_free(flow->fs_key_addr);
+	if (flow->fs_mask_addr)
+		rte_free(flow->fs_mask_addr);
 	/* Now free the flow */
 	rte_free(flow);
 
@@ -4152,12 +3227,12 @@ dpaa2_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct rte_flow *flow = LIST_FIRST(&priv->flows);
+	struct dpaa2_dev_flow *flow = LIST_FIRST(&priv->flows);
 
 	while (flow) {
-		struct rte_flow *next = LIST_NEXT(flow, next);
+		struct dpaa2_dev_flow *next = LIST_NEXT(flow, next);
 
-		dpaa2_flow_destroy(dev, flow, error);
+		dpaa2_flow_destroy(dev, (struct rte_flow *)flow, error);
 		flow = next;
 	}
 	return 0;
@@ -4165,10 +3240,10 @@ dpaa2_flow_flush(struct rte_eth_dev *dev,
 
 static int
 dpaa2_flow_query(struct rte_eth_dev *dev __rte_unused,
-		struct rte_flow *flow __rte_unused,
-		const struct rte_flow_action *actions __rte_unused,
-		void *data __rte_unused,
-		struct rte_flow_error *error __rte_unused)
+	struct rte_flow *_flow __rte_unused,
+	const struct rte_flow_action *actions __rte_unused,
+	void *data __rte_unused,
+	struct rte_flow_error *error __rte_unused)
 {
 	return 0;
 }
@@ -4185,11 +3260,11 @@ dpaa2_flow_query(struct rte_eth_dev *dev __rte_unused,
 void
 dpaa2_flow_clean(struct rte_eth_dev *dev)
 {
-	struct rte_flow *flow;
+	struct dpaa2_dev_flow *flow;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	while ((flow = LIST_FIRST(&priv->flows)))
-		dpaa2_flow_destroy(dev, flow, NULL);
+		dpaa2_flow_destroy(dev, (struct rte_flow *)flow, NULL);
 }
 
 const struct rte_flow_ops dpaa2_flow_ops = {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 24/42] net/dpaa2: dump Rx parser result
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (22 preceding siblings ...)
  2024-10-22 19:12         ` [v4 23/42] net/dpaa2: flow API refactor vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 25/42] net/dpaa2: enhancement of raw flow extract vanshika.shukla
                           ` (18 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Setting export DPAA2_PRINT_RX_PARSER_RESULT=1 dumps the Rx parser
result and the frame attribute flags generated by the hardware parser
and the soft parser. The parser result is converted to the big-endian
layout described in the RM. The areas set by the soft parser are
dumped as well.
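
For reference, a minimal illustrative sketch (not part of the patch)
of how one flag bit maps into the dumped byte array, mirroring the
dpaa2_print_faf() helper added below:

  /* Test one frame attribute flag in the big-endian parse result;
   * 'pr' is the byte view of the parse result area.
   */
  static inline int faf_bit_is_set(const uint8_t *pr, int bit)
  {
          int byte_pos = bit / 8 + DPAA2_FAFE_PSR_OFFSET;
          int bit_pos = bit % 8;

          return !!(pr[byte_pos] & (1 << (7 - bit_pos)));
  }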

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c     |   5 +
 drivers/net/dpaa2/dpaa2_ethdev.h     |  90 ++++++++++
 drivers/net/dpaa2/dpaa2_parse_dump.h | 248 +++++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_rxtx.c       |   7 +
 4 files changed, 350 insertions(+)
 create mode 100644 drivers/net/dpaa2/dpaa2_parse_dump.h

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index e55de5b614..187b648799 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -75,6 +75,8 @@ int dpaa2_timestamp_dynfield_offset = -1;
 /* Enable error queue */
 bool dpaa2_enable_err_queue;
 
+bool dpaa2_print_parser_result;
+
 #define MAX_NB_RX_DESC		11264
 int total_nb_rx_desc;
 
@@ -2730,6 +2732,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		DPAA2_PMD_INFO("Enable error queue");
 	}
 
+	if (getenv("DPAA2_PRINT_RX_PARSER_RESULT"))
+		dpaa2_print_parser_result = 1;
+
 	/* Allocate memory for hardware structure for queues */
 	ret = dpaa2_alloc_rx_tx_queues(eth_dev);
 	if (ret) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index ea1c1b5117..c864859b3f 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -19,6 +19,8 @@
 #include <mc/fsl_dpni.h>
 #include <mc/fsl_mc_sys.h>
 
+#include "base/dpaa2_hw_dpni_annot.h"
+
 #define DPAA2_MIN_RX_BUF_SIZE 512
 #define DPAA2_MAX_RX_PKT_LEN  10240 /*WRIOP support*/
 #define NET_DPAA2_PMD_DRIVER_NAME net_dpaa2
@@ -152,6 +154,88 @@ extern const struct rte_tm_ops dpaa2_tm_ops;
 
 extern bool dpaa2_enable_err_queue;
 
+extern bool dpaa2_print_parser_result;
+
+#define DPAA2_FAPR_SIZE \
+	(sizeof(struct dpaa2_annot_hdr) - \
+	offsetof(struct dpaa2_annot_hdr, word3))
+
+#define DPAA2_PR_NXTHDR_OFFSET 0
+
+#define DPAA2_FAFE_PSR_OFFSET 2
+#define DPAA2_FAFE_PSR_SIZE 2
+
+#define DPAA2_FAF_PSR_OFFSET 4
+#define DPAA2_FAF_PSR_SIZE 12
+
+#define DPAA2_FAF_TOTAL_SIZE \
+	(DPAA2_FAFE_PSR_SIZE + DPAA2_FAF_PSR_SIZE)
+
+/* Just most popular Frame attribute flags (FAF) here.*/
+enum dpaa2_rx_faf_offset {
+	/* Set by SP start*/
+	FAFE_VXLAN_IN_VLAN_FRAM = 0,
+	FAFE_VXLAN_IN_IPV4_FRAM = 1,
+	FAFE_VXLAN_IN_IPV6_FRAM = 2,
+	FAFE_VXLAN_IN_UDP_FRAM = 3,
+	FAFE_VXLAN_IN_TCP_FRAM = 4,
+	/* Set by SP end*/
+
+	FAF_GTP_PRIMED_FRAM = 1 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_PTP_FRAM = 3 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_VXLAN_FRAM = 4 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ETH_FRAM = 10 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_LLC_SNAP_FRAM = 18 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_VLAN_FRAM = 21 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_PPPOE_PPP_FRAM = 25 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_MPLS_FRAM = 27 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ARP_FRAM = 30 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPV4_FRAM = 34 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPV6_FRAM = 42 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IP_FRAM = 48 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ICMP_FRAM = 57 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IGMP_FRAM = 58 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_GRE_FRAM = 65 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_UDP_FRAM = 70 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_TCP_FRAM = 72 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_FRAM = 77 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_ESP_FRAM = 78 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_AH_FRAM = 79 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_SCTP_FRAM = 81 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_DCCP_FRAM = 83 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_GTP_FRAM = 87 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ESP_FRAM = 89 + DPAA2_FAFE_PSR_SIZE * 8,
+};
+
+#define DPAA2_PR_ETH_OFF_OFFSET 19
+#define DPAA2_PR_TCI_OFF_OFFSET 21
+#define DPAA2_PR_LAST_ETYPE_OFFSET 23
+#define DPAA2_PR_L3_OFF_OFFSET 27
+#define DPAA2_PR_L4_OFF_OFFSET 30
+#define DPAA2_PR_L5_OFF_OFFSET 31
+#define DPAA2_PR_NXTHDR_OFF_OFFSET 34
+
+/* Set by SP for vxlan distribution start*/
+#define DPAA2_VXLAN_IN_TCI_OFFSET 16
+
+#define DPAA2_VXLAN_IN_DADDR0_OFFSET 20
+#define DPAA2_VXLAN_IN_DADDR1_OFFSET 22
+#define DPAA2_VXLAN_IN_DADDR2_OFFSET 24
+#define DPAA2_VXLAN_IN_DADDR3_OFFSET 25
+#define DPAA2_VXLAN_IN_DADDR4_OFFSET 26
+#define DPAA2_VXLAN_IN_DADDR5_OFFSET 28
+
+#define DPAA2_VXLAN_IN_SADDR0_OFFSET 29
+#define DPAA2_VXLAN_IN_SADDR1_OFFSET 32
+#define DPAA2_VXLAN_IN_SADDR2_OFFSET 33
+#define DPAA2_VXLAN_IN_SADDR3_OFFSET 35
+#define DPAA2_VXLAN_IN_SADDR4_OFFSET 41
+#define DPAA2_VXLAN_IN_SADDR5_OFFSET 42
+
+#define DPAA2_VXLAN_VNI_OFFSET 43
+#define DPAA2_VXLAN_IN_TYPE_OFFSET 46
+/* Set by SP for vxlan distribution end*/
+
 struct ipv4_sd_addr_extract_rule {
 	uint32_t ipv4_src;
 	uint32_t ipv4_dst;
@@ -197,7 +281,13 @@ enum ip_addr_extract_type {
 	IP_DST_SRC_EXTRACT
 };
 
+enum key_prot_type {
+	DPAA2_NET_PROT_KEY,
+	DPAA2_FAF_KEY
+};
+
 struct key_prot_field {
+	enum key_prot_type type;
 	enum net_prot prot;
 	uint32_t key_field;
 };
diff --git a/drivers/net/dpaa2/dpaa2_parse_dump.h b/drivers/net/dpaa2/dpaa2_parse_dump.h
new file mode 100644
index 0000000000..f1cdc003de
--- /dev/null
+++ b/drivers/net/dpaa2/dpaa2_parse_dump.h
@@ -0,0 +1,248 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ *   Copyright 2022 NXP
+ *
+ */
+
+#ifndef _DPAA2_PARSE_DUMP_H
+#define _DPAA2_PARSE_DUMP_H
+
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_pmd_dpaa2.h>
+
+#include <dpaa2_hw_pvt.h>
+#include "dpaa2_tm.h"
+
+#include <mc/fsl_dpni.h>
+#include <mc/fsl_mc_sys.h>
+
+#include "base/dpaa2_hw_dpni_annot.h"
+
+#define DPAA2_PR_PRINT printf
+
+struct dpaa2_faf_bit_info {
+	const char *name;
+	int position;
+};
+
+struct dpaa2_fapr_field_info {
+	const char *name;
+	uint16_t value;
+};
+
+struct dpaa2_fapr_array {
+	union {
+		uint64_t pr_64[DPAA2_FAPR_SIZE / 8];
+		uint8_t pr[DPAA2_FAPR_SIZE];
+	};
+};
+
+#define NEXT_HEADER_NAME "Next Header"
+#define ETH_OFF_NAME "ETH OFFSET"
+#define VLAN_TCI_OFF_NAME "VLAN TCI OFFSET"
+#define LAST_ENTRY_OFF_NAME "LAST ETYPE Offset"
+#define L3_OFF_NAME "L3 Offset"
+#define L4_OFF_NAME "L4 Offset"
+#define L5_OFF_NAME "L5 Offset"
+#define NEXT_HEADER_OFF_NAME "Next Header Offset"
+
+static const
+struct dpaa2_fapr_field_info support_dump_fields[] = {
+	{
+		.name = NEXT_HEADER_NAME,
+	},
+	{
+		.name = ETH_OFF_NAME,
+	},
+	{
+		.name = VLAN_TCI_OFF_NAME,
+	},
+	{
+		.name = LAST_ENTRY_OFF_NAME,
+	},
+	{
+		.name = L3_OFF_NAME,
+	},
+	{
+		.name = L4_OFF_NAME,
+	},
+	{
+		.name = L5_OFF_NAME,
+	},
+	{
+		.name = NEXT_HEADER_OFF_NAME,
+	}
+};
+
+static inline void
+dpaa2_print_faf(struct dpaa2_fapr_array *fapr)
+{
+	const int faf_bit_len = DPAA2_FAF_TOTAL_SIZE * 8;
+	struct dpaa2_faf_bit_info faf_bits[faf_bit_len];
+	int i, byte_pos, bit_pos, vxlan = 0, vxlan_vlan = 0;
+	struct rte_ether_hdr vxlan_in_eth;
+	uint16_t vxlan_vlan_tci;
+
+	for (i = 0; i < faf_bit_len; i++) {
+		faf_bits[i].position = i;
+		if (i == FAFE_VXLAN_IN_VLAN_FRAM)
+			faf_bits[i].name = "VXLAN VLAN Present";
+		else if (i == FAFE_VXLAN_IN_IPV4_FRAM)
+			faf_bits[i].name = "VXLAN IPV4 Present";
+		else if (i == FAFE_VXLAN_IN_IPV6_FRAM)
+			faf_bits[i].name = "VXLAN IPV6 Present";
+		else if (i == FAFE_VXLAN_IN_UDP_FRAM)
+			faf_bits[i].name = "VXLAN UDP Present";
+		else if (i == FAFE_VXLAN_IN_TCP_FRAM)
+			faf_bits[i].name = "VXLAN TCP Present";
+		else if (i == FAF_VXLAN_FRAM)
+			faf_bits[i].name = "VXLAN Present";
+		else if (i == FAF_ETH_FRAM)
+			faf_bits[i].name = "Ethernet MAC Present";
+		else if (i == FAF_VLAN_FRAM)
+			faf_bits[i].name = "VLAN 1 Present";
+		else if (i == FAF_IPV4_FRAM)
+			faf_bits[i].name = "IPv4 1 Present";
+		else if (i == FAF_IPV6_FRAM)
+			faf_bits[i].name = "IPv6 1 Present";
+		else if (i == FAF_UDP_FRAM)
+			faf_bits[i].name = "UDP Present";
+		else if (i == FAF_TCP_FRAM)
+			faf_bits[i].name = "TCP Present";
+		else
+			faf_bits[i].name = "Check RM for this unusual frame";
+	}
+
+	DPAA2_PR_PRINT("Frame Annotation Flags:\r\n");
+	for (i = 0; i < faf_bit_len; i++) {
+		byte_pos = i / 8 + DPAA2_FAFE_PSR_OFFSET;
+		bit_pos = i % 8;
+		if (fapr->pr[byte_pos] & (1 << (7 - bit_pos))) {
+			DPAA2_PR_PRINT("FAF bit %d : %s\r\n",
+				faf_bits[i].position, faf_bits[i].name);
+			if (i == FAF_VXLAN_FRAM)
+				vxlan = 1;
+		}
+	}
+
+	if (vxlan) {
+		vxlan_in_eth.dst_addr.addr_bytes[0] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR0_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[1] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR1_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[2] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR2_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[3] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR3_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[4] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR4_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[5] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR5_OFFSET];
+
+		vxlan_in_eth.src_addr.addr_bytes[0] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR0_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[1] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR1_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[2] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR2_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[3] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR3_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[4] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR4_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[5] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR5_OFFSET];
+
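+		/* Assemble the inner EtherType from the two big-endian
+		 * bytes of the parse result.
+		 */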
+		vxlan_in_eth.ether_type =
+			fapr->pr[DPAA2_VXLAN_IN_TYPE_OFFSET];
+		vxlan_in_eth.ether_type =
+			vxlan_in_eth.ether_type << 8;
+		vxlan_in_eth.ether_type |=
+			fapr->pr[DPAA2_VXLAN_IN_TYPE_OFFSET + 1];
+
+		if (vxlan_in_eth.ether_type == RTE_ETHER_TYPE_VLAN)
+			vxlan_vlan = 1;
+		DPAA2_PR_PRINT("VXLAN inner eth:\r\n");
+		DPAA2_PR_PRINT("dst addr: ");
+		for (i = 0; i < RTE_ETHER_ADDR_LEN; i++) {
+			if (i != 0)
+				DPAA2_PR_PRINT(":");
+			DPAA2_PR_PRINT("%02x",
+				vxlan_in_eth.dst_addr.addr_bytes[i]);
+		}
+		DPAA2_PR_PRINT("\r\n");
+		DPAA2_PR_PRINT("src addr: ");
+		for (i = 0; i < RTE_ETHER_ADDR_LEN; i++) {
+			if (i != 0)
+				DPAA2_PR_PRINT(":");
+			DPAA2_PR_PRINT("%02x",
+				vxlan_in_eth.src_addr.addr_bytes[i]);
+		}
+		DPAA2_PR_PRINT("\r\n");
+		DPAA2_PR_PRINT("type: 0x%04x\r\n",
+			vxlan_in_eth.ether_type);
+		if (vxlan_vlan) {
+			vxlan_vlan_tci = fapr->pr[DPAA2_VXLAN_IN_TCI_OFFSET];
+			vxlan_vlan_tci = vxlan_vlan_tci << 8;
+			vxlan_vlan_tci |=
+				fapr->pr[DPAA2_VXLAN_IN_TCI_OFFSET + 1];
+
+			DPAA2_PR_PRINT("vlan tci: 0x%04x\r\n",
+				vxlan_vlan_tci);
+		}
+	}
+}
+
+static inline void
+dpaa2_print_parse_result(struct dpaa2_annot_hdr *annotation)
+{
+	struct dpaa2_fapr_array fapr;
+	struct dpaa2_fapr_field_info
+		fapr_fields[sizeof(support_dump_fields) /
+		sizeof(struct dpaa2_fapr_field_info)];
+	uint64_t len, i;
+
+	memcpy(&fapr, &annotation->word3, DPAA2_FAPR_SIZE);
+	for (i = 0; i < (DPAA2_FAPR_SIZE / 8); i++)
+		fapr.pr_64[i] = rte_cpu_to_be_64(fapr.pr_64[i]);
+
+	memcpy(fapr_fields, support_dump_fields,
+		sizeof(support_dump_fields));
+
+	for (i = 0;
+		i < sizeof(fapr_fields) /
+		sizeof(struct dpaa2_fapr_field_info);
+		i++) {
+		if (!strcmp(fapr_fields[i].name, NEXT_HEADER_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_NXTHDR_OFFSET];
+			fapr_fields[i].value = fapr_fields[i].value << 8;
+			fapr_fields[i].value |=
+				fapr.pr[DPAA2_PR_NXTHDR_OFFSET + 1];
+		} else if (!strcmp(fapr_fields[i].name, ETH_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_ETH_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, VLAN_TCI_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_TCI_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, LAST_ENTRY_OFF_NAME)) {
+			fapr_fields[i].value =
+				fapr.pr[DPAA2_PR_LAST_ETYPE_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L3_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L3_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L4_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L4_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L5_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L5_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, NEXT_HEADER_OFF_NAME)) {
+			fapr_fields[i].value =
+				fapr.pr[DPAA2_PR_NXTHDR_OFF_OFFSET];
+		}
+	}
+
+	len = sizeof(fapr_fields) / sizeof(struct dpaa2_fapr_field_info);
+	DPAA2_PR_PRINT("Parse Result:\r\n");
+	for (i = 0; i < len; i++) {
+		DPAA2_PR_PRINT("%21s : 0x%02x\r\n",
+			fapr_fields[i].name, fapr_fields[i].value);
+	}
+	dpaa2_print_faf(&fapr);
+}
+
+#endif
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 92e9dd40dc..71b2b4a427 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -25,6 +25,7 @@
 #include "dpaa2_pmd_logs.h"
 #include "dpaa2_ethdev.h"
 #include "base/dpaa2_hw_dpni_annot.h"
+#include "dpaa2_parse_dump.h"
 
 static inline uint32_t __rte_hot
 dpaa2_dev_rx_parse_slow(struct rte_mbuf *mbuf,
@@ -57,6 +58,9 @@ dpaa2_dev_rx_parse_new(struct rte_mbuf *m, const struct qbman_fd *fd,
 	struct dpaa2_annot_hdr *annotation =
 			(struct dpaa2_annot_hdr *)hw_annot_addr;
 
+	if (unlikely(dpaa2_print_parser_result))
+		dpaa2_print_parse_result(annotation);
+
 	m->packet_type = RTE_PTYPE_UNKNOWN;
 	switch (frc) {
 	case DPAA2_PKT_TYPE_ETHER:
@@ -252,6 +256,9 @@ dpaa2_dev_rx_parse(struct rte_mbuf *mbuf, void *hw_annot_addr)
 	else
 		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
+	if (unlikely(dpaa2_print_parser_result))
+		dpaa2_print_parse_result(annotation);
+
 	if (dpaa2_enable_ts[mbuf->port]) {
 		*dpaa2_timestamp_dynfield(mbuf) = annotation->word2;
 		mbuf->ol_flags |= dpaa2_timestamp_rx_dynflag;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 25/42] net/dpaa2: enhancement of raw flow extract
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (23 preceding siblings ...)
  2024-10-22 19:12         ` [v4 24/42] net/dpaa2: dump Rx parser result vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 26/42] net/dpaa2: frame attribute flags parser vanshika.shukla
                           ` (17 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support combining a RAW extract with header extracts. A RAW extract
can start from any absolute offset; an illustrative item follows
below.

TBD: relative offset support.
To support an offset relative to a previous L3 protocol item, the
extracts would need to be expanded to identify whether the frame is
VLAN or non-VLAN.

To support an offset relative to a previous L4 protocol item, the
extracts would need to be expanded to identify whether the frame is
VLAN/IPv4, VLAN/IPv6, non-VLAN/IPv4 or non-VLAN/IPv6.
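
For illustration, a non-relative RAW item could be filled like this
(hypothetical values; only the rte_flow_item_raw fields this driver
consumes are shown):

  /* Match two bytes at absolute frame offset 14 (the EtherType of an
   * untagged frame); spec and mask lengths must be equal.
   */
  static const uint8_t raw_pattern[] = { 0x08, 0x00 };
  static const uint8_t raw_mask[] = { 0xff, 0xff };

  struct rte_flow_item_raw raw_spec = {
          .relative = 0,
          .search = 0,
          .offset = 14,
          .length = sizeof(raw_pattern),
          .pattern = raw_pattern,
  };
  struct rte_flow_item_raw raw_msk = {
          .length = sizeof(raw_mask),
          .pattern = raw_mask,
  };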

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h |  10 +
 drivers/net/dpaa2/dpaa2_flow.c   | 385 ++++++++++++++++++++++++++-----
 2 files changed, 340 insertions(+), 55 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index c864859b3f..8f548467a4 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -292,6 +292,11 @@ struct key_prot_field {
 	uint32_t key_field;
 };
 
+struct dpaa2_raw_region {
+	uint8_t raw_start;
+	uint8_t raw_size;
+};
+
 struct dpaa2_key_profile {
 	uint8_t num;
 	uint8_t key_offset[DPKG_MAX_NUM_OF_EXTRACTS];
@@ -301,6 +306,10 @@ struct dpaa2_key_profile {
 	uint8_t ip_addr_extract_pos;
 	uint8_t ip_addr_extract_off;
 
+	uint8_t raw_extract_pos;
+	uint8_t raw_extract_off;
+	uint8_t raw_extract_num;
+
 	uint8_t l4_src_port_present;
 	uint8_t l4_src_port_pos;
 	uint8_t l4_src_port_offset;
@@ -309,6 +318,7 @@ struct dpaa2_key_profile {
 	uint8_t l4_dst_port_offset;
 	struct key_prot_field prot_field[DPKG_MAX_NUM_OF_EXTRACTS];
 	uint16_t key_max_size;
+	struct dpaa2_raw_region raw_region;
 };
 
 struct dpaa2_key_extract {
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 9e03ad5401..a66edf78bc 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -768,42 +768,272 @@ dpaa2_flow_extract_add_hdr(enum net_prot prot,
 }
 
 static int
-dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
-	int size)
+dpaa2_flow_extract_new_raw(struct dpaa2_dev_priv *priv,
+	int offset, int size,
+	enum dpaa2_flow_dist_type dist_type, int tc_id)
 {
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_profile *key_info = &key_extract->key_profile;
-	int last_extract_size, index;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpaa2_key_profile *key_profile;
+	int last_extract_size, index, pos, item_size;
+	uint8_t num_extracts;
+	uint32_t field;
 
-	if (dpkg->num_extracts != 0 && dpkg->extracts[0].type !=
-	    DPKG_EXTRACT_FROM_DATA) {
-		DPAA2_PMD_WARN("RAW extract cannot be combined with others");
-		return -1;
-	}
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	key_profile = &key_extract->key_profile;
+
+	key_profile->raw_region.raw_start = 0;
+	key_profile->raw_region.raw_size = 0;
 
 	last_extract_size = (size % DPAA2_FLOW_MAX_KEY_SIZE);
-	dpkg->num_extracts = (size / DPAA2_FLOW_MAX_KEY_SIZE);
+	num_extracts = (size / DPAA2_FLOW_MAX_KEY_SIZE);
 	if (last_extract_size)
-		dpkg->num_extracts++;
+		num_extracts++;
 	else
 		last_extract_size = DPAA2_FLOW_MAX_KEY_SIZE;
 
-	for (index = 0; index < dpkg->num_extracts; index++) {
-		dpkg->extracts[index].type = DPKG_EXTRACT_FROM_DATA;
-		if (index == dpkg->num_extracts - 1)
-			dpkg->extracts[index].extract.from_data.size =
-				last_extract_size;
+	for (index = 0; index < num_extracts; index++) {
+		if (index == num_extracts - 1)
+			item_size = last_extract_size;
 		else
-			dpkg->extracts[index].extract.from_data.size =
-				DPAA2_FLOW_MAX_KEY_SIZE;
-		dpkg->extracts[index].extract.from_data.offset =
-			DPAA2_FLOW_MAX_KEY_SIZE * index;
+			item_size = DPAA2_FLOW_MAX_KEY_SIZE;
+		field = offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+		field |= item_size;
+
+		pos = dpaa2_flow_key_profile_advance(NET_PROT_PAYLOAD,
+				field, item_size, priv, dist_type,
+				tc_id, NULL);
+		if (pos < 0)
+			return pos;
+
+		dpkg->extracts[pos].type = DPKG_EXTRACT_FROM_DATA;
+		dpkg->extracts[pos].extract.from_data.size = item_size;
+		dpkg->extracts[pos].extract.from_data.offset = offset;
+
+		if (index == 0) {
+			key_profile->raw_extract_pos = pos;
+			key_profile->raw_extract_off =
+				key_profile->key_offset[pos];
+			key_profile->raw_region.raw_start = offset;
+		}
+		key_profile->raw_extract_num++;
+		key_profile->raw_region.raw_size +=
+			key_profile->key_size[pos];
+
+		offset += item_size;
+		dpkg->num_extracts++;
 	}
 
-	key_info->key_max_size = size;
 	return 0;
 }
 
+static int
+dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
+	int offset, int size, enum dpaa2_flow_dist_type dist_type,
+	int tc_id, int *recfg)
+{
+	struct dpaa2_key_profile *key_profile;
+	struct dpaa2_raw_region *raw_region;
+	int end = offset + size, ret = 0, extract_extended, sz_extend;
+	int start_cmp, end_cmp, new_size, index, pos, end_pos;
+	int last_extract_size, item_size, num_extracts, bk_num = 0;
+	struct dpkg_extract extract_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint8_t key_offset_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint8_t key_size_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	struct key_prot_field prot_field_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	struct dpaa2_raw_region raw_hole;
+	struct dpkg_profile_cfg *dpkg;
+	enum net_prot prot;
+	uint32_t field;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+		dpkg = &priv->extract.qos_key_extract.dpkg;
+	} else {
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+		dpkg = &priv->extract.tc_key_extract[tc_id].dpkg;
+	}
+
+	raw_region = &key_profile->raw_region;
+	if (!raw_region->raw_size) {
+		/* New RAW region*/
+		ret = dpaa2_flow_extract_new_raw(priv, offset, size,
+			dist_type, tc_id);
+		if (!ret && recfg)
+			(*recfg) |= dist_type;
+
+		return ret;
+	}
+	start_cmp = raw_region->raw_start;
+	end_cmp = raw_region->raw_start + raw_region->raw_size;
+
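+	/* Fast path: the requested window is already covered by the
+	 * existing raw region.
+	 */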
+	if (offset >= start_cmp && end <= end_cmp)
+		return 0;
+
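+	/* Extend the region on either side so [offset, end) is fully
+	 * covered; sz_extend counts the key bytes added to the layout.
+	 */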
+	sz_extend = 0;
+	new_size = raw_region->raw_size;
+	if (offset < start_cmp) {
+		sz_extend += start_cmp - offset;
+		new_size += (start_cmp - offset);
+	}
+	if (end > end_cmp) {
+		sz_extend += end - end_cmp;
+		new_size += (end - end_cmp);
+	}
+
+	last_extract_size = (new_size % DPAA2_FLOW_MAX_KEY_SIZE);
+	num_extracts = (new_size / DPAA2_FLOW_MAX_KEY_SIZE);
+	if (last_extract_size)
+		num_extracts++;
+	else
+		last_extract_size = DPAA2_FLOW_MAX_KEY_SIZE;
+
+	if ((key_profile->num + num_extracts -
+		key_profile->raw_extract_num) >=
+		DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("%s Failed to expand raw extracts",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (offset < start_cmp) {
+		raw_hole.raw_start = key_profile->raw_extract_off;
+		raw_hole.raw_size = start_cmp - offset;
+		raw_region->raw_start = offset;
+		raw_region->raw_size += start_cmp - offset;
+
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size);
+			if (ret)
+				return ret;
+		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE) {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size, tc_id);
+			if (ret)
+				return ret;
+		}
+	}
+
+	if (end > end_cmp) {
+		raw_hole.raw_start =
+			key_profile->raw_extract_off +
+			raw_region->raw_size;
+		raw_hole.raw_size = end - end_cmp;
+		raw_region->raw_size += end - end_cmp;
+
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size);
+			if (ret)
+				return ret;
+		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE) {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size, tc_id);
+			if (ret)
+				return ret;
+		}
+	}
+
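+	/* Back up extracts placed after the raw region; they are
+	 * restored behind the expanded raw extracts with their key
+	 * offsets shifted by sz_extend.
+	 */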
+	end_pos = key_profile->raw_extract_pos +
+		key_profile->raw_extract_num;
+	if (key_profile->num > end_pos) {
+		bk_num = key_profile->num - end_pos;
+		memcpy(extract_bk, &dpkg->extracts[end_pos],
+			bk_num * sizeof(struct dpkg_extract));
+		memcpy(key_offset_bk, &key_profile->key_offset[end_pos],
+			bk_num * sizeof(uint8_t));
+		memcpy(key_size_bk, &key_profile->key_size[end_pos],
+			bk_num * sizeof(uint8_t));
+		memcpy(prot_field_bk, &key_profile->prot_field[end_pos],
+			bk_num * sizeof(struct key_prot_field));
+
+		for (index = 0; index < bk_num; index++) {
+			key_offset_bk[index] += sz_extend;
+			prot = prot_field_bk[index].prot;
+			field = prot_field_bk[index].key_field;
+			if (dpaa2_flow_l4_src_port_extract(prot,
+				field)) {
+				key_profile->l4_src_port_present = 1;
+				key_profile->l4_src_port_pos = end_pos + index;
+				key_profile->l4_src_port_offset =
+					key_offset_bk[index];
+			} else if (dpaa2_flow_l4_dst_port_extract(prot,
+				field)) {
+				key_profile->l4_dst_port_present = 1;
+				key_profile->l4_dst_port_pos = end_pos + index;
+				key_profile->l4_dst_port_offset =
+					key_offset_bk[index];
+			}
+		}
+	}
+
+	pos = key_profile->raw_extract_pos;
+
+	for (index = 0; index < num_extracts; index++) {
+		if (index == num_extracts - 1)
+			item_size = last_extract_size;
+		else
+			item_size = DPAA2_FLOW_MAX_KEY_SIZE;
+		field = offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+		field |= item_size;
+
+		if (pos > 0) {
+			key_profile->key_offset[pos] =
+				key_profile->key_offset[pos - 1] +
+				key_profile->key_size[pos - 1];
+		} else {
+			key_profile->key_offset[pos] = 0;
+		}
+		key_profile->key_size[pos] = item_size;
+		key_profile->prot_field[pos].prot = NET_PROT_PAYLOAD;
+		key_profile->prot_field[pos].key_field = field;
+
+		dpkg->extracts[pos].type = DPKG_EXTRACT_FROM_DATA;
+		dpkg->extracts[pos].extract.from_data.size = item_size;
+		dpkg->extracts[pos].extract.from_data.offset = offset;
+		offset += item_size;
+		pos++;
+	}
+
+	if (bk_num) {
+		memcpy(&dpkg->extracts[pos], extract_bk,
+			bk_num * sizeof(struct dpkg_extract));
+		memcpy(&key_profile->key_offset[end_pos],
+			key_offset_bk, bk_num * sizeof(uint8_t));
+		memcpy(&key_profile->key_size[end_pos],
+			key_size_bk, bk_num * sizeof(uint8_t));
+		memcpy(&key_profile->prot_field[end_pos],
+			prot_field_bk, bk_num * sizeof(struct key_prot_field));
+	}
+
+	extract_extended = num_extracts - key_profile->raw_extract_num;
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		key_profile->ip_addr_extract_pos += extract_extended;
+		key_profile->ip_addr_extract_off += sz_extend;
+	}
+	key_profile->raw_extract_num = num_extracts;
+	key_profile->num += extract_extended;
+	key_profile->key_max_size += sz_extend;
+
+	dpkg->num_extracts += extract_extended;
+	if (!ret && recfg)
+		(*recfg) |= dist_type;
+
+	return ret;
+}
+
 static inline int
 dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 	enum net_prot prot, uint32_t key_field)
@@ -843,7 +1073,6 @@ dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
 	int i;
 
 	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
-
 	if (i >= 0)
 		return key_profile->key_offset[i];
 	else
@@ -992,13 +1221,37 @@ dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 }
 
 static inline int
-dpaa2_flow_rule_data_set_raw(struct dpni_rule_cfg *rule,
-			     const void *key, const void *mask, int size)
+dpaa2_flow_raw_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	uint32_t extract_offset, int size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int offset = 0;
+	int extract_size = size > DPAA2_FLOW_MAX_KEY_SIZE ?
+		DPAA2_FLOW_MAX_KEY_SIZE : size;
+	int offset, field;
+
+	field = extract_offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+	field |= extract_size;
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			NET_PROT_PAYLOAD, field);
+	if (offset < 0) {
+		DPAA2_PMD_ERR("offset(%d)/size(%d) raw extract failed",
+			extract_offset, size);
+		return -EINVAL;
+	}
 
-	memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
-	memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, size);
+		memcpy((flow->qos_mask_addr + offset), mask, size);
+		flow->qos_rule_size = offset + size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, size);
+		memcpy((flow->fs_mask_addr + offset), mask, size);
+		flow->fs_rule_size = offset + size;
+	}
 
 	return 0;
 }
@@ -2233,22 +2486,36 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_raw *spec = pattern->spec;
 	const struct rte_flow_item_raw *mask = pattern->mask;
-	int prev_key_size =
-		priv->extract.qos_key_extract.key_profile.key_max_size;
 	int local_cfg = 0, ret;
 	uint32_t group;
+	struct dpaa2_key_extract *qos_key_extract;
+	struct dpaa2_key_extract *tc_key_extract;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
 		DPAA2_PMD_ERR("spec or mask not present.");
 		return -EINVAL;
 	}
-	/* Only supports non-relative with offset 0 */
-	if (spec->relative || spec->offset != 0 ||
-	    spec->search || spec->limit) {
-		DPAA2_PMD_ERR("relative and non zero offset not supported.");
+
+	if (spec->relative) {
+		/* TBD: relative offset support.
+		 * To support relative offset of previous L3 protocol item,
+		 * extracts should be expanded to identify if the frame is:
+		 * vlan or none-vlan.
+		 *
+		 * To support relative offset of previous L4 protocol item,
+		 * extracts should be expanded to identify if the frame is:
+		 * vlan/IPv4 or vlan/IPv6 or none-vlan/IPv4 or none-vlan/IPv6.
+		 */
+		DPAA2_PMD_ERR("relative not supported.");
+		return -EINVAL;
+	}
+
+	if (spec->search) {
+		DPAA2_PMD_ERR("search not supported.");
 		return -EINVAL;
 	}
+
 	/* Spec len and mask len should be same */
 	if (spec->length != mask->length) {
 		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
@@ -2260,36 +2527,44 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (prev_key_size <= spec->length) {
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
-						 spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FLOW_QOS_TYPE;
+	qos_key_extract = &priv->extract.qos_key_extract;
+	tc_key_extract = &priv->extract.tc_key_extract[group];
 
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.tc_key_extract[group],
-					spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FLOW_FS_TYPE;
+	ret = dpaa2_flow_extract_add_raw(priv,
+			spec->offset, spec->length,
+			DPAA2_FLOW_QOS_TYPE, 0, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS Extract RAW add failed.");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
-					   mask->pattern, spec->length);
+	ret = dpaa2_flow_extract_add_raw(priv,
+			spec->offset, spec->length,
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("FS[%d] Extract RAW add failed.",
+			group);
+		return -EINVAL;
+	}
+
+	ret = dpaa2_flow_raw_rule_data_set(flow,
+			&qos_key_extract->key_profile,
+			spec->offset, spec->length,
+			spec->pattern, mask->pattern,
+			DPAA2_FLOW_QOS_TYPE);
 	if (ret) {
 		DPAA2_PMD_ERR("QoS RAW rule data set failed");
-		return -1;
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
-					   mask->pattern, spec->length);
+	ret = dpaa2_flow_raw_rule_data_set(flow,
+			&tc_key_extract->key_profile,
+			spec->offset, spec->length,
+			spec->pattern, mask->pattern,
+			DPAA2_FLOW_FS_TYPE);
 	if (ret) {
 		DPAA2_PMD_ERR("FS RAW rule data set failed");
-		return -1;
+		return -EINVAL;
 	}
 
 	(*device_configured) |= local_cfg;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 26/42] net/dpaa2: frame attribute flags parser
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (24 preceding siblings ...)
  2024-10-22 19:12         ` [v4 25/42] net/dpaa2: enhancement of raw flow extract vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 27/42] net/dpaa2: add VXLAN distribution support vanshika.shukla
                           ` (16 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

FAF (frame attribute flags) parser extracts are used to identify the
protocol type, instead of extracting the previous protocol's type
field. The FAF extract starts from offset 2 so that it covers the
user-defined flags, which are used for soft protocol distribution.
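
As a worked example (assuming the FAF_VXLAN_FRAM position from the
earlier parse-dump patch, 4 + 16 = 20), the bit position splits into
the extract byte and bit exactly as dpaa2_flow_faf_add_rule() does:

  uint8_t faf_byte = 20 / 8;              /* byte 2 of the FAF area */
  uint8_t faf_bit_in_byte = 7 - (20 % 8); /* bit 3, MSB-first layout */
  /* One byte is extracted from parse offset
   * DPAA2_FAFE_PSR_OFFSET + faf_byte = 4, and the rule sets
   * (1 << 3) in both key and mask.
   */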

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 475 +++++++++++++++++++--------------
 1 file changed, 273 insertions(+), 202 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index a66edf78bc..4c80efeff7 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -22,13 +22,6 @@
 #include <dpaa2_ethdev.h>
 #include <dpaa2_pmd_logs.h>
 
-/* Workaround to discriminate the UDP/TCP/SCTP
- * with next protocol of l3.
- * MC/WRIOP are not able to identify
- * the l4 protocol with l4 ports.
- */
-static int mc_l4_port_identification;
-
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
@@ -256,6 +249,10 @@ dpaa2_flow_qos_extracts_log(const struct dpaa2_dev_priv *priv)
 			sprintf(string, "raw offset/len: %d/%d",
 				extract->extract.from_data.offset,
 				extract->extract.from_data.size);
+		} else if (type == DPKG_EXTRACT_FROM_PARSE) {
+			sprintf(string, "parse offset/len: %d/%d",
+				extract->extract.from_parse.offset,
+				extract->extract.from_parse.size);
 		}
 		DPAA2_FLOW_DUMP("%s", string);
 		if ((idx + 1) < dpkg->num_extracts)
@@ -294,6 +291,10 @@ dpaa2_flow_fs_extracts_log(const struct dpaa2_dev_priv *priv,
 			sprintf(string, "raw offset/len: %d/%d",
 				extract->extract.from_data.offset,
 				extract->extract.from_data.size);
+		} else if (type == DPKG_EXTRACT_FROM_PARSE) {
+			sprintf(string, "parse offset/len: %d/%d",
+				extract->extract.from_parse.offset,
+				extract->extract.from_parse.size);
 		}
 		DPAA2_FLOW_DUMP("%s", string);
 		if ((idx + 1) < dpkg->num_extracts)
@@ -627,6 +628,66 @@ dpaa2_flow_fs_rule_insert_hole(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_faf_advance(struct dpaa2_dev_priv *priv,
+	int faf_byte, enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
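+	/* IP address extracts must stay at the end of the key, so the
+	 * new one-byte FAF extract takes their slot and they move back,
+	 * inserting a hole into already-installed rules.
+	 */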
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off++;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, 1);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, 1, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = 1;
+	key_profile->prot_field[pos].type = DPAA2_FAF_KEY;
+	key_profile->prot_field[pos].key_field = faf_byte;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	key_profile->key_max_size++;
+
+	return pos;
+}
+
 /* Move IPv4/IPv6 addresses to fill new extract previous IP address.
  * Current MC/WRIOP only support generic IP extract but IP address
  * is not fixed, so we have to put them at end of extracts, otherwise,
@@ -688,6 +749,7 @@ dpaa2_flow_key_profile_advance(enum net_prot prot,
 	}
 
 	key_profile->key_size[pos] = field_size;
+	key_profile->prot_field[pos].type = DPAA2_NET_PROT_KEY;
 	key_profile->prot_field[pos].prot = prot;
 	key_profile->prot_field[pos].key_field = field;
 	key_profile->num++;
@@ -711,6 +773,55 @@ dpaa2_flow_key_profile_advance(enum net_prot prot,
 	return pos;
 }
 
+static int
+dpaa2_flow_faf_add_hdr(int faf_byte,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i, offset;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	pos = dpaa2_flow_faf_advance(priv,
+			faf_byte, dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last pos, must have IP address extract.*/
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
+	}
+
+	offset = DPAA2_FAFE_PSR_OFFSET + faf_byte;
+
+	extracts[pos].type = DPKG_EXTRACT_FROM_PARSE;
+	extracts[pos].extract.from_parse.offset = offset;
+	extracts[pos].extract.from_parse.size = 1;
+
+	dpkg->num_extracts++;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_extract_add_hdr(enum net_prot prot,
 	uint32_t field, uint8_t field_size,
@@ -997,6 +1108,7 @@ dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
 			key_profile->key_offset[pos] = 0;
 		}
 		key_profile->key_size[pos] = item_size;
+		key_profile->prot_field[pos].type = DPAA2_NET_PROT_KEY;
 		key_profile->prot_field[pos].prot = NET_PROT_PAYLOAD;
 		key_profile->prot_field[pos].key_field = field;
 
@@ -1036,7 +1148,7 @@ dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
 
 static inline int
 dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
-	enum net_prot prot, uint32_t key_field)
+	enum key_prot_type type, enum net_prot prot, uint32_t key_field)
 {
 	int pos;
 	struct key_prot_field *prot_field;
@@ -1049,16 +1161,23 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 
 	prot_field = key_profile->prot_field;
 	for (pos = 0; pos < key_profile->num; pos++) {
-		if (prot_field[pos].prot == prot &&
-			prot_field[pos].key_field == key_field) {
+		if (type == DPAA2_NET_PROT_KEY &&
+			prot_field[pos].prot == prot &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
+			return pos;
+		else if (type == DPAA2_FAF_KEY &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
 			return pos;
-		}
 	}
 
-	if (dpaa2_flow_l4_src_port_extract(prot, key_field)) {
+	if (type == DPAA2_NET_PROT_KEY &&
+		dpaa2_flow_l4_src_port_extract(prot, key_field)) {
 		if (key_profile->l4_src_port_present)
 			return key_profile->l4_src_port_pos;
-	} else if (dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
+	} else if (type == DPAA2_NET_PROT_KEY &&
+		dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
 		if (key_profile->l4_dst_port_present)
 			return key_profile->l4_dst_port_pos;
 	}
@@ -1068,80 +1187,53 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 
 static inline int
 dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
-	enum net_prot prot, uint32_t key_field)
+	enum key_prot_type type, enum net_prot prot, uint32_t key_field)
 {
 	int i;
 
-	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
+	i = dpaa2_flow_extract_search(key_profile, type, prot, key_field);
 	if (i >= 0)
 		return key_profile->key_offset[i];
 	else
 		return i;
 }
 
-struct prev_proto_field_id {
-	enum net_prot prot;
-	union {
-		rte_be16_t eth_type;
-		uint8_t ip_proto;
-	};
-};
-
 static int
-dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
+dpaa2_flow_faf_add_rule(struct dpaa2_dev_priv *priv,
 	struct dpaa2_dev_flow *flow,
-	const struct prev_proto_field_id *prev_proto,
+	enum dpaa2_rx_faf_offset faf_bit_off,
 	int group,
 	enum dpaa2_flow_dist_type dist_type)
 {
 	int offset;
 	uint8_t *key_addr;
 	uint8_t *mask_addr;
-	uint32_t field = 0;
-	rte_be16_t eth_type;
-	uint8_t ip_proto;
 	struct dpaa2_key_extract *key_extract;
 	struct dpaa2_key_profile *key_profile;
+	uint8_t faf_byte = faf_bit_off / 8;
+	uint8_t faf_bit_in_byte = faf_bit_off % 8;
 
-	if (prev_proto->prot == NET_PROT_ETH) {
-		field = NH_FLD_ETH_TYPE;
-	} else if (prev_proto->prot == NET_PROT_IP) {
-		field = NH_FLD_IP_PROTO;
-	} else {
-		DPAA2_PMD_ERR("Prev proto(%d) not support!",
-			prev_proto->prot);
-		return -EINVAL;
-	}
+	faf_bit_in_byte = 7 - faf_bit_in_byte;
 
 	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
 		key_extract = &priv->extract.qos_key_extract;
 		key_profile = &key_extract->key_profile;
 
 		offset = dpaa2_flow_extract_key_offset(key_profile,
-				prev_proto->prot, field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (offset < 0) {
 			DPAA2_PMD_ERR("%s QoS key extract failed", __func__);
 			return -EINVAL;
 		}
 		key_addr = flow->qos_key_addr + offset;
 		mask_addr = flow->qos_mask_addr + offset;
-		if (prev_proto->prot == NET_PROT_ETH) {
-			eth_type = prev_proto->eth_type;
-			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
-			eth_type = 0xffff;
-			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
-			flow->qos_rule_size += sizeof(rte_be16_t);
-		} else if (prev_proto->prot == NET_PROT_IP) {
-			ip_proto = prev_proto->ip_proto;
-			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
-			ip_proto = 0xff;
-			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
-			flow->qos_rule_size += sizeof(uint8_t);
-		} else {
-			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
-				prev_proto->prot);
-			return -EINVAL;
-		}
+
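+		/* A FAF extract is one key byte; only the flag's bit is
+		 * set in key and mask, so several flags can share it.
+		 */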
+		if (!(*key_addr) &&
+			key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size++;
+
+		*key_addr |=  (1 << faf_bit_in_byte);
+		*mask_addr |=  (1 << faf_bit_in_byte);
 	}
 
 	if (dist_type & DPAA2_FLOW_FS_TYPE) {
@@ -1149,7 +1241,7 @@ dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
 		key_profile = &key_extract->key_profile;
 
 		offset = dpaa2_flow_extract_key_offset(key_profile,
-				prev_proto->prot, field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (offset < 0) {
 			DPAA2_PMD_ERR("%s TC[%d] key extract failed",
 				__func__, group);
@@ -1158,23 +1250,12 @@ dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
 		key_addr = flow->fs_key_addr + offset;
 		mask_addr = flow->fs_mask_addr + offset;
 
-		if (prev_proto->prot == NET_PROT_ETH) {
-			eth_type = prev_proto->eth_type;
-			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
-			eth_type = 0xffff;
-			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
-			flow->fs_rule_size += sizeof(rte_be16_t);
-		} else if (prev_proto->prot == NET_PROT_IP) {
-			ip_proto = prev_proto->ip_proto;
-			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
-			ip_proto = 0xff;
-			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
-			flow->fs_rule_size += sizeof(uint8_t);
-		} else {
-			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
-				prev_proto->prot);
-			return -EINVAL;
-		}
+		if (!(*key_addr) &&
+			key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size++;
+
+		*key_addr |=  (1 << faf_bit_in_byte);
+		*mask_addr |=  (1 << faf_bit_in_byte);
 	}
 
 	return 0;
@@ -1196,7 +1277,7 @@ dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 	}
 
 	offset = dpaa2_flow_extract_key_offset(key_profile,
-			prot, field);
+			DPAA2_NET_PROT_KEY, prot, field);
 	if (offset < 0) {
 		DPAA2_PMD_ERR("P(%d)/F(%d) does not exist!",
 			prot, field);
@@ -1234,7 +1315,7 @@ dpaa2_flow_raw_rule_data_set(struct dpaa2_dev_flow *flow,
 	field = extract_offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
 	field |= extract_size;
 	offset = dpaa2_flow_extract_key_offset(key_profile,
-			NET_PROT_PAYLOAD, field);
+			DPAA2_NET_PROT_KEY, NET_PROT_PAYLOAD, field);
 	if (offset < 0) {
 		DPAA2_PMD_ERR("offset(%d)/size(%d) raw extract failed",
 			extract_offset, size);
@@ -1317,60 +1398,39 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 }
 
 static int
-dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
+dpaa2_flow_identify_by_faf(struct dpaa2_dev_priv *priv,
 	struct dpaa2_dev_flow *flow,
-	const struct prev_proto_field_id *prev_prot,
+	enum dpaa2_rx_faf_offset faf_off,
 	enum dpaa2_flow_dist_type dist_type,
 	int group, int *recfg)
 {
-	int ret, index, local_cfg = 0, size = 0;
+	int ret, index, local_cfg = 0;
 	struct dpaa2_key_extract *extract;
 	struct dpaa2_key_profile *key_profile;
-	enum net_prot prot = prev_prot->prot;
-	uint32_t key_field = 0;
-
-	if (prot == NET_PROT_ETH) {
-		key_field = NH_FLD_ETH_TYPE;
-		size = sizeof(rte_be16_t);
-	} else if (prot == NET_PROT_IP) {
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else if (prot == NET_PROT_IPV4) {
-		prot = NET_PROT_IP;
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else if (prot == NET_PROT_IPV6) {
-		prot = NET_PROT_IP;
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else {
-		DPAA2_PMD_ERR("Invalid Prev prot(%d)", prot);
-		return -EINVAL;
-	}
+	uint8_t faf_byte = faf_off / 8;
 
 	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
 		extract = &priv->extract.qos_key_extract;
 		key_profile = &extract->key_profile;
 
 		index = dpaa2_flow_extract_search(key_profile,
-				prot, key_field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add_hdr(prot,
-					key_field, size, priv,
-					DPAA2_FLOW_QOS_TYPE, group,
+			ret = dpaa2_flow_faf_add_hdr(faf_byte,
+					priv, DPAA2_FLOW_QOS_TYPE, group,
 					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("QOS prev extract add failed");
+				DPAA2_PMD_ERR("QOS faf extract add failed");
 
 				return -EINVAL;
 			}
 			local_cfg |= DPAA2_FLOW_QOS_TYPE;
 		}
 
-		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+		ret = dpaa2_flow_faf_add_rule(priv, flow, faf_off, group,
 				DPAA2_FLOW_QOS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("QoS prev rule set failed");
+			DPAA2_PMD_ERR("QoS faf rule set failed");
 			return -EINVAL;
 		}
 	}
@@ -1380,14 +1440,13 @@ dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
 		key_profile = &extract->key_profile;
 
 		index = dpaa2_flow_extract_search(key_profile,
-				prot, key_field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add_hdr(prot,
-					key_field, size, priv,
-					DPAA2_FLOW_FS_TYPE, group,
+			ret = dpaa2_flow_faf_add_hdr(faf_byte,
+					priv, DPAA2_FLOW_FS_TYPE, group,
 					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS[%d] prev extract add failed",
+				DPAA2_PMD_ERR("FS[%d] faf extract add failed",
 					group);
 
 				return -EINVAL;
@@ -1395,17 +1454,17 @@ dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
 			local_cfg |= DPAA2_FLOW_FS_TYPE;
 		}
 
-		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+		ret = dpaa2_flow_faf_add_rule(priv, flow, faf_off, group,
 				DPAA2_FLOW_FS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS[%d] prev rule set failed",
+			DPAA2_PMD_ERR("FS[%d] faf rule set failed",
 				group);
 			return -EINVAL;
 		}
 	}
 
 	if (recfg)
-		*recfg = local_cfg;
+		*recfg |= local_cfg;
 
 	return 0;
 }
@@ -1432,7 +1491,7 @@ dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
 	key_profile = &key_extract->key_profile;
 
 	index = dpaa2_flow_extract_search(key_profile,
-			prot, field);
+			DPAA2_NET_PROT_KEY, prot, field);
 	if (index < 0) {
 		ret = dpaa2_flow_extract_add_hdr(prot,
 				field, size, priv,
@@ -1571,6 +1630,7 @@ dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
 		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
 	}
 	key_profile->num++;
+	key_profile->prot_field[num].type = DPAA2_NET_PROT_KEY;
 
 	dpkg->extracts[num].extract.from_hdr.prot = prot;
 	dpkg->extracts[num].extract.from_hdr.field = field;
@@ -1681,15 +1741,28 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 	spec = pattern->spec;
 	mask = pattern->mask ?
 			pattern->mask : &dpaa2_flow_item_eth_mask;
-	if (!spec) {
-		DPAA2_PMD_WARN("No pattern spec for Eth flow");
-		return -EINVAL;
-	}
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ETH_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ETH_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
 		RTE_FLOW_ITEM_TYPE_ETH)) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
@@ -1778,15 +1851,18 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_ETH;
-		prev_proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
-		ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_proto,
-				DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-				group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
 		if (ret)
 			return ret;
+
 		(*device_configured) |= local_cfg;
 		return 0;
 	}
@@ -1833,7 +1909,6 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	const void *key, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int size;
-	struct prev_proto_field_id prev_prot;
 
 	group = attr->group;
 
@@ -1846,19 +1921,21 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	prev_prot.prot = NET_PROT_ETH;
-	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
+					 DPAA2_FLOW_QOS_TYPE, group,
+					 &local_cfg);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE, group,
-			&local_cfg);
-	if (ret) {
-		DPAA2_PMD_ERR("IPv4 identification failed!");
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
+					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret)
 		return ret;
-	}
 
-	if (!spec_ipv4)
+	if (!spec_ipv4) {
+		(*device_configured) |= local_cfg;
 		return 0;
+	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
 				       RTE_FLOW_ITEM_TYPE_IPV4)) {
@@ -1950,7 +2027,6 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
 	int size;
-	struct prev_proto_field_id prev_prot;
 
 	group = attr->group;
 
@@ -1962,19 +2038,21 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	prev_prot.prot = NET_PROT_ETH;
-	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
+					 DPAA2_FLOW_QOS_TYPE, group,
+					 &local_cfg);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
-	if (ret) {
-		DPAA2_PMD_ERR("IPv6 identification failed!");
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
+					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret)
 		return ret;
-	}
 
-	if (!spec_ipv6)
+	if (!spec_ipv6) {
+		(*device_configured) |= local_cfg;
 		return 0;
+	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
 				       RTE_FLOW_ITEM_TYPE_IPV6)) {
@@ -2078,18 +2156,15 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		/* Next proto of Generical IP is actually used
-		 * for ICMP identification.
-		 * Example: flow create 0 ingress pattern icmp
-		 */
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ICMP_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_ICMP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ICMP_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
 		if (ret)
 			return ret;
 
@@ -2166,22 +2241,21 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_UDP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_UDP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_UDP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2253,22 +2327,21 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_TCP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_TCP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_TCP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2340,22 +2413,21 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_SCTP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_SCTP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_SCTP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2428,21 +2500,20 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GRE_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_GRE;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GRE_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
 		if (ret)
 			return ret;
 
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-- 
2.25.1



* [v4 27/42] net/dpaa2: add VXLAN distribution support
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (25 preceding siblings ...)
  2024-10-22 19:12         ` [v4 26/42] net/dpaa2: frame attribute flags parser vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 28/42] net/dpaa2: protocol inside tunnel distribution vanshika.shukla
                           ` (15 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Extract fields from the VXLAN header for distribution.
The VXLAN header is copied by the soft parser code into the
soft parser context, located at offset 43 of the parser results:

<assign-variable name="$softparsectx[0:3]" value="vxlan.vnid"/>

The VXLAN protocol is identified by the VXLAN bit of the frame
attribute flags (FAF). The parser-result extracts are added for
this functionality.

Example:
flow create 0 ingress pattern vxlan / end actions pf / queue index 4 / end
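
Internally this is programmed as a key extract sourced from the parse
result rather than from a protocol header. A minimal sketch using the
dpkg structures from fsl_dpkg.h (illustration only; the values mirror
the VNI slot described above):

	/* Extract the 3-byte VNI from the parser-result area. */
	struct dpkg_extract ext = {
		.type = DPKG_EXTRACT_FROM_PARSE,
		.extract.from_parse = {
			.offset = 43,	/* DPAA2_VXLAN_VNI_OFFSET */
			.size = 3,	/* 3-byte VNI */
		},
	};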

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h |   6 +-
 drivers/net/dpaa2/dpaa2_flow.c   | 313 +++++++++++++++++++++++++++++++
 2 files changed, 318 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 8f548467a4..aeddcfdfa9 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -282,8 +282,12 @@ enum ip_addr_extract_type {
 };
 
 enum key_prot_type {
+	/* HW extracts from standard protocol fields */
 	DPAA2_NET_PROT_KEY,
-	DPAA2_FAF_KEY
+	/* HW extracts from FAF of PR */
+	DPAA2_FAF_KEY,
+	/* HW extracts from PR other than FAF */
+	DPAA2_PR_KEY
 };
 
 struct key_prot_field {
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 4c80efeff7..3530417a29 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -38,6 +38,8 @@ enum dpaa2_flow_dist_type {
 #define DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT	16
 #define DPAA2_FLOW_MAX_KEY_SIZE			16
 
+#define VXLAN_HF_VNI 0x08
+
 struct dpaa2_dev_flow {
 	LIST_ENTRY(dpaa2_dev_flow) next;
 	struct dpni_rule_cfg qos_rule;
@@ -140,6 +142,11 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
+
+static const struct rte_flow_item_vxlan dpaa2_flow_item_vxlan_mask = {
+	.flags = 0xff,
+	.vni = "\xff\xff\xff",
+};
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -688,6 +695,68 @@ dpaa2_flow_faf_advance(struct dpaa2_dev_priv *priv,
 	return pos;
 }
 
+static int
+dpaa2_flow_pr_advance(struct dpaa2_dev_priv *priv,
+	uint32_t pr_offset, uint32_t pr_size,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off += pr_size;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, pr_size);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, pr_size, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = pr_size;
+	key_profile->prot_field[pos].type = DPAA2_PR_KEY;
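+	/* A PR extract is identified by its (offset << 16) | size key field. */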
+	key_profile->prot_field[pos].key_field =
+		(pr_offset << 16) | pr_size;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	key_profile->key_max_size += pr_size;
+
+	return pos;
+}
+
 /* Move IPv4/IPv6 addresses to fill new extract previous IP address.
  * Current MC/WRIOP only support generic IP extract but IP address
  * is not fixed, so we have to put them at end of extracts, otherwise,
@@ -822,6 +891,59 @@ dpaa2_flow_faf_add_hdr(int faf_byte,
 	return 0;
 }
 
+static int
+dpaa2_flow_pr_add_hdr(uint32_t pr_offset,
+	uint32_t pr_size, struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if ((pr_offset + pr_size) > DPAA2_FAPR_SIZE) {
+		DPAA2_PMD_ERR("PR extracts(%d:%d) overflow",
+			pr_offset, pr_size);
+		return -EINVAL;
+	}
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	pos = dpaa2_flow_pr_advance(priv,
+			pr_offset, pr_size, dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last pos, must have IP address extract. */
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
+	}
+
+	extracts[pos].type = DPKG_EXTRACT_FROM_PARSE;
+	extracts[pos].extract.from_parse.offset = pr_offset;
+	extracts[pos].extract.from_parse.size = pr_size;
+
+	dpkg->num_extracts++;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_extract_add_hdr(enum net_prot prot,
 	uint32_t field, uint8_t field_size,
@@ -1170,6 +1292,10 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 			prot_field[pos].key_field == key_field &&
 			prot_field[pos].type == type)
 			return pos;
+		else if (type == DPAA2_PR_KEY &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
+			return pos;
 	}
 
 	if (type == DPAA2_NET_PROT_KEY &&
@@ -1261,6 +1387,41 @@ dpaa2_flow_faf_add_rule(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static inline int
+dpaa2_flow_pr_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	uint32_t pr_offset, uint32_t pr_size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int offset;
+	uint32_t pr_field = pr_offset << 16 | pr_size;
+
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			DPAA2_PR_KEY, NET_PROT_NONE, pr_field);
+	if (offset < 0) {
+		DPAA2_PMD_ERR("PR off(%d)/size(%d) does not exist!",
+			pr_offset, pr_size);
+		return -EINVAL;
+	}
+
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, pr_size);
+		memcpy((flow->qos_mask_addr + offset), mask, pr_size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size = offset + pr_size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, pr_size);
+		memcpy((flow->fs_mask_addr + offset), mask, pr_size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size = offset + pr_size;
+	}
+
+	return 0;
+}
+
 static inline int
 dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 	struct dpaa2_key_profile *key_profile,
@@ -1382,6 +1543,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_gre_mask;
 		size = sizeof(struct rte_flow_item_gre);
 		break;
+	case RTE_FLOW_ITEM_TYPE_VXLAN:
+		mask_support = (const char *)&dpaa2_flow_item_vxlan_mask;
+		size = sizeof(struct rte_flow_item_vxlan);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -1469,6 +1634,55 @@ dpaa2_flow_identify_by_faf(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_add_pr_extract_rule(struct dpaa2_dev_flow *flow,
+	uint32_t pr_offset, uint32_t pr_size,
+	const void *key, const void *mask,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int index, ret, local_cfg = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
+	uint32_t pr_field = pr_offset << 16 | pr_size;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	key_profile = &key_extract->key_profile;
+
+	index = dpaa2_flow_extract_search(key_profile,
+			DPAA2_PR_KEY, NET_PROT_NONE, pr_field);
+	if (index < 0) {
+		ret = dpaa2_flow_pr_add_hdr(pr_offset,
+				pr_size, priv,
+				dist_type, tc_id, NULL);
+		if (ret) {
+			DPAA2_PMD_ERR("PR add off(%d)/size(%d) failed",
+				pr_offset, pr_size);
+
+			return ret;
+		}
+		local_cfg |= dist_type;
+	}
+
+	ret = dpaa2_flow_pr_rule_data_set(flow, key_profile,
+			pr_offset, pr_size, key, mask, dist_type);
+	if (ret) {
+		DPAA2_PMD_ERR("PR off(%d)/size(%d) rule data set failed",
+			pr_offset, pr_size);
+
+		return ret;
+	}
+
+	if (recfg)
+		*recfg |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
 	enum net_prot prot, uint32_t field,
@@ -2545,6 +2759,90 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_vxlan *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_vxlan_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_VXLAN_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_VXLAN_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VXLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of VXLAN not support.");
+
+		return -1;
+	}
+
+	if (mask->flags) {
+		if (spec->flags != VXLAN_HF_VNI) {
+			DPAA2_PMD_ERR("vxlan flag(0x%02x) must be 0x%02x.",
+				spec->flags, VXLAN_HF_VNI);
+			return -EINVAL;
+		}
+		if (mask->flags != 0xff) {
+			DPAA2_PMD_ERR("Not support to extract vxlan flag.");
+			return -EINVAL;
+		}
+	}
+
+	if (mask->vni[0] || mask->vni[1] || mask->vni[2]) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_VNI_OFFSET,
+			sizeof(mask->vni), spec->vni,
+			mask->vni,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_VNI_OFFSET,
+			sizeof(mask->vni), spec->vni,
+			mask->vni,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -2760,6 +3058,9 @@ dpaa2_flow_verify_action(struct dpaa2_dev_priv *priv,
 				}
 			}
 
+			break;
+		case RTE_FLOW_ACTION_TYPE_PF:
+			/* Skip this action, have to add for vxlan */
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
@@ -3110,6 +3411,15 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				return ret;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+			ret = dpaa2_configure_flow_vxlan(flow,
+					dev, attr, &pattern[i], actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("VXLAN flow config failed!");
+				return ret;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow,
 					dev, attr, &pattern[i],
@@ -3222,6 +3532,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			if (ret)
 				return ret;
 
+			break;
+		case RTE_FLOW_ACTION_TYPE_PF:
+			/* Skip this action, have to add for vxlan */
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
-- 
2.25.1



* [v4 28/42] net/dpaa2: protocol inside tunnel distribution
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (26 preceding siblings ...)
  2024-10-22 19:12         ` [v4 27/42] net/dpaa2: add VXLAN distribution support vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 29/42] net/dpaa2: eCPRI support by parser result vanshika.shukla
                           ` (14 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Distribute flows by the protocols inside a tunnel.
The tunnel flow items applied by the application are ordered from
outer to inner. The inner items start after the tunnel item,
e.g. VXLAN, GRE, etc.

For example:
flow create 0 ingress pattern ipv4 / vxlan / ipv6 / end
	actions pf / queue index 2 / end

The items following the tunnel item are therefore tagged as "inner".
The inner items are extracted from the parser results, which are set
by the soft parser.
So far only the VXLAN tunnel is supported. Limited by the soft parser
area, only the Ethernet and VLAN headers inside the tunnel can be used
for flow distribution. IPv4, IPv6, UDP and TCP inside the tunnel can
be detected for flow distribution via user-defined FAF bits set by the
soft parser.
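
For example, the inner Ethernet header can also be matched
(illustrative command; the address value is a placeholder):

flow create 0 ingress pattern eth / ipv4 / udp / vxlan /
	eth dst is 02:00:00:00:00:01 / end actions pf / queue index 3 / end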

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 587 +++++++++++++++++++++++++++++----
 1 file changed, 519 insertions(+), 68 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 3530417a29..d02859fea7 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -58,6 +58,11 @@ struct dpaa2_dev_flow {
 	struct dpni_fs_action_cfg fs_action_cfg;
 };
 
+struct rte_dpaa2_flow_item {
+	struct rte_flow_item generic_item;
+	int in_tunnel;
+};
+
 static const
 enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_END,
@@ -1935,10 +1940,203 @@ dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+dpaa2_configure_flow_tunnel_eth(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_eth *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+			pattern->mask : &dpaa2_flow_item_eth_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec)
+		return 0;
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH)) {
+		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
+
+		return -EINVAL;
+	}
+
+	if (memcmp((const char *)&mask->src,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		/*SRC[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR0_OFFSET,
+			1, &spec->src.addr_bytes[0],
+			&mask->src.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[1:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR1_OFFSET,
+			2, &spec->src.addr_bytes[1],
+			&mask->src.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[3:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR3_OFFSET,
+			1, &spec->src.addr_bytes[3],
+			&mask->src.addr_bytes[3],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[4:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR4_OFFSET,
+			2, &spec->src.addr_bytes[4],
+			&mask->src.addr_bytes[4],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		/*SRC[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR0_OFFSET,
+			1, &spec->src.addr_bytes[0],
+			&mask->src.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[1:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR1_OFFSET,
+			2, &spec->src.addr_bytes[1],
+			&mask->src.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[3:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR3_OFFSET,
+			1, &spec->src.addr_bytes[3],
+			&mask->src.addr_bytes[3],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[4:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR4_OFFSET,
+			2, &spec->src.addr_bytes[4],
+			&mask->src.addr_bytes[4],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->dst,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		/*DST[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR0_OFFSET,
+			1, &spec->dst.addr_bytes[0],
+			&mask->dst.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[1:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR1_OFFSET,
+			1, &spec->dst.addr_bytes[1],
+			&mask->dst.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[2:3]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR2_OFFSET,
+			3, &spec->dst.addr_bytes[2],
+			&mask->dst.addr_bytes[2],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[5:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR5_OFFSET,
+			1, &spec->dst.addr_bytes[5],
+			&mask->dst.addr_bytes[5],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		/*DST[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR0_OFFSET,
+			1, &spec->dst.addr_bytes[0],
+			&mask->dst.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[1:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR1_OFFSET,
+			1, &spec->dst.addr_bytes[1],
+			&mask->dst.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[2:3]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR2_OFFSET,
+			3, &spec->dst.addr_bytes[2],
+			&mask->dst.addr_bytes[2],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[5:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR5_OFFSET,
+			1, &spec->dst.addr_bytes[5],
+			&mask->dst.addr_bytes[5],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->type,
+		zero_cmp, sizeof(rte_be16_t))) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TYPE_OFFSET,
+			sizeof(rte_be16_t), &spec->type, &mask->type,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TYPE_OFFSET,
+			sizeof(rte_be16_t), &spec->type, &mask->type,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -1948,6 +2146,13 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 	const struct rte_flow_item_eth *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	if (dpaa2_pattern->in_tunnel) {
+		return dpaa2_configure_flow_tunnel_eth(flow,
+				dev, attr, pattern, device_configured);
+	}
 
 	group = attr->group;
 
@@ -2041,10 +2246,81 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
+dpaa2_configure_flow_tunnel_vlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_vlan *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_vlan_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAFE_VXLAN_IN_VLAN_FRAM,
+				DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAFE_VXLAN_IN_VLAN_FRAM,
+				DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
+
+		return -EINVAL;
+	}
+
+	if (!mask->tci)
+		return 0;
+
+	ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TCI_OFFSET,
+			sizeof(rte_be16_t), &spec->tci, &mask->tci,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TCI_OFFSET,
+			sizeof(rte_be16_t), &spec->tci, &mask->tci,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2053,6 +2329,13 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_vlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	if (dpaa2_pattern->in_tunnel) {
+		return dpaa2_configure_flow_tunnel_vlan(flow,
+				dev, attr, pattern, device_configured);
+	}
 
 	group = attr->group;
 
@@ -2112,7 +2395,7 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 static int
 dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
+			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
 			  const struct rte_flow_action actions[] __rte_unused,
 			  struct rte_flow_error *error __rte_unused,
 			  int *device_configured)
@@ -2123,6 +2406,7 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	const void *key, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int size;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2131,6 +2415,26 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	mask_ipv4 = pattern->mask ?
 		    pattern->mask : &dpaa2_flow_item_ipv4_mask;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec_ipv4) {
+			DPAA2_PMD_ERR("Tunnel-IPv4 distribution not support");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV4_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV4_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
@@ -2229,7 +2533,7 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 static int
 dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
+			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
 			  const struct rte_flow_action actions[] __rte_unused,
 			  struct rte_flow_error *error __rte_unused,
 			  int *device_configured)
@@ -2241,6 +2545,7 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
 	int size;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2252,6 +2557,26 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec_ipv6) {
+			DPAA2_PMD_ERR("Tunnel-IPv6 distribution not support");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV6_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV6_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
 					 DPAA2_FLOW_QOS_TYPE, group,
 					 &local_cfg);
@@ -2348,7 +2673,7 @@ static int
 dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2357,6 +2682,7 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_icmp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2369,6 +2695,11 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ICMP distribution not support");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_ICMP_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2434,7 +2765,7 @@ static int
 dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2443,6 +2774,7 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_udp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2455,6 +2787,26 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec) {
+			DPAA2_PMD_ERR("Tunnel-UDP distribution not support");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_UDP_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_UDP_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_UDP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2520,7 +2872,7 @@ static int
 dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2529,6 +2881,7 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_tcp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2541,6 +2894,26 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec) {
+			DPAA2_PMD_ERR("Tunnel-TCP distribution not support");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_TCP_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_TCP_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_TCP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2606,7 +2979,7 @@ static int
 dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2615,6 +2988,7 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_sctp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2627,6 +3001,11 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-SCTP distribution not support");
+		return -ENOTSUP;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_SCTP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2692,7 +3071,7 @@ static int
 dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2701,6 +3080,7 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_gre *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2713,6 +3093,11 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-GRE distribution not support");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_GRE_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2763,7 +3148,7 @@ static int
 dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2772,6 +3157,7 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_vxlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2784,6 +3170,11 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-VXLAN distribution not support");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_VXLAN_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2847,18 +3238,19 @@ static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const struct rte_flow_item_raw *spec = pattern->spec;
-	const struct rte_flow_item_raw *mask = pattern->mask;
 	int local_cfg = 0, ret;
 	uint32_t group;
 	struct dpaa2_key_extract *qos_key_extract;
 	struct dpaa2_key_extract *tc_key_extract;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
+	const struct rte_flow_item_raw *spec = pattern->spec;
+	const struct rte_flow_item_raw *mask = pattern->mask;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
@@ -3302,6 +3694,45 @@ dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_item_convert(const struct rte_flow_item pattern[],
+			struct rte_dpaa2_flow_item **dpaa2_pattern)
+{
+	struct rte_dpaa2_flow_item *new_pattern;
+	int num = 0, tunnel_start = 0;
+
+	while (1) {
+		num++;
+		if (pattern[num].type == RTE_FLOW_ITEM_TYPE_END)
+			break;
+	}
+
+	new_pattern = rte_malloc(NULL, sizeof(struct rte_dpaa2_flow_item) * num,
+				 RTE_CACHE_LINE_SIZE);
+	if (!new_pattern) {
+		DPAA2_PMD_ERR("Failed to alloc %d flow items", num);
+		return -ENOMEM;
+	}
+
+	num = 0;
+	while (pattern[num].type != RTE_FLOW_ITEM_TYPE_END) {
+		memcpy(&new_pattern[num].generic_item, &pattern[num],
+		       sizeof(struct rte_flow_item));
+		new_pattern[num].in_tunnel = 0;
+
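+		/* Items after the first VXLAN item are matched as inner headers. */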
+		if (pattern[num].type == RTE_FLOW_ITEM_TYPE_VXLAN)
+			tunnel_start = 1;
+		else if (tunnel_start)
+			new_pattern[num].in_tunnel = 1;
+		num++;
+	}
+
+	new_pattern[num].generic_item.type = RTE_FLOW_ITEM_TYPE_END;
+	*dpaa2_pattern = new_pattern;
+
+	return 0;
+}
+
 static int
 dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3318,6 +3749,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	uint16_t dist_size, key_size;
 	struct dpaa2_key_extract *qos_key_extract;
 	struct dpaa2_key_extract *tc_key_extract;
+	struct rte_dpaa2_flow_item *dpaa2_pattern = NULL;
 
 	ret = dpaa2_flow_verify_attr(priv, attr);
 	if (ret)
@@ -3327,107 +3759,121 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	if (ret)
 		return ret;
 
+	ret = dpaa2_flow_item_convert(pattern, &dpaa2_pattern);
+	if (ret)
+		return ret;
+
 	/* Parse pattern list to get the matching parameters */
 	while (!end_of_list) {
 		switch (pattern[i].type) {
 		case RTE_FLOW_ITEM_TYPE_ETH:
-			ret = dpaa2_configure_flow_eth(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_eth(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ETH flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_VLAN:
-			ret = dpaa2_configure_flow_vlan(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_vlan(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("vLan flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
-			ret = dpaa2_configure_flow_ipv4(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_ipv4(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV4 flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
-			ret = dpaa2_configure_flow_ipv6(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_ipv6(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV6 flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_ICMP:
-			ret = dpaa2_configure_flow_icmp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_icmp(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ICMP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_UDP:
-			ret = dpaa2_configure_flow_udp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_udp(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("UDP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
-			ret = dpaa2_configure_flow_tcp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_tcp(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("TCP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_SCTP:
-			ret = dpaa2_configure_flow_sctp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_sctp(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("SCTP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
-			ret = dpaa2_configure_flow_gre(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_gre(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("GRE flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = dpaa2_configure_flow_vxlan(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_vxlan(flow, dev, attr,
+							 &dpaa2_pattern[i],
+							 actions, error,
+							 &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("VXLAN flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
-			ret = dpaa2_configure_flow_raw(flow,
-					dev, attr, &pattern[i],
-					actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_raw(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("RAW flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_END:
@@ -3459,7 +3905,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			ret = dpaa2_configure_flow_fs_action(priv, flow,
 							     &actions[j]);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			/* Configure FS table first*/
 			dist_size = priv->nb_rx_queues / priv->num_rx_tc;
@@ -3469,20 +3915,20 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 								   dist_size,
 								   false);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			/* Configure QoS table then.*/
 			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
 				ret = dpaa2_configure_qos_table(priv, false);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (priv->num_rx_tc > 1) {
 				ret = dpaa2_flow_add_qos_rule(priv, flow);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (flow->tc_index >= priv->fs_entries) {
@@ -3493,7 +3939,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 
 			ret = dpaa2_flow_add_fs_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
@@ -3505,7 +3951,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			if (ret < 0) {
 				DPAA2_PMD_ERR("TC[%d] distset RSS failed",
 					      flow->tc_id);
-				return ret;
+				goto end_flow_set;
 			}
 
 			dist_size = rss_conf->queue_num;
@@ -3515,22 +3961,22 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 								   dist_size,
 								   true);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
 				ret = dpaa2_configure_qos_table(priv, true);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			ret = dpaa2_flow_add_qos_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			ret = dpaa2_flow_add_fs_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			break;
 		case RTE_FLOW_ACTION_TYPE_PF:
@@ -3547,6 +3993,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 		j++;
 	}
 
+end_flow_set:
 	if (!ret) {
 		/* New rules are inserted. */
 		if (!curr) {
@@ -3557,6 +4004,10 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			LIST_INSERT_AFTER(curr, flow, next);
 		}
 	}
+
+	if (dpaa2_pattern)
+		rte_free(dpaa2_pattern);
+
 	return ret;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 29/42] net/dpaa2: eCPRI support by parser result
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (27 preceding siblings ...)
  2024-10-22 19:12         ` [v4 28/42] net/dpaa2: protocol inside tunnel distribution vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 30/42] net/dpaa2: add GTP flow support vanshika.shukla
                           ` (13 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

The soft parser extracts the eCPRI header and message into specified
areas of the parser result, and flows are then classified according to
the eCPRI fields extracted from the parser result. This implementation
supports eCPRI over Ethernet/VLAN/UDP and various type/message
combinations.
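
A minimal usage sketch, not part of this patch (the PC_ID value, queue
index and port_id below are illustrative assumptions): match eCPRI
IQ-data messages by PC_ID and steer them to a queue, from any function
that runs after the port is configured.

	uint16_t port_id = 0; /* assumed DPAA2 port */
	struct rte_flow_item_ecpri ecpri_spec = {
		.hdr.common.type = RTE_ECPRI_MSG_TYPE_IQ_DATA,
		.hdr.type0.pc_id = RTE_BE16(0x1234),
	};
	struct rte_flow_item_ecpri ecpri_mask = {
		.hdr.common.type = 0xff,
		.hdr.type0.pc_id = RTE_BE16(0xffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ECPRI,
		  .spec = &ecpri_spec, .mask = &ecpri_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_attr flow_attr = { .ingress = 1 };
	struct rte_flow_error flow_err;
	struct rte_flow *f = rte_flow_create(port_id, &flow_attr,
			pattern, actions, &flow_err);

	if (!f)
		printf("eCPRI flow not created: %s\n",
			flow_err.message ? flow_err.message : "unknown");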

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h |  18 ++
 drivers/net/dpaa2/dpaa2_flow.c   | 348 ++++++++++++++++++++++++++++++-
 2 files changed, 365 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index aeddcfdfa9..eaa653d266 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -179,6 +179,8 @@ enum dpaa2_rx_faf_offset {
 	FAFE_VXLAN_IN_IPV6_FRAM = 2,
 	FAFE_VXLAN_IN_UDP_FRAM = 3,
 	FAFE_VXLAN_IN_TCP_FRAM = 4,
+
+	FAFE_ECPRI_FRAM = 7,
 	/* Set by SP end*/
 
 	FAF_GTP_PRIMED_FRAM = 1 + DPAA2_FAFE_PSR_SIZE * 8,
@@ -207,6 +209,17 @@ enum dpaa2_rx_faf_offset {
 	FAF_ESP_FRAM = 89 + DPAA2_FAFE_PSR_SIZE * 8,
 };
 
+enum dpaa2_ecpri_fafe_type {
+	ECPRI_FAFE_TYPE_0 = (8 - FAFE_ECPRI_FRAM),
+	ECPRI_FAFE_TYPE_1 = (8 - FAFE_ECPRI_FRAM) | (1 << 1),
+	ECPRI_FAFE_TYPE_2 = (8 - FAFE_ECPRI_FRAM) | (2 << 1),
+	ECPRI_FAFE_TYPE_3 = (8 - FAFE_ECPRI_FRAM) | (3 << 1),
+	ECPRI_FAFE_TYPE_4 = (8 - FAFE_ECPRI_FRAM) | (4 << 1),
+	ECPRI_FAFE_TYPE_5 = (8 - FAFE_ECPRI_FRAM) | (5 << 1),
+	ECPRI_FAFE_TYPE_6 = (8 - FAFE_ECPRI_FRAM) | (6 << 1),
+	ECPRI_FAFE_TYPE_7 = (8 - FAFE_ECPRI_FRAM) | (7 << 1)
+};
+
 #define DPAA2_PR_ETH_OFF_OFFSET 19
 #define DPAA2_PR_TCI_OFF_OFFSET 21
 #define DPAA2_PR_LAST_ETYPE_OFFSET 23
@@ -236,6 +249,11 @@ enum dpaa2_rx_faf_offset {
 #define DPAA2_VXLAN_IN_TYPE_OFFSET 46
 /* Set by SP for vxlan distribution end*/
 
+/* ECPRI shares SP context with VXLAN*/
+#define DPAA2_ECPRI_MSG_OFFSET DPAA2_VXLAN_VNI_OFFSET
+
+#define DPAA2_ECPRI_MAX_EXTRACT_NB 8
+
 struct ipv4_sd_addr_extract_rule {
 	uint32_t ipv4_src;
 	uint32_t ipv4_dst;
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index d02859fea7..0fdf8f14b8 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -152,6 +152,13 @@ static const struct rte_flow_item_vxlan dpaa2_flow_item_vxlan_mask = {
 	.flags = 0xff,
 	.vni = "\xff\xff\xff",
 };
+
+static const struct rte_flow_item_ecpri dpaa2_flow_item_ecpri_mask = {
+	.hdr.common.type = 0xff,
+	.hdr.dummy[0] = RTE_BE32(0xffffffff),
+	.hdr.dummy[1] = RTE_BE32(0xffffffff),
+	.hdr.dummy[2] = RTE_BE32(0xffffffff),
+};
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -1552,6 +1559,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_vxlan_mask;
 		size = sizeof(struct rte_flow_item_vxlan);
 		break;
+	case RTE_FLOW_ITEM_TYPE_ECPRI:
+		mask_support = (const char *)&dpaa2_flow_item_ecpri_mask;
+		size = sizeof(struct rte_flow_item_ecpri);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -3234,6 +3245,330 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ecpri *spec, *mask;
+	struct rte_flow_item_ecpri local_mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+	uint8_t extract_nb = 0, i;
+	uint64_t rule_data[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint64_t mask_data[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint8_t extract_size[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint8_t extract_off[DPAA2_ECPRI_MAX_EXTRACT_NB];
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	if (pattern->mask) {
+		memcpy(&local_mask, pattern->mask,
+			sizeof(struct rte_flow_item_ecpri));
+		local_mask.hdr.common.u32 =
+			rte_be_to_cpu_32(local_mask.hdr.common.u32);
+		mask = &local_mask;
+	} else {
+		mask = &dpaa2_flow_item_ecpri_mask;
+	}
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ECPRI distribution not support");
+		return -ENOTSUP;
+	}
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAFE_ECPRI_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAFE_ECPRI_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ECPRI)) {
+		DPAA2_PMD_WARN("Extract field(s) of ECPRI not support.");
+
+		return -1;
+	}
+
+	if (mask->hdr.common.type != 0xff) {
+		DPAA2_PMD_WARN("ECPRI header type not specified.");
+
+		return -1;
+	}
+
+	if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_IQ_DATA) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_0;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type0.pc_id) {
+			rule_data[extract_nb] = spec->hdr.type0.pc_id;
+			mask_data[extract_nb] = mask->hdr.type0.pc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_iq_data, pc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type0.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type0.seq_id;
+			mask_data[extract_nb] = mask->hdr.type0.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_iq_data, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_BIT_SEQ) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_1;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type1.pc_id) {
+			rule_data[extract_nb] = spec->hdr.type1.pc_id;
+			mask_data[extract_nb] = mask->hdr.type1.pc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_bit_seq, pc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type1.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type1.seq_id;
+			mask_data[extract_nb] = mask->hdr.type1.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_bit_seq, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RTC_CTRL) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_2;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type2.rtc_id) {
+			rule_data[extract_nb] = spec->hdr.type2.rtc_id;
+			mask_data[extract_nb] = mask->hdr.type2.rtc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_rtc_ctrl, rtc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type2.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type2.seq_id;
+			mask_data[extract_nb] = mask->hdr.type2.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_rtc_ctrl, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_GEN_DATA) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_3;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type3.pc_id || mask->hdr.type3.seq_id)
+			DPAA2_PMD_WARN("Extract type3 msg not support.");
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RM_ACC) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_4;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type4.rma_id) {
+			rule_data[extract_nb] = spec->hdr.type4.rma_id;
+			mask_data[extract_nb] = mask->hdr.type4.rma_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET + 0;
+				/** The compiler cannot take the address
+				 * of a bit-field, so
+				 * offsetof(struct rte_ecpri_msg_rm_access,
+				 * rma_id) is not usable here.
+				 */
+			extract_nb++;
+		}
+		if (mask->hdr.type4.ele_id) {
+			rule_data[extract_nb] = spec->hdr.type4.ele_id;
+			mask_data[extract_nb] = mask->hdr.type4.ele_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET + 2;
+				/** The compiler cannot take the address
+				 * of a bit-field, so
+				 * offsetof(struct rte_ecpri_msg_rm_access,
+				 * ele_id) is not usable here.
+				 */
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_DLY_MSR) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_5;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type5.msr_id) {
+			rule_data[extract_nb] = spec->hdr.type5.msr_id;
+			mask_data[extract_nb] = mask->hdr.type5.msr_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_delay_measure,
+					msr_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type5.act_type) {
+			rule_data[extract_nb] = spec->hdr.type5.act_type;
+			mask_data[extract_nb] = mask->hdr.type5.act_type;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_delay_measure,
+					act_type);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RMT_RST) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_6;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type6.rst_id) {
+			rule_data[extract_nb] = spec->hdr.type6.rst_id;
+			mask_data[extract_nb] = mask->hdr.type6.rst_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_remote_reset,
+					rst_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type6.rst_op) {
+			rule_data[extract_nb] = spec->hdr.type6.rst_op;
+			mask_data[extract_nb] = mask->hdr.type6.rst_op;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_remote_reset,
+					rst_op);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_EVT_IND) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_7;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type7.evt_id) {
+			rule_data[extract_nb] = spec->hdr.type7.evt_id;
+			mask_data[extract_nb] = mask->hdr.type7.evt_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					evt_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.evt_type) {
+			rule_data[extract_nb] = spec->hdr.type7.evt_type;
+			mask_data[extract_nb] = mask->hdr.type7.evt_type;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					evt_type);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.seq) {
+			rule_data[extract_nb] = spec->hdr.type7.seq;
+			mask_data[extract_nb] = mask->hdr.type7.seq;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					seq);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.number) {
+			rule_data[extract_nb] = spec->hdr.type7.number;
+			mask_data[extract_nb] = mask->hdr.type7.number;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					number);
+			extract_nb++;
+		}
+	} else {
+		DPAA2_PMD_ERR("Invalid ecpri header type(%d)",
+				spec->hdr.common.type);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < extract_nb; i++) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			extract_off[i],
+			extract_size[i], &rule_data[i], &mask_data[i],
+			priv, group,
+			device_configured,
+			DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			extract_off[i],
+			extract_size[i], &rule_data[i], &mask_data[i],
+			priv, group,
+			device_configured,
+			DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3866,6 +4201,16 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_ECPRI:
+			ret = dpaa2_configure_flow_ecpri(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("ECPRI flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow, dev, attr,
 						       &dpaa2_pattern[i],
@@ -3880,7 +4225,8 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			end_of_list = 1;
 			break; /*End of List*/
 		default:
-			DPAA2_PMD_ERR("Invalid action type");
+			DPAA2_PMD_ERR("Invalid flow item[%d] type(%d)",
+				i, pattern[i].type);
 			ret = -ENOTSUP;
 			break;
 		}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 30/42] net/dpaa2: add GTP flow support
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (28 preceding siblings ...)
  2024-10-22 19:12         ` [v4 29/42] net/dpaa2: eCPRI support by parser result vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 31/42] net/dpaa2: check if Soft parser is loaded vanshika.shukla
                           ` (12 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Configure GTP flows to support RSS and FS distribution. GTP frames are
identified by checking the FAF (frame attribute flags) in the parser
result.
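
A short sketch of the user-visible effect (the TEID value is an
illustrative assumption): distribute GTP traffic by TEID. The item can
be paired with a QUEUE or RSS action like any other rte_flow rule.

	struct rte_flow_item_gtp gtp_spec = {
		.teid = RTE_BE32(0x10),
	};
	struct rte_flow_item_gtp gtp_mask = {
		.teid = RTE_BE32(0xffffffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_GTP,
		  .spec = &gtp_spec, .mask = &gtp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};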

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 172 ++++++++++++++++++++++++++-------
 1 file changed, 138 insertions(+), 34 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 0fdf8f14b8..c7c3681005 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -37,7 +37,7 @@ enum dpaa2_flow_dist_type {
 
 #define DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT	16
 #define DPAA2_FLOW_MAX_KEY_SIZE			16
-
+#define DPAA2_PROT_FIELD_STRING_SIZE		16
 #define VXLAN_HF_VNI 0x08
 
 struct dpaa2_dev_flow {
@@ -75,6 +75,7 @@ enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_TCP,
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
+	RTE_FLOW_ITEM_TYPE_GTP
 };
 
 static const
@@ -159,6 +160,11 @@ static const struct rte_flow_item_ecpri dpaa2_flow_item_ecpri_mask = {
 	.hdr.dummy[1] = RTE_BE32(0xffffffff),
 	.hdr.dummy[2] = RTE_BE32(0xffffffff),
 };
+
+static const struct rte_flow_item_gtp dpaa2_flow_item_gtp_mask = {
+	.teid = RTE_BE32(0xffffffff),
+};
+
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -234,6 +240,12 @@ dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 			strcat(string, ".type");
 		else
 			strcat(string, ".unknown field");
+	} else if (prot == NET_PROT_GTP) {
+		rte_strscpy(string, "gtp", DPAA2_PROT_FIELD_STRING_SIZE);
+		if (field == NH_FLD_GTP_TEID)
+			strcat(string, ".teid");
+		else
+			strcat(string, ".unknown field");
 	} else {
 		strcpy(string, "unknown protocol");
 	}
@@ -1563,6 +1575,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_ecpri_mask;
 		size = sizeof(struct rte_flow_item_ecpri);
 		break;
+	case RTE_FLOW_ITEM_TYPE_GTP:
+		mask_support = (const char *)&dpaa2_flow_item_gtp_mask;
+		size = sizeof(struct rte_flow_item_gtp);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -3569,6 +3585,84 @@ dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_gtp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_gtp *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_gtp_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-GTP distribution not support");
+		return -ENOTSUP;
+	}
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GTP_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GTP_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GTP)) {
+		DPAA2_PMD_WARN("Extract field(s) of GTP not support.");
+
+		return -1;
+	}
+
+	if (!mask->teid)
+		return 0;
+
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GTP,
+			NH_FLD_GTP_TEID, &spec->teid,
+			&mask->teid, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GTP,
+			NH_FLD_GTP_TEID, &spec->teid,
+			&mask->teid, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -4103,9 +4197,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 		switch (pattern[i].type) {
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			ret = dpaa2_configure_flow_eth(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ETH flow config failed!");
 				goto end_flow_set;
@@ -4113,9 +4207,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_VLAN:
 			ret = dpaa2_configure_flow_vlan(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("vLan flow config failed!");
 				goto end_flow_set;
@@ -4123,9 +4217,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			ret = dpaa2_configure_flow_ipv4(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV4 flow config failed!");
 				goto end_flow_set;
@@ -4133,9 +4227,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
 			ret = dpaa2_configure_flow_ipv6(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV6 flow config failed!");
 				goto end_flow_set;
@@ -4143,9 +4237,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_ICMP:
 			ret = dpaa2_configure_flow_icmp(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ICMP flow config failed!");
 				goto end_flow_set;
@@ -4153,9 +4247,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			ret = dpaa2_configure_flow_udp(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("UDP flow config failed!");
 				goto end_flow_set;
@@ -4163,9 +4257,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			ret = dpaa2_configure_flow_tcp(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("TCP flow config failed!");
 				goto end_flow_set;
@@ -4173,9 +4267,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_SCTP:
 			ret = dpaa2_configure_flow_sctp(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("SCTP flow config failed!");
 				goto end_flow_set;
@@ -4183,9 +4277,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
 			ret = dpaa2_configure_flow_gre(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("GRE flow config failed!");
 				goto end_flow_set;
@@ -4193,9 +4287,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			ret = dpaa2_configure_flow_vxlan(flow, dev, attr,
-							 &dpaa2_pattern[i],
-							 actions, error,
-							 &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("VXLAN flow config failed!");
 				goto end_flow_set;
@@ -4211,11 +4305,21 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_GTP:
+			ret = dpaa2_configure_flow_gtp(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("GTP flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("RAW flow config failed!");
 				goto end_flow_set;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 31/42] net/dpaa2: check if Soft parser is loaded
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (29 preceding siblings ...)
  2024-10-22 19:12         ` [v4 30/42] net/dpaa2: add GTP flow support vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 32/42] net/dpaa2: soft parser flow verification vanshika.shukla
                           ` (11 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang, Vanshika Shukla

From: Jun Yang <jun.yang@nxp.com>

Access the soft parser instruction area in the WRIOP parser CCSR space
to check whether a soft parser is loaded.
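
A hypothetical caller-side sketch (setup_ecpri_flows() is an
illustrative name, not a function added by this series): gate
soft-parser-dependent flow setup on the detection result.

	if (dpaa2_soft_parser_loaded() > 0)
		setup_ecpri_flows(port_id); /* eCPRI matching needs the SP */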

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c |  4 ++
 drivers/net/dpaa2/dpaa2_ethdev.h |  2 +
 drivers/net/dpaa2/dpaa2_flow.c   | 88 ++++++++++++++++++++++++++++++++
 3 files changed, 94 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 187b648799..da0ea57ed2 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2861,6 +2861,10 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 			return ret;
 		}
 	}
+
+	ret = dpaa2_soft_parser_loaded();
+	if (ret > 0)
+		DPAA2_PMD_INFO("soft parser is loaded");
 	DPAA2_PMD_INFO("%s: netdev created, connected to %s",
 		eth_dev->data->name, dpaa2_dev->ep_name);
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index eaa653d266..db918725a7 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -479,6 +479,8 @@ int dpaa2_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
 
 int dpaa2_dev_recycle_config(struct rte_eth_dev *eth_dev);
 int dpaa2_dev_recycle_deconfig(struct rte_eth_dev *eth_dev);
+int dpaa2_soft_parser_loaded(void);
+
 int dpaa2_dev_recycle_qp_setup(struct rte_dpaa2_device *dpaa2_dev,
 	uint16_t qidx, uint64_t cntx,
 	eth_rx_burst_t tx_lpbk, eth_tx_burst_t rx_lpbk,
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index c7c3681005..58ea0f578f 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -9,6 +9,7 @@
 #include <string.h>
 #include <unistd.h>
 #include <stdarg.h>
+#include <sys/mman.h>
 
 #include <rte_ethdev.h>
 #include <rte_log.h>
@@ -24,6 +25,7 @@
 
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
+static int dpaa2_sp_loaded = -1;
 
 enum dpaa2_flow_entry_size {
 	DPAA2_FLOW_ENTRY_MIN_SIZE = (DPNI_MAX_KEY_SIZE / 2),
@@ -397,6 +399,92 @@ dpaa2_flow_fs_entry_log(const char *log_info,
 	DPAA2_FLOW_DUMP("\r\n");
 }
 
+/** For LX2160A, LS2088A and LS1088A*/
+#define WRIOP_CCSR_BASE 0x8b80000
+#define WRIOP_CCSR_CTLU_OFFSET 0
+#define WRIOP_CCSR_CTLU_PARSER_OFFSET 0
+#define WRIOP_CCSR_CTLU_PARSER_INGRESS_OFFSET 0
+
+#define WRIOP_INGRESS_PARSER_PHY \
+	(WRIOP_CCSR_BASE + WRIOP_CCSR_CTLU_OFFSET + \
+	WRIOP_CCSR_CTLU_PARSER_OFFSET + \
+	WRIOP_CCSR_CTLU_PARSER_INGRESS_OFFSET)
+
+struct dpaa2_parser_ccsr {
+	uint32_t psr_cfg;
+	uint32_t psr_idle;
+	uint32_t psr_pclm;
+	uint8_t psr_ver_min;
+	uint8_t psr_ver_maj;
+	uint8_t psr_id1_l;
+	uint8_t psr_id1_h;
+	uint32_t psr_rev2;
+	uint8_t rsv[0x2c];
+	uint8_t sp_ins[4032];
+};
+
+int
+dpaa2_soft_parser_loaded(void)
+{
+	int fd, i, ret = 0;
+	struct dpaa2_parser_ccsr *parser_ccsr = NULL;
+
+	dpaa2_flow_control_log = getenv("DPAA2_FLOW_CONTROL_LOG");
+
+	if (dpaa2_sp_loaded >= 0)
+		return dpaa2_sp_loaded;
+
+	fd = open("/dev/mem", O_RDWR | O_SYNC);
+	if (fd < 0) {
+		DPAA2_PMD_ERR("open \"/dev/mem\" ERROR(%d)", fd);
+		ret = fd;
+		goto exit;
+	}
+
+	parser_ccsr = mmap(NULL, sizeof(struct dpaa2_parser_ccsr),
+		PROT_READ | PROT_WRITE, MAP_SHARED, fd,
+		WRIOP_INGRESS_PARSER_PHY);
+	if (parser_ccsr == MAP_FAILED) {
+		/* mmap() returns MAP_FAILED, not NULL, on error */
+		parser_ccsr = NULL;
+		DPAA2_PMD_ERR("Map 0x%" PRIx64 "(size=0x%x) failed",
+			(uint64_t)WRIOP_INGRESS_PARSER_PHY,
+			(uint32_t)sizeof(struct dpaa2_parser_ccsr));
+		ret = -ENOBUFS;
+		goto exit;
+	}
+
+	DPAA2_PMD_INFO("Parser ID:0x%02x%02x, Rev:major(%02x), minor(%02x)",
+		parser_ccsr->psr_id1_h, parser_ccsr->psr_id1_l,
+		parser_ccsr->psr_ver_maj, parser_ccsr->psr_ver_min);
+
+	if (dpaa2_flow_control_log) {
+		for (i = 0; i < 64; i++) {
+			DPAA2_FLOW_DUMP("%02x ",
+				parser_ccsr->sp_ins[i]);
+			if (!((i + 1) % 16))
+				DPAA2_FLOW_DUMP("\r\n");
+		}
+	}
+
+	for (i = 0; i < 16; i++) {
+		if (parser_ccsr->sp_ins[i]) {
+			dpaa2_sp_loaded = 1;
+			break;
+		}
+	}
+	if (dpaa2_sp_loaded < 0)
+		dpaa2_sp_loaded = 0;
+
+	ret = dpaa2_sp_loaded;
+
+exit:
+	if (parser_ccsr)
+		munmap(parser_ccsr, sizeof(struct dpaa2_parser_ccsr));
+	if (fd >= 0)
+		close(fd);
+
+	return ret;
+}
+
 static int
 dpaa2_flow_ip_address_extract(enum net_prot prot,
 	uint32_t field)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 32/42] net/dpaa2: soft parser flow verification
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (30 preceding siblings ...)
  2024-10-22 19:12         ` [v4 31/42] net/dpaa2: check if Soft parser is loaded vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 33/42] net/dpaa2: add flow support for IPsec AH and ESP vanshika.shukla
                           ` (10 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Add the flow item types supported by the soft parser to the pattern
verification list; these types are accepted only when a soft parser is
detected as loaded.
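
Illustrative effect, assuming flow_attr, pattern and actions are built
around an eCPRI or VXLAN item as in the earlier sketches: validation
now succeeds only when a soft parser has been detected.

	struct rte_flow_error flow_err;
	int rc = rte_flow_validate(port_id, &flow_attr, pattern,
			actions, &flow_err);

	if (rc)
		printf("pattern rejected: %s\n",
			flow_err.message ? flow_err.message : "unknown");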

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 84 +++++++++++++++++++++-------------
 1 file changed, 51 insertions(+), 33 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 58ea0f578f..018ffec266 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -66,7 +66,7 @@ struct rte_dpaa2_flow_item {
 };
 
 static const
-enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
+enum rte_flow_item_type dpaa2_hp_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_END,
 	RTE_FLOW_ITEM_TYPE_ETH,
 	RTE_FLOW_ITEM_TYPE_VLAN,
@@ -77,7 +77,14 @@ enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_TCP,
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
-	RTE_FLOW_ITEM_TYPE_GTP
+	RTE_FLOW_ITEM_TYPE_GTP,
+	RTE_FLOW_ITEM_TYPE_RAW
+};
+
+static const
+enum rte_flow_item_type dpaa2_sp_supported_pattern_type[] = {
+	RTE_FLOW_ITEM_TYPE_VXLAN,
+	RTE_FLOW_ITEM_TYPE_ECPRI
 };
 
 static const
@@ -4556,16 +4563,17 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
 	int ret = 0;
 
 	if (unlikely(attr->group >= dpni_attr->num_rx_tcs)) {
-		DPAA2_PMD_ERR("Priority group is out of range");
+		DPAA2_PMD_ERR("Group/TC(%d) is out of range(%d)",
+			attr->group, dpni_attr->num_rx_tcs);
 		ret = -ENOTSUP;
 	}
 	if (unlikely(attr->priority >= dpni_attr->fs_entries)) {
-		DPAA2_PMD_ERR("Priority within the group is out of range");
+		DPAA2_PMD_ERR("Priority(%d) within group is out of range(%d)",
+			attr->priority, dpni_attr->fs_entries);
 		ret = -ENOTSUP;
 	}
 	if (unlikely(attr->egress)) {
-		DPAA2_PMD_ERR(
-			"Flow configuration is not supported on egress side");
+		DPAA2_PMD_ERR("Egress flow configuration is not supported");
 		ret = -ENOTSUP;
 	}
 	if (unlikely(!attr->ingress)) {
@@ -4580,27 +4588,41 @@ dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[])
 {
 	unsigned int i, j, is_found = 0;
 	int ret = 0;
+	const enum rte_flow_item_type *hp_supported;
+	const enum rte_flow_item_type *sp_supported;
+	uint64_t hp_supported_num, sp_supported_num;
+
+	hp_supported = dpaa2_hp_supported_pattern_type;
+	hp_supported_num = RTE_DIM(dpaa2_hp_supported_pattern_type);
+
+	sp_supported = dpaa2_sp_supported_pattern_type;
+	sp_supported_num = RTE_DIM(dpaa2_sp_supported_pattern_type);
 
 	for (j = 0; pattern[j].type != RTE_FLOW_ITEM_TYPE_END; j++) {
-		for (i = 0; i < RTE_DIM(dpaa2_supported_pattern_type); i++) {
-			if (dpaa2_supported_pattern_type[i]
-					== pattern[j].type) {
+		is_found = 0;
+		for (i = 0; i < hp_supported_num; i++) {
+			if (hp_supported[i] == pattern[j].type) {
 				is_found = 1;
 				break;
 			}
 		}
+		if (is_found)
+			continue;
+		if (dpaa2_sp_loaded > 0) {
+			for (i = 0; i < sp_supported_num; i++) {
+				if (sp_supported[i] == pattern[j].type) {
+					is_found = 1;
+					break;
+				}
+			}
+		}
 		if (!is_found) {
+			DPAA2_PMD_WARN("Flow type(%d) not supported",
+				pattern[j].type);
 			ret = -ENOTSUP;
 			break;
 		}
 	}
-	/* Lets verify other combinations of given pattern rules */
-	for (j = 0; pattern[j].type != RTE_FLOW_ITEM_TYPE_END; j++) {
-		if (!pattern[j].spec) {
-			ret = -EINVAL;
-			break;
-		}
-	}
 
 	return ret;
 }
@@ -4647,43 +4669,39 @@ dpaa2_flow_validate(struct rte_eth_dev *dev,
 	memset(&dpni_attr, 0, sizeof(struct dpni_attr));
 	ret = dpni_get_attributes(dpni, CMD_PRI_LOW, token, &dpni_attr);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Failure to get dpni@%p attribute, err code  %d",
-			dpni, ret);
+		DPAA2_PMD_ERR("Get dpni@%d attribute failed(%d)",
+			priv->hw_id, ret);
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ATTR,
-			   flow_attr, "invalid");
+			RTE_FLOW_ERROR_TYPE_ATTR,
+			flow_attr, "invalid");
 		return ret;
 	}
 
 	/* Verify input attributes */
 	ret = dpaa2_dev_verify_attr(&dpni_attr, flow_attr);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid attributes are given");
+		DPAA2_PMD_ERR("Invalid attributes are given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ATTR,
-			   flow_attr, "invalid");
+			RTE_FLOW_ERROR_TYPE_ATTR,
+			flow_attr, "invalid");
 		goto not_valid_params;
 	}
 	/* Verify input pattern list */
 	ret = dpaa2_dev_verify_patterns(pattern);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid pattern list is given");
+		DPAA2_PMD_ERR("Invalid pattern list is given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ITEM,
-			   pattern, "invalid");
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			pattern, "invalid");
 		goto not_valid_params;
 	}
 	/* Verify input action list */
 	ret = dpaa2_dev_verify_actions(actions);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid action list is given");
+		DPAA2_PMD_ERR("Invalid action list is given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ACTION,
-			   actions, "invalid");
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			actions, "invalid");
 		goto not_valid_params;
 	}
 not_valid_params:
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 33/42] net/dpaa2: add flow support for IPsec AH and ESP
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (31 preceding siblings ...)
  2024-10-22 19:12         ` [v4 32/42] net/dpaa2: soft parser flow verification vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 34/42] net/dpaa2: fix memory corruption in TM vanshika.shukla
                           ` (9 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support AH/ESP flows matching on the SPI field.
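
A minimal sketch (the SPI value is an illustrative assumption): match
ESP packets by SPI.

	struct rte_flow_item_esp esp_spec = {
		.hdr.spi = RTE_BE32(0x100),
	};
	struct rte_flow_item_esp esp_mask = {
		.hdr.spi = RTE_BE32(0xffffffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_ESP,
		  .spec = &esp_spec, .mask = &esp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};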

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 528 ++++++++++++++++++++++++---------
 1 file changed, 385 insertions(+), 143 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 018ffec266..1605c0c584 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -78,6 +78,8 @@ enum rte_flow_item_type dpaa2_hp_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
 	RTE_FLOW_ITEM_TYPE_GTP,
+	RTE_FLOW_ITEM_TYPE_ESP,
+	RTE_FLOW_ITEM_TYPE_AH,
 	RTE_FLOW_ITEM_TYPE_RAW
 };
 
@@ -154,6 +156,17 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 	},
 };
 
+static const struct rte_flow_item_esp dpaa2_flow_item_esp_mask = {
+	.hdr = {
+		.spi = RTE_BE32(0xffffffff),
+		.seq = RTE_BE32(0xffffffff),
+	},
+};
+
+static const struct rte_flow_item_ah dpaa2_flow_item_ah_mask = {
+	.spi = RTE_BE32(0xffffffff),
+};
+
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
@@ -255,8 +268,16 @@ dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 			strcat(string, ".teid");
 		else
 			strcat(string, ".unknown field");
+	} else if (prot == NET_PROT_IPSEC_ESP) {
+		rte_strscpy(string, "esp", DPAA2_PROT_FIELD_STRING_SIZE);
+		if (field == NH_FLD_IPSEC_ESP_SPI)
+			strcat(string, ".spi");
+		else if (field == NH_FLD_IPSEC_ESP_SEQUENCE_NUM)
+			strcat(string, ".seq");
+		else
+			strcat(string, ".unknown field");
 	} else {
-		strcpy(string, "unknown protocol");
+		sprintf(string, "unknown protocol(%d)", prot);
 	}
 }
 
@@ -1654,6 +1675,14 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_tcp_mask;
 		size = sizeof(struct rte_flow_item_tcp);
 		break;
+	case RTE_FLOW_ITEM_TYPE_ESP:
+		mask_support = (const char *)&dpaa2_flow_item_esp_mask;
+		size = sizeof(struct rte_flow_item_esp);
+		break;
+	case RTE_FLOW_ITEM_TYPE_AH:
+		mask_support = (const char *)&dpaa2_flow_item_ah_mask;
+		size = sizeof(struct rte_flow_item_ah);
+		break;
 	case RTE_FLOW_ITEM_TYPE_SCTP:
 		mask_support = (const char *)&dpaa2_flow_item_sctp_mask;
 		size = sizeof(struct rte_flow_item_sctp);
@@ -1684,7 +1713,7 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask[i] = (mask[i] | mask_src[i]);
 
 	if (memcmp(mask, mask_support, size))
-		return -1;
+		return -ENOTSUP;
 
 	return 0;
 }
@@ -2088,11 +2117,12 @@ dpaa2_configure_flow_tunnel_eth(struct dpaa2_dev_flow *flow,
 	if (!spec)
 		return 0;
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp((const char *)&mask->src,
@@ -2304,11 +2334,12 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp((const char *)&mask->src,
@@ -2409,11 +2440,12 @@ dpaa2_configure_flow_tunnel_vlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (!mask->tci)
@@ -2471,14 +2503,14 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
@@ -2486,27 +2518,28 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-				       RTE_FLOW_ITEM_TYPE_VLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+			RTE_FLOW_ITEM_TYPE_VLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (!mask->tci)
 		return 0;
 
 	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
-					      NH_FLD_VLAN_TCI, &spec->tci,
-					      &mask->tci, sizeof(rte_be16_t),
-					      priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+			NH_FLD_VLAN_TCI, &spec->tci,
+			&mask->tci, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
-					      NH_FLD_VLAN_TCI, &spec->tci,
-					      &mask->tci, sizeof(rte_be16_t),
-					      priv, group, &local_cfg,
-					      DPAA2_FLOW_FS_TYPE);
+			NH_FLD_VLAN_TCI, &spec->tci,
+			&mask->tci, sizeof(rte_be16_t),
+			priv, group, &local_cfg,
+			DPAA2_FLOW_FS_TYPE);
 	if (ret)
 		return ret;
 
@@ -2515,12 +2548,13 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	int ret, local_cfg = 0;
 	uint32_t group;
@@ -2544,16 +2578,16 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV4_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV4_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV4_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV4_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
@@ -2562,13 +2596,13 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_index = attr->priority;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
-					 DPAA2_FLOW_QOS_TYPE, group,
-					 &local_cfg);
+			DPAA2_FLOW_QOS_TYPE, group,
+			&local_cfg);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
-					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
 	if (ret)
 		return ret;
 
@@ -2577,10 +2611,11 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
-				       RTE_FLOW_ITEM_TYPE_IPV4)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
+			RTE_FLOW_ITEM_TYPE_IPV4);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask_ipv4->hdr.src_addr) {
@@ -2589,18 +2624,18 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(rte_be32_t);
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV4_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV4_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2611,17 +2646,17 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(rte_be32_t);
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV4_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV4_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2632,18 +2667,18 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(uint8_t);
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2653,12 +2688,13 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 }
 
 static int
-dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	int ret, local_cfg = 0;
 	uint32_t group;
@@ -2686,27 +2722,27 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV6_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV6_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV6_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV6_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
-					 DPAA2_FLOW_QOS_TYPE, group,
-					 &local_cfg);
+			DPAA2_FLOW_QOS_TYPE, group,
+			&local_cfg);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
-					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
 	if (ret)
 		return ret;
 
@@ -2715,10 +2751,11 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
-				       RTE_FLOW_ITEM_TYPE_IPV6)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
+			RTE_FLOW_ITEM_TYPE_IPV6);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp((const char *)&mask_ipv6->hdr.src_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
@@ -2727,18 +2764,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = NH_FLD_IPV6_ADDR_SIZE;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV6_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV6_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2749,18 +2786,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = NH_FLD_IPV6_ADDR_SIZE;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV6_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV6_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2771,18 +2808,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(uint8_t);
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2839,11 +2876,12 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ICMP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ICMP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ICMP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.icmp_type) {
@@ -2916,16 +2954,16 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_UDP_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_UDP_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_UDP_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_UDP_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
@@ -2946,11 +2984,12 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_UDP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_UDP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of UDP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3023,9 +3062,9 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_TCP_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_TCP_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
@@ -3053,11 +3092,12 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_TCP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_TCP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of TCP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3097,6 +3137,183 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_esp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_esp *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_esp_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ESP distribution not support");
+		return -ENOTSUP;
+	}
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_ESP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_ESP_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	if (!spec) {
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ESP);
+	if (ret) {
+		DPAA2_PMD_WARN("Extract field(s) of ESP not support.");
+
+		return ret;
+	}
+
+	if (mask->hdr.spi) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SPI, &spec->hdr.spi,
+			&mask->hdr.spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SPI, &spec->hdr.spi,
+			&mask->hdr.spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask->hdr.seq) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SEQUENCE_NUM, &spec->hdr.seq,
+			&mask->hdr.seq, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SEQUENCE_NUM, &spec->hdr.seq,
+			&mask->hdr.seq, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_ah(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ah *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_ah_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-AH distribution not support");
+		return -ENOTSUP;
+	}
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_AH_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_AH_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	if (!spec) {
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_AH);
+	if (ret) {
+		DPAA2_PMD_WARN("Extract field(s) of AH not support.");
+
+		return ret;
+	}
+
+	if (mask->spi) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_AH,
+			NH_FLD_IPSEC_AH_SPI, &spec->spi,
+			&mask->spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_AH,
+			NH_FLD_IPSEC_AH_SPI, &spec->spi,
+			&mask->spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask->seq_num) {
+		DPAA2_PMD_ERR("AH seq distribution not support");
+		return -ENOTSUP;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3145,11 +3362,12 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_SCTP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_SCTP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of SCTP not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3237,11 +3455,12 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_GRE)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GRE);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of GRE not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (!mask->protocol)
@@ -3314,11 +3533,12 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VXLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VXLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of VXLAN not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->flags) {
@@ -3418,17 +3638,18 @@ dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ECPRI)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ECPRI);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ECPRI not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->hdr.common.type != 0xff) {
 		DPAA2_PMD_WARN("ECPRI header type not specified.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_IQ_DATA) {
@@ -3729,11 +3950,12 @@ dpaa2_configure_flow_gtp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_GTP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GTP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of GTP not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (!mask->teid)
@@ -4370,6 +4592,26 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_ESP:
+			ret = dpaa2_configure_flow_esp(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("ESP flow config failed!");
+				goto end_flow_set;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_AH:
+			ret = dpaa2_configure_flow_ah(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("AH flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
 			ret = dpaa2_configure_flow_gre(flow, dev, attr,
 					&dpaa2_pattern[i],
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
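
For reference, a minimal usage sketch of the ESP match added above, as an
application might exercise it through the generic rte_flow API (hypothetical
application code, not part of the patch; the queue index and SPI value are
arbitrary):

#include <rte_flow.h>
#include <rte_byteorder.h>

/* Steer ESP packets carrying SPI 0x1000 to Rx queue 1 on a started port.
 * Error handling is trimmed for brevity.
 */
static struct rte_flow *
steer_esp_spi(uint16_t port_id)
{
	struct rte_flow_attr attr = { .group = 0, .priority = 0, .ingress = 1 };
	struct rte_flow_item_esp esp_spec = { .hdr.spi = RTE_BE32(0x1000) };
	struct rte_flow_item_esp esp_mask = { .hdr.spi = RTE_BE32(0xffffffff) };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_ESP,
		  .spec = &esp_spec, .mask = &esp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	return rte_flow_create(port_id, &attr, pattern, actions, &err);
}

Note that attr.group and attr.priority map to the traffic class id and index
in dpaa2_configure_flow_esp() above, and that for AH only the SPI field is
supported for distribution (matching on seq_num returns -ENOTSUP).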

* [v4 34/42] net/dpaa2: fix memory corruption in TM
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (32 preceding siblings ...)
  2024-10-22 19:12         ` [v4 33/42] net/dpaa2: add flow support for IPsec AH and ESP vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 35/42] net/dpaa2: support software taildrop vanshika.shukla
                           ` (8 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh; +Cc: stable

From: Gagandeep Singh <g.singh@nxp.com>

The driver was reserving memory in an array for only 8 queues,
but it can support configurations with many more queues.

This patch fixes the memory corruption by defining the queue
array with the correct size.

Fixes: 72100f0dee21 ("net/dpaa2: support level 2 in traffic management")
Cc: g.singh@nxp.com
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/net/dpaa2/dpaa2_tm.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index fb8c384ca4..ab3e355853 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -684,6 +684,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 	struct dpaa2_tm_node *leaf_node, *temp_leaf_node, *channel_node;
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
 	int ret, t;
+	bool conf_schedule = false;
 
 	/* Populate TCs */
 	LIST_FOREACH(channel_node, &priv->nodes, next) {
@@ -757,7 +758,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 	}
 
 	LIST_FOREACH(channel_node, &priv->nodes, next) {
-		int wfq_grp = 0, is_wfq_grp = 0, conf[DPNI_MAX_TC];
+		int wfq_grp = 0, is_wfq_grp = 0, conf[priv->nb_tx_queues];
 		struct dpni_tx_priorities_cfg prio_cfg;
 
 		memset(&prio_cfg, 0, sizeof(prio_cfg));
@@ -767,6 +768,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 		if (channel_node->level_id != CHANNEL_LEVEL)
 			continue;
 
+		conf_schedule = false;
 		LIST_FOREACH(leaf_node, &priv->nodes, next) {
 			struct dpaa2_queue *leaf_dpaa2_q;
 			uint8_t leaf_tc_id;
@@ -789,6 +791,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 			if (leaf_node->parent != channel_node)
 				continue;
 
+			conf_schedule = true;
 			leaf_dpaa2_q =  (struct dpaa2_queue *)dev->data->tx_queues[leaf_node->id];
 			leaf_tc_id = leaf_dpaa2_q->tc_index;
 			/* Process sibling leaf nodes */
@@ -829,8 +832,8 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 						goto out;
 					}
 					is_wfq_grp = 1;
-					conf[temp_leaf_node->id] = 1;
 				}
+				conf[temp_leaf_node->id] = 1;
 			}
 			if (is_wfq_grp) {
 				if (wfq_grp == 0) {
@@ -851,6 +854,9 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 			}
 			conf[leaf_node->id] = 1;
 		}
+		if (!conf_schedule)
+			continue;
+
 		if (wfq_grp > 1) {
 			prio_cfg.separate_groups = 1;
 			if (prio_cfg.prio_group_B < prio_cfg.prio_group_A) {
@@ -864,6 +870,16 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 
 		prio_cfg.prio_group_A = 1;
 		prio_cfg.channel_idx = channel_node->channel_id;
+		DPAA2_PMD_DEBUG("########################################");
+		DPAA2_PMD_DEBUG("Channel idx = %d", prio_cfg.channel_idx);
+		for (t = 0; t < DPNI_MAX_TC; t++)
+			DPAA2_PMD_DEBUG("tc = %d mode = %d, delta = %d", t,
+					prio_cfg.tc_sched[t].mode,
+					prio_cfg.tc_sched[t].delta_bandwidth);
+
+		DPAA2_PMD_DEBUG("prioritya = %d, priorityb = %d, separate grps"
+				" = %d", prio_cfg.prio_group_A,
+				prio_cfg.prio_group_B, prio_cfg.separate_groups);
 		ret = dpni_set_tx_priorities(dpni, 0, priv->token, &prio_cfg);
 		if (ret) {
 			ret = -rte_tm_error_set(error, EINVAL,
@@ -871,15 +887,6 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 					"Scheduling Failed\n");
 			goto out;
 		}
-		DPAA2_PMD_DEBUG("########################################");
-		DPAA2_PMD_DEBUG("Channel idx = %d", prio_cfg.channel_idx);
-		for (t = 0; t < DPNI_MAX_TC; t++) {
-			DPAA2_PMD_DEBUG("tc = %d mode = %d ", t, prio_cfg.tc_sched[t].mode);
-			DPAA2_PMD_DEBUG("delta = %d", prio_cfg.tc_sched[t].delta_bandwidth);
-		}
-		DPAA2_PMD_DEBUG("prioritya = %d", prio_cfg.prio_group_A);
-		DPAA2_PMD_DEBUG("priorityb = %d", prio_cfg.prio_group_B);
-		DPAA2_PMD_DEBUG("separate grps = %d", prio_cfg.separate_groups);
 	}
 	return 0;
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
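
A simplified illustration of the out-of-bounds pattern fixed above (not
driver code; names are hypothetical): conf[] is indexed by Tx queue id, so
sizing it by the number of traffic classes overflows as soon as more than
DPNI_MAX_TC queues are configured.

#define DPNI_MAX_TC 8

static void
mark_configured(int nb_tx_queues, int queue_id)
{
	int conf_old[DPNI_MAX_TC];    /* old size: overflows once queue_id >= 8 */
	int conf_new[nb_tx_queues];   /* new size: one slot per Tx queue */

	(void)conf_old;
	if (queue_id < nb_tx_queues)
		conf_new[queue_id] = 1; /* conf_old[queue_id] would corrupt
					 * adjacent stack memory here */
}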

* [v4 35/42] net/dpaa2: support software taildrop
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (33 preceding siblings ...)
  2024-10-22 19:12         ` [v4 34/42] net/dpaa2: fix memory corruption in TM vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 36/42] net/dpaa2: check IOVA before sending MC command vanshika.shukla
                           ` (7 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

Add software-based taildrop support.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  2 +-
 drivers/net/dpaa2/dpaa2_rxtx.c          | 24 +++++++++++++++++++++++-
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 74a1a8b2fa..b6cd1f00c4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -179,7 +179,7 @@ struct __rte_cache_aligned dpaa2_queue {
 	struct dpaa2_queue *tx_conf_queue;
 	int32_t eventfd;	/*!< Event Fd of this queue */
 	uint16_t nb_desc;
-	uint16_t resv;
+	uint16_t tm_sw_td;	/*!< TM software taildrop */
 	uint64_t offloads;
 	uint64_t lpbk_cntx;
 	uint8_t data_stashing_off;
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 71b2b4a427..fd07a75a40 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1297,8 +1297,11 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		while (qbman_result_SCN_state(dpaa2_q->cscn)) {
 			retry_count++;
 			/* Retry for some time before giving up */
-			if (retry_count > CONG_RETRY_COUNT)
+			if (retry_count > CONG_RETRY_COUNT) {
+				if (dpaa2_q->tm_sw_td)
+					goto sw_td;
 				goto skip_tx;
+			}
 		}
 
 		frames_to_send = (nb_pkts > dpaa2_eqcr_size) ?
@@ -1490,6 +1493,25 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
 	}
 
+	return num_tx;
+sw_td:
+	loop = 0;
+	while (loop < num_tx) {
+		if (unlikely(RTE_MBUF_HAS_EXTBUF(*bufs)))
+			rte_pktmbuf_free(*bufs);
+		bufs++;
+		loop++;
+	}
+
+	/* free the pending buffers */
+	while (nb_pkts) {
+		rte_pktmbuf_free(*bufs);
+		bufs++;
+		nb_pkts--;
+		num_tx++;
+	}
+	dpaa2_q->tx_pkts += num_tx;
+
 	return num_tx;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
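
The software taildrop path added above follows a pattern along these lines
(sketch only; the function name and arguments are hypothetical, and the real
code operates on the driver's own queue state): once congestion persists past
the retry budget, packets already enqueued to hardware only need their
external buffers released, while the pending ones are dropped in software and
still counted as transmitted.

#include <rte_mbuf.h>
#include <rte_branch_prediction.h>

static uint16_t
sw_taildrop(struct rte_mbuf **bufs, uint16_t sent, uint16_t pending)
{
	uint16_t i;

	/* Already handed to hardware: free external-buffer mbufs only. */
	for (i = 0; i < sent; i++)
		if (unlikely(RTE_MBUF_HAS_EXTBUF(bufs[i])))
			rte_pktmbuf_free(bufs[i]);

	/* Not yet enqueued: drop in software. */
	for (i = 0; i < pending; i++)
		rte_pktmbuf_free(bufs[sent + i]);

	return sent + pending; /* all reported as transmitted */
}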

* [v4 36/42] net/dpaa2: check IOVA before sending MC command
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (34 preceding siblings ...)
  2024-10-22 19:12         ` [v4 35/42] net/dpaa2: support software taildrop vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 37/42] net/dpaa2: improve DPDMUX error behavior settings vanshika.shukla
                           ` (6 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Convert VA to IOVA and check the IOVA before sending a parameter
to the MC. An invalid parameter IOVA sent to the MC hangs the
system, which cannot recover without a power reset.
The IOVA is not checked in the data path because:
1) The MC is not involved, so errors can be recovered.
2) An IOVA check would slightly impact performance.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c |  63 +++--
 drivers/net/dpaa2/dpaa2_ethdev.c       | 338 +++++++++++++------------
 drivers/net/dpaa2/dpaa2_ethdev.h       |   3 +
 drivers/net/dpaa2/dpaa2_flow.c         |  67 ++++-
 drivers/net/dpaa2/dpaa2_sparser.c      |  25 +-
 drivers/net/dpaa2/dpaa2_tm.c           |  43 ++--
 6 files changed, 320 insertions(+), 219 deletions(-)

diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 4d33b51fea..20b37a97bb 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -30,8 +30,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 
 int
 rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
-			      uint16_t offset,
-			      uint8_t size)
+	uint16_t offset, uint8_t size)
 {
 	struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
@@ -52,8 +51,8 @@ rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	p_params = rte_zmalloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_zmalloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -73,17 +72,23 @@ rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
 	}
 
 	memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
-	tc_cfg.key_cfg_iova = (size_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.dist_size = eth_dev->data->nb_rx_queues;
 	tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
 
 	ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, priv->token, tc_index,
-				  &tc_cfg);
+			&tc_cfg);
 	rte_free(p_params);
 	if (ret) {
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("Set RX TC dist failed(err=%d)", ret);
 		return ret;
 	}
 
@@ -115,8 +120,8 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 	if (tc_dist_queues > priv->dist_queues)
 		tc_dist_queues = priv->dist_queues;
 
-	p_params = rte_malloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_malloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -133,7 +138,15 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 		return ret;
 	}
 
-	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.dist_size = tc_dist_queues;
 	tc_cfg.enable = true;
 	tc_cfg.tc = tc_index;
@@ -148,17 +161,15 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW, priv->token, &tc_cfg);
 	rte_free(p_params);
 	if (ret) {
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("RX Hash dist for failed(err=%d)", ret);
 		return ret;
 	}
 
 	return 0;
 }
 
-int dpaa2_remove_flow_dist(
-	struct rte_eth_dev *eth_dev,
+int
+dpaa2_remove_flow_dist(struct rte_eth_dev *eth_dev,
 	uint8_t tc_index)
 {
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
@@ -168,8 +179,8 @@ int dpaa2_remove_flow_dist(
 	void *p_params;
 	int ret;
 
-	p_params = rte_malloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_malloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -177,7 +188,15 @@ int dpaa2_remove_flow_dist(
 
 	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
 	tc_cfg.dist_size = 0;
-	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.enable = true;
 	tc_cfg.tc = tc_index;
 
@@ -194,9 +213,7 @@ int dpaa2_remove_flow_dist(
 			&tc_cfg);
 	rte_free(p_params);
 	if (ret)
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("RX hash dist failed(err=%d)", ret);
 	return ret;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index da0ea57ed2..7a3937346c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -123,9 +123,9 @@ dpaa2_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
-		return -1;
+		return -EINVAL;
 	}
 
 	if (on)
@@ -174,8 +174,8 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 static int
 dpaa2_vlan_tpid_set(struct rte_eth_dev *dev,
-		      enum rte_vlan_type vlan_type __rte_unused,
-		      uint16_t tpid)
+	enum rte_vlan_type vlan_type __rte_unused,
+	uint16_t tpid)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct fsl_mc_io *dpni = dev->process_private;
@@ -212,8 +212,7 @@ dpaa2_vlan_tpid_set(struct rte_eth_dev *dev,
 
 static int
 dpaa2_fw_version_get(struct rte_eth_dev *dev,
-		     char *fw_version,
-		     size_t fw_size)
+	char *fw_version, size_t fw_size)
 {
 	int ret;
 	struct fsl_mc_io *dpni = dev->process_private;
@@ -245,7 +244,8 @@ dpaa2_fw_version_get(struct rte_eth_dev *dev,
 }
 
 static int
-dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+dpaa2_dev_info_get(struct rte_eth_dev *dev,
+	struct rte_eth_dev_info *dev_info)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
@@ -291,8 +291,8 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 static int
 dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
-			__rte_unused uint16_t queue_id,
-			struct rte_eth_burst_mode *mode)
+	__rte_unused uint16_t queue_id,
+	struct rte_eth_burst_mode *mode)
 {
 	struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
 	int ret = -EINVAL;
@@ -368,7 +368,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	uint8_t num_rxqueue_per_tc;
 	struct dpaa2_queue *mc_q, *mcq;
 	uint32_t tot_queues;
-	int i;
+	int i, ret;
 	struct dpaa2_queue *dpaa2_q;
 
 	PMD_INIT_FUNC_TRACE();
@@ -382,7 +382,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 			  RTE_CACHE_LINE_SIZE);
 	if (!mc_q) {
 		DPAA2_PMD_ERR("Memory allocation failed for rx/tx queues");
-		return -1;
+		return -ENOBUFS;
 	}
 
 	for (i = 0; i < priv->nb_rx_queues; i++) {
@@ -404,8 +404,10 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	if (dpaa2_enable_err_queue) {
 		priv->rx_err_vq = rte_zmalloc("dpni_rx_err",
 			sizeof(struct dpaa2_queue), 0);
-		if (!priv->rx_err_vq)
+		if (!priv->rx_err_vq) {
+			ret = -ENOBUFS;
 			goto fail;
+		}
 
 		dpaa2_q = (struct dpaa2_queue *)priv->rx_err_vq;
 		dpaa2_q->q_storage = rte_malloc("err_dq_storage",
@@ -424,13 +426,15 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 
 	for (i = 0; i < priv->nb_tx_queues; i++) {
 		mc_q->eth_data = dev->data;
-		mc_q->flow_id = 0xffff;
+		mc_q->flow_id = DPAA2_INVALID_FLOW_ID;
 		priv->tx_vq[i] = mc_q++;
 		dpaa2_q = (struct dpaa2_queue *)priv->tx_vq[i];
 		dpaa2_q->cscn = rte_malloc(NULL,
 					   sizeof(struct qbman_result), 16);
-		if (!dpaa2_q->cscn)
+		if (!dpaa2_q->cscn) {
+			ret = -ENOBUFS;
 			goto fail_tx;
+		}
 	}
 
 	if (priv->flags & DPAA2_TX_CONF_ENABLE) {
@@ -498,7 +502,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	}
 
 	rte_free(mc_q);
-	return -1;
+	return ret;
 }
 
 static void
@@ -718,14 +722,14 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
  */
 static int
 dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
-			 uint16_t rx_queue_id,
-			 uint16_t nb_rx_desc,
-			 unsigned int socket_id __rte_unused,
-			 const struct rte_eth_rxconf *rx_conf,
-			 struct rte_mempool *mb_pool)
+	uint16_t rx_queue_id,
+	uint16_t nb_rx_desc,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_rxconf *rx_conf,
+	struct rte_mempool *mb_pool)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct dpaa2_queue *dpaa2_q;
 	struct dpni_queue cfg;
 	uint8_t options = 0;
@@ -747,8 +751,8 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* Rx deferred start is not supported */
 	if (rx_conf->rx_deferred_start) {
-		DPAA2_PMD_ERR("%p:Rx deferred start not supported",
-				(void *)dev);
+		DPAA2_PMD_ERR("%s:Rx deferred start not supported",
+			dev->data->name);
 		return -EINVAL;
 	}
 
@@ -764,7 +768,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		if (ret)
 			return ret;
 	}
-	dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+	dpaa2_q = priv->rx_vq[rx_queue_id];
 	dpaa2_q->mb_pool = mb_pool; /**< mbuf pool to populate RX ring. */
 	dpaa2_q->bp_array = rte_dpaa2_bpid_info;
 	dpaa2_q->nb_desc = UINT16_MAX;
@@ -790,7 +794,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		cfg.cgid = i;
 		dpaa2_q->cgid = cfg.cgid;
 	} else {
-		dpaa2_q->cgid = 0xff;
+		dpaa2_q->cgid = DPAA2_INVALID_CGID;
 	}
 
 	/*if ls2088 or rev2 device, enable the stashing */
@@ -814,10 +818,10 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		}
 	}
 	ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_RX,
-			     dpaa2_q->tc_index, flow_id, options, &cfg);
+			dpaa2_q->tc_index, flow_id, options, &cfg);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in setting the rx flow: = %d", ret);
-		return -1;
+		return ret;
 	}
 
 	if (!(priv->flags & DPAA2_RX_TAILDROP_OFF)) {
@@ -830,7 +834,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		 * There is no HW restriction, but number of CGRs are limited,
 		 * hence this restriction is placed.
 		 */
-		if (dpaa2_q->cgid != 0xff) {
+		if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 			/*enabling per rx queue congestion control */
 			taildrop.threshold = nb_rx_desc;
 			taildrop.units = DPNI_CONGESTION_UNIT_FRAMES;
@@ -856,15 +860,15 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		}
 		if (ret) {
 			DPAA2_PMD_ERR("Error in setting taildrop. err=(%d)",
-				      ret);
-			return -1;
+				ret);
+			return ret;
 		}
 	} else { /* Disable tail Drop */
 		struct dpni_taildrop taildrop = {0};
 		DPAA2_PMD_INFO("Tail drop is disabled on queue");
 
 		taildrop.enable = 0;
-		if (dpaa2_q->cgid != 0xff) {
+		if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
 					DPNI_CP_CONGESTION_GROUP, DPNI_QUEUE_RX,
 					dpaa2_q->tc_index,
@@ -876,8 +880,8 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		}
 		if (ret) {
 			DPAA2_PMD_ERR("Error in setting taildrop. err=(%d)",
-				      ret);
-			return -1;
+				ret);
+			return ret;
 		}
 	}
 
@@ -887,16 +891,14 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 static int
 dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
-			 uint16_t tx_queue_id,
-			 uint16_t nb_tx_desc,
-			 unsigned int socket_id __rte_unused,
-			 const struct rte_eth_txconf *tx_conf)
+	uint16_t tx_queue_id,
+	uint16_t nb_tx_desc,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_txconf *tx_conf)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct dpaa2_queue *dpaa2_q = (struct dpaa2_queue *)
-		priv->tx_vq[tx_queue_id];
-	struct dpaa2_queue *dpaa2_tx_conf_q = (struct dpaa2_queue *)
-		priv->tx_conf_vq[tx_queue_id];
+	struct dpaa2_queue *dpaa2_q = priv->tx_vq[tx_queue_id];
+	struct dpaa2_queue *dpaa2_tx_conf_q = priv->tx_conf_vq[tx_queue_id];
 	struct fsl_mc_io *dpni = dev->process_private;
 	struct dpni_queue tx_conf_cfg;
 	struct dpni_queue tx_flow_cfg;
@@ -906,13 +908,14 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	struct dpni_queue_id qid;
 	uint32_t tc_id;
 	int ret;
+	uint64_t iova;
 
 	PMD_INIT_FUNC_TRACE();
 
 	/* Tx deferred start is not supported */
 	if (tx_conf->tx_deferred_start) {
-		DPAA2_PMD_ERR("%p:Tx deferred start not supported",
-				(void *)dev);
+		DPAA2_PMD_ERR("%s:Tx deferred start not supported",
+			dev->data->name);
 		return -EINVAL;
 	}
 
@@ -920,7 +923,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	dpaa2_q->offloads = tx_conf->offloads;
 
 	/* Return if queue already configured */
-	if (dpaa2_q->flow_id != 0xffff) {
+	if (dpaa2_q->flow_id != DPAA2_INVALID_FLOW_ID) {
 		dev->data->tx_queues[tx_queue_id] = dpaa2_q;
 		return 0;
 	}
@@ -962,7 +965,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		DPAA2_PMD_ERR("Error in setting the tx flow: "
 			"tc_id=%d, flow=%d err=%d",
 			tc_id, flow_id, ret);
-			return -1;
+			return ret;
 	}
 
 	dpaa2_q->flow_id = flow_id;
@@ -970,11 +973,11 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	dpaa2_q->tc_index = tc_id;
 
 	ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX, ((channel_id << 8) | dpaa2_q->tc_index),
-			     dpaa2_q->flow_id, &tx_flow_cfg, &qid);
+			DPNI_QUEUE_TX, ((channel_id << 8) | dpaa2_q->tc_index),
+			dpaa2_q->flow_id, &tx_flow_cfg, &qid);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-		return -1;
+		return ret;
 	}
 	dpaa2_q->fqid = qid.fqid;
 
@@ -990,8 +993,17 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		 */
 		cong_notif_cfg.threshold_exit = (nb_tx_desc * 9) / 10;
 		cong_notif_cfg.message_ctx = 0;
-		cong_notif_cfg.message_iova =
-				(size_t)DPAA2_VADDR_TO_IOVA(dpaa2_q->cscn);
+
+		iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(dpaa2_q->cscn,
+			sizeof(struct qbman_result));
+		if (iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("No IOMMU map for cscn(%p)(size=%x)",
+				dpaa2_q->cscn, (uint32_t)sizeof(struct qbman_result));
+
+			return -ENOBUFS;
+		}
+
+		cong_notif_cfg.message_iova = iova;
 		cong_notif_cfg.dest_cfg.dest_type = DPNI_DEST_NONE;
 		cong_notif_cfg.notification_mode =
 					 DPNI_CONG_OPT_WRITE_MEM_ON_ENTER |
@@ -999,16 +1011,13 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 					 DPNI_CONG_OPT_COHERENT_WRITE;
 		cong_notif_cfg.cg_point = DPNI_CP_QUEUE;
 
-		ret = dpni_set_congestion_notification(dpni, CMD_PRI_LOW,
-						       priv->token,
-						       DPNI_QUEUE_TX,
-						       ((channel_id << 8) | tc_id),
-						       &cong_notif_cfg);
+		ret = dpni_set_congestion_notification(dpni,
+				CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
+				((channel_id << 8) | tc_id), &cong_notif_cfg);
 		if (ret) {
-			DPAA2_PMD_ERR(
-			   "Error in setting tx congestion notification: "
-			   "err=%d", ret);
-			return -ret;
+			DPAA2_PMD_ERR("Set TX congestion notification err=%d",
+			   ret);
+			return ret;
 		}
 	}
 	dpaa2_q->cb_eqresp_free = dpaa2_dev_free_eqresp_buf;
@@ -1019,22 +1028,24 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		options = options | DPNI_QUEUE_OPT_USER_CTX;
 		tx_conf_cfg.user_context = (size_t)(dpaa2_q);
 		ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX_CONFIRM, ((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
-			     dpaa2_tx_conf_q->flow_id, options, &tx_conf_cfg);
+				DPNI_QUEUE_TX_CONFIRM,
+				((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
+				dpaa2_tx_conf_q->flow_id,
+				options, &tx_conf_cfg);
 		if (ret) {
-			DPAA2_PMD_ERR("Error in setting the tx conf flow: "
-			      "tc_index=%d, flow=%d err=%d",
-			      dpaa2_tx_conf_q->tc_index,
-			      dpaa2_tx_conf_q->flow_id, ret);
-			return -1;
+			DPAA2_PMD_ERR("Set TC[%d].TX[%d] conf flow err=%d",
+				dpaa2_tx_conf_q->tc_index,
+				dpaa2_tx_conf_q->flow_id, ret);
+			return ret;
 		}
 
 		ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX_CONFIRM, ((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
-			     dpaa2_tx_conf_q->flow_id, &tx_conf_cfg, &qid);
+				DPNI_QUEUE_TX_CONFIRM,
+				((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
+				dpaa2_tx_conf_q->flow_id, &tx_conf_cfg, &qid);
 		if (ret) {
 			DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-			return -1;
+			return ret;
 		}
 		dpaa2_tx_conf_q->fqid = qid.fqid;
 	}
@@ -1046,8 +1057,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct dpaa2_queue *dpaa2_q = dev->data->rx_queues[rx_queue_id];
 	struct dpaa2_dev_priv *priv = dpaa2_q->eth_data->dev_private;
-	struct fsl_mc_io *dpni =
-		(struct fsl_mc_io *)priv->eth_dev->process_private;
+	struct fsl_mc_io *dpni = priv->eth_dev->process_private;
 	uint8_t options = 0;
 	int ret;
 	struct dpni_queue cfg;
@@ -1057,7 +1067,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	total_nb_rx_desc -= dpaa2_q->nb_desc;
 
-	if (dpaa2_q->cgid != 0xff) {
+	if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 		options = DPNI_QUEUE_OPT_CLEAR_CGID;
 		cfg.cgid = dpaa2_q->cgid;
 
@@ -1069,7 +1079,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 			DPAA2_PMD_ERR("Unable to clear CGR from q=%u err=%d",
 					dpaa2_q->fqid, ret);
 		priv->cgid_in_use[dpaa2_q->cgid] = 0;
-		dpaa2_q->cgid = 0xff;
+		dpaa2_q->cgid = DPAA2_INVALID_CGID;
 	}
 }
 
@@ -1233,10 +1243,10 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
 	dpaa2_dev_set_link_up(dev);
 
 	for (i = 0; i < data->nb_rx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)data->rx_queues[i];
+		dpaa2_q = data->rx_queues[i];
 		ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-				     DPNI_QUEUE_RX, dpaa2_q->tc_index,
-				       dpaa2_q->flow_id, &cfg, &qid);
+				DPNI_QUEUE_RX, dpaa2_q->tc_index,
+				dpaa2_q->flow_id, &cfg, &qid);
 		if (ret) {
 			DPAA2_PMD_ERR("Error in getting flow information: "
 				      "err=%d", ret);
@@ -1253,7 +1263,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
 						ret);
 			return ret;
 		}
-		dpaa2_q = (struct dpaa2_queue *)priv->rx_err_vq;
+		dpaa2_q = priv->rx_err_vq;
 		dpaa2_q->fqid = qid.fqid;
 		dpaa2_q->eth_data = dev->data;
 
@@ -1318,7 +1328,7 @@ static int
 dpaa2_dev_stop(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int ret;
 	struct rte_eth_link link;
 	struct rte_device *rdev = dev->device;
@@ -1371,7 +1381,7 @@ static int
 dpaa2_dev_close(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int i, ret;
 	struct rte_eth_link link;
 
@@ -1382,7 +1392,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 
 	if (!dpni) {
 		DPAA2_PMD_WARN("Already closed or not started");
-		return -1;
+		return -EINVAL;
 	}
 
 	dpaa2_tm_deinit(dev);
@@ -1391,7 +1401,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	ret = dpni_reset(dpni, CMD_PRI_LOW, priv->token);
 	if (ret) {
 		DPAA2_PMD_ERR("Failure cleaning dpni device: err=%d", ret);
-		return -1;
+		return ret;
 	}
 
 	memset(&link, 0, sizeof(link));
@@ -1403,7 +1413,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	ret = dpni_close(dpni, CMD_PRI_LOW, priv->token);
 	if (ret) {
 		DPAA2_PMD_ERR("Failure closing dpni device with err code %d",
-			      ret);
+			ret);
 	}
 
 	/* Free the allocated memory for ethernet private data and dpni*/
@@ -1412,18 +1422,17 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	rte_free(dpni);
 
 	for (i = 0; i < MAX_TCS; i++)
-		rte_free((void *)(size_t)priv->extract.tc_extract_param[i]);
+		rte_free(priv->extract.tc_extract_param[i]);
 
 	if (priv->extract.qos_extract_param)
-		rte_free((void *)(size_t)priv->extract.qos_extract_param);
+		rte_free(priv->extract.qos_extract_param);
 
 	DPAA2_PMD_INFO("%s: netdev deleted", dev->data->name);
 	return 0;
 }
 
 static int
-dpaa2_dev_promiscuous_enable(
-		struct rte_eth_dev *dev)
+dpaa2_dev_promiscuous_enable(struct rte_eth_dev *dev)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
@@ -1483,7 +1492,7 @@ dpaa2_dev_allmulticast_enable(
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1504,7 +1513,7 @@ dpaa2_dev_allmulticast_disable(struct rte_eth_dev *dev)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1529,13 +1538,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
 				+ VLAN_TAG_SIZE;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1547,7 +1556,7 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 					frame_size - RTE_ETHER_CRC_LEN);
 	if (ret) {
 		DPAA2_PMD_ERR("Setting the max frame length failed");
-		return -1;
+		return ret;
 	}
 	dev->data->mtu = mtu;
 	DPAA2_PMD_INFO("MTU configured for the device: %d", mtu);
@@ -1556,36 +1565,35 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 static int
 dpaa2_dev_add_mac_addr(struct rte_eth_dev *dev,
-		       struct rte_ether_addr *addr,
-		       __rte_unused uint32_t index,
-		       __rte_unused uint32_t pool)
+	struct rte_ether_addr *addr,
+	__rte_unused uint32_t index,
+	__rte_unused uint32_t pool)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
-		return -1;
+		return -EINVAL;
 	}
 
 	ret = dpni_add_mac_addr(dpni, CMD_PRI_LOW, priv->token,
 				addr->addr_bytes, 0, 0, 0);
 	if (ret)
-		DPAA2_PMD_ERR(
-			"error: Adding the MAC ADDR failed: err = %d", ret);
-	return 0;
+		DPAA2_PMD_ERR("ERR(%d) Adding the MAC ADDR failed", ret);
+	return ret;
 }
 
 static void
 dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
-			  uint32_t index)
+	uint32_t index)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct rte_eth_dev_data *data = dev->data;
 	struct rte_ether_addr *macaddr;
 
@@ -1593,7 +1601,7 @@ dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
 
 	macaddr = &data->mac_addrs[index];
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return;
 	}
@@ -1607,15 +1615,15 @@ dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
 
 static int
 dpaa2_dev_set_mac_addr(struct rte_eth_dev *dev,
-		       struct rte_ether_addr *addr)
+	struct rte_ether_addr *addr)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1624,19 +1632,18 @@ dpaa2_dev_set_mac_addr(struct rte_eth_dev *dev,
 					priv->token, addr->addr_bytes);
 
 	if (ret)
-		DPAA2_PMD_ERR(
-			"error: Setting the MAC ADDR failed %d", ret);
+		DPAA2_PMD_ERR("ERR(%d) Setting the MAC ADDR failed", ret);
 
 	return ret;
 }
 
-static
-int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
-			 struct rte_eth_stats *stats)
+static int
+dpaa2_dev_stats_get(struct rte_eth_dev *dev,
+	struct rte_eth_stats *stats)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
-	int32_t  retcode;
+	struct fsl_mc_io *dpni = dev->process_private;
+	int32_t retcode;
 	uint8_t page0 = 0, page1 = 1, page2 = 2;
 	union dpni_statistics value;
 	int i;
@@ -1691,8 +1698,8 @@ int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
 	/* Fill in per queue stats */
 	for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) &&
 		(i < priv->nb_rx_queues || i < priv->nb_tx_queues); ++i) {
-		dpaa2_rxq = (struct dpaa2_queue *)priv->rx_vq[i];
-		dpaa2_txq = (struct dpaa2_queue *)priv->tx_vq[i];
+		dpaa2_rxq = priv->rx_vq[i];
+		dpaa2_txq = priv->tx_vq[i];
 		if (dpaa2_rxq)
 			stats->q_ipackets[i] = dpaa2_rxq->rx_pkts;
 		if (dpaa2_txq)
@@ -1711,19 +1718,20 @@ int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
 };
 
 static int
-dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
-		     unsigned int n)
+dpaa2_dev_xstats_get(struct rte_eth_dev *dev,
+	struct rte_eth_xstat *xstats, unsigned int n)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
-	int32_t  retcode;
+	int32_t retcode;
 	union dpni_statistics value[5] = {};
 	unsigned int i = 0, num = RTE_DIM(dpaa2_xstats_strings);
+	uint8_t page_id, stats_id;
 
 	if (n < num)
 		return num;
 
-	if (xstats == NULL)
+	if (!xstats)
 		return 0;
 
 	/* Get Counters from page_0*/
@@ -1758,8 +1766,9 @@ dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 
 	for (i = 0; i < num; i++) {
 		xstats[i].id = i;
-		xstats[i].value = value[dpaa2_xstats_strings[i].page_id].
-			raw.counter[dpaa2_xstats_strings[i].stats_id];
+		page_id = dpaa2_xstats_strings[i].page_id;
+		stats_id = dpaa2_xstats_strings[i].stats_id;
+		xstats[i].value = value[page_id].raw.counter[stats_id];
 	}
 	return i;
 err:
@@ -1769,8 +1778,8 @@ dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 
 static int
 dpaa2_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
-		       struct rte_eth_xstat_name *xstats_names,
-		       unsigned int limit)
+	struct rte_eth_xstat_name *xstats_names,
+	unsigned int limit)
 {
 	unsigned int i, stat_cnt = RTE_DIM(dpaa2_xstats_strings);
 
@@ -1788,16 +1797,16 @@ dpaa2_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 
 static int
 dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
-		       uint64_t *values, unsigned int n)
+	uint64_t *values, unsigned int n)
 {
 	unsigned int i, stat_cnt = RTE_DIM(dpaa2_xstats_strings);
 	uint64_t values_copy[stat_cnt];
+	uint8_t page_id, stats_id;
 
 	if (!ids) {
 		struct dpaa2_dev_priv *priv = dev->data->dev_private;
-		struct fsl_mc_io *dpni =
-			(struct fsl_mc_io *)dev->process_private;
-		int32_t  retcode;
+		struct fsl_mc_io *dpni = dev->process_private;
+		int32_t retcode;
 		union dpni_statistics value[5] = {};
 
 		if (n < stat_cnt)
@@ -1831,8 +1840,9 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 			return 0;
 
 		for (i = 0; i < stat_cnt; i++) {
-			values[i] = value[dpaa2_xstats_strings[i].page_id].
-				raw.counter[dpaa2_xstats_strings[i].stats_id];
+			page_id = dpaa2_xstats_strings[i].page_id;
+			stats_id = dpaa2_xstats_strings[i].stats_id;
+			values[i] = value[page_id].raw.counter[stats_id];
 		}
 		return stat_cnt;
 	}
@@ -1842,7 +1852,7 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 	for (i = 0; i < n; i++) {
 		if (ids[i] >= stat_cnt) {
 			DPAA2_PMD_ERR("xstats id value isn't valid");
-			return -1;
+			return -EINVAL;
 		}
 		values[i] = values_copy[ids[i]];
 	}
@@ -1850,8 +1860,7 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 }
 
 static int
-dpaa2_xstats_get_names_by_id(
-	struct rte_eth_dev *dev,
+dpaa2_xstats_get_names_by_id(struct rte_eth_dev *dev,
 	const uint64_t *ids,
 	struct rte_eth_xstat_name *xstats_names,
 	unsigned int limit)
@@ -1878,14 +1887,14 @@ static int
 dpaa2_dev_stats_reset(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int retcode;
 	int i;
 	struct dpaa2_queue *dpaa2_q;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1896,13 +1905,13 @@ dpaa2_dev_stats_reset(struct rte_eth_dev *dev)
 
 	/* Reset the per queue stats in dpaa2_queue structure */
 	for (i = 0; i < priv->nb_rx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[i];
+		dpaa2_q = priv->rx_vq[i];
 		if (dpaa2_q)
 			dpaa2_q->rx_pkts = 0;
 	}
 
 	for (i = 0; i < priv->nb_tx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)priv->tx_vq[i];
+		dpaa2_q = priv->tx_vq[i];
 		if (dpaa2_q)
 			dpaa2_q->tx_pkts = 0;
 	}
@@ -1921,12 +1930,12 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct rte_eth_link link;
 	struct dpni_link_state state = {0};
 	uint8_t count;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return 0;
 	}
@@ -1936,7 +1945,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 					  &state);
 		if (ret < 0) {
 			DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
-			return -1;
+			return ret;
 		}
 		if (state.up == RTE_ETH_LINK_DOWN &&
 		    wait_to_complete)
@@ -1955,7 +1964,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	ret = rte_eth_linkstatus_set(dev, &link);
-	if (ret == -1)
+	if (ret < 0)
 		DPAA2_PMD_DEBUG("No change in status");
 	else
 		DPAA2_PMD_INFO("Port %d Link is %s", dev->data->port_id,
@@ -1978,9 +1987,9 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	struct dpni_link_state state = {0};
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return ret;
 	}
@@ -2040,9 +2049,9 @@ dpaa2_dev_set_link_down(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("Device has not yet been configured");
 		return ret;
 	}
@@ -2094,9 +2103,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL || fc_conf == NULL) {
+	if (!dpni || !fc_conf) {
 		DPAA2_PMD_ERR("device not configured");
 		return ret;
 	}
@@ -2149,9 +2158,9 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return ret;
 	}
@@ -2394,10 +2403,10 @@ dpaa2_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 {
 	struct dpaa2_queue *rxq;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	uint16_t max_frame_length;
 
-	rxq = (struct dpaa2_queue *)dev->data->rx_queues[queue_id];
+	rxq = dev->data->rx_queues[queue_id];
 
 	qinfo->mp = rxq->mb_pool;
 	qinfo->scattered_rx = dev->data->scattered_rx;
@@ -2513,10 +2522,10 @@ static struct eth_dev_ops dpaa2_ethdev_ops = {
  * Returns the table of MAC entries (multiple entries)
  */
 static int
-populate_mac_addr(struct fsl_mc_io *dpni_dev, struct dpaa2_dev_priv *priv,
-		  struct rte_ether_addr *mac_entry)
+populate_mac_addr(struct fsl_mc_io *dpni_dev,
+	struct dpaa2_dev_priv *priv, struct rte_ether_addr *mac_entry)
 {
-	int ret;
+	int ret = 0;
 	struct rte_ether_addr phy_mac, prime_mac;
 
 	memset(&phy_mac, 0, sizeof(struct rte_ether_addr));
@@ -2574,7 +2583,7 @@ populate_mac_addr(struct fsl_mc_io *dpni_dev, struct dpaa2_dev_priv *priv,
 	return 0;
 
 cleanup:
-	return -1;
+	return ret;
 }
 
 static int
@@ -2633,7 +2642,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		return -1;
 	}
 	dpni_dev->regs = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
-	eth_dev->process_private = (void *)dpni_dev;
+	eth_dev->process_private = dpni_dev;
 
 	/* For secondary processes, the primary has done all the work */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
@@ -2662,7 +2671,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 			     "Failure in opening dpni@%d with err code %d",
 			     hw_id, ret);
 		rte_free(dpni_dev);
-		return -1;
+		return ret;
 	}
 
 	if (eth_dev->data->dev_conf.lpbk_mode)
@@ -2813,7 +2822,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	/* Init fields w.r.t. classification */
 	memset(&priv->extract.qos_key_extract, 0,
 		sizeof(struct dpaa2_key_extract));
-	priv->extract.qos_extract_param = rte_malloc(NULL, 256, 64);
+	priv->extract.qos_extract_param = rte_malloc(NULL,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE,
+		RTE_CACHE_LINE_SIZE);
 	if (!priv->extract.qos_extract_param) {
 		DPAA2_PMD_ERR("Memory alloc failed");
 		goto init_err;
@@ -2822,7 +2833,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	for (i = 0; i < MAX_TCS; i++) {
 		memset(&priv->extract.tc_key_extract[i], 0,
 			sizeof(struct dpaa2_key_extract));
-		priv->extract.tc_extract_param[i] = rte_malloc(NULL, 256, 64);
+		priv->extract.tc_extract_param[i] = rte_malloc(NULL,
+			DPAA2_EXTRACT_PARAM_MAX_SIZE,
+			RTE_CACHE_LINE_SIZE);
 		if (!priv->extract.tc_extract_param[i]) {
 			DPAA2_PMD_ERR("Memory alloc failed");
 			goto init_err;
@@ -2982,12 +2995,11 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 
 	if ((DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE) >
 		RTE_PKTMBUF_HEADROOM) {
-		DPAA2_PMD_ERR(
-		"RTE_PKTMBUF_HEADROOM(%d) shall be > DPAA2 Annotation req(%d)",
-		RTE_PKTMBUF_HEADROOM,
-		DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE);
+		DPAA2_PMD_ERR("RTE_PKTMBUF_HEADROOM(%d) < DPAA2 Annotation(%d)",
+			RTE_PKTMBUF_HEADROOM,
+			DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE);
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index db918725a7..a2b9fc5678 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -31,6 +31,9 @@
 #define MAX_DPNI		8
 #define DPAA2_MAX_CHANNELS	16
 
+#define DPAA2_EXTRACT_PARAM_MAX_SIZE 256
+#define DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE 256
+
 #define DPAA2_RX_DEFAULT_NBDESC 512
 
 #define DPAA2_ETH_MAX_LEN (RTE_ETHER_MTU + \
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 1605c0c584..fb635815aa 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -4318,7 +4318,14 @@ dpaa2_configure_fs_rss_table(struct dpaa2_dev_priv *priv,
 
 	tc_extract = &priv->extract.tc_key_extract[tc_id];
 	key_cfg_buf = priv->extract.tc_extract_param[tc_id];
-	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_buf,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_buf);
+
+		return -ENOBUFS;
+	}
 
 	key_max_size = tc_extract->key_profile.key_max_size;
 	entry_size = dpaa2_flow_entry_size(key_max_size);
@@ -4402,7 +4409,14 @@ dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
 
 	qos_extract = &priv->extract.qos_key_extract;
 	key_cfg_buf = priv->extract.qos_extract_param;
-	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_buf,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_buf);
+
+		return -ENOBUFS;
+	}
 
 	key_max_size = qos_extract->key_profile.key_max_size;
 	entry_size = dpaa2_flow_entry_size(key_max_size);
@@ -4959,6 +4973,7 @@ dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	struct dpaa2_dev_flow *flow = NULL;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int ret;
+	uint64_t iova;
 
 	dpaa2_flow_control_log =
 		getenv("DPAA2_FLOW_CONTROL_LOG");
@@ -4982,34 +4997,66 @@ dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	}
 
 	/* Allocate DMA'ble memory to write the qos rules */
-	flow->qos_key_addr = rte_zmalloc(NULL, 256, 64);
+	flow->qos_key_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->qos_key_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->qos_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->qos_key_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->qos_key_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for qos key(%p)",
+			__func__, flow->qos_key_addr);
+		goto mem_failure;
+	}
+	flow->qos_rule.key_iova = iova;
 
-	flow->qos_mask_addr = rte_zmalloc(NULL, 256, 64);
+	flow->qos_mask_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->qos_mask_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->qos_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->qos_mask_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->qos_mask_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for qos mask(%p)",
+			__func__, flow->qos_mask_addr);
+		goto mem_failure;
+	}
+	flow->qos_rule.mask_iova = iova;
 
 	/* Allocate DMA'ble memory to write the FS rules */
-	flow->fs_key_addr = rte_zmalloc(NULL, 256, 64);
+	flow->fs_key_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->fs_key_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->fs_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->fs_key_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->fs_key_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for fs key(%p)",
+			__func__, flow->fs_key_addr);
+		goto mem_failure;
+	}
+	flow->fs_rule.key_iova = iova;
 
-	flow->fs_mask_addr = rte_zmalloc(NULL, 256, 64);
+	flow->fs_mask_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->fs_mask_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->fs_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->fs_mask_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->fs_mask_addr,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for fs mask(%p)",
+			__func__, flow->fs_mask_addr);
+		goto mem_failure;
+	}
+	flow->fs_rule.mask_iova = iova;
 
 	priv->curr = flow;
 
diff --git a/drivers/net/dpaa2/dpaa2_sparser.c b/drivers/net/dpaa2/dpaa2_sparser.c
index 59f7a172c6..265c9b5c57 100644
--- a/drivers/net/dpaa2/dpaa2_sparser.c
+++ b/drivers/net/dpaa2/dpaa2_sparser.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2023 NXP
  */
 
 #include <rte_mbuf.h>
@@ -170,7 +170,14 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 	}
 
 	memcpy(addr, sp_param.byte_code, sp_param.size);
-	cfg.ss_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(addr));
+	cfg.ss_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(addr, sp_param.size);
+	if (cfg.ss_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("No IOMMU map for soft sequence(%p), size=%d",
+			addr, sp_param.size);
+		rte_free(addr);
+
+		return -ENOBUFS;
+	}
 
 	ret = dpni_load_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
@@ -179,7 +186,7 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 		return ret;
 	}
 
-	priv->ss_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(addr));
+	priv->ss_iova = cfg.ss_iova;
 	priv->ss_offset += sp_param.size;
 	DPAA2_PMD_INFO("Soft parser loaded for dpni@%d", priv->hw_id);
 
@@ -219,7 +226,15 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 		}
 
 		memcpy(param_addr, sp_param.param_array, cfg.param_size);
-		cfg.param_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(param_addr));
+		cfg.param_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(param_addr,
+			cfg.param_size);
+		if (cfg.param_iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("%s: No IOMMU map for %p, size=%d",
+				__func__, param_addr, cfg.param_size);
+			rte_free(param_addr);
+
+			return -ENOBUFS;
+		}
 		priv->ss_param_iova = cfg.param_iova;
 	} else {
 		cfg.param_iova = 0;
@@ -227,7 +242,7 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 
 	ret = dpni_enable_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("dpni_enable_sw_sequence failed for dpni%d",
+		DPAA2_PMD_ERR("Soft parser enabled for dpni@%d failed",
 			priv->hw_id);
 		rte_free(param_addr);
 		return ret;
diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index ab3e355853..f91392b092 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2020-2021 NXP
+ * Copyright 2020-2023 NXP
  */
 
 #include <rte_ethdev.h>
@@ -572,41 +572,42 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q;
+	uint64_t iova;
 
 	memset(&tx_flow_cfg, 0, sizeof(struct dpni_queue));
-	dpaa2_q =  (struct dpaa2_queue *)dev->data->tx_queues[node->id];
+	dpaa2_q = (struct dpaa2_queue *)dev->data->tx_queues[node->id];
 	tc_id = node->parent->tc_id;
 	node->parent->tc_id++;
 	flow_id = 0;
 
-	if (dpaa2_q == NULL) {
-		DPAA2_PMD_ERR("Queue is not configured for node = %d", node->id);
-		return -1;
+	if (!dpaa2_q) {
+		DPAA2_PMD_ERR("Queue is not configured for node = %d",
+			node->id);
+		return -ENOMEM;
 	}
 
 	DPAA2_PMD_DEBUG("tc_id = %d, channel = %d", tc_id,
 			node->parent->channel_id);
 	ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
-			     ((node->parent->channel_id << 8) | tc_id),
-			     flow_id, options, &tx_flow_cfg);
+			((node->parent->channel_id << 8) | tc_id),
+			flow_id, options, &tx_flow_cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("Error in setting the tx flow: "
-		       "channel id  = %d tc_id= %d, param = 0x%x "
-		       "flow=%d err=%d", node->parent->channel_id, tc_id,
-		       ((node->parent->channel_id << 8) | tc_id), flow_id,
-		       ret);
-		return -1;
+		DPAA2_PMD_ERR("Set the TC[%d].ch[%d].TX flow[%d] (err=%d)",
+			tc_id, node->parent->channel_id, flow_id,
+			ret);
+		return ret;
 	}
 
 	dpaa2_q->flow_id = flow_id;
 	dpaa2_q->tc_index = tc_id;
 
 	ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-		DPNI_QUEUE_TX, ((node->parent->channel_id << 8) | dpaa2_q->tc_index),
-		dpaa2_q->flow_id, &tx_flow_cfg, &qid);
+			DPNI_QUEUE_TX,
+			((node->parent->channel_id << 8) | dpaa2_q->tc_index),
+			dpaa2_q->flow_id, &tx_flow_cfg, &qid);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-		return -1;
+		return ret;
 	}
 	dpaa2_q->fqid = qid.fqid;
 
@@ -621,8 +622,13 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 		 */
 		cong_notif_cfg.threshold_exit = (dpaa2_q->nb_desc * 9) / 10;
 		cong_notif_cfg.message_ctx = 0;
-		cong_notif_cfg.message_iova =
-			(size_t)DPAA2_VADDR_TO_IOVA(dpaa2_q->cscn);
+		iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(dpaa2_q->cscn,
+				sizeof(struct qbman_result));
+		if (iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("No IOMMU map for cscn(%p)", dpaa2_q->cscn);
+			return -ENOBUFS;
+		}
+		cong_notif_cfg.message_iova = iova;
 		cong_notif_cfg.dest_cfg.dest_type = DPNI_DEST_NONE;
 		cong_notif_cfg.notification_mode =
 					DPNI_CONG_OPT_WRITE_MEM_ON_ENTER |
@@ -641,6 +647,7 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 			return -ret;
 		}
 	}
+	dpaa2_q->tm_sw_td = true;
 
 	return 0;
 }
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 37/42] net/dpaa2: improve DPDMUX error behavior settings
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (35 preceding siblings ...)
  2024-10-22 19:12         ` [v4 36/42] net/dpaa2: check IOVA before sending MC command vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 38/42] net/dpaa2: store drop priority in mbuf vanshika.shukla
                           ` (5 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena

From: Sachin Saxena <sachin.saxena@nxp.com>

This change is compatible with MC v10.36 or later.

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index f4b8d481af..13de7d5783 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2021,2023 NXP
  */
 
 #include <sys/queue.h>
@@ -448,13 +448,12 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 		struct dpdmux_error_cfg mux_err_cfg;
 
 		memset(&mux_err_cfg, 0, sizeof(mux_err_cfg));
+		/* Note: Discarded flag (DPDMUX_ERROR_DISC) has effect only when
+		 * ERROR_ACTION is set to DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE.
+		 */
+		mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
 		mux_err_cfg.error_action = DPDMUX_ERROR_ACTION_CONTINUE;
 
-		if (attr.method != DPDMUX_METHOD_C_VLAN_MAC)
-			mux_err_cfg.errors = DPDMUX_ERROR_DISC;
-		else
-			mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
-
 		ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
 				CMD_PRI_LOW,
 				dpdmux_dev->token, DPAA2_DPDMUX_DPMAC_IDX,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 38/42] net/dpaa2: store drop priority in mbuf
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (36 preceding siblings ...)
  2024-10-22 19:12         ` [v4 37/42] net/dpaa2: improve DPDMUX error behavior settings vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 39/42] net/dpaa2: add API to get endpoint name vanshika.shukla
                           ` (4 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Apeksha Gupta

From: Apeksha Gupta <apeksha.gupta@nxp.com>

Store the drop priority from the frame descriptor (FD) in the mbuf.

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 1 +
 drivers/net/dpaa2/dpaa2_rxtx.c          | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index b6cd1f00c4..cd22974752 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -329,6 +329,7 @@ enum qbman_fd_format {
 #define DPAA2_GET_FD_BPID(fd)	(((fd)->simple.bpid_offset & 0x00003FFF))
 #define DPAA2_GET_FD_IVP(fd)   (((fd)->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	(((fd)->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_GET_FD_DROPP(fd)  (((fd)->simple.ctrl & 0x07000000) >> 24)
 #define DPAA2_GET_FD_FRC(fd)   ((fd)->simple.frc)
 #define DPAA2_GET_FD_FLC(fd) \
 	(((uint64_t)((fd)->simple.flc_hi) << 32) + (fd)->simple.flc_lo)
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index fd07a75a40..01e699d282 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -388,6 +388,7 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 	mbuf->pkt_len = mbuf->data_len;
 	mbuf->port = port_id;
 	mbuf->next = NULL;
+	mbuf->hash.sched.color = DPAA2_GET_FD_DROPP(fd);
 	rte_mbuf_refcnt_set(mbuf, 1);
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
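As a reader's aid, here is a minimal sketch (not part of the patch; the
port/queue ids are hypothetical) of how an application could read back the
drop priority stored by this change, via the generic mbuf scheduler color
field:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Poll port 0 / queue 0 and print the drop priority that
 * eth_fd_to_mbuf() copied from the FD into hash.sched.color.
 */
static void
poll_and_print_color(void)
{
	struct rte_mbuf *pkts[32];
	uint16_t i, nb = rte_eth_rx_burst(0, 0, pkts, 32);

	for (i = 0; i < nb; i++) {
		printf("pkt %u: drop priority (color) = %u\n",
		       i, pkts[i]->hash.sched.color);
		rte_pktmbuf_free(pkts[i]);
	}
}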

* [v4 39/42] net/dpaa2: add API to get endpoint name
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (37 preceding siblings ...)
  2024-10-22 19:12         ` [v4 38/42] net/dpaa2: store drop priority in mbuf vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 40/42] net/dpaa2: support VLAN traffic splitting vanshika.shukla
                           ` (3 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Export an API in rte_pmd_dpaa2.h to get the endpoint name of a DPAA2 port.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 24 ++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h  |  4 ++++
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  3 +++
 drivers/net/dpaa2/version.map     |  1 +
 4 files changed, 32 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 7a3937346c..137e116963 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2903,6 +2903,30 @@ rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+const char *
+rte_pmd_dpaa2_ep_name(uint32_t eth_id)
+{
+	struct rte_eth_dev *dev;
+	struct dpaa2_dev_priv *priv;
+
+	if (eth_id >= RTE_MAX_ETHPORTS)
+		return NULL;
+
+	if (!rte_pmd_dpaa2_dev_is_dpaa2(eth_id))
+		return NULL;
+
+	dev = &rte_eth_devices[eth_id];
+	if (!dev->data)
+		return NULL;
+
+	if (!dev->data->dev_private)
+		return NULL;
+
+	priv = dev->data->dev_private;
+
+	return priv->ep_name;
+}
+
 #if defined(RTE_LIBRTE_IEEE1588)
 int
 rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index a2b9fc5678..fd6bad7f74 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -385,6 +385,10 @@ struct dpaa2_dev_priv {
 	uint8_t max_cgs;
 	uint8_t cgid_in_use[MAX_RX_QUEUES];
 
+	enum rte_dpaa2_dev_type ep_dev_type;   /**< Endpoint Device Type */
+	uint16_t ep_object_id;                 /**< Endpoint DPAA2 Object ID */
+	char ep_name[RTE_DEV_NAME_MAX_LEN];
+
 	struct extract_s extract;
 
 	uint16_t ss_offset;
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index fc52a9218e..f93af1c65f 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -130,6 +130,9 @@ rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
 __rte_experimental
 int
 rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id);
+__rte_experimental
+const char *
+rte_pmd_dpaa2_ep_name(uint32_t eth_id);
 
 #if defined(RTE_LIBRTE_IEEE1588)
 __rte_experimental
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 233c6e6b2c..35815f7777 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -18,6 +18,7 @@ EXPERIMENTAL {
 	rte_pmd_dpaa2_get_tlu_hash;
 	# added in 24.11
 	rte_pmd_dpaa2_dev_is_dpaa2;
+	rte_pmd_dpaa2_ep_name;
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
 	rte_pmd_dpaa2_mux_dump_counter;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
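As a usage sketch (assuming a DPAA2 port has been probed; nothing below is
part of the patch itself), the new experimental API can be exercised like
this:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_pmd_dpaa2.h>

/* Print the endpoint name of every probed DPAA2 port; the API returns
 * NULL for ports that are not DPAA2 devices, so those are skipped.
 */
static void
dump_ep_names(void)
{
	uint16_t pid;

	RTE_ETH_FOREACH_DEV(pid) {
		const char *ep = rte_pmd_dpaa2_ep_name(pid);

		if (ep)
			printf("port %u: endpoint %s\n", pid, ep);
	}
}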

* [v4 40/42] net/dpaa2: support VLAN traffic splitting
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (38 preceding siblings ...)
  2024-10-22 19:12         ` [v4 39/42] net/dpaa2: add API to get endpoint name vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 41/42] net/dpaa2: add support for C-VLAN and MAC vanshika.shukla
                           ` (2 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for adding rules in DPDMUX
to split VLAN traffic based on VLAN ids.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 13de7d5783..c8f1d46bb2 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -118,6 +118,26 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	}
 	break;
 
+	case RTE_FLOW_ITEM_TYPE_VLAN:
+	{
+		const struct rte_flow_item_vlan *spec;
+
+		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
+		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_VLAN;
+		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_VLAN_TCI;
+		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FROM_FIELD;
+		kg_cfg.extracts[0].extract.from_hdr.offset = 1;
+		kg_cfg.extracts[0].extract.from_hdr.size = 1;
+		kg_cfg.num_extracts = 1;
+
+		spec = (const struct rte_flow_item_vlan *)pattern[0]->spec;
+		memcpy((void *)key_iova, (const void *)(&spec->hdr.vlan_tci),
+			sizeof(uint16_t));
+		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
+		key_size = sizeof(uint16_t);
+	}
+	break;
+
 	case RTE_FLOW_ITEM_TYPE_UDP:
 	{
 		const struct rte_flow_item_udp *spec;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 41/42] net/dpaa2: add support for C-VLAN and MAC
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (39 preceding siblings ...)
  2024-10-22 19:12         ` [v4 40/42] net/dpaa2: support VLAN traffic splitting vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-22 19:12         ` [v4 42/42] net/dpaa2: dpdmux single flow/multiple rules support vanshika.shukla
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for the DPDMUX_METHOD_C_VLAN_MAC method,
which implements DPDMUX classification based on C-VLAN and MAC address.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     |  2 +-
 drivers/net/dpaa2/mc/fsl_dpdmux.h | 16 ++++++++++++++++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index c8f1d46bb2..6e10739dd3 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021,2023 NXP
+ * Copyright 2018-2024 NXP
  */
 
 #include <sys/queue.h>
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 97b09e59f9..70b81f3b3b 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -593,6 +593,22 @@ int dpdmux_dump_table(struct fsl_mc_io *mc_io,
  */
 #define DPDMUX__ERROR_L4CE			0x00000001
 
+#define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
+				 DPDMUX__ERROR_L4CV | \
+				 DPDMUX__ERROR_L3CE | \
+				 DPDMUX__ERROR_L3CV | \
+				 DPDMUX_ERROR_BLE | \
+				 DPDMUX_ERROR_PHE | \
+				 DPDMUX_ERROR_ISP | \
+				 DPDMUX_ERROR_PTE | \
+				 DPDMUX_ERROR_FPE | \
+				 DPDMUX_ERROR_FLE | \
+				 DPDMUX_ERROR_PIEE | \
+				 DPDMUX_ERROR_TIDE | \
+				 DPDMUX_ERROR_MNLE | \
+				 DPDMUX_ERROR_EOFHE | \
+				 DPDMUX_ERROR_KSE)
+
 #define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
 				 DPDMUX__ERROR_L4CV | \
 				 DPDMUX__ERROR_L3CE | \
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v4 42/42] net/dpaa2: dpdmux single flow/multiple rules support
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (40 preceding siblings ...)
  2024-10-22 19:12         ` [v4 41/42] net/dpaa2: add support for C-VLAN and MAC vanshika.shukla
@ 2024-10-22 19:12         ` vanshika.shukla
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-22 19:12 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support multiple extractions as well as hardware descriptions
instead of hard-coded values.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h     |   1 +
 drivers/net/dpaa2/dpaa2_flow.c       |  22 --
 drivers/net/dpaa2/dpaa2_mux.c        | 393 ++++++++++++++++-----------
 drivers/net/dpaa2/dpaa2_parse_dump.h |   2 +
 drivers/net/dpaa2/rte_pmd_dpaa2.h    |   8 +-
 5 files changed, 246 insertions(+), 180 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index fd6bad7f74..fd3119247a 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -198,6 +198,7 @@ enum dpaa2_rx_faf_offset {
 	FAF_IPV4_FRAM = 34 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IPV6_FRAM = 42 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IP_FRAM = 48 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IP_FRAG_FRAM = 50 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_ICMP_FRAM = 57 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IGMP_FRAM = 58 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_GRE_FRAM = 65 + DPAA2_FAFE_PSR_SIZE * 8,
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index fb635815aa..1ec2b83b7d 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -98,13 +98,6 @@ enum rte_flow_action_type dpaa2_supported_action_type[] = {
 	RTE_FLOW_ACTION_TYPE_RSS
 };
 
-static const
-enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
-	RTE_FLOW_ACTION_TYPE_QUEUE,
-	RTE_FLOW_ACTION_TYPE_PORT_ID,
-	RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
-};
-
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
 	.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
@@ -4079,21 +4072,6 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
-static inline int
-dpaa2_fs_action_supported(enum rte_flow_action_type action)
-{
-	int i;
-	int action_num = sizeof(dpaa2_supported_fs_action_type) /
-		sizeof(enum rte_flow_action_type);
-
-	for (i = 0; i < action_num; i++) {
-		if (action == dpaa2_supported_fs_action_type[i])
-			return true;
-	}
-
-	return false;
-}
-
 static inline int
 dpaa2_flow_verify_attr(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr)
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 6e10739dd3..a6d35f2fcb 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -32,8 +32,9 @@ struct dpaa2_dpdmux_dev {
 	uint8_t num_ifs;   /* Number of interfaces in DPDMUX */
 };
 
-struct rte_flow {
-	struct dpdmux_rule_cfg rule;
+#define DPAA2_MUX_FLOW_MAX_RULE_NUM 8
+struct dpaa2_mux_flow {
+	struct dpdmux_rule_cfg rule[DPAA2_MUX_FLOW_MAX_RULE_NUM];
 };
 
 TAILQ_HEAD(dpdmux_dev_list, dpaa2_dpdmux_dev);
@@ -53,204 +54,287 @@ static struct dpaa2_dpdmux_dev *get_dpdmux_from_id(uint32_t dpdmux_id)
 	return dpdmux_dev;
 }
 
-struct rte_flow *
+int
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
-			      struct rte_flow_item *pattern[],
-			      struct rte_flow_action *actions[])
+	struct rte_flow_item pattern[],
+	struct rte_flow_action actions[])
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev;
+	static struct dpkg_profile_cfg s_kg_cfg;
 	struct dpkg_profile_cfg kg_cfg;
 	const struct rte_flow_action_vf *vf_conf;
 	struct dpdmux_cls_action dpdmux_action;
-	struct rte_flow *flow = NULL;
-	void *key_iova, *mask_iova, *key_cfg_iova = NULL;
+	uint8_t *key_va = NULL, *mask_va = NULL;
+	void *key_cfg_va = NULL;
+	uint64_t key_iova, mask_iova, key_cfg_iova;
 	uint8_t key_size = 0;
-	int ret;
-	static int i;
+	int ret = 0, loop = 0;
+	static int s_i;
+	struct dpkg_extract *extract;
+	struct dpdmux_rule_cfg rule;
 
-	if (!pattern || !actions || !pattern[0] || !actions[0])
-		return NULL;
+	memset(&kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
 
 	/* Find the DPDMUX from dpdmux_id in our list */
 	dpdmux_dev = get_dpdmux_from_id(dpdmux_id);
 	if (!dpdmux_dev) {
 		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
-		return NULL;
+		ret = -ENODEV;
+		goto creation_error;
 	}
 
-	key_cfg_iova = rte_zmalloc(NULL, DIST_PARAM_IOVA_SIZE,
-				   RTE_CACHE_LINE_SIZE);
-	if (!key_cfg_iova) {
-		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
-		return NULL;
+	key_cfg_va = rte_zmalloc(NULL, DIST_PARAM_IOVA_SIZE,
+				RTE_CACHE_LINE_SIZE);
+	if (!key_cfg_va) {
+		DPAA2_PMD_ERR("Unable to allocate key configure buffer");
+		ret = -ENOMEM;
+		goto creation_error;
+	}
+
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_va,
+		DIST_PARAM_IOVA_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_va);
+		ret = -ENOBUFS;
+		goto creation_error;
 	}
-	flow = rte_zmalloc(NULL, sizeof(struct rte_flow) +
-			   (2 * DIST_PARAM_IOVA_SIZE), RTE_CACHE_LINE_SIZE);
-	if (!flow) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+
+	key_va = rte_zmalloc(NULL, (2 * DIST_PARAM_IOVA_SIZE),
+		RTE_CACHE_LINE_SIZE);
+	if (!key_va) {
+		DPAA2_PMD_ERR("Unable to allocate flow dist parameter");
+		ret = -ENOMEM;
 		goto creation_error;
 	}
-	key_iova = (void *)((size_t)flow + sizeof(struct rte_flow));
-	mask_iova = (void *)((size_t)key_iova + DIST_PARAM_IOVA_SIZE);
+
+	key_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_va,
+		(2 * DIST_PARAM_IOVA_SIZE));
+	if (key_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU mapping for address(%p)",
+			__func__, key_va);
+		ret = -ENOBUFS;
+		goto creation_error;
+	}
+
+	mask_va = key_va + DIST_PARAM_IOVA_SIZE;
+	mask_iova = key_iova + DIST_PARAM_IOVA_SIZE;
 
 	/* Currently taking only IP protocol as an extract type.
 	 * This can be extended to other fields using pattern->type.
 	 */
 	memset(&kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
 
-	switch (pattern[0]->type) {
-	case RTE_FLOW_ITEM_TYPE_IPV4:
-	{
-		const struct rte_flow_item_ipv4 *spec;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_IP;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_IP_PROTO;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_ipv4 *)pattern[0]->spec;
-		memcpy(key_iova, (const void *)(&spec->hdr.next_proto_id),
-			sizeof(uint8_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint8_t));
-		key_size = sizeof(uint8_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_VLAN:
-	{
-		const struct rte_flow_item_vlan *spec;
-
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_VLAN;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_VLAN_TCI;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FROM_FIELD;
-		kg_cfg.extracts[0].extract.from_hdr.offset = 1;
-		kg_cfg.extracts[0].extract.from_hdr.size = 1;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_vlan *)pattern[0]->spec;
-		memcpy((void *)key_iova, (const void *)(&spec->hdr.vlan_tci),
-			sizeof(uint16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_UDP:
-	{
-		const struct rte_flow_item_udp *spec;
-		uint16_t udp_dst_port;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_UDP;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_udp *)pattern[0]->spec;
-		udp_dst_port = rte_constant_bswap16(spec->hdr.dst_port);
-		memcpy((void *)key_iova, (const void *)&udp_dst_port,
-							sizeof(rte_be16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_ETH:
-	{
-		const struct rte_flow_item_eth *spec;
-		uint16_t eth_type;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_ETH;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_ETH_TYPE;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_eth *)pattern[0]->spec;
-		eth_type = rte_constant_bswap16(spec->hdr.ether_type);
-		memcpy((void *)key_iova, (const void *)&eth_type,
-							sizeof(rte_be16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_RAW:
-	{
-		const struct rte_flow_item_raw *spec;
-
-		spec = (const struct rte_flow_item_raw *)pattern[0]->spec;
-		kg_cfg.extracts[0].extract.from_data.offset = spec->offset;
-		kg_cfg.extracts[0].extract.from_data.size = spec->length;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_DATA;
-		kg_cfg.num_extracts = 1;
-		memcpy((void *)key_iova, (const void *)spec->pattern,
-							spec->length);
-		memcpy(mask_iova, pattern[0]->mask, spec->length);
-
-		key_size = spec->length;
-	}
-	break;
+	while (pattern[loop].type != RTE_FLOW_ITEM_TYPE_END) {
+		if (kg_cfg.num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+			DPAA2_PMD_ERR("Too many extracts(%d)",
+				kg_cfg.num_extracts);
+			ret = -ENOTSUP;
+			goto creation_error;
+		}
+		switch (pattern[loop].type) {
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+		{
+			const struct rte_flow_item_ipv4 *spec;
+			const struct rte_flow_item_ipv4 *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_IP;
+			extract->extract.from_hdr.field = NH_FLD_IP_PROTO;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->hdr.next_proto_id, sizeof(uint8_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->hdr.next_proto_id,
+					sizeof(uint8_t));
+			} else {
+				mask_va[key_size] = 0xff;
+			}
+			key_size += sizeof(uint8_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+		{
+			const struct rte_flow_item_vlan *spec;
+			const struct rte_flow_item_vlan *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_VLAN;
+			extract->extract.from_hdr.field = NH_FLD_VLAN_TCI;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->tci, sizeof(uint16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->tci, sizeof(uint16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(uint16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_UDP:
+		{
+			const struct rte_flow_item_udp *spec;
+			const struct rte_flow_item_udp *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_UDP;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+			extract->extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->hdr.dst_port, sizeof(rte_be16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->hdr.dst_port,
+					sizeof(rte_be16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(rte_be16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_ETH:
+		{
+			const struct rte_flow_item_eth *spec;
+			const struct rte_flow_item_eth *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_ETH;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+			extract->extract.from_hdr.field = NH_FLD_ETH_TYPE;
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->type, sizeof(rte_be16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->type, sizeof(rte_be16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(rte_be16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_RAW:
+		{
+			const struct rte_flow_item_raw *spec;
+			const struct rte_flow_item_raw *mask;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_DATA;
+			extract->extract.from_data.offset = spec->offset;
+			extract->extract.from_data.size = spec->length;
+			kg_cfg.num_extracts++;
+
+			rte_memcpy(&key_va[key_size],
+				spec->pattern, spec->length);
+			if (mask && mask->pattern) {
+				rte_memcpy(&mask_va[key_size],
+					mask->pattern, spec->length);
+			} else {
+				memset(&mask_va[key_size], 0xff, spec->length);
+			}
+
+			key_size += spec->length;
+		}
+		break;
 
-	default:
-		DPAA2_PMD_ERR("Not supported pattern type: %d",
-				pattern[0]->type);
-		goto creation_error;
+		default:
+			DPAA2_PMD_ERR("Not supported pattern[%d] type: %d",
+				loop, pattern[loop].type);
+			ret = -ENOTSUP;
+			goto creation_error;
+		}
+		loop++;
 	}
 
-	ret = dpkg_prepare_key_cfg(&kg_cfg, key_cfg_iova);
+	ret = dpkg_prepare_key_cfg(&kg_cfg, key_cfg_va);
 	if (ret) {
 		DPAA2_PMD_ERR("dpkg_prepare_key_cfg failed: err(%d)", ret);
 		goto creation_error;
 	}
 
-	/* Multiple rules with same DPKG extracts (kg_cfg.extracts) like same
-	 * offset and length values in raw is supported right now. Different
-	 * values of kg_cfg may not work.
-	 */
-	if (i == 0) {
-		ret = dpdmux_set_custom_key(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-					    dpdmux_dev->token,
-				(uint64_t)(DPAA2_VADDR_TO_IOVA(key_cfg_iova)));
+	if (!s_i) {
+		ret = dpdmux_set_custom_key(&dpdmux_dev->dpdmux,
+				CMD_PRI_LOW, dpdmux_dev->token, key_cfg_iova);
 		if (ret) {
 			DPAA2_PMD_ERR("dpdmux_set_custom_key failed: err(%d)",
-					ret);
+				ret);
+			goto creation_error;
+		}
+		rte_memcpy(&s_kg_cfg, &kg_cfg, sizeof(struct dpkg_profile_cfg));
+	} else {
+		if (memcmp(&s_kg_cfg, &kg_cfg,
+			sizeof(struct dpkg_profile_cfg))) {
+			DPAA2_PMD_ERR("%s: Single flow support only.",
+				__func__);
+			ret = -ENOTSUP;
 			goto creation_error;
 		}
 	}
-	/* As now our key extract parameters are set, let us configure
-	 * the rule.
-	 */
-	flow->rule.key_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(key_iova));
-	flow->rule.mask_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(mask_iova));
-	flow->rule.key_size = key_size;
-	flow->rule.entry_index = i++;
 
-	vf_conf = (const struct rte_flow_action_vf *)(actions[0]->conf);
+	vf_conf = actions[0].conf;
 	if (vf_conf->id == 0 || vf_conf->id > dpdmux_dev->num_ifs) {
-		DPAA2_PMD_ERR("Invalid destination id");
+		DPAA2_PMD_ERR("Invalid destination id(%d)", vf_conf->id);
 		goto creation_error;
 	}
 	dpdmux_action.dest_if = vf_conf->id;
 
-	ret = dpdmux_add_custom_cls_entry(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-					  dpdmux_dev->token, &flow->rule,
-					  &dpdmux_action);
+	rule.key_iova = key_iova;
+	rule.mask_iova = mask_iova;
+	rule.key_size = key_size;
+	rule.entry_index = s_i;
+	s_i++;
+
+	/* As now our key extract parameters are set, let us configure
+	 * the rule.
+	 */
+	ret = dpdmux_add_custom_cls_entry(&dpdmux_dev->dpdmux,
+			CMD_PRI_LOW, dpdmux_dev->token,
+			&rule, &dpdmux_action);
 	if (ret) {
-		DPAA2_PMD_ERR("dpdmux_add_custom_cls_entry failed: err(%d)",
-			      ret);
+		DPAA2_PMD_ERR("Add classification entry failed:err(%d)", ret);
 		goto creation_error;
 	}
 
-	return flow;
-
 creation_error:
-	rte_free((void *)key_cfg_iova);
-	rte_free((void *)flow);
-	return NULL;
+	if (key_cfg_va)
+		rte_free(key_cfg_va);
+	if (key_va)
+		rte_free(key_va);
+
+	return ret;
 }
 
 int
@@ -407,10 +491,11 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	PMD_INIT_FUNC_TRACE();
 
 	/* Allocate DPAA2 dpdmux handle */
-	dpdmux_dev = rte_malloc(NULL, sizeof(struct dpaa2_dpdmux_dev), 0);
+	dpdmux_dev = rte_zmalloc(NULL,
+		sizeof(struct dpaa2_dpdmux_dev), RTE_CACHE_LINE_SIZE);
 	if (!dpdmux_dev) {
 		DPAA2_PMD_ERR("Memory allocation failed for DPDMUX Device");
-		return -1;
+		return -ENOMEM;
 	}
 
 	/* Open the dpdmux object */
diff --git a/drivers/net/dpaa2/dpaa2_parse_dump.h b/drivers/net/dpaa2/dpaa2_parse_dump.h
index f1cdc003de..78fd3b768c 100644
--- a/drivers/net/dpaa2/dpaa2_parse_dump.h
+++ b/drivers/net/dpaa2/dpaa2_parse_dump.h
@@ -105,6 +105,8 @@ dpaa2_print_faf(struct dpaa2_fapr_array *fapr)
 			faf_bits[i].name = "IPv4 1 Present";
 		else if (i == FAF_IPV6_FRAM)
 			faf_bits[i].name = "IPv6 1 Present";
+		else if (i == FAF_IP_FRAG_FRAM)
+			faf_bits[i].name = "IP fragment Present";
 		else if (i == FAF_UDP_FRAM)
 			faf_bits[i].name = "UDP Present";
 		else if (i == FAF_TCP_FRAM)
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index f93af1c65f..237c3cd6e7 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -26,12 +26,12 @@
  *    Associated actions.
  *
  * @return
- *    A valid handle in case of success, NULL otherwise.
+ *    0 in case of success, a negative error code otherwise.
  */
-struct rte_flow *
+int
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
-			      struct rte_flow_item *pattern[],
-			      struct rte_flow_action *actions[]);
+	struct rte_flow_item pattern[],
+	struct rte_flow_action actions[]);
 int
 rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
 	uint16_t entry_index);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
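As a usage sketch of the reworked API (the dpdmux id, destination interface
and UDP port below are hypothetical, not taken from the patch), a caller now
passes value arrays terminated by RTE_FLOW_ITEM_TYPE_END and receives an
error code instead of a flow handle:

#include <rte_byteorder.h>
#include <rte_flow.h>
#include <rte_pmd_dpaa2.h>

/* Steer UDP packets with destination port 4789 to dpdmux interface 1. */
static int
add_udp_rule(void)
{
	struct rte_flow_item_udp udp_spec = {
		.hdr.dst_port = RTE_BE16(4789),
	};
	struct rte_flow_item_udp udp_mask = {
		.hdr.dst_port = RTE_BE16(0xffff),
	};
	struct rte_flow_item pattern[] = {
		{
			.type = RTE_FLOW_ITEM_TYPE_UDP,
			.spec = &udp_spec,
			.mask = &udp_mask,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_vf vf = { .id = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_VF, .conf = &vf },
	};

	return rte_pmd_dpaa2_mux_flow_create(0, pattern, actions);
}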

* Re: [v4 23/42] net/dpaa2: flow API refactor
  2024-10-22 19:12         ` [v4 23/42] net/dpaa2: flow API refactor vanshika.shukla
@ 2024-10-23  0:52           ` Stephen Hemminger
  2024-10-23 12:04             ` [EXT] " Vanshika Shukla
  0 siblings, 1 reply; 229+ messages in thread
From: Stephen Hemminger @ 2024-10-23  0:52 UTC (permalink / raw)
  To: vanshika.shukla; +Cc: dev, Hemant Agrawal, Sachin Saxena, Jun Yang

On Wed, 23 Oct 2024 00:42:36 +0530
vanshika.shukla@nxp.com wrote:

> From: Jun Yang <jun.yang@nxp.com>
> 
> 1) Gather redundant code with the same logic from various protocol
>    handlers into common functions.
> 2) struct dpaa2_key_profile is used to describe each extract's
>    offset within the rule and its size. This makes it easy to insert
>    a new extract before the IP address extract.
> 3) IP address profile is used to describe ipv4/v6 address extracts
>    located at the end of the rule.
> 4) L4 ports profile is used to describe the port positions and
>    offsets within the rule.
> 5) Once the extracts of the QoS/FS table are updated, go through all
>    the existing flows of this table to update the rule data.
> 
> Signed-off-by: Jun Yang <jun.yang@nxp.com>

Before, it looked possible to dump flow info to a file; now it only
goes to stdout. Is that ok?

^ permalink raw reply	[flat|nested] 229+ messages in thread

* Re: [v4 16/42] bus/fslmc: dynamic IOVA mode configuration
  2024-10-22 19:12         ` [v4 16/42] bus/fslmc: dynamic IOVA mode configuration vanshika.shukla
@ 2024-10-23  1:02           ` Stephen Hemminger
  0 siblings, 0 replies; 229+ messages in thread
From: Stephen Hemminger @ 2024-10-23  1:02 UTC (permalink / raw)
  To: vanshika.shukla
  Cc: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh, Jun Yang

On Wed, 23 Oct 2024 00:42:29 +0530
vanshika.shukla@nxp.com wrote:

> +		if (mem_rsp->mp_param.result == SOCKET_OK) {
> +			rte_memcpy(&fslmc_memsegs,
> +				&mem_rsp->memsegs,
> +				sizeof(struct fslmc_dmaseg_list));
> +			rte_memcpy(&fslmc_memsegs,
> +				&mem_rsp->memsegs,
> +				sizeof(struct fslmc_dmaseg_list));

Why are you using rte_memcpy() instead of a structure assignment?
Using memcpy loses type safety.

^ permalink raw reply	[flat|nested] 229+ messages in thread
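To illustrate the reviewer's point, a small sketch (the struct members here
are hypothetical stand-ins for the quoted code): a structure assignment
performs the same copy but lets the compiler verify the types, whereas
rte_memcpy() on raw bytes would silently accept a mismatched source.

#include <stdint.h>

struct fslmc_dmaseg_list {
	uint64_t vaddr;
	uint64_t iova;
	uint64_t size;
};

static struct fslmc_dmaseg_list fslmc_memsegs;

/* Type-checked copy: this line fails to compile if the types diverge. */
static void
store_memsegs(const struct fslmc_dmaseg_list *from_peer)
{
	fslmc_memsegs = *from_peer;
}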

* [v5 00/42] DPAA2 specific patches
  2024-10-22 19:12       ` [v4 00/42] DPAA2 specific patches vanshika.shukla
                           ` (41 preceding siblings ...)
  2024-10-22 19:12         ` [v4 42/42] net/dpaa2: dpdmux single flow/multiple rules support vanshika.shukla
@ 2024-10-23 11:59         ` vanshika.shukla
  2024-10-23 11:59           ` [v5 01/42] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
                             ` (42 more replies)
  42 siblings, 43 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This series includes:
-> Fixes and enhancements for NXP DPAA2 drivers.
-> Upgrade with MC version 10.37
-> Enhancements in DPDMUX code
-> Fixes for coverity issues reported

V2 changes:
Fixed the broken compilation for clang in:
        "net/dpaa2: dpdmux single flow/multiple rules support" patch.
Fixed checkpatch warnings in the below patches:
        "net/dpaa2: protocol inside tunnel distribution"
        "net/dpaa2: add VXLAN distribution support"
        "bus/fslmc: dynamic IOVA mode configuration"
        "bus/fslmc: enhance MC VFIO multiprocess support"

V3 changes:
Rebased to the latest commit.

V4 changes:
Fixed the checkpatch warnings in:
        "bus/fslmc: get MC VFIO group FD directly"
        "bus/fslmc: dynamic IOVA mode configuration"
        "net/dpaa2: add GTP flow support"
        "net/dpaa2: add flow support for IPsec AH and ESP
        "bus/fslmc: enhance MC VFIO multiprocess support"
Resolved comments by the reviewer.

V5 changes:
Resolved comments by the reviewer in:
	"bus/fslmc: dynamic IOVA mode configuration"

Apeksha Gupta (2):
  net/dpaa2: add proper MTU debugging print
  net/dpaa2: store drop priority in mbuf

Brick Yang (1):
  net/dpaa2: update DPNI link status method

Gagandeep Singh (3):
  bus/fslmc: upgrade with MC version 10.37
  net/dpaa2: fix memory corruption in TM
  net/dpaa2: support software taildrop

Hemant Agrawal (2):
  net/dpaa2: add support to dump dpdmux counters
  bus/fslmc: change dpcon close as internal symbol

Jun Yang (23):
  net/dpaa2: enhance Tx scatter-gather mempool
  net/dpaa2: add new PMD API to check dpaa platform version
  bus/fslmc: improve BMAN buffer acquire
  bus/fslmc: get MC VFIO group FD directly
  bus/fslmc: enhance MC VFIO multiprocess support
  bus/fslmc: dynamic IOVA mode configuration
  bus/fslmc: remove VFIO IRQ mapping
  bus/fslmc: create dpaa2 device with it's object
  bus/fslmc: introduce VFIO DMA mapping API for fslmc
  net/dpaa2: flow API refactor
  net/dpaa2: dump Rx parser result
  net/dpaa2: enhancement of raw flow extract
  net/dpaa2: frame attribute flags parser
  net/dpaa2: add VXLAN distribution support
  net/dpaa2: protocol inside tunnel distribution
  net/dpaa2: eCPRI support by parser result
  net/dpaa2: add GTP flow support
  net/dpaa2: check if Soft parser is loaded
  net/dpaa2: soft parser flow verification
  net/dpaa2: add flow support for IPsec AH and ESP
  net/dpaa2: check IOVA before sending MC command
  net/dpaa2: add API to get endpoint name
  net/dpaa2: dpdmux single flow/multiple rules support

Rohit Raj (6):
  bus/fslmc: add close API to close DPAA2 device
  net/dpaa2: support link state for eth interfaces
  bus/fslmc: free VFIO group FD in case of add group failure
  bus/fslmc: fix coverity issue
  bus/fslmc: change qbman eq desc from d to desc
  net/dpaa2: change miss flow ID macro name

Sachin Saxena (1):
  net/dpaa2: improve DPDMUX error behavior settings

Vanshika Shukla (4):
  net/dpaa2: support PTP packet one-step timestamp
  net/dpaa2: dpdmux: add support for CVLAN
  net/dpaa2: support VLAN traffic splitting
  net/dpaa2: add support for C-VLAN and MAC

 doc/guides/platform/dpaa2.rst                 |    4 +-
 drivers/bus/fslmc/bus_fslmc_driver.h          |   72 +-
 drivers/bus/fslmc/fslmc_bus.c                 |   62 +-
 drivers/bus/fslmc/fslmc_vfio.c                | 1621 +++-
 drivers/bus/fslmc/fslmc_vfio.h                |   35 +-
 drivers/bus/fslmc/mc/dpio.c                   |   94 +-
 drivers/bus/fslmc/mc/fsl_dpcon.h              |    6 +-
 drivers/bus/fslmc/mc/fsl_dpio.h               |   21 +-
 drivers/bus/fslmc/mc/fsl_dpio_cmd.h           |   13 +-
 drivers/bus/fslmc/mc/fsl_dpmng.h              |    4 +-
 drivers/bus/fslmc/mc/fsl_dprc_cmd.h           |    8 +-
 drivers/bus/fslmc/meson.build                 |    3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |   38 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |   38 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |   50 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h      |    3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dprc.c      |    8 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |  114 +-
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |   12 +-
 drivers/bus/fslmc/qbman/qbman_debug.c         |   49 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        |   30 +-
 drivers/bus/fslmc/version.map                 |   16 +-
 drivers/crypto/dpaa2_sec/mc/dpseci.c          |   91 +-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h      |   47 +-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h  |   19 +-
 drivers/dma/dpaa2/dpaa2_qdma.c                |    1 +
 drivers/event/dpaa2/dpaa2_hw_dpcon.c          |   38 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |    2 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |   63 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  597 +-
 drivers/net/dpaa2/dpaa2_ethdev.h              |  225 +-
 drivers/net/dpaa2/dpaa2_flow.c                | 7066 ++++++++++-------
 drivers/net/dpaa2/dpaa2_mux.c                 |  541 +-
 drivers/net/dpaa2/dpaa2_parse_dump.h          |  250 +
 drivers/net/dpaa2/dpaa2_ptp.c                 |    8 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |   32 +-
 drivers/net/dpaa2/dpaa2_sparser.c             |   25 +-
 drivers/net/dpaa2/dpaa2_tm.c                  |   72 +-
 drivers/net/dpaa2/mc/dpdmux.c                 |  205 +-
 drivers/net/dpaa2/mc/dpkg.c                   |   12 +-
 drivers/net/dpaa2/mc/dpni.c                   |  383 +-
 drivers/net/dpaa2/mc/fsl_dpdmux.h             |   99 +-
 drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h         |   83 +-
 drivers/net/dpaa2/mc/fsl_dpkg.h               |    7 +-
 drivers/net/dpaa2/mc/fsl_dpni.h               |  176 +-
 drivers/net/dpaa2/mc/fsl_dpni_cmd.h           |  125 +-
 drivers/net/dpaa2/rte_pmd_dpaa2.h             |   51 +-
 drivers/net/dpaa2/version.map                 |    6 +
 48 files changed, 8271 insertions(+), 4254 deletions(-)
 create mode 100644 drivers/net/dpaa2/dpaa2_parse_dump.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 01/42] net/dpaa2: enhance Tx scatter-gather mempool
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 02/42] net/dpaa2: support PTP packet one-step timestamp vanshika.shukla
                             ` (41 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Create the Tx SG pool only in the primary process and look up
this pool in secondary processes.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 46 +++++++++++++++++++++++---------
 1 file changed, 33 insertions(+), 13 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 7b3e587a8d..4b93606de1 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2870,6 +2870,35 @@ int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+static int dpaa2_tx_sg_pool_init(void)
+{
+	char name[RTE_MEMZONE_NAMESIZE];
+
+	if (dpaa2_tx_sg_pool)
+		return 0;
+
+	sprintf(name, "dpaa2_mbuf_tx_sg_pool");
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		dpaa2_tx_sg_pool = rte_pktmbuf_pool_create(name,
+			DPAA2_POOL_SIZE,
+			DPAA2_POOL_CACHE_SIZE, 0,
+			DPAA2_MAX_SGS * sizeof(struct qbman_sge),
+			rte_socket_id());
+		if (!dpaa2_tx_sg_pool) {
+			DPAA2_PMD_ERR("SG pool creation failed");
+			return -ENOMEM;
+		}
+	} else {
+		dpaa2_tx_sg_pool = rte_mempool_lookup(name);
+		if (!dpaa2_tx_sg_pool) {
+			DPAA2_PMD_ERR("SG pool lookup failed");
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
 static int
 rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 		struct rte_dpaa2_device *dpaa2_dev)
@@ -2924,19 +2953,10 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 
 	/* Invoke PMD device initialization function */
 	diag = dpaa2_dev_init(eth_dev);
-	if (diag == 0) {
-		if (!dpaa2_tx_sg_pool) {
-			dpaa2_tx_sg_pool =
-				rte_pktmbuf_pool_create("dpaa2_mbuf_tx_sg_pool",
-				DPAA2_POOL_SIZE,
-				DPAA2_POOL_CACHE_SIZE, 0,
-				DPAA2_MAX_SGS * sizeof(struct qbman_sge),
-				rte_socket_id());
-			if (dpaa2_tx_sg_pool == NULL) {
-				DPAA2_PMD_ERR("SG pool creation failed");
-				return -ENOMEM;
-			}
-		}
+	if (!diag) {
+		diag = dpaa2_tx_sg_pool_init();
+		if (diag)
+			return diag;
 		rte_eth_dev_probing_finish(eth_dev);
 		dpaa2_valid_dev++;
 		return 0;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 02/42] net/dpaa2: support PTP packet one-step timestamp
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
  2024-10-23 11:59           ` [v5 01/42] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 03/42] net/dpaa2: add proper MTU debugging print vanshika.shukla
                             ` (40 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds PTP one-step timestamping support.
The dpni_set_single_step_cfg() MC API is used, with the provided
offset, to insert the correction time into the frame.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 61 +++++++++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h  |  3 ++
 drivers/net/dpaa2/rte_pmd_dpaa2.h | 10 +++++
 drivers/net/dpaa2/version.map     |  3 ++
 4 files changed, 77 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 4b93606de1..051ebd9d8e 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -548,6 +548,9 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	int tx_l4_csum_offload = false;
 	int ret, tc_index;
 	uint32_t max_rx_pktlen;
+#if defined(RTE_LIBRTE_IEEE1588)
+	uint16_t ptp_correction_offset;
+#endif
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -632,6 +635,11 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		dpaa2_enable_ts[dev->data->port_id] = true;
 	}
 
+#if defined(RTE_LIBRTE_IEEE1588)
+	/* By default, set the PTP correction offset for Ethernet SYNC packets */
+	ptp_correction_offset = RTE_ETHER_HDR_LEN + 8;
+	rte_pmd_dpaa2_set_one_step_ts(dev->data->port_id, ptp_correction_offset, 0);
+#endif
 	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		tx_l3_csum_offload = true;
 
@@ -2870,6 +2878,59 @@ int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+#if defined(RTE_LIBRTE_IEEE1588)
+int
+rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpni = priv->eth_dev->process_private;
+	struct dpni_single_step_cfg ptp_cfg;
+	int err;
+
+	if (!mc_query)
+		return priv->ptp_correction_offset;
+
+	err = dpni_get_single_step_cfg(dpni, CMD_PRI_LOW, priv->token, &ptp_cfg);
+	if (err) {
+		DPAA2_PMD_ERR("Failed to retrieve onestep configuration");
+		return err;
+	}
+
+	if (!ptp_cfg.ptp_onestep_reg_base) {
+		DPAA2_PMD_ERR("1588 onestep reg not available");
+		return -1;
+	}
+
+	priv->ptp_correction_offset = ptp_cfg.offset;
+
+	return priv->ptp_correction_offset;
+}
+
+int
+rte_pmd_dpaa2_set_one_step_ts(uint16_t port_id, uint16_t offset, uint8_t ch_update)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	struct fsl_mc_io *dpni = dev->process_private;
+	struct dpni_single_step_cfg cfg;
+	int err;
+
+	cfg.en = 1;
+	cfg.ch_update = ch_update;
+	cfg.offset = offset;
+	cfg.peer_delay = 0;
+
+	err = dpni_set_single_step_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
+	if (err)
+		return err;
+
+	priv->ptp_correction_offset = offset;
+
+	return 0;
+}
+#endif
+
 static int dpaa2_tx_sg_pool_init(void)
 {
 	char name[RTE_MEMZONE_NAMESIZE];
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 9feb631d5f..6625afaba3 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -230,6 +230,9 @@ struct dpaa2_dev_priv {
 	rte_spinlock_t lpbk_qp_lock;
 
 	uint8_t channel_inuse;
+	/* Stores correction offset for one step timestamping */
+	uint16_t ptp_correction_offset;
+
 	LIST_HEAD(, rte_flow) flows; /**< Configured flow rule handles. */
 	LIST_HEAD(nodes, dpaa2_tm_node) nodes;
 	LIST_HEAD(shaper_profiles, dpaa2_tm_shaper_profile) shaper_profiles;
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index a1152eb717..aea9bae905 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -102,4 +102,14 @@ rte_pmd_dpaa2_thread_init(void);
 __rte_experimental
 uint32_t
 rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
+
+#if defined(RTE_LIBRTE_IEEE1588)
+__rte_experimental
+int
+rte_pmd_dpaa2_set_one_step_ts(uint16_t port_id, uint16_t offset, uint8_t ch_update);
+
+__rte_experimental
+int
+rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query);
+#endif
 #endif /* _RTE_PMD_DPAA2_H */
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index ba756d26bd..2d95303e27 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -16,6 +16,9 @@ EXPERIMENTAL {
 	rte_pmd_dpaa2_thread_init;
 	# added in 21.11
 	rte_pmd_dpaa2_get_tlu_hash;
+	# added in 24.11
+	rte_pmd_dpaa2_set_one_step_ts;
+	rte_pmd_dpaa2_get_one_step_ts;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
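As a usage sketch (assuming the PMD is built with RTE_LIBRTE_IEEE1588; the
port id is hypothetical): for untagged L2 PTP, the PTPv2 correctionField
sits 8 bytes into the PTP header, i.e. RTE_ETHER_HDR_LEN + 8 from the start
of the frame, which matches the default the driver programs above.

#include <stdbool.h>
#include <rte_ether.h>
#include <rte_pmd_dpaa2.h>

static int
enable_one_step_ts(void)
{
	uint16_t offset = RTE_ETHER_HDR_LEN + 8;
	int ret;

	/* ch_update is left at 0, as in the PMD's default configuration. */
	ret = rte_pmd_dpaa2_set_one_step_ts(0, offset, 0);
	if (ret)
		return ret;

	/* Read back the offset cached in the PMD (mc_query = false). */
	return rte_pmd_dpaa2_get_one_step_ts(0, false);
}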

* [v5 03/42] net/dpaa2: add proper MTU debugging print
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
  2024-10-23 11:59           ` [v5 01/42] net/dpaa2: enhance Tx scatter-gather mempool vanshika.shukla
  2024-10-23 11:59           ` [v5 02/42] net/dpaa2: support PTP packet one-step timestamp vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 04/42] net/dpaa2: add support to dump dpdmux counters vanshika.shukla
                             ` (39 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Apeksha Gupta, Jun Yang

From: Apeksha Gupta <apeksha.gupta@nxp.com>

This patch adds proper debug info to check the max-pkt-len and the
configured parameters.

It also stores the MTU.

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 051ebd9d8e..ab64df6a59 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -579,9 +579,11 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 			DPAA2_PMD_ERR("Unable to set mtu. check config");
 			return ret;
 		}
-		DPAA2_PMD_INFO("MTU configured for the device: %d",
+		DPAA2_PMD_DEBUG("MTU configured for the device: %d",
 				dev->data->mtu);
 	} else {
+		DPAA2_PMD_ERR("Configured mtu %d and calculated max-pkt-len is %d which should be <= %d",
+			eth_conf->rxmode.mtu, max_rx_pktlen, DPAA2_MAX_RX_PKT_LEN);
 		return -1;
 	}
 
@@ -1537,6 +1539,7 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 		DPAA2_PMD_ERR("Setting the max frame length failed");
 		return -1;
 	}
+	dev->data->mtu = mtu;
 	DPAA2_PMD_INFO("MTU configured for the device: %d", mtu);
 	return 0;
 }
@@ -2839,6 +2842,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		DPAA2_PMD_ERR("Unable to set mtu. check config");
 		goto init_err;
 	}
+	eth_dev->data->mtu = RTE_ETHER_MTU;
 
 	/*TODO To enable soft parser support DPAA2 driver needs to integrate
 	 * with external entity to receive byte code for software sequence
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 04/42] net/dpaa2: add support to dump dpdmux counters
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (2 preceding siblings ...)
  2024-10-23 11:59           ` [v5 03/42] net/dpaa2: add proper MTU debugging print vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 05/42] bus/fslmc: change dpcon close as internal symbol vanshika.shukla
                             ` (38 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Hemant Agrawal <hemant.agrawal@nxp.com>

This patch adds support to dump dpdmux counters, as they are required
to identify the reasons for packet drops in dpdmux.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     | 84 +++++++++++++++++++++++++++++++
 drivers/net/dpaa2/rte_pmd_dpaa2.h | 18 +++++++
 drivers/net/dpaa2/version.map     |  1 +
 3 files changed, 103 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 7dd5a60966..b2ec5337b1 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -259,6 +259,90 @@ rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 	return ret;
 }
 
+/* dump the status of the dpaa2_mux counters on the console */
+void
+rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if)
+{
+	struct dpaa2_dpdmux_dev *dpdmux;
+	uint64_t counter;
+	int ret;
+	int if_id;
+
+	/* Find the DPDMUX from dpdmux_id in our list */
+	dpdmux = get_dpdmux_from_id(dpdmux_id);
+	if (!dpdmux) {
+		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
+		return;
+	}
+
+	for (if_id = 0; if_id < num_if; if_id++) {
+		fprintf(f, "dpdmux.%d\n", if_id);
+
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FRAME, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BYTE, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FLTR_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FLTR_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_FRAME_DISCARD,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_FRAME_DISCARD %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_MCAST_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_MCAST_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_MCAST_BYTE,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_MCAST_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BCAST_FRAME,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BCAST_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_ING_BCAST_BYTES,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_ING_BCAST_BYTES %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_FRAME, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_FRAME %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_BYTE, &counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_BYTE %" PRIu64 "\n",
+				counter);
+		ret = dpdmux_if_get_counter(&dpdmux->dpdmux, CMD_PRI_LOW,
+			dpdmux->token, if_id, DPDMUX_CNT_EGR_FRAME_DISCARD,
+			&counter);
+		if (!ret)
+			fprintf(f, "DPDMUX_CNT_EGR_FRAME_DISCARD %" PRIu64 "\n",
+				counter);
+	}
+}
+
 static int
 dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 			   struct vfio_device_info *obj_info __rte_unused,
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index aea9bae905..fd9acd841b 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -33,6 +33,24 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 			      struct rte_flow_item *pattern[],
 			      struct rte_flow_action *actions[]);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Dump demultiplex ethernet traffic counters
+ *
+ * @param f
+ *    output stream
+ * @param dpdmux_id
+ *    ID of the DPDMUX MC object.
+ * @param num_if
+ *    number of interface in dpdmux object
+ *
+ */
+__rte_experimental
+void
+rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 2d95303e27..7323fc8869 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 	# added in 24.11
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
+	rte_pmd_dpaa2_mux_dump_counter;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
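As a usage sketch (the dpdmux object id and interface count below are
hypothetical), the dump helper simply takes an output stream:

#include <stdio.h>
#include <rte_pmd_dpaa2.h>

/* Dump ingress/egress counters of a 2-interface dpdmux to stdout. */
static void
dump_mux_stats(void)
{
	rte_pmd_dpaa2_mux_dump_counter(stdout, 0, 2);
}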

* [v5 05/42] bus/fslmc: change dpcon close as internal symbol
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (3 preceding siblings ...)
  2024-10-23 11:59           ` [v5 04/42] net/dpaa2: add support to dump dpdmux counters vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 06/42] bus/fslmc: add close API to close DPAA2 device vanshika.shukla
                             ` (37 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena

From: Hemant Agrawal <hemant.agrawal@nxp.com>

This patch marks the dpcon_close API as an internal symbol and
also adds it to the version map file.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/mc/fsl_dpcon.h | 3 ++-
 drivers/bus/fslmc/version.map    | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index db72477c8a..34b30d15c2 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2021 NXP
  *
  */
 #ifndef __FSL_DPCON_H
@@ -28,6 +28,7 @@ int dpcon_open(struct fsl_mc_io *mc_io,
 	       int dpcon_id,
 	       uint16_t *token);
 
+__rte_internal
 int dpcon_close(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index e19b8d1f6b..01e28c6625 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -36,6 +36,7 @@ INTERNAL {
 	dpci_set_rx_queue;
 	dpcon_get_attributes;
 	dpcon_open;
+	dpcon_close;
 	dpdmai_close;
 	dpdmai_disable;
 	dpdmai_enable;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 06/42] bus/fslmc: add close API to close DPAA2 device
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (4 preceding siblings ...)
  2024-10-23 11:59           ` [v5 05/42] bus/fslmc: change dpcon close as internal symbol vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 07/42] net/dpaa2: dpdmux: add support for CVLAN vanshika.shukla
                             ` (36 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Add the rte_fslmc_close API to close all DPAA2 devices when the
DPDK application shuts down.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     |  3 +
 drivers/bus/fslmc/fslmc_bus.c            | 13 ++++
 drivers/bus/fslmc/fslmc_vfio.c           | 87 ++++++++++++++++++++++++
 drivers/bus/fslmc/fslmc_vfio.h           |  3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c | 31 ++++++++-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c | 32 ++++++++-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 34 +++++++++
 drivers/event/dpaa2/dpaa2_hw_dpcon.c     | 32 ++++++++-
 drivers/net/dpaa2/dpaa2_mux.c            | 18 ++++-
 drivers/net/dpaa2/rte_pmd_dpaa2.h        |  5 +-
 10 files changed, 252 insertions(+), 6 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index 3095458133..a3428fe28b 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -98,6 +98,8 @@ typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
 				      struct vfio_device_info *obj_info,
 				      int object_id);
 
+typedef void (*rte_dpaa2_obj_close_t)(int object_id);
+
 /**
  * A structure describing a DPAA2 object.
  */
@@ -106,6 +108,7 @@ struct rte_dpaa2_object {
 	const char *name;                   /**< Name of Object. */
 	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
 	rte_dpaa2_obj_create_t create;
+	rte_dpaa2_obj_close_t close;
 };
 
 /**
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 097d6dca08..97473c278f 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -384,6 +384,18 @@ rte_fslmc_match(struct rte_dpaa2_driver *dpaa2_drv,
 	return 1;
 }
 
+static int
+rte_fslmc_close(void)
+{
+	int ret = 0;
+
+	ret = fslmc_vfio_close_group();
+	if (ret)
+		DPAA2_BUS_ERR("Unable to close devices %d", ret);
+
+	return 0;
+}
+
 static int
 rte_fslmc_probe(void)
 {
@@ -664,6 +676,7 @@ struct rte_fslmc_bus rte_fslmc_bus = {
 	.bus = {
 		.scan = rte_fslmc_scan,
 		.probe = rte_fslmc_probe,
+		.cleanup = rte_fslmc_close,
 		.parse = rte_fslmc_parse,
 		.find_device = rte_fslmc_find_device,
 		.get_iommu_class = rte_dpaa2_get_iommu_class,
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 6981679a2d..ecca593c34 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -702,6 +702,54 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 	return -1;
 }
 
+static void
+fslmc_close_iodevices(struct rte_dpaa2_device *dev)
+{
+	struct rte_dpaa2_object *object = NULL;
+	struct rte_dpaa2_driver *drv;
+	int ret, probe_all;
+
+	switch (dev->dev_type) {
+	case DPAA2_IO:
+	case DPAA2_CON:
+	case DPAA2_CI:
+	case DPAA2_BPOOL:
+	case DPAA2_MUX:
+		TAILQ_FOREACH(object, &dpaa2_obj_list, next) {
+			if (dev->dev_type == object->dev_type)
+				object->close(dev->object_id);
+			else
+				continue;
+		}
+		break;
+	case DPAA2_ETH:
+	case DPAA2_CRYPTO:
+	case DPAA2_QDMA:
+		probe_all = rte_fslmc_bus.bus.conf.scan_mode !=
+			    RTE_BUS_SCAN_ALLOWLIST;
+		TAILQ_FOREACH(drv, &rte_fslmc_bus.driver_list, next) {
+			if (drv->drv_type != dev->dev_type)
+				continue;
+			if (rte_dev_is_probed(&dev->device))
+				continue;
+			if (probe_all ||
+			    (dev->device.devargs &&
+			     dev->device.devargs->policy ==
+			     RTE_DEV_ALLOWED)) {
+				ret = drv->remove(dev);
+				if (ret)
+					DPAA2_BUS_ERR("Unable to remove");
+			}
+		}
+		break;
+	default:
+		break;
+	}
+
+	DPAA2_BUS_LOG(DEBUG, "Device (%s) Closed",
+		      dev->device.name);
+}
+
 /*
  * fslmc_process_iodevices for processing only IO (ETH, CRYPTO, and possibly
  * EVENT) devices.
@@ -807,6 +855,45 @@ fslmc_process_mcp(struct rte_dpaa2_device *dev)
 	return ret;
 }
 
+int
+fslmc_vfio_close_group(void)
+{
+	struct rte_dpaa2_device *dev, *dev_temp;
+
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+		if (dev->device.devargs &&
+		    dev->device.devargs->policy == RTE_DEV_BLOCKED) {
+			DPAA2_BUS_LOG(DEBUG, "%s Blacklisted, skipping",
+				      dev->device.name);
+			TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+			continue;
+		}
+		switch (dev->dev_type) {
+		case DPAA2_ETH:
+		case DPAA2_CRYPTO:
+		case DPAA2_QDMA:
+		case DPAA2_IO:
+			fslmc_close_iodevices(dev);
+			break;
+		case DPAA2_CON:
+		case DPAA2_CI:
+		case DPAA2_BPOOL:
+		case DPAA2_MUX:
+			if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+				continue;
+
+			fslmc_close_iodevices(dev);
+			break;
+		case DPAA2_DPRTC:
+		default:
+			DPAA2_BUS_DEBUG("Device cannot be closed: Not supported (%s)",
+					dev->device.name);
+		}
+	}
+
+	return 0;
+}
+
 int
 fslmc_vfio_process_group(void)
 {
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 133606a9fd..b6677bdd18 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2019 NXP
+ *   Copyright 2016,2019-2020 NXP
  *
  */
 
@@ -55,6 +55,7 @@ int rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 
 int fslmc_vfio_setup_group(void);
 int fslmc_vfio_process_group(void);
+int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
 int fslmc_get_container_group(int *gropuid);
 int rte_fslmc_vfio_dmamap(void);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index d7f6e45b7d..bc36607e64 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016 NXP
+ *   Copyright 2016,2020 NXP
  *
  */
 
@@ -33,6 +33,19 @@ TAILQ_HEAD(dpbp_dev_list, dpaa2_dpbp_dev);
 static struct dpbp_dev_list dpbp_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpbp_dev_list); /*!< DPBP device list */
 
+static struct dpaa2_dpbp_dev *get_dpbp_from_id(uint32_t dpbp_id)
+{
+	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
+
+	/* Get DPBP dev handle from list using index */
+	TAILQ_FOREACH(dpbp_dev, &dpbp_dev_list, next) {
+		if (dpbp_dev->dpbp_id == dpbp_id)
+			break;
+	}
+
+	return dpbp_dev;
+}
+
 static int
 dpaa2_create_dpbp_device(int vdev_fd __rte_unused,
 			 struct vfio_device_info *obj_info __rte_unused,
@@ -116,9 +129,25 @@ int dpaa2_dpbp_supported(void)
 	return 0;
 }
 
+static void
+dpaa2_close_dpbp_device(int object_id)
+{
+	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
+
+	dpbp_dev = get_dpbp_from_id((uint32_t)object_id);
+
+	if (dpbp_dev) {
+		dpaa2_free_dpbp_dev(dpbp_dev);
+		dpbp_close(&dpbp_dev->dpbp, CMD_PRI_LOW, dpbp_dev->token);
+		TAILQ_REMOVE(&dpbp_dev_list, dpbp_dev, next);
+		rte_free(dpbp_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpbp_obj = {
 	.dev_type = DPAA2_BPOOL,
 	.create = dpaa2_create_dpbp_device,
+	.close = dpaa2_close_dpbp_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpbp, rte_dpaa2_dpbp_obj);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index 7e858a113f..99f2147ccb 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017 NXP
+ *   Copyright 2017,2020 NXP
  *
  */
 
@@ -30,6 +30,19 @@ TAILQ_HEAD(dpci_dev_list, dpaa2_dpci_dev);
 static struct dpci_dev_list dpci_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpci_dev_list); /*!< DPCI device list */
 
+static struct dpaa2_dpci_dev *get_dpci_from_id(uint32_t dpci_id)
+{
+	struct dpaa2_dpci_dev *dpci_dev = NULL;
+
+	/* Get DPCI dev handle from list using index */
+	TAILQ_FOREACH(dpci_dev, &dpci_dev_list, next) {
+		if (dpci_dev->dpci_id == dpci_id)
+			break;
+	}
+
+	return dpci_dev;
+}
+
 static int
 rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
 			     struct vfio_device_info *obj_info __rte_unused,
@@ -179,9 +192,26 @@ void rte_dpaa2_free_dpci_dev(struct dpaa2_dpci_dev *dpci)
 	}
 }
 
+
+static void
+rte_dpaa2_close_dpci_device(int object_id)
+{
+	struct dpaa2_dpci_dev *dpci_dev = NULL;
+
+	dpci_dev = get_dpci_from_id((uint32_t)object_id);
+
+	if (dpci_dev) {
+		rte_dpaa2_free_dpci_dev(dpci_dev);
+		dpci_close(&dpci_dev->dpci, CMD_PRI_LOW, dpci_dev->token);
+		TAILQ_REMOVE(&dpci_dev_list, dpci_dev, next);
+		rte_free(dpci_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpci_obj = {
 	.dev_type = DPAA2_CI,
 	.create = rte_dpaa2_create_dpci_device,
+	.close = rte_dpaa2_close_dpci_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpci, rte_dpaa2_dpci_obj);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index d8a98326d9..c3f6e24139 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -86,6 +86,19 @@ static int dpaa2_cluster_sz = 2;
  * Cluster 4 (ID = x07) : CPU14, CPU15;
  */
 
+static struct dpaa2_dpio_dev *get_dpio_dev_from_id(int32_t dpio_id)
+{
+	struct dpaa2_dpio_dev *dpio_dev = NULL;
+
+	/* Get DPIO dev handle from list using index */
+	TAILQ_FOREACH(dpio_dev, &dpio_dev_list, next) {
+		if (dpio_dev->hw_id == dpio_id)
+			break;
+	}
+
+	return dpio_dev;
+}
+
 static int
 dpaa2_get_core_id(void)
 {
@@ -366,6 +379,26 @@ static void dpaa2_portal_finish(void *arg)
 	pthread_setspecific(dpaa2_portal_key, NULL);
 }
 
+static void
+dpaa2_close_dpio_device(int object_id)
+{
+	struct dpaa2_dpio_dev *dpio_dev = NULL;
+
+	dpio_dev = get_dpio_dev_from_id((int32_t)object_id);
+
+	if (dpio_dev) {
+		if (dpio_dev->dpio) {
+			dpio_disable(dpio_dev->dpio, CMD_PRI_LOW,
+				     dpio_dev->token);
+			dpio_close(dpio_dev->dpio, CMD_PRI_LOW,
+				   dpio_dev->token);
+			rte_free(dpio_dev->dpio);
+		}
+		TAILQ_REMOVE(&dpio_dev_list, dpio_dev, next);
+		rte_free(dpio_dev);
+	}
+}
+
 static int
 dpaa2_create_dpio_device(int vdev_fd,
 			 struct vfio_device_info *obj_info,
@@ -643,6 +676,7 @@ dpaa2_free_eq_descriptors(void)
 static struct rte_dpaa2_object rte_dpaa2_dpio_obj = {
 	.dev_type = DPAA2_IO,
 	.create = dpaa2_create_dpio_device,
+	.close = dpaa2_close_dpio_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpio, rte_dpaa2_dpio_obj);
diff --git a/drivers/event/dpaa2/dpaa2_hw_dpcon.c b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
index a68d3ac154..64b0136e24 100644
--- a/drivers/event/dpaa2/dpaa2_hw_dpcon.c
+++ b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017 NXP
+ *   Copyright 2017,2020 NXP
  *
  */
 
@@ -30,6 +30,19 @@ TAILQ_HEAD(dpcon_dev_list, dpaa2_dpcon_dev);
 static struct dpcon_dev_list dpcon_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpcon_dev_list); /*!< DPCON device list */
 
+static struct dpaa2_dpcon_dev *get_dpcon_from_id(uint32_t dpcon_id)
+{
+	struct dpaa2_dpcon_dev *dpcon_dev = NULL;
+
+	/* Get DPCONC dev handle from list using index */
+	TAILQ_FOREACH(dpcon_dev, &dpcon_dev_list, next) {
+		if (dpcon_dev->dpcon_id == dpcon_id)
+			break;
+	}
+
+	return dpcon_dev;
+}
+
 static int
 rte_dpaa2_create_dpcon_device(int dev_fd __rte_unused,
 			      struct vfio_device_info *obj_info __rte_unused,
@@ -105,9 +118,26 @@ void rte_dpaa2_free_dpcon_dev(struct dpaa2_dpcon_dev *dpcon)
 	}
 }
 
+
+static void
+rte_dpaa2_close_dpcon_device(int object_id)
+{
+	struct dpaa2_dpcon_dev *dpcon_dev = NULL;
+
+	dpcon_dev = get_dpcon_from_id((uint32_t)object_id);
+
+	if (dpcon_dev) {
+		rte_dpaa2_free_dpcon_dev(dpcon_dev);
+		dpcon_close(&dpcon_dev->dpcon, CMD_PRI_LOW, dpcon_dev->token);
+		TAILQ_REMOVE(&dpcon_dev_list, dpcon_dev, next);
+		rte_free(dpcon_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpcon_obj = {
 	.dev_type = DPAA2_CON,
 	.create = rte_dpaa2_create_dpcon_device,
+	.close = rte_dpaa2_close_dpcon_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpcon, rte_dpaa2_dpcon_obj);
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index b2ec5337b1..489beb6f27 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -44,7 +44,7 @@ static struct dpaa2_dpdmux_dev *get_dpdmux_from_id(uint32_t dpdmux_id)
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev = NULL;
 
-	/* Get DPBP dev handle from list using index */
+	/* Get DPDMUX dev handle from list using index */
 	TAILQ_FOREACH(dpdmux_dev, &dpdmux_dev_list, next) {
 		if (dpdmux_dev->dpdmux_id == dpdmux_id)
 			break;
@@ -442,9 +442,25 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	return -1;
 }
 
+static void
+dpaa2_close_dpdmux_device(int object_id)
+{
+	struct dpaa2_dpdmux_dev *dpdmux_dev;
+
+	dpdmux_dev = get_dpdmux_from_id((uint32_t)object_id);
+
+	if (dpdmux_dev) {
+		dpdmux_close(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+			     dpdmux_dev->token);
+		TAILQ_REMOVE(&dpdmux_dev_list, dpdmux_dev, next);
+		rte_free(dpdmux_dev);
+	}
+}
+
 static struct rte_dpaa2_object rte_dpaa2_dpdmux_obj = {
 	.dev_type = DPAA2_MUX,
 	.create = dpaa2_create_dpdmux_device,
+	.close = dpaa2_close_dpdmux_device,
 };
 
 RTE_PMD_REGISTER_DPAA2_OBJECT(dpdmux, rte_dpaa2_dpdmux_obj);
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index fd9acd841b..80e5e3298b 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2024 NXP
  */
 
 #ifndef _RTE_PMD_DPAA2_H
@@ -32,6 +32,9 @@ struct rte_flow *
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 			      struct rte_flow_item *pattern[],
 			      struct rte_flow_action *actions[]);
+int
+rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
+	uint16_t entry_index);
 
 /**
  * @warning
-- 
2.25.1
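
All of the per-object close hooks above follow the same registration
shape; a condensed sketch for a hypothetical object type (the dpfoo
names are illustrative only, not part of the patch):

	#include <bus_fslmc_driver.h>

	static int
	dpaa2_create_dpfoo_device(int vdev_fd __rte_unused,
				  struct vfio_device_info *obj_info __rte_unused,
				  int object_id __rte_unused)
	{
		/* probe-time setup for the hypothetical object */
		return 0;
	}

	static void
	dpaa2_close_dpfoo_device(int object_id __rte_unused)
	{
		/* look up the device by object_id, close its MC object,
		 * unlink it from the driver's device list and free it
		 */
	}

	static struct rte_dpaa2_object rte_dpaa2_dpfoo_obj = {
		.dev_type = DPAA2_BPOOL,	/* any enum rte_dpaa2_dev_type */
		.create = dpaa2_create_dpfoo_device,
		.close = dpaa2_close_dpfoo_device,
	};

	RTE_PMD_REGISTER_DPAA2_OBJECT(dpfoo, rte_dpaa2_dpfoo_obj);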


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 07/42] net/dpaa2: dpdmux: add support for CVLAN
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (5 preceding siblings ...)
  2024-10-23 11:59           ` [v5 06/42] bus/fslmc: add close API to close DPAA2 device vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 08/42] bus/fslmc: upgrade with MC version 10.37 vanshika.shukla
                             ` (35 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for the DPDMUX_METHOD_C_VLAN_MAC method,
which demultiplexes traffic based on C-VLAN and MAC address.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     | 59 +++++++++++++++++++++++++------
 drivers/net/dpaa2/mc/fsl_dpdmux.h | 18 +++++++++-
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  3 ++
 3 files changed, 68 insertions(+), 12 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 489beb6f27..3693f4b62e 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -233,6 +233,35 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	return NULL;
 }
 
+int
+rte_pmd_dpaa2_mux_flow_l2(uint32_t dpdmux_id,
+	uint8_t mac_addr[6], uint16_t vlan_id, int dest_if)
+{
+	struct dpaa2_dpdmux_dev *dpdmux_dev;
+	struct dpdmux_l2_rule rule;
+	int ret, i;
+
+	/* Find the DPDMUX from dpdmux_id in our list */
+	dpdmux_dev = get_dpdmux_from_id(dpdmux_id);
+	if (!dpdmux_dev) {
+		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
+		return -ENODEV;
+	}
+
+	for (i = 0; i < 6; i++)
+		rule.mac_addr[i] = mac_addr[i];
+	rule.vlan_id = vlan_id;
+
+	ret = dpdmux_if_add_l2_rule(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+			dpdmux_dev->token, dest_if, &rule);
+	if (ret) {
+		DPAA2_PMD_ERR("dpdmux_if_add_l2_rule failed:err(%d)", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
 int
 rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 {
@@ -353,6 +382,7 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	int ret;
 	uint16_t maj_ver;
 	uint16_t min_ver;
+	uint8_t skip_reset_flags;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -379,12 +409,18 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 		goto init_err;
 	}
 
-	ret = dpdmux_if_set_default(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-				    dpdmux_dev->token, attr.default_if);
-	if (ret) {
-		DPAA2_PMD_ERR("setting default interface failed in %s",
-			      __func__);
-		goto init_err;
+	if (attr.method != DPDMUX_METHOD_C_VLAN_MAC) {
+		ret = dpdmux_if_set_default(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+				dpdmux_dev->token, attr.default_if);
+		if (ret) {
+			DPAA2_PMD_ERR("setting default interface failed in %s",
+				      __func__);
+			goto init_err;
+		}
+		skip_reset_flags = DPDMUX_SKIP_DEFAULT_INTERFACE
+			| DPDMUX_SKIP_UNICAST_RULES | DPDMUX_SKIP_MULTICAST_RULES;
+	} else {
+		skip_reset_flags = DPDMUX_SKIP_DEFAULT_INTERFACE;
 	}
 
 	ret = dpdmux_get_api_version(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
@@ -400,10 +436,7 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	 */
 	if (maj_ver >= 6 && min_ver >= 6) {
 		ret = dpdmux_set_resetable(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-				dpdmux_dev->token,
-				DPDMUX_SKIP_DEFAULT_INTERFACE |
-				DPDMUX_SKIP_UNICAST_RULES |
-				DPDMUX_SKIP_MULTICAST_RULES);
+				dpdmux_dev->token, skip_reset_flags);
 		if (ret) {
 			DPAA2_PMD_ERR("setting default interface failed in %s",
 				      __func__);
@@ -416,7 +449,11 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 
 		memset(&mux_err_cfg, 0, sizeof(mux_err_cfg));
 		mux_err_cfg.error_action = DPDMUX_ERROR_ACTION_CONTINUE;
-		mux_err_cfg.errors = DPDMUX_ERROR_DISC;
+
+		if (attr.method != DPDMUX_METHOD_C_VLAN_MAC)
+			mux_err_cfg.errors = DPDMUX_ERROR_DISC;
+		else
+			mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
 
 		ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
 				CMD_PRI_LOW,
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 4600ea94d4..9bbac44219 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2022 NXP
  *
  */
 #ifndef __FSL_DPDMUX_H
@@ -549,6 +549,22 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
  */
 #define DPDMUX__ERROR_L4CE			0x00000001
 
+#define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
+				 DPDMUX__ERROR_L4CV | \
+				 DPDMUX__ERROR_L3CE | \
+				 DPDMUX__ERROR_L3CV | \
+				 DPDMUX_ERROR_BLE | \
+				 DPDMUX_ERROR_PHE | \
+				 DPDMUX_ERROR_ISP | \
+				 DPDMUX_ERROR_PTE | \
+				 DPDMUX_ERROR_FPE | \
+				 DPDMUX_ERROR_FLE | \
+				 DPDMUX_ERROR_PIEE | \
+				 DPDMUX_ERROR_TIDE | \
+				 DPDMUX_ERROR_MNLE | \
+				 DPDMUX_ERROR_EOFHE | \
+				 DPDMUX_ERROR_KSE)
+
 enum dpdmux_error_action {
 	DPDMUX_ERROR_ACTION_DISCARD = 0,
 	DPDMUX_ERROR_ACTION_CONTINUE = 1
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index 80e5e3298b..bebebcacdc 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -35,6 +35,9 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 int
 rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
 	uint16_t entry_index);
+int
+rte_pmd_dpaa2_mux_flow_l2(uint32_t dpdmux_id,
+	uint8_t mac_addr[6], uint16_t vlan_id, int dest_if);
 
 /**
  * @warning
-- 
2.25.1
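
A minimal usage sketch of the new L2 steering API (not from the patch;
the dpdmux id, MAC address, VLAN id and destination interface are
placeholder values):

	#include <stdint.h>
	#include <rte_pmd_dpaa2.h>

	static int
	mux_l2_rule_demo(void)
	{
		/* placeholder MAC address */
		uint8_t mac[6] = { 0x00, 0x04, 0x9f, 0x01, 0x02, 0x03 };

		/* steer frames with C-VLAN 100 and this MAC on dpdmux 0
		 * to interface 1
		 */
		return rte_pmd_dpaa2_mux_flow_l2(0, mac, 100, 1);
	}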


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 08/42] bus/fslmc: upgrade with MC version 10.37
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (6 preceding siblings ...)
  2024-10-23 11:59           ` [v5 07/42] net/dpaa2: dpdmux: add support for CVLAN vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 09/42] net/dpaa2: support link state for eth interfaces vanshika.shukla
                             ` (34 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh; +Cc: Apeksha Gupta

From: Gagandeep Singh <g.singh@nxp.com>

This patch upgrades the MC version compatibility to 10.37.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 doc/guides/platform/dpaa2.rst                 |   4 +-
 drivers/bus/fslmc/mc/dpio.c                   |  94 ++++-
 drivers/bus/fslmc/mc/fsl_dpcon.h              |   5 +-
 drivers/bus/fslmc/mc/fsl_dpio.h               |  21 +-
 drivers/bus/fslmc/mc/fsl_dpio_cmd.h           |  13 +-
 drivers/bus/fslmc/mc/fsl_dpmng.h              |   4 +-
 drivers/bus/fslmc/mc/fsl_dprc_cmd.h           |   8 +-
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |  12 +-
 drivers/bus/fslmc/version.map                 |   7 +
 drivers/crypto/dpaa2_sec/mc/dpseci.c          |  91 ++++-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h      |  47 ++-
 drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h  |  19 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  36 +-
 drivers/net/dpaa2/mc/dpdmux.c                 | 205 +++++++++-
 drivers/net/dpaa2/mc/dpkg.c                   |  12 +-
 drivers/net/dpaa2/mc/dpni.c                   | 383 +++++++++++++++++-
 drivers/net/dpaa2/mc/fsl_dpdmux.h             |  67 ++-
 drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h         |  83 +++-
 drivers/net/dpaa2/mc/fsl_dpkg.h               |   7 +-
 drivers/net/dpaa2/mc/fsl_dpni.h               | 176 +++++---
 drivers/net/dpaa2/mc/fsl_dpni_cmd.h           | 125 ++++--
 21 files changed, 1267 insertions(+), 152 deletions(-)

diff --git a/doc/guides/platform/dpaa2.rst b/doc/guides/platform/dpaa2.rst
index 2b0d93a976..c9ec21334f 100644
--- a/doc/guides/platform/dpaa2.rst
+++ b/doc/guides/platform/dpaa2.rst
@@ -105,8 +105,8 @@ separately:
 
 Currently supported by DPDK:
 
-- NXP SDK **LSDK 19.09++**.
-- MC Firmware version **10.18.0** and higher.
+- NXP SDK **LSDK 21.08++**.
+- MC Firmware version **10.37.0** and higher.
 - Supported architectures:  **arm64 LE**.
 
 - Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
diff --git a/drivers/bus/fslmc/mc/dpio.c b/drivers/bus/fslmc/mc/dpio.c
index a3382ed142..97c08fa713 100644
--- a/drivers/bus/fslmc/mc/dpio.c
+++ b/drivers/bus/fslmc/mc/dpio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -376,6 +376,98 @@ int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpio_set_stashing_destination_by_core_id() - Set the stashing destination source
+ * using the core id.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @core_id:	Core id stashing destination
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
+					uint32_t cmd_flags,
+					uint16_t token,
+					uint8_t core_id)
+{
+	struct dpio_stashing_dest_by_core_id *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_STASHING_DEST_BY_CORE_ID,
+										cmd_flags,
+										token);
+	cmd_params = (struct dpio_stashing_dest_by_core_id  *)cmd.params;
+	cmd_params->core_id = core_id;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpio_set_stashing_destination_source() - Set the stashing destination source.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @ss:		Stashing destination source (0 manual/1 automatic)
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ss)
+{
+	struct dpio_stashing_dest_source *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_STASHING_DEST_SOURCE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpio_stashing_dest_source *)cmd.params;
+	cmd_params->ss = ss;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpio_get_stashing_destination_source() - Get the stashing destination source.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPIO object
+ * @ss:		Returns the stashing destination source (0 manual/1 automatic)
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t *ss)
+{
+	struct dpio_stashing_dest_source *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_STASHING_DEST_SOURCE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpio_stashing_dest_source *)cmd.params;
+	*ss = rsp_params->ss;
+
+	return 0;
+}
+
 /**
  * dpio_add_static_dequeue_channel() - Add a static dequeue channel.
  * @mc_io:		Pointer to MC portal's I/O object
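
A minimal usage sketch of the new stashing-source commands (not part of
the patch; assumes an already-opened DPIO object, i.e. a valid mc_io
pointer and token, and a core id of 0 as a placeholder):

	#include <fsl_dpio.h>

	static int
	dpio_stashing_demo(struct fsl_mc_io *mc_io, uint16_t token)
	{
		uint8_t ss;
		int err;

		/* 1 selects automatic source selection, 0 manual */
		err = dpio_set_stashing_destination_source(mc_io, CMD_PRI_LOW,
							   token, 1);
		if (err)
			return err;

		err = dpio_get_stashing_destination_source(mc_io, CMD_PRI_LOW,
							   token, &ss);
		if (err)
			return err;

		/* in manual mode, stashing can instead be pinned to a core */
		return dpio_set_stashing_destination_by_core_id(mc_io,
							CMD_PRI_LOW, token, 0);
	}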
diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index 34b30d15c2..e3a626077e 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2021, 2024 NXP
  *
  */
 #ifndef __FSL_DPCON_H
@@ -52,10 +52,12 @@ int dpcon_destroy(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint32_t obj_id);
 
+__rte_internal
 int dpcon_enable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
 
+__rte_internal
 int dpcon_disable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token);
@@ -65,6 +67,7 @@ int dpcon_is_enabled(struct fsl_mc_io *mc_io,
 		     uint16_t token,
 		     int *en);
 
+__rte_internal
 int dpcon_reset(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
diff --git a/drivers/bus/fslmc/mc/fsl_dpio.h b/drivers/bus/fslmc/mc/fsl_dpio.h
index c2db76bdf8..eddce58a5f 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPIO_H
@@ -87,11 +87,30 @@ int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint16_t token,
 				  uint8_t sdest);
 
+__rte_internal
 int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
 				  uint8_t *sdest);
 
+__rte_internal
+int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t core_id);
+
+__rte_internal
+int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ss);
+
+__rte_internal
+int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t *ss);
+
 __rte_internal
 int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				    uint32_t cmd_flags,
diff --git a/drivers/bus/fslmc/mc/fsl_dpio_cmd.h b/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
index 45ed01f809..360c68eaa5 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2019 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPIO_CMD_H
@@ -40,6 +40,9 @@
 #define DPIO_CMDID_GET_STASHING_DEST			DPIO_CMD(0x121)
 #define DPIO_CMDID_ADD_STATIC_DEQUEUE_CHANNEL		DPIO_CMD(0x122)
 #define DPIO_CMDID_REMOVE_STATIC_DEQUEUE_CHANNEL	DPIO_CMD(0x123)
+#define DPIO_CMDID_SET_STASHING_DEST_SOURCE		DPIO_CMD(0x124)
+#define DPIO_CMDID_GET_STASHING_DEST_SOURCE		DPIO_CMD(0x125)
+#define DPIO_CMDID_SET_STASHING_DEST_BY_CORE_ID		DPIO_CMD(0x126)
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPIO_MASK(field)        \
@@ -98,6 +101,14 @@ struct dpio_stashing_dest {
 	uint8_t sdest;
 };
 
+struct dpio_stashing_dest_source {
+	uint8_t ss;
+};
+
+struct dpio_stashing_dest_by_core_id {
+	uint8_t core_id;
+};
+
 struct dpio_cmd_static_dequeue_channel {
 	uint32_t dpcon_id;
 };
diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
index c6ea220df7..dfa51b3a86 100644
--- a/drivers/bus/fslmc/mc/fsl_dpmng.h
+++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2017-2022 NXP
+ * Copyright 2017-2023 NXP
  *
  */
 #ifndef __FSL_DPMNG_H
@@ -20,7 +20,7 @@ struct fsl_mc_io;
  * Management Complex firmware version information
  */
 #define MC_VER_MAJOR 10
-#define MC_VER_MINOR 32
+#define MC_VER_MINOR 37
 
 /**
  * struct mc_version
diff --git a/drivers/bus/fslmc/mc/fsl_dprc_cmd.h b/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
index 6efa5634d2..d5ba35b5f0 100644
--- a/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dprc_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 
@@ -10,13 +10,17 @@
 
 /* Minimal supported DPRC Version */
 #define DPRC_VER_MAJOR			6
-#define DPRC_VER_MINOR			6
+#define DPRC_VER_MINOR			7
 
 /* Command versioning */
 #define DPRC_CMD_BASE_VERSION			1
+#define DPRC_CMD_VERSION_2			2
+#define DPRC_CMD_VERSION_3			3
 #define DPRC_CMD_ID_OFFSET			4
 
 #define DPRC_CMD(id)	((id << DPRC_CMD_ID_OFFSET) | DPRC_CMD_BASE_VERSION)
+#define DPRC_CMD_V2(id)	(((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_VERSION_2)
+#define DPRC_CMD_V3(id)	(((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_VERSION_3)
 
 /* Command IDs */
 #define DPRC_CMDID_CLOSE                        DPRC_CMD(0x800)
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 18b6a3c2e4..297d4ed4fc 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2023 NXP
  */
 #ifndef _FSL_QBMAN_DEBUG_H
 #define _FSL_QBMAN_DEBUG_H
@@ -105,16 +105,6 @@ uint32_t qbman_fq_attr_get_vfqid(struct qbman_fq_query_rslt *r);
 uint32_t qbman_fq_attr_get_erfqid(struct qbman_fq_query_rslt *r);
 uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r);
 
-/* FQ query command for non-programmable fields*/
-enum qbman_fq_schedstate_e {
-	qbman_fq_schedstate_oos = 0,
-	qbman_fq_schedstate_retired,
-	qbman_fq_schedstate_tentatively_scheduled,
-	qbman_fq_schedstate_truly_scheduled,
-	qbman_fq_schedstate_parked,
-	qbman_fq_schedstate_held_active,
-};
-
 struct qbman_fq_query_np_rslt {
 uint8_t verb;
 	uint8_t rslt;
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index 01e28c6625..df1143733d 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -37,6 +37,9 @@ INTERNAL {
 	dpcon_get_attributes;
 	dpcon_open;
 	dpcon_close;
+	dpcon_reset;
+	dpcon_enable;
+	dpcon_disable;
 	dpdmai_close;
 	dpdmai_disable;
 	dpdmai_enable;
@@ -53,7 +56,11 @@ INTERNAL {
 	dpio_open;
 	dpio_remove_static_dequeue_channel;
 	dpio_reset;
+	dpio_get_stashing_destination;
+	dpio_get_stashing_destination_source;
 	dpio_set_stashing_destination;
+	dpio_set_stashing_destination_by_core_id;
+	dpio_set_stashing_destination_source;
 	mc_get_soc_version;
 	mc_get_version;
 	mc_send_command;
diff --git a/drivers/crypto/dpaa2_sec/mc/dpseci.c b/drivers/crypto/dpaa2_sec/mc/dpseci.c
index 87e0defdc6..773b4648e0 100644
--- a/drivers/crypto/dpaa2_sec/mc/dpseci.c
+++ b/drivers/crypto/dpaa2_sec/mc/dpseci.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -763,3 +763,92 @@ int dpseci_get_congestion_notification(
 
 	return 0;
 }
+
+
+/**
+ * dpseci_get_rx_queue_status() - Get queue status attributes
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue_index:	Select the queue_index
+ * @attr:	Returned queue status attributes
+ *
+ * Return:	'0' on success, error code otherwise
+ */
+int dpseci_get_rx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr)
+{
+	struct dpseci_rsp_get_queue_status *rsp_params;
+	struct dpseci_cmd_get_queue_status *cmd_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_RX_QUEUE_STATUS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpseci_cmd_get_queue_status *)cmd.params;
+	cmd_params->queue_index = cpu_to_le32(queue_index);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpseci_rsp_get_queue_status *)cmd.params;
+	attr->fqid = le32_to_cpu(rsp_params->fqid);
+	attr->schedstate = (enum qbman_fq_schedstate_e)(le16_to_cpu(rsp_params->schedstate));
+	attr->state_flags = le16_to_cpu(rsp_params->state_flags);
+	attr->frame_count = le32_to_cpu(rsp_params->frame_count);
+	attr->byte_count = le32_to_cpu(rsp_params->byte_count);
+
+	return 0;
+}
+
+/**
+ * dpseci_get_tx_queue_status() - Get queue status attributes
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPSECI object
+ * @queue_index:	Select the queue_index
+ * @attr:	Returned queue status attributes
+ *
+ * Return:	'0' on success, error code otherwise
+ */
+int dpseci_get_tx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr)
+{
+	struct dpseci_rsp_get_queue_status *rsp_params;
+	struct dpseci_cmd_get_queue_status *cmd_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPSECI_CMDID_GET_TX_QUEUE_STATUS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpseci_cmd_get_queue_status *)cmd.params;
+	cmd_params->queue_index = cpu_to_le32(queue_index);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpseci_rsp_get_queue_status *)cmd.params;
+	attr->fqid = le32_to_cpu(rsp_params->fqid);
+	attr->schedstate = (enum qbman_fq_schedstate_e)(le16_to_cpu(rsp_params->schedstate));
+	attr->state_flags = le16_to_cpu(rsp_params->state_flags);
+	attr->frame_count = le32_to_cpu(rsp_params->frame_count);
+	attr->byte_count = le32_to_cpu(rsp_params->byte_count);
+
+	return 0;
+}
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
index c295c04f24..e371abdd64 100644
--- a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPSECI_H
@@ -429,4 +429,49 @@ int dpseci_get_congestion_notification(
 			uint16_t token,
 			struct dpseci_congestion_notification_cfg *cfg);
 
+/* Available FQ's scheduling states */
+enum qbman_fq_schedstate_e {
+	qbman_fq_schedstate_oos = 0,
+	qbman_fq_schedstate_retired,
+	qbman_fq_schedstate_tentatively_scheduled,
+	qbman_fq_schedstate_truly_scheduled,
+	qbman_fq_schedstate_parked,
+	qbman_fq_schedstate_held_active,
+};
+
+/* FQ's force eligible pending bit */
+#define DPSECI_FQ_STATE_FORCE_ELIGIBLE			0x00000001
+/* FQ's XON/XOFF state, 0: XON, 1: XOFF */
+#define DPSECI_FQ_STATE_XOFF					0x00000002
+/* FQ's retirement pending bit */
+#define DPSECI_FQ_STATE_RETIREMENT_PENDING		0x00000004
+/* FQ's overflow error bit */
+#define DPSECI_FQ_STATE_OVERFLOW_ERROR			0x00000008
+
+struct dpseci_queue_status {
+	uint32_t fqid;
+	/* FQ's scheduling states
+	 * (available scheduling states are defined in qbman_fq_schedstate_e)
+	 */
+	enum qbman_fq_schedstate_e schedstate;
+	/* FQ's state flags (available flags are defined above) */
+	uint16_t state_flags;
+	/* FQ's frame count */
+	uint32_t frame_count;
+	/* FQ's byte count */
+	uint32_t byte_count;
+};
+
+int dpseci_get_rx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr);
+
+int dpseci_get_tx_queue_status(struct fsl_mc_io *mc_io,
+				uint32_t cmd_flags,
+				uint16_t token,
+				uint32_t queue_index,
+				struct dpseci_queue_status *attr);
+
 #endif /* __FSL_DPSECI_H */
diff --git a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
index af3518a0f3..065464b701 100644
--- a/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
+++ b/drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPSECI_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPSECI Version */
 #define DPSECI_VER_MAJOR		5
-#define DPSECI_VER_MINOR		3
+#define DPSECI_VER_MINOR		4
 
 /* Command versioning */
 #define DPSECI_CMD_BASE_VERSION		1
@@ -46,6 +46,9 @@
 #define DPSECI_CMDID_GET_OPR		DPSECI_CMD_V1(0x19B)
 #define DPSECI_CMDID_SET_CONGESTION_NOTIFICATION	DPSECI_CMD_V1(0x170)
 #define DPSECI_CMDID_GET_CONGESTION_NOTIFICATION	DPSECI_CMD_V1(0x171)
+#define DPSECI_CMDID_GET_RX_QUEUE_STATUS	DPSECI_CMD_V1(0x172)
+#define DPSECI_CMDID_GET_TX_QUEUE_STATUS	DPSECI_CMD_V1(0x173)
+
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPSECI_MASK(field)        \
@@ -251,5 +254,17 @@ struct dpseci_cmd_set_congestion_notification {
 	uint32_t threshold_exit;
 };
 
+struct dpseci_cmd_get_queue_status {
+	uint32_t queue_index;
+};
+
+struct dpseci_rsp_get_queue_status {
+	uint32_t fqid;
+	uint16_t schedstate;
+	uint16_t state_flags;
+	uint32_t frame_count;
+	uint32_t byte_count;
+};
+
 #pragma pack(pop)
 #endif /* _FSL_DPSECI_CMD_H */
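
A minimal usage sketch of the new queue-status query (not part of the
patch; assumes an opened DPSECI object with a valid mc_io and token,
and polls Rx queue index 0):

	#include <inttypes.h>
	#include <stdio.h>
	#include <fsl_dpseci.h>

	static int
	dump_rx_queue_depth(struct fsl_mc_io *mc_io, uint16_t token)
	{
		struct dpseci_queue_status st = { 0 };
		int err;

		err = dpseci_get_rx_queue_status(mc_io, CMD_PRI_LOW, token,
						 0, &st);
		if (err)
			return err;

		printf("fqid=0x%" PRIx32 " frames=%" PRIu32 " bytes=%" PRIu32 "\n",
		       st.fqid, st.frame_count, st.byte_count);
		return 0;
	}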
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index ab64df6a59..439b8f97a4 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -899,6 +899,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	struct dpni_queue tx_conf_cfg;
 	struct dpni_queue tx_flow_cfg;
 	uint8_t options = 0, flow_id;
+	uint8_t ceetm_ch_idx;
 	uint16_t channel_id;
 	struct dpni_queue_id qid;
 	uint32_t tc_id;
@@ -925,20 +926,27 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	memset(&tx_conf_cfg, 0, sizeof(struct dpni_queue));
 	memset(&tx_flow_cfg, 0, sizeof(struct dpni_queue));
 
-	if (tx_queue_id == 0) {
-		/*Set tx-conf and error configuration*/
-		if (priv->flags & DPAA2_TX_CONF_ENABLE)
-			ret = dpni_set_tx_confirmation_mode(dpni, CMD_PRI_LOW,
-							    priv->token,
-							    DPNI_CONF_AFFINE);
-		else
-			ret = dpni_set_tx_confirmation_mode(dpni, CMD_PRI_LOW,
-							    priv->token,
-							    DPNI_CONF_DISABLE);
-		if (ret) {
-			DPAA2_PMD_ERR("Error in set tx conf mode settings: "
-				      "err=%d", ret);
-			return -1;
+	if (!tx_queue_id) {
+		for (ceetm_ch_idx = 0;
+			ceetm_ch_idx <= (priv->num_channels - 1);
+			ceetm_ch_idx++) {
+			/*Set tx-conf and error configuration*/
+			if (priv->flags & DPAA2_TX_CONF_ENABLE) {
+				ret = dpni_set_tx_confirmation_mode(dpni,
+						CMD_PRI_LOW, priv->token,
+						ceetm_ch_idx,
+						DPNI_CONF_AFFINE);
+			} else {
+				ret = dpni_set_tx_confirmation_mode(dpni,
+						CMD_PRI_LOW, priv->token,
+						ceetm_ch_idx,
+						DPNI_CONF_DISABLE);
+			}
+			if (ret) {
+				DPAA2_PMD_ERR("Error(%d) in tx conf setting",
+					ret);
+				return ret;
+			}
 		}
 	}
 
diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
index 1bb153cad7..f4feef3840 100644
--- a/drivers/net/dpaa2/mc/dpdmux.c
+++ b/drivers/net/dpaa2/mc/dpdmux.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -287,15 +287,19 @@ int dpdmux_reset(struct fsl_mc_io *mc_io,
  * @token:	Token of DPDMUX object
  * @skip_reset_flags:	By default all are 0.
  *			By setting 1 will deactivate the reset.
- *	The flags are:
- *			DPDMUX_SKIP_DEFAULT_INTERFACE  0x01
- *			DPDMUX_SKIP_UNICAST_RULES      0x02
- *			DPDMUX_SKIP_MULTICAST_RULES    0x04
+ * The flags are:
+ *			DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE  0x01
+ *			DPDMUX_SKIP_UNICAST_RULES             0x02
+ *			DPDMUX_SKIP_MULTICAST_RULES           0x04
+ *			DPDMUX_SKIP_RESET_DEFAULT_INTERFACE   0x08
  *
  * For example, by default, through DPDMUX_RESET the default
  * interface will be restored with the one from create.
- * By setting DPDMUX_SKIP_DEFAULT_INTERFACE flag,
- * through DPDMUX_RESET the default interface will not be modified.
+ * By setting DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE flag,
+ * through DPDMUX_RESET the default interface will not be modified after reset.
+ * By setting DPDMUX_SKIP_RESET_DEFAULT_INTERFACE flag,
+ * through DPDMUX_RESET the default interface will not be reset
+ * and will continue to be functional during reset procedure.
  *
  * Return:	'0' on Success; Error code otherwise.
  */
@@ -327,10 +331,11 @@ int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
  * @token:	Token of DPDMUX object
  * @skip_reset_flags:	Get the reset flags.
  *
- *	The flags are:
- *			DPDMUX_SKIP_DEFAULT_INTERFACE  0x01
- *			DPDMUX_SKIP_UNICAST_RULES      0x02
- *			DPDMUX_SKIP_MULTICAST_RULES    0x04
+ * The flags are:
+ *			DPDMUX_SKIP_MODIFY_DEFAULT_INTERFACE  0x01
+ *			DPDMUX_SKIP_UNICAST_RULES             0x02
+ *			DPDMUX_SKIP_MULTICAST_RULES           0x04
+ *			DPDMUX_SKIP_RESET_DEFAULT_INTERFACE   0x08
  *
  * Return:	'0' on Success; Error code otherwise.
  */
@@ -1064,6 +1069,127 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpdmux_if_set_taildrop() - enable taildrop for egress interface queues.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id:	Interface Identifier
+ * @cfg: Taildrop configuration
+ */
+int dpdmux_if_set_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg)
+{
+	struct mc_command cmd = { 0 };
+	struct dpdmux_cmd_set_taildrop *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_SET_TAILDROP,
+			cmd_flags,
+			token);
+	cmd_params = (struct dpdmux_cmd_set_taildrop *)cmd.params;
+	cmd_params->if_id		= cpu_to_le16(if_id);
+	cmd_params->units		= cfg->units;
+	cmd_params->threshold	= cpu_to_le32(cfg->threshold);
+	dpdmux_set_field(cmd_params->oal_en, ENABLE, (!!cfg->enable));
+
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpdmux_if_get_taildrop() - get current taildrop configuration.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id:	Interface Identifier
+ * @cfg: Taildrop configuration
+ */
+int dpdmux_if_get_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg)
+{
+	struct mc_command cmd = {0};
+	struct dpdmux_cmd_get_taildrop *cmd_params;
+	struct dpdmux_rsp_get_taildrop *rsp_params;
+	int err = 0;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_IF_GET_TAILDROP,
+			cmd_flags,
+			token);
+	cmd_params = (struct dpdmux_cmd_get_taildrop *)cmd.params;
+	cmd_params->if_id	= cpu_to_le16(if_id);
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpdmux_rsp_get_taildrop *)cmd.params;
+	cfg->threshold = le32_to_cpu(rsp_params->threshold);
+	cfg->units = rsp_params->units;
+	cfg->enable = dpdmux_get_field(rsp_params->oal_en, ENABLE);
+
+	return err;
+}
+
+/**
+ * dpdmux_dump_table() - Dump the content of table_type table into memory.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPSW object
+ * @table_type: The type of the table to dump
+ *	- DPDMUX_DMAT_TABLE
+ *	- DPDMUX_MISS_TABLE
+ *	- DPDMUX_PRUNE_TABLE
+ * @table_index: The index of the table to dump in case of more than one table
+ *	if table_type == DPDMUX_DMAT_TABLE
+ *		- DPDMUX_HMAP_UNICAST
+ *		- DPDMUX_HMAP_MULTICAST
+ *	else 0
+ * @iova_addr: The snapshot will be stored in this variable as a header of struct dump_table_header
+ *             followed by an array of struct dump_table_entry
+ * @iova_size: Memory size allocated for iova_addr
+ * @num_entries: Number of entries written in iova_addr
+ *
+ * Return: Completion status. '0' on Success; Error code otherwise.
+ *
+ * The memory allocated at iova_addr must be zeroed before command execution.
+ * If the table content exceeds the memory size provided, the dump will be truncated.
+ */
+int dpdmux_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	struct dpdmux_cmd_dump_table *cmd_params;
+	struct dpdmux_rsp_dump_table *rsp_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_DUMP_TABLE, cmd_flags, token);
+	cmd_params = (struct dpdmux_cmd_dump_table *)cmd.params;
+	cmd_params->table_type = cpu_to_le16(table_type);
+	cmd_params->table_index = cpu_to_le16(table_index);
+	cmd_params->iova_addr = cpu_to_le64(iova_addr);
+	cmd_params->iova_size = cpu_to_le32(iova_size);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpdmux_rsp_dump_table *)cmd.params;
+	*num_entries = le16_to_cpu(rsp_params->num_entries);
+
+	return 0;
+}
+
+
 /**
  * dpdmux_if_set_errors_behavior() - Set errors behavior
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
@@ -1100,3 +1226,60 @@ int dpdmux_if_set_errors_behavior(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 	/* send command to mc*/
 	return mc_send_command(mc_io, &cmd);
 }
+
+/* Sets up a Soft Parser Profile on this DPDMUX
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @sp_profile: Soft Parser Profile name (must be a valid name for a defined profile)
+ *			Maximum allowed length for this string is 8 characters
+ *			If this parameter is an empty string (all zeros)
+ *			then the Default SP Profile is set on this dpdmux
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ */
+int dpdmux_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type)
+{
+	struct dpdmux_cmd_set_sp_profile *cmd_params;
+	struct mc_command cmd = { 0 };
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SET_SP_PROFILE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpdmux_cmd_set_sp_profile *)cmd.params;
+	for (i = 0; i < MAX_SP_PROFILE_ID_SIZE && sp_profile[i]; i++)
+		cmd_params->sp_profile[i] = sp_profile[i];
+	cmd_params->type = type;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/* Enable/Disable Soft Parser on this DPDMUX interface
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id: interface id
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ * @en: 1 to enable or 0 to disable
+ */
+int dpdmux_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t if_id, uint8_t type, uint8_t en)
+{
+	struct dpdmux_cmd_sp_enable *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SP_ENABLE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpdmux_cmd_sp_enable *)cmd.params;
+	cmd_params->if_id = if_id;
+	cmd_params->type = type;
+	cmd_params->en = en;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
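
A minimal usage sketch of the new taildrop command (not part of the
patch; the interface id and threshold are placeholders, and the unit
encoding for the units field is defined in fsl_dpdmux.h):

	#include <fsl_dpdmux.h>

	static int
	mux_taildrop_demo(struct fsl_mc_io *mc_io, uint16_t token)
	{
		struct dpdmux_taildrop_cfg cfg = {
			.enable = 1,
			.units = 0,		/* unit encoding per fsl_dpdmux.h */
			.threshold = 0x4000,	/* threshold in the chosen units */
		};

		/* interface id 1 is a placeholder */
		return dpdmux_if_set_taildrop(mc_io, CMD_PRI_LOW, token, 1,
					      &cfg);
	}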
diff --git a/drivers/net/dpaa2/mc/dpkg.c b/drivers/net/dpaa2/mc/dpkg.c
index 4789976b7d..5db3d092c1 100644
--- a/drivers/net/dpaa2/mc/dpkg.c
+++ b/drivers/net/dpaa2/mc/dpkg.c
@@ -1,16 +1,18 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2021, 2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
 #include <fsl_mc_cmd.h>
 #include <fsl_dpkg.h>
+#include <string.h>
 
 /**
  * dpkg_prepare_key_cfg() - function prepare extract parameters
  * @cfg: defining a full Key Generation profile (rule)
- * @key_cfg_buf: Zeroed 256 bytes of memory before mapping it to DMA
+ * @key_cfg_buf: Zeroed memory whose size is the size of
+ *		"struct dpni_ext_set_rx_tc_dist" before mapping it to DMA
  *
  * This function has to be called before the following functions:
  *	- dpni_set_rx_tc_dist()
@@ -18,7 +20,8 @@
  *	- dpkg_prepare_key_cfg()
  */
 int
-dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg, uint8_t *key_cfg_buf)
+dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
+	void *key_cfg_buf)
 {
 	int i, j;
 	struct dpni_ext_set_rx_tc_dist *dpni_ext;
@@ -27,11 +30,12 @@ dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg, uint8_t *key_cfg_buf)
 	if (cfg->num_extracts > DPKG_MAX_NUM_OF_EXTRACTS)
 		return -EINVAL;
 
-	dpni_ext = (struct dpni_ext_set_rx_tc_dist *)key_cfg_buf;
+	dpni_ext = key_cfg_buf;
 	dpni_ext->num_extracts = cfg->num_extracts;
 
 	for (i = 0; i < cfg->num_extracts; i++) {
 		extr = &dpni_ext->extracts[i];
+		memset(extr, 0, sizeof(struct dpni_dist_extract));
 
 		switch (cfg->extracts[i].type) {
 		case DPKG_EXTRACT_FROM_HDR:
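
A sketch reflecting the updated contract of dpkg_prepare_key_cfg(): the
key-config buffer must be zeroed and sized to struct
dpni_ext_set_rx_tc_dist rather than a fixed 256 bytes. Allocation
details below are illustrative, and the struct is assumed to come from
the dpni command header:

	#include <errno.h>
	#include <rte_malloc.h>
	#include <fsl_dpkg.h>
	#include <fsl_dpni_cmd.h>

	static int
	prepare_key_cfg_demo(const struct dpkg_profile_cfg *cfg,
			     void **buf_out)
	{
		void *key_cfg_buf;
		int err;

		/* rte_zmalloc() returns zeroed memory, as required */
		key_cfg_buf = rte_zmalloc(NULL,
			sizeof(struct dpni_ext_set_rx_tc_dist),
			RTE_CACHE_LINE_SIZE);
		if (!key_cfg_buf)
			return -ENOMEM;

		err = dpkg_prepare_key_cfg(cfg, key_cfg_buf);
		if (err) {
			rte_free(key_cfg_buf);
			return err;
		}

		*buf_out = key_cfg_buf;
		return 0;
	}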
diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c
index 4d97b98939..558f08dc69 100644
--- a/drivers/net/dpaa2/mc/dpni.c
+++ b/drivers/net/dpaa2/mc/dpni.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2022 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #include <fsl_mc_sys.h>
@@ -852,6 +852,92 @@ int dpni_get_qdid(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpni_get_qdid_ex() - Extension for the function to get the Queuing Destination ID (QDID)
+ *			that should be used for enqueue operations.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @qtype:	Type of queue to receive QDID for
+ * @qdid:	Array of virtual QDID value that should be used as an argument
+ *			in all enqueue operations.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * This function must be used when dpni is created using multiple Tx channels to return one
+ * qdid for each channel.
+ */
+int dpni_get_qdid_ex(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  enum dpni_queue_type qtype,
+		  uint16_t *qdid)
+{
+	struct mc_command cmd = { 0 };
+	struct dpni_cmd_get_qdid *cmd_params;
+	struct dpni_rsp_get_qdid_ex *rsp_params;
+	int i;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QDID_EX,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_qdid *)cmd.params;
+	cmd_params->qtype = qtype;
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_qdid_ex *)cmd.params;
+	for (i = 0; i < DPNI_MAX_CHANNELS; i++)
+		qdid[i] = le16_to_cpu(rsp_params->qdid[i]);
+
+	return 0;
+}
+
+/**
+ * dpni_get_sp_info() - Get the AIOP storage profile IDs associated
+ *			with the DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @sp_info:	Returned AIOP storage-profile information
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * @warning	Only relevant for DPNI that belongs to AIOP container.
+ */
+int dpni_get_sp_info(struct fsl_mc_io *mc_io,
+		     uint32_t cmd_flags,
+		     uint16_t token,
+		     struct dpni_sp_info *sp_info)
+{
+	struct dpni_rsp_get_sp_info *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err, i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SP_INFO,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_sp_info *)cmd.params;
+	for (i = 0; i < DPNI_MAX_SP; i++)
+		sp_info->spids[i] = le16_to_cpu(rsp_params->spids[i]);
+
+	return 0;
+}
+
 /**
  * dpni_get_tx_data_offset() - Get the Tx data offset (from start of buffer)
  * @mc_io:	Pointer to MC portal's I/O object
@@ -1684,6 +1770,7 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
  * @mc_io:	Pointer to MC portal's I/O object
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
  * @mode:	Tx confirmation mode
  *
  * This function is useful only when 'DPNI_OPT_TX_CONF_DISABLED' is not
@@ -1701,6 +1788,7 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
 int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode mode)
 {
 	struct dpni_tx_confirmation_mode *cmd_params;
@@ -1711,6 +1799,7 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 					  cmd_flags,
 					  token);
 	cmd_params = (struct dpni_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
 	cmd_params->confirmation_mode = mode;
 
 	/* send command to mc*/
@@ -1722,6 +1811,7 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
  * @mc_io:	Pointer to MC portal's I/O object
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
  * @mode:	Tx confirmation mode
  *
  * Return:  '0' on Success; Error code otherwise.
@@ -1729,8 +1819,10 @@ int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode *mode)
 {
+	struct dpni_tx_confirmation_mode *cmd_params;
 	struct dpni_tx_confirmation_mode *rsp_params;
 	struct mc_command cmd = { 0 };
 	int err;
@@ -1738,6 +1830,8 @@ int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_CONFIRMATION_MODE,
 					cmd_flags,
 					token);
+	cmd_params = (struct dpni_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
 
 	err = mc_send_command(mc_io, &cmd);
 	if (err)
@@ -1749,6 +1843,78 @@ int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 	return 0;
 }
 
+/**
+ * dpni_set_queue_tx_confirmation_mode() - Set Tx confirmation mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
+ * @index:	queue index
+ * @mode:	Tx confirmation mode
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode mode)
+{
+	struct dpni_queue_tx_confirmation_mode *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_QUEUE_TX_CONFIRMATION_MODE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
+	cmd_params->index = index;
+	cmd_params->confirmation_mode = mode;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_queue_tx_confirmation_mode() - Get Tx confirmation mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @ceetm_ch_idx:	ceetm channel index
+ * @index:	queue index
+ * @mode:	Tx confirmation mode
+ *
+ * Return:  '0' on Success; Error code otherwise.
+ */
+int dpni_get_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode *mode)
+{
+	struct dpni_queue_tx_confirmation_mode *cmd_params;
+	struct dpni_queue_tx_confirmation_mode *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QUEUE_TX_CONFIRMATION_MODE,
+					cmd_flags,
+					token);
+	cmd_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	cmd_params->ceetm_ch_idx = ceetm_ch_idx;
+	cmd_params->index = index;
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpni_queue_tx_confirmation_mode *)cmd.params;
+	*mode = rsp_params->confirmation_mode;
+
+	return 0;
+}
+
 /**
  * dpni_set_qos_table() - Set QoS mapping table
  * @mc_io:	Pointer to MC portal's I/O object
@@ -2291,8 +2457,7 @@ int dpni_set_congestion_notification(struct fsl_mc_io *mc_io,
  * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
  * @token:	Token of DPNI object
  * @qtype:	Type of queue - Rx, Tx and Tx confirm types are supported
- * @param:	Traffic class and channel. Bits[0-7] contain traaffic class,
- *		byte[8-15] contains channel id
+ * @tc_id:	Traffic class selection (0-7)
  * @cfg:	congestion notification configuration
  *
  * Return:	'0' on Success; error code otherwise.
@@ -3114,8 +3279,216 @@ int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 
 	cmd_params = (struct dpni_cmd_set_port_cfg *)cmd.params;
 	cmd_params->flags = cpu_to_le32(flags);
-	dpni_set_field(cmd_params->bit_params,	PORT_LOOPBACK_EN,
-			!!port_cfg->loopback_en);
+	dpni_set_field(cmd_params->bit_params, PORT_LOOPBACK_EN, !!port_cfg->loopback_en);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_single_step_cfg() - return current configuration for single step PTP
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @ptp_cfg: ptp single step configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ */
+int dpni_get_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_single_step_cfg *ptp_cfg)
+{
+	struct dpni_rsp_single_step_cfg *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SINGLE_STEP_CFG,
+						cmd_flags,
+						token);
+	/* send command to mc*/
+	err =  mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* read command response */
+	rsp_params = (struct dpni_rsp_single_step_cfg *)cmd.params;
+	ptp_cfg->offset = le16_to_cpu(rsp_params->offset);
+	ptp_cfg->en = dpni_get_field(rsp_params->flags, PTP_ENABLE);
+	ptp_cfg->ch_update = dpni_get_field(rsp_params->flags, PTP_CH_UPDATE);
+	ptp_cfg->peer_delay = le32_to_cpu(rsp_params->peer_delay);
+	ptp_cfg->ptp_onestep_reg_base =
+				  le32_to_cpu(rsp_params->ptp_onestep_reg_base);
+
+	return err;
+}
+
+/**
+ * dpni_get_port_cfg() - return the configuration of the physical port. The command has
+ *			effect only if the dpni is connected to a dpmac object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @port_cfg: Returned configuration data
+ *
+ * The command can be called only when the dpni is connected to a dpmac object.
+ * If the dpni is unconnected or the endpoint is not a dpmac, an error is returned.
+ */
+int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_port_cfg *port_cfg)
+{
+	struct dpni_rsp_get_port_cfg *rsp_params;
+	struct mc_command cmd = { 0 };
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_PORT_CFG,
+			cmd_flags, token);
+
+	/* send command to MC */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* read command response */
+	rsp_params = (struct dpni_rsp_get_port_cfg *)cmd.params;
+	port_cfg->loopback_en = dpni_get_field(rsp_params->bit_params, PORT_LOOPBACK_EN);
+
+	return 0;
+}
+
+/**
+ * dpni_set_single_step_cfg() - enable/disable and configure single step PTP
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @ptp_cfg: ptp single step configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * The function has effect only when the dpni object is connected to a dpmac object. If the
+ * dpni is not connected to a dpmac, the configuration will be stored internally and applied
+ * when the connection is made.
+ */
+int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_single_step_cfg *ptp_cfg)
+{
+	struct dpni_cmd_single_step_cfg *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_SINGLE_STEP_CFG,
+						cmd_flags,
+						token);
+	cmd_params = (struct dpni_cmd_single_step_cfg *)cmd.params;
+	cmd_params->offset = cpu_to_le16(ptp_cfg->offset);
+	cmd_params->peer_delay = cpu_to_le32(ptp_cfg->peer_delay);
+	dpni_set_field(cmd_params->flags, PTP_ENABLE, !!ptp_cfg->en);
+	dpni_set_field(cmd_params->flags, PTP_CH_UPDATE, !!ptp_cfg->ch_update);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_dump_table() - Dump the content of table_type table into memory.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @table_type: The type of the table to dump
+ * @table_index: The index of the table to dump in case of more than one table
+ * @iova_addr: The snapshot will be stored at this address as a header of struct dump_table_header
+ *             followed by an array of struct dump_table_entry
+ * @iova_size: Memory size allocated for iova_addr
+ * @num_entries: Number of entries written in iova_addr
+ *
+ * Return: Completion status. '0' on Success; Error code otherwise.
+ *
+ * The memory allocated at iova_addr must be zeroed before command execution.
+ * If the table content exceeds the memory size provided, the dump will be truncated.
+ */
+int dpni_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries)
+{
+	struct mc_command cmd = { 0 };
+	int err;
+	struct dpni_cmd_dump_table *cmd_params;
+	struct dpni_rsp_dump_table *rsp_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_DUMP_TABLE, cmd_flags, token);
+	cmd_params = (struct dpni_cmd_dump_table *)cmd.params;
+	cmd_params->table_type = cpu_to_le16(table_type);
+	cmd_params->table_index = cpu_to_le16(table_index);
+	cmd_params->iova_addr = cpu_to_le64(iova_addr);
+	cmd_params->iova_size = cpu_to_le32(iova_size);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpni_rsp_dump_table *)cmd.params;
+	*num_entries = le16_to_cpu(rsp_params->num_entries);
+
+	return 0;
+}
+
+/**
+ * dpni_set_sp_profile() - Set up a Soft Parser Profile on this DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @sp_profile: Soft Parser Profile name (must be a valid name for a defined profile)
+ *			Maximum allowed length for this string is 8 characters
+ *			If this parameter is an empty string (all zeros)
+ *			then the default SP Profile is set on this dpni
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ */
+int dpni_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type)
+{
+	struct dpni_cmd_set_sp_profile *cmd_params;
+	struct mc_command cmd = { 0 };
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_SP_PROFILE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpni_cmd_set_sp_profile *)cmd.params;
+	for (i = 0; i < MAX_SP_PROFILE_ID_SIZE && sp_profile[i]; i++)
+		cmd_params->sp_profile[i] = sp_profile[i];
+	cmd_params->type = type;
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_sp_enable() - Enable/Disable Soft Parser on this DPNI
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @type: one of the SP Profile types defined above: Ingress or Egress (or both using bitwise OR)
+ * @en: 1 to enable or 0 to disable
+ */
+int dpni_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t type, uint8_t en)
+{
+	struct dpni_cmd_sp_enable *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SP_ENABLE,
+			cmd_flags, token);
+
+	cmd_params = (struct dpni_cmd_sp_enable *)cmd.params;
+	cmd_params->type = type;
+	cmd_params->en = en;
 
 	/* send command to MC */
 	return mc_send_command(mc_io, &cmd);
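
(Illustrative sketch, not part of the patch: how a caller might exercise the new
dpni_get_qdid_ex() and dpni_dump_table() APIs above. The helper name, buffer size
and error handling are hypothetical; it assumes an opened DPNI token, the MC driver
headers (fsl_dpni.h, fsl_mc_cmd.h), that the IOVA from rte_malloc_virt2iova() is
valid for the MC in the current IOVA mode, and that dump entries immediately follow
the header as the comment describes. struct dump_table_header/dump_table_entry come
from the internal fsl_dpni_cmd.h.)

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <rte_malloc.h>

static int example_qdid_and_dump(struct fsl_mc_io *mc_io, uint16_t token)
{
	uint16_t qdids[DPNI_MAX_CHANNELS];
	struct dump_table_header *hdr;
	struct dump_table_entry *entry;
	uint32_t size = 64 * 1024;
	uint16_t num_entries, i;
	void *buf;
	int err;

	/* The extended API returns one QDID per Tx channel. */
	err = dpni_get_qdid_ex(mc_io, CMD_PRI_LOW, token, DPNI_QUEUE_TX, qdids);
	if (err)
		return err;

	buf = rte_malloc(NULL, size, RTE_CACHE_LINE_SIZE);
	if (!buf)
		return -ENOMEM;
	/* The snapshot memory must be zeroed before command execution. */
	memset(buf, 0, size);

	err = dpni_dump_table(mc_io, CMD_PRI_LOW, token, DPNI_FS_TABLE, 0,
			      rte_malloc_virt2iova(buf), size, &num_entries);
	if (!err) {
		hdr = buf;
		entry = (struct dump_table_entry *)(hdr + 1);
		for (i = 0; i < num_entries; i++)
			printf("entry %u: action %u\n", i, entry[i].key_action);
	}
	rte_free(buf);

	return err;
}
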
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 9bbac44219..97b09e59f9 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2022 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #ifndef __FSL_DPDMUX_H
@@ -154,6 +154,10 @@ int dpdmux_reset(struct fsl_mc_io *mc_io,
  *Setting 1 DPDMUX_RESET will not reset multicast rules
  */
 #define DPDMUX_SKIP_MULTICAST_RULES	0x04
+/**
+ *Setting this flag means DPDMUX_RESET will not reset the default interface
+ */
+#define DPDMUX_SKIP_RESET_DEFAULT_INTERFACE	0x08
 
 int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
@@ -464,10 +468,50 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io,
 			   uint16_t *major_ver,
 			   uint16_t *minor_ver);
 
+enum dpdmux_congestion_unit {
+	DPDMUX_TAILDROP_DROP_UNIT_BYTE = 0,
+	DPDMUX_TAILDROP_DROP_UNIT_FRAMES,
+	DPDMUX_TAILDROP_DROP_UNIT_BUFFERS
+};
+
 /**
- * Discard bit. This bit must be used together with other bits in
- * DPDMUX_ERROR_ACTION_CONTINUE to disable discarding of frames containing
- * errors
+ * struct dpdmux_taildrop_cfg - interface taildrop configuration
+ * @enable: enable (1) or disable (0) taildrop
+ * @units: taildrop units
+ * @threshold: taildrop threshold
+ */
+struct dpdmux_taildrop_cfg {
+	char enable;
+	enum dpdmux_congestion_unit units;
+	uint32_t threshold;
+};
+
+int dpdmux_if_set_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg);
+
+int dpdmux_if_get_taildrop(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+			      uint16_t if_id, struct dpdmux_taildrop_cfg *cfg);
+
+#define DPDMUX_MAX_KEY_SIZE 56
+
+enum dpdmux_table_type {
+	DPDMUX_DMAT_TABLE = 1,
+	DPDMUX_MISS_TABLE = 2,
+	DPDMUX_PRUNE_TABLE = 3,
+};
+
+int dpdmux_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries);
+
+/**
+ * Discard bit. This bit must be used together with other bits in DPDMUX_ERROR_ACTION_CONTINUE
+ * to disable discarding of frames containing errors
  */
 #define DPDMUX_ERROR_DISC		0x80000000
 /**
@@ -583,4 +627,19 @@ struct dpdmux_error_cfg {
 int dpdmux_if_set_errors_behavior(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 		uint16_t token, uint16_t if_id, struct dpdmux_error_cfg *cfg);
 
+/**
+ * SP Profile on Ingress DPDMUX
+ */
+#define DPDMUX_SP_PROFILE_INGRESS 0x1
+/**
+ * SP Profile on Egress DPDMUX
+ */
+#define DPDMUX_SP_PROFILE_EGRESS	0x2
+
+int dpdmux_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type);
+
+int dpdmux_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t if_id, uint8_t type, uint8_t en);
+
 #endif /* __FSL_DPDMUX_H */
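
(Illustrative sketch, not part of the patch: enabling taildrop on one dpdmux
interface via the new API declared above. The interface id and threshold are
arbitrary and an opened dpdmux token is assumed; frame-based accounting is used
here.)

static int example_if_taildrop(struct fsl_mc_io *mc_io, uint16_t token)
{
	struct dpdmux_taildrop_cfg cfg = {
		.enable = 1,
		.units = DPDMUX_TAILDROP_DROP_UNIT_FRAMES,
		.threshold = 512,	/* drop once 512 frames are queued */
	};
	int err;

	/* Configure interface 1, then read the setting back to verify it. */
	err = dpdmux_if_set_taildrop(mc_io, CMD_PRI_LOW, token, 1, &cfg);
	if (!err)
		err = dpdmux_if_get_taildrop(mc_io, CMD_PRI_LOW, token, 1, &cfg);

	return err;
}
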
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
index bf6b8a20d1..a94f1bf91a 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2023 NXP
  *
  */
 #ifndef _FSL_DPDMUX_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPDMUX Version */
 #define DPDMUX_VER_MAJOR		6
-#define DPDMUX_VER_MINOR		9
+#define DPDMUX_VER_MINOR		10
 
 #define DPDMUX_CMD_BASE_VERSION		1
 #define DPDMUX_CMD_VERSION_2		2
@@ -63,8 +63,17 @@
 
 #define DPDMUX_CMDID_SET_RESETABLE		DPDMUX_CMD(0x0ba)
 #define DPDMUX_CMDID_GET_RESETABLE		DPDMUX_CMD(0x0bb)
+
+#define DPDMUX_CMDID_IF_SET_TAILDROP		DPDMUX_CMD(0x0bc)
+#define DPDMUX_CMDID_IF_GET_TAILDROP		DPDMUX_CMD(0x0bd)
+
+#define DPDMUX_CMDID_DUMP_TABLE           DPDMUX_CMD(0x0be)
+
 #define DPDMUX_CMDID_SET_ERRORS_BEHAVIOR	DPDMUX_CMD(0x0bf)
 
+#define DPDMUX_CMDID_SET_SP_PROFILE			DPDMUX_CMD(0x0c0)
+#define DPDMUX_CMDID_SP_ENABLE				DPDMUX_CMD(0x0c1)
+
 #define DPDMUX_MASK(field)        \
 	GENMASK(DPDMUX_##field##_SHIFT + DPDMUX_##field##_SIZE - 1, \
 		DPDMUX_##field##_SHIFT)
@@ -241,7 +250,7 @@ struct dpdmux_cmd_remove_custom_cls_entry {
 };
 
 #define DPDMUX_SKIP_RESET_FLAGS_SHIFT    0
-#define DPDMUX_SKIP_RESET_FLAGS_SIZE     3
+#define DPDMUX_SKIP_RESET_FLAGS_SIZE     4
 
 struct dpdmux_cmd_set_skip_reset_flags {
 	uint8_t skip_reset_flags;
@@ -251,6 +260,61 @@ struct dpdmux_rsp_get_skip_reset_flags {
 	uint8_t skip_reset_flags;
 };
 
+struct dpdmux_cmd_set_taildrop {
+	uint32_t	pad1;
+	uint16_t	if_id;
+	uint16_t	pad2;
+	uint16_t	oal_en;
+	uint8_t		units;
+	uint8_t		pad3;
+	uint32_t	threshold;
+};
+
+struct dpdmux_cmd_get_taildrop {
+	uint32_t	pad1;
+	uint16_t	if_id;
+};
+
+struct dpdmux_rsp_get_taildrop {
+	uint16_t	pad1;
+	uint16_t	pad2;
+	uint16_t	if_id;
+	uint16_t	pad3;
+	uint16_t	oal_en;
+	uint8_t		units;
+	uint8_t		pad4;
+	uint32_t	threshold;
+};
+
+struct dpdmux_cmd_dump_table {
+	uint16_t table_type;
+	uint16_t table_index;
+	uint32_t pad0;
+	uint64_t iova_addr;
+	uint32_t iova_size;
+};
+
+struct dpdmux_rsp_dump_table {
+	uint16_t num_entries;
+};
+
+struct dpdmux_dump_table_header {
+	uint16_t table_type;
+	uint16_t table_num_entries;
+	uint16_t table_max_entries;
+	uint8_t default_action;
+	uint8_t match_type;
+	uint8_t reserved[24];
+};
+
+struct dpdmux_dump_table_entry {
+	uint8_t key[DPDMUX_MAX_KEY_SIZE];
+	uint8_t mask[DPDMUX_MAX_KEY_SIZE];
+	uint8_t key_action;
+	uint16_t result[3];
+	uint8_t reserved[21];
+};
+
 #define DPDMUX_ERROR_ACTION_SHIFT		0
 #define DPDMUX_ERROR_ACTION_SIZE		4
 
@@ -260,5 +324,18 @@ struct dpdmux_cmd_set_errors_behavior {
 	uint16_t if_id;
 };
 
+#define MAX_SP_PROFILE_ID_SIZE	8
+
+struct dpdmux_cmd_set_sp_profile {
+	uint8_t sp_profile[MAX_SP_PROFILE_ID_SIZE];
+	uint8_t type;
+};
+
+struct dpdmux_cmd_sp_enable {
+	uint16_t if_id;
+	uint8_t type;
+	uint8_t en;
+};
+
 #pragma pack(pop)
 #endif /* _FSL_DPDMUX_CMD_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dpkg.h b/drivers/net/dpaa2/mc/fsl_dpkg.h
index 70f2339ea5..834c765513 100644
--- a/drivers/net/dpaa2/mc/fsl_dpkg.h
+++ b/drivers/net/dpaa2/mc/fsl_dpkg.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  * Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPKG_H_
@@ -180,7 +180,8 @@ struct dpni_ext_set_rx_tc_dist {
 	struct dpni_dist_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
 };
 
-int dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
-			 uint8_t *key_cfg_buf);
+int
+dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
+	void *key_cfg_buf);
 
 #endif /* __FSL_DPKG_H_ */
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index ce84f4265e..3a5fcfa8a5 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2021 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef __FSL_DPNI_H
@@ -116,6 +116,11 @@ struct fsl_mc_io;
  * Flow steering table is shared between all traffic classes
  */
 #define DPNI_OPT_SHARED_FS				0x001000
+/*
+ * Fq frame data, context and annotations stashing disable.
+ * The stashing is enabled by default.
+ */
+#define DPNI_OPT_STASHING_DIS			0x002000
 /**
  * Software sequence maximum layout size
  */
@@ -147,6 +152,7 @@ int dpni_close(struct fsl_mc_io *mc_io,
  *		DPNI_OPT_HAS_KEY_MASKING
  *		DPNI_OPT_NO_FS
  *		DPNI_OPT_SINGLE_SENDER
+ *		DPNI_OPT_STASHING_DIS
  * @fs_entries: Number of entries in the flow steering table.
  *		This table is used to select the ingress queue for
  *		ingress traffic, targeting a GPP core or another.
@@ -335,6 +341,7 @@ int dpni_clear_irq_status(struct fsl_mc_io *mc_io,
  *		DPNI_OPT_SHARED_CONGESTION
  *		DPNI_OPT_HAS_KEY_MASKING
  *		DPNI_OPT_NO_FS
+ *		DPNI_OPT_STASHING_DIS
  * @num_queues: Number of Tx and Rx queues used for traffic distribution.
  * @num_rx_tcs: Number of RX traffic classes (TCs), reserved for the DPNI.
  * @num_tx_tcs: Number of TX traffic classes (TCs), reserved for the DPNI.
@@ -394,7 +401,7 @@ int dpni_get_attributes(struct fsl_mc_io *mc_io,
  * error queue. To be used in dpni_set_errors_behavior() only if error_action
  * parameter is set to DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE.
  */
-#define DPNI_ERROR_DISC		0x80000000
+#define DPNI_ERROR_DISC			0x80000000
 
 /**
  * Extract out of frame header error
@@ -576,6 +583,8 @@ enum dpni_offload {
 	DPNI_OFF_TX_L3_CSUM,
 	DPNI_OFF_TX_L4_CSUM,
 	DPNI_FLCTYPE_HASH,
+	DPNI_HEADER_STASHING,
+	DPNI_PAYLOAD_STASHING,
 };
 
 int dpni_set_offload(struct fsl_mc_io *mc_io,
@@ -596,6 +605,26 @@ int dpni_get_qdid(struct fsl_mc_io *mc_io,
 		  enum dpni_queue_type qtype,
 		  uint16_t *qdid);
 
+int dpni_get_qdid_ex(struct fsl_mc_io *mc_io,
+		  uint32_t cmd_flags,
+		  uint16_t token,
+		  enum dpni_queue_type qtype,
+		  uint16_t *qdid);
+
+/**
+ * struct dpni_sp_info - Structure representing DPNI storage-profile information
+ * (relevant only for DPNI owned by AIOP)
+ * @spids: array of storage-profiles
+ */
+struct dpni_sp_info {
+	uint16_t spids[DPNI_MAX_SP];
+};
+
+int dpni_get_sp_info(struct fsl_mc_io *mc_io,
+		     uint32_t cmd_flags,
+		     uint16_t token,
+		     struct dpni_sp_info *sp_info);
+
 int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
 			    uint32_t cmd_flags,
 			    uint16_t token,
@@ -1443,11 +1472,25 @@ enum dpni_confirmation_mode {
 int dpni_set_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
 				  enum dpni_confirmation_mode mode);
 
 int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
+				  uint8_t ceetm_ch_idx,
+				  enum dpni_confirmation_mode *mode);
+
+int dpni_set_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
+				  enum dpni_confirmation_mode mode);
+
+int dpni_get_queue_tx_confirmation_mode(struct fsl_mc_io *mc_io,
+				  uint32_t cmd_flags,
+				  uint16_t token,
+				  uint8_t ceetm_ch_idx, uint8_t index,
 				  enum dpni_confirmation_mode *mode);
 
 /**
@@ -1841,6 +1884,60 @@ void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
 				     const uint8_t *sw_sequence_layout_buf);
 
 /**
+ * When used for queue_idx in function dpni_set_rx_dist_default_queue, this value signals
+ * the dpni to drop all unclassified frames
+ */
+#define DPNI_FS_MISS_DROP		((uint16_t)-1)
+
+/**
+ * struct dpni_rx_dist_cfg - distribution configuration
+ * @dist_size:	distribution size; supported values: 1,2,3,4,6,7,8,
+ *		12,14,16,24,28,32,48,56,64,96,112,128,192,224,256,384,448,
+ *		512,768,896,1024
+ * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
+ *		the extractions to be used for the distribution key by calling
+ *		dpkg_prepare_key_cfg(); relevant only when enable!=0, otherwise it can be '0'
+ * @enable: enable/disable the distribution.
+ * @tc: TC id for which distribution is set
+ * @fs_miss_flow_id: when a packet misses all rules from the flow steering table and hash is
+ *		disabled, it will be put into this queue id; use DPNI_FS_MISS_DROP to drop
+ *		frames. The value of this field is used only when flow steering distribution
+ *		is enabled and hash distribution is disabled
+ */
+struct dpni_rx_dist_cfg {
+	uint16_t dist_size;
+	uint64_t key_cfg_iova;
+	uint8_t enable;
+	uint8_t tc;
+	uint16_t fs_miss_flow_id;
+};
+
+int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		const struct dpni_rx_dist_cfg *cfg);
+
+int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		const struct dpni_rx_dist_cfg *cfg);
+
+int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t tpid);
+
+int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint16_t tpid);
+
+/**
+ * struct dpni_custom_tpid_cfg - custom TPID configuration. Contains custom TPID values
+ *		used in current dpni object to detect 802.1q frames.
+ *	@tpid1: first tag. Not used if zero.
+ *	@tpid2: second tag. Not used if zero.
+ */
+struct dpni_custom_tpid_cfg {
+	uint16_t tpid1;
+	uint16_t tpid2;
+};
+
+int dpni_get_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		struct dpni_custom_tpid_cfg *tpid);
+
+/**
  * struct dpni_ptp_cfg - configure single step PTP (IEEE 1588)
  *	@en: enable single step PTP. When enabled the PTPv1 functionality will
  *		not work. If the field is zero, offset and ch_update parameters
@@ -1858,6 +1955,7 @@ struct dpni_single_step_cfg {
 	uint8_t ch_update;
 	uint16_t offset;
 	uint32_t peer_delay;
+	uint32_t ptp_onestep_reg_base;
 };
 
 int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
@@ -1885,61 +1983,35 @@ int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
 		uint16_t token, struct dpni_port_cfg *port_cfg);
 
-/**
- * When used for queue_idx in function dpni_set_rx_dist_default_queue will
- * signal to dpni to drop all unclassified frames
- */
-#define DPNI_FS_MISS_DROP		((uint16_t)-1)
-
-/**
- * struct dpni_rx_dist_cfg - distribution configuration
- * @dist_size:	distribution size; supported values: 1,2,3,4,6,7,8,
- *		12,14,16,24,28,32,48,56,64,96,112,128,192,224,256,384,448,
- *		512,768,896,1024
- * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
- *		the extractions to be used for the distribution key by calling
- *		dpkg_prepare_key_cfg() relevant only when enable!=0 otherwise
- *		it can be '0'
- * @enable: enable/disable the distribution.
- * @tc: TC id for which distribution is set
- * @fs_miss_flow_id: when packet misses all rules from flow steering table and
- *		hash is disabled it will be put into this queue id; use
- *		DPNI_FS_MISS_DROP to drop frames. The value of this field is
- *		used only when flow steering distribution is enabled and hash
- *		distribution is disabled
- */
-struct dpni_rx_dist_cfg {
-	uint16_t dist_size;
-	uint64_t key_cfg_iova;
-	uint8_t enable;
-	uint8_t tc;
-	uint16_t fs_miss_flow_id;
+enum dpni_table_type {
+	DPNI_FS_TABLE = 1,
+	DPNI_MAC_TABLE = 2,
+	DPNI_QOS_TABLE = 3,
+	DPNI_VLAN_TABLE = 4,
 };
 
-int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, const struct dpni_rx_dist_cfg *cfg);
-
-int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, const struct dpni_rx_dist_cfg *cfg);
-
-int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, uint16_t tpid);
-
-int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, uint16_t tpid);
+int dpni_dump_table(struct fsl_mc_io *mc_io,
+			 uint32_t cmd_flags,
+			 uint16_t token,
+			 uint16_t table_type,
+			 uint16_t table_index,
+			 uint64_t iova_addr,
+			 uint32_t iova_size,
+			 uint16_t *num_entries);
 
 /**
- * struct dpni_custom_tpid_cfg - custom TPID configuration. Contains custom TPID
- *	values used in current dpni object to detect 802.1q frames.
- *	@tpid1: first tag. Not used if zero.
- *	@tpid2: second tag. Not used if zero.
+ * SP Profile on Ingress DPNI
  */
-struct dpni_custom_tpid_cfg {
-	uint16_t tpid1;
-	uint16_t tpid2;
-};
+#define DPNI_SP_PROFILE_INGRESS 0x1
+/**
+ * SP Profile on Egress DPNI
+ */
+#define DPNI_SP_PROFILE_EGRESS	0x2
+
+int dpni_set_sp_profile(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t sp_profile[], uint8_t type);
 
-int dpni_get_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
-		uint16_t token, struct dpni_custom_tpid_cfg *tpid);
+int dpni_sp_enable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token,
+		uint8_t type, uint8_t en);
 
 #endif /* __FSL_DPNI_H */
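
(Illustrative sketch, not part of the patch: enabling flow-steering distribution
on TC 0 with the relocated struct dpni_rx_dist_cfg, dropping every frame that
matches no rule. key_cfg_iova is assumed to point to a 256-byte DMA-able area
already filled by dpkg_prepare_key_cfg(); the helper name is hypothetical.)

static int example_fs_dist(struct fsl_mc_io *mc_io, uint16_t token,
			   uint64_t key_cfg_iova)
{
	struct dpni_rx_dist_cfg cfg = {
		.dist_size = 8,		/* one of the supported sizes */
		.key_cfg_iova = key_cfg_iova,
		.enable = 1,
		.tc = 0,
		.fs_miss_flow_id = DPNI_FS_MISS_DROP,
	};

	return dpni_set_rx_fs_dist(mc_io, CMD_PRI_LOW, token, &cfg);
}
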
diff --git a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
index 781f936add..1152182e34 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2022 NXP
+ * Copyright 2016-2023 NXP
  *
  */
 #ifndef _FSL_DPNI_CMD_H
@@ -9,7 +9,7 @@
 
 /* DPNI Version */
 #define DPNI_VER_MAJOR				8
-#define DPNI_VER_MINOR				2
+#define DPNI_VER_MINOR				4
 
 #define DPNI_CMD_BASE_VERSION			1
 #define DPNI_CMD_VERSION_2			2
@@ -108,8 +108,8 @@
 #define DPNI_CMDID_GET_EARLY_DROP		DPNI_CMD_V3(0x26A)
 #define DPNI_CMDID_GET_OFFLOAD			DPNI_CMD_V2(0x26B)
 #define DPNI_CMDID_SET_OFFLOAD			DPNI_CMD_V2(0x26C)
-#define DPNI_CMDID_SET_TX_CONFIRMATION_MODE	DPNI_CMD(0x266)
-#define DPNI_CMDID_GET_TX_CONFIRMATION_MODE	DPNI_CMD(0x26D)
+#define DPNI_CMDID_SET_TX_CONFIRMATION_MODE	DPNI_CMD_V2(0x266)
+#define DPNI_CMDID_GET_TX_CONFIRMATION_MODE	DPNI_CMD_V2(0x26D)
 #define DPNI_CMDID_SET_OPR			DPNI_CMD_V2(0x26e)
 #define DPNI_CMDID_GET_OPR			DPNI_CMD_V2(0x26f)
 #define DPNI_CMDID_LOAD_SW_SEQUENCE		DPNI_CMD(0x270)
@@ -121,7 +121,16 @@
 #define DPNI_CMDID_REMOVE_CUSTOM_TPID		DPNI_CMD(0x276)
 #define DPNI_CMDID_GET_CUSTOM_TPID		DPNI_CMD(0x277)
 #define DPNI_CMDID_GET_LINK_CFG			DPNI_CMD(0x278)
+#define DPNI_CMDID_SET_SINGLE_STEP_CFG			DPNI_CMD(0x279)
+#define DPNI_CMDID_GET_SINGLE_STEP_CFG		DPNI_CMD_V2(0x27a)
 #define DPNI_CMDID_SET_PORT_CFG			DPNI_CMD(0x27B)
+#define DPNI_CMDID_GET_PORT_CFG			DPNI_CMD(0x27C)
+#define DPNI_CMDID_DUMP_TABLE           DPNI_CMD(0x27D)
+#define DPNI_CMDID_SET_SP_PROFILE		DPNI_CMD(0x27E)
+#define DPNI_CMDID_GET_QDID_EX			DPNI_CMD(0x27F)
+#define DPNI_CMDID_SP_ENABLE		    DPNI_CMD(0x280)
+#define DPNI_CMDID_SET_QUEUE_TX_CONFIRMATION_MODE	DPNI_CMD(0x281)
+#define DPNI_CMDID_GET_QUEUE_TX_CONFIRMATION_MODE	DPNI_CMD(0x282)
 
 /* Macros for accessing command fields smaller than 1byte */
 #define DPNI_MASK(field)	\
@@ -329,6 +338,10 @@ struct dpni_rsp_get_qdid {
 	uint16_t qdid;
 };
 
+struct dpni_rsp_get_qdid_ex {
+	uint16_t qdid[16];
+};
+
 struct dpni_rsp_get_sp_info {
 	uint16_t spids[2];
 };
@@ -748,7 +761,16 @@ struct dpni_cmd_set_taildrop {
 };
 
 struct dpni_tx_confirmation_mode {
-	uint32_t pad;
+	uint8_t ceetm_ch_idx;
+	uint8_t pad1;
+	uint16_t pad2;
+	uint8_t confirmation_mode;
+};
+
+struct dpni_queue_tx_confirmation_mode {
+	uint8_t ceetm_ch_idx;
+	uint8_t index;
+	uint16_t pad;
 	uint8_t confirmation_mode;
 };
 
@@ -894,6 +916,42 @@ struct dpni_sw_sequence_layout_entry {
 	uint16_t pad;
 };
 
+#define DPNI_RX_FS_DIST_ENABLE_SHIFT	0
+#define DPNI_RX_FS_DIST_ENABLE_SIZE		1
+struct dpni_cmd_set_rx_fs_dist {
+	uint16_t	dist_size;
+	uint8_t		enable;
+	uint8_t		tc;
+	uint16_t	miss_flow_id;
+	uint16_t	pad1;
+	uint64_t	key_cfg_iova;
+};
+
+#define DPNI_RX_HASH_DIST_ENABLE_SHIFT	0
+#define DPNI_RX_HASH_DIST_ENABLE_SIZE		1
+struct dpni_cmd_set_rx_hash_dist {
+	uint16_t	dist_size;
+	uint8_t		enable;
+	uint8_t		tc_id;
+	uint32_t	pad;
+	uint64_t	key_cfg_iova;
+};
+
+struct dpni_cmd_add_custom_tpid {
+	uint16_t	pad;
+	uint16_t	tpid;
+};
+
+struct dpni_cmd_remove_custom_tpid {
+	uint16_t	pad;
+	uint16_t	tpid;
+};
+
+struct dpni_rsp_get_custom_tpid {
+	uint16_t	tpid1;
+	uint16_t	tpid2;
+};
+
 #define DPNI_PTP_ENABLE_SHIFT			0
 #define DPNI_PTP_ENABLE_SIZE			1
 #define DPNI_PTP_CH_UPDATE_SHIFT		1
@@ -925,40 +983,45 @@ struct dpni_rsp_get_port_cfg {
 	uint32_t	bit_params;
 };
 
-#define DPNI_RX_FS_DIST_ENABLE_SHIFT	0
-#define DPNI_RX_FS_DIST_ENABLE_SIZE		1
-struct dpni_cmd_set_rx_fs_dist {
-	uint16_t	dist_size;
-	uint8_t		enable;
-	uint8_t		tc;
-	uint16_t	miss_flow_id;
-	uint16_t	pad1;
-	uint64_t	key_cfg_iova;
+struct dpni_cmd_dump_table {
+	uint16_t table_type;
+	uint16_t table_index;
+	uint32_t pad0;
+	uint64_t iova_addr;
+	uint32_t iova_size;
 };
 
-#define DPNI_RX_HASH_DIST_ENABLE_SHIFT	0
-#define DPNI_RX_HASH_DIST_ENABLE_SIZE		1
-struct dpni_cmd_set_rx_hash_dist {
-	uint16_t	dist_size;
-	uint8_t		enable;
-	uint8_t		tc_id;
-	uint32_t	pad;
-	uint64_t	key_cfg_iova;
+struct dpni_rsp_dump_table {
+	uint16_t num_entries;
 };
 
-struct dpni_cmd_add_custom_tpid {
-	uint16_t	pad;
-	uint16_t	tpid;
+struct dump_table_header {
+	uint16_t table_type;
+	uint16_t table_num_entries;
+	uint16_t table_max_entries;
+	uint8_t default_action;
+	uint8_t match_type;
+	uint8_t reserved[24];
 };
 
-struct dpni_cmd_remove_custom_tpid {
-	uint16_t	pad;
-	uint16_t	tpid;
+struct dump_table_entry {
+	uint8_t key[DPNI_MAX_KEY_SIZE];
+	uint8_t mask[DPNI_MAX_KEY_SIZE];
+	uint8_t key_action;
+	uint16_t result[3];
+	uint8_t reserved[21];
 };
 
-struct dpni_rsp_get_custom_tpid {
-	uint16_t	tpid1;
-	uint16_t	tpid2;
+#define MAX_SP_PROFILE_ID_SIZE	8
+
+struct dpni_cmd_set_sp_profile {
+	uint8_t sp_profile[MAX_SP_PROFILE_ID_SIZE];
+	uint8_t type;
+};
+
+struct dpni_cmd_sp_enable {
+	uint8_t type;
+	uint8_t en;
 };
 
 #pragma pack(pop)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 09/42] net/dpaa2: support link state for eth interfaces
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (7 preceding siblings ...)
  2024-10-23 11:59           ` [v5 08/42] bus/fslmc: upgrade with MC version 10.37 vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 10/42] net/dpaa2: update DPNI link status method vanshika.shukla
                             ` (33 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

This patch adds support to update the duplex value along with the
link status and link speed after setting the link UP.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
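Illustrative usage sketch (not part of the patch; the port id and a started
DPAA2 port are assumed): with this change an application sees the duplex value
reflected through the generic link API after bringing the link up.

#include <stdio.h>
#include <rte_ethdev.h>

static void example_show_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_dev_set_link_up(port_id) == 0 &&
	    rte_eth_link_get(port_id, &link) == 0)
		printf("port %u: %s, %u Mbps, %s duplex\n", port_id,
		       link.link_status ? "up" : "down", link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
		       "full" : "half");
}
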
 drivers/net/dpaa2/dpaa2_ethdev.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 439b8f97a4..b120e2c815 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1988,7 +1988,7 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	if (ret) {
 		/* Unable to obtain dpni status; Not continuing */
 		DPAA2_PMD_ERR("Interface Link UP failed (%d)", ret);
-		return -EINVAL;
+		return ret;
 	}
 
 	/* Enable link if not already enabled */
@@ -1996,13 +1996,13 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 		ret = dpni_enable(dpni, CMD_PRI_LOW, priv->token);
 		if (ret) {
 			DPAA2_PMD_ERR("Interface Link UP failed (%d)", ret);
-			return -EINVAL;
+			return ret;
 		}
 	}
 	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
 	if (ret < 0) {
 		DPAA2_PMD_DEBUG("Unable to get link state (%d)", ret);
-		return -1;
+		return ret;
 	}
 
 	/* changing tx burst function to start enqueues */
@@ -2010,10 +2010,15 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	dev->data->dev_link.link_status = state.up;
 	dev->data->dev_link.link_speed = state.rate;
 
+	if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
+		dev->data->dev_link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	else
+		dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+
 	if (state.up)
-		DPAA2_PMD_INFO("Port %d Link is Up", dev->data->port_id);
+		DPAA2_PMD_DEBUG("Port %d Link is Up", dev->data->port_id);
 	else
-		DPAA2_PMD_INFO("Port %d Link is Down", dev->data->port_id);
+		DPAA2_PMD_DEBUG("Port %d Link is Down", dev->data->port_id);
 	return ret;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 10/42] net/dpaa2: update DPNI link status method
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (8 preceding siblings ...)
  2024-10-23 11:59           ` [v5 09/42] net/dpaa2: support link state for eth interfaces vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 11/42] net/dpaa2: add new PMD API to check dpaa platform version vanshika.shukla
                             ` (32 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Brick Yang, Rohit Raj

From: Brick Yang <brick.yang@nxp.com>

If an SFP module is not connected to the port and flow control is
configured using the flow control API, the link will show DOWN even
after the SFP module and fiber cable are connected.

This issue cannot be reproduced if only the SFP module is connected
and the fiber cable is disconnected before configuring flow control,
even though the link is down in this case too.

This patch improves the behavior by getting the configuration values
from the dpni_get_link_cfg API, which provides static configuration
data, instead of the dpni_get_link_state API.

Signed-off-by: Brick Yang <brick.yang@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
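Illustrative usage sketch (not part of the patch; port id and error handling
are hypothetical): querying the flow-control mode through the generic API,
which with this change reflects the static configuration even when no SFP
module or fiber cable is plugged.

#include <stdio.h>
#include <rte_ethdev.h>

static void example_show_fc(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;

	if (rte_eth_dev_flow_ctrl_get(port_id, &fc_conf) == 0)
		printf("port %u flow-control mode: %d\n",
		       port_id, (int)fc_conf.mode);
}
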
 drivers/net/dpaa2/dpaa2_ethdev.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index b120e2c815..0adebc0bf1 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2087,7 +2087,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	int ret = -EINVAL;
 	struct dpaa2_dev_priv *priv;
 	struct fsl_mc_io *dpni;
-	struct dpni_link_state state = {0};
+	struct dpni_link_cfg cfg = {0};
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -2099,14 +2099,14 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
+	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("error: dpni_get_link_state %d", ret);
+		DPAA2_PMD_ERR("error: dpni_get_link_cfg %d", ret);
 		return ret;
 	}
 
 	memset(fc_conf, 0, sizeof(struct rte_eth_fc_conf));
-	if (state.options & DPNI_LINK_OPT_PAUSE) {
+	if (cfg.options & DPNI_LINK_OPT_PAUSE) {
 		/* DPNI_LINK_OPT_PAUSE set
 		 *  if ASYM_PAUSE not set,
 		 *	RX Side flow control (handle received Pause frame)
@@ -2115,7 +2115,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	RX Side flow control (handle received Pause frame)
 		 *	No TX side flow control (send Pause frame disabled)
 		 */
-		if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
+		if (!(cfg.options & DPNI_LINK_OPT_ASYM_PAUSE))
 			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
 			fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
@@ -2127,7 +2127,7 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *  if ASYM_PAUSE not set,
 		 *	Flow control disabled
 		 */
-		if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
+		if (cfg.options & DPNI_LINK_OPT_ASYM_PAUSE)
 			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
 			fc_conf->mode = RTE_ETH_FC_NONE;
@@ -2142,7 +2142,6 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	int ret = -EINVAL;
 	struct dpaa2_dev_priv *priv;
 	struct fsl_mc_io *dpni;
-	struct dpni_link_state state = {0};
 	struct dpni_link_cfg cfg = {0};
 
 	PMD_INIT_FUNC_TRACE();
@@ -2155,23 +2154,19 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	/* It is necessary to obtain the current state before setting fc_conf
+	/* It is necessary to obtain the current cfg before setting fc_conf
 	 * as MC would return error in case rate, autoneg or duplex values are
 	 * different.
 	 */
-	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
+	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("Unable to get link state (err=%d)", ret);
+		DPAA2_PMD_ERR("Unable to get link cfg (err=%d)", ret);
 		return -1;
 	}
 
 	/* Disable link before setting configuration */
 	dpaa2_dev_set_link_down(dev);
 
-	/* Based on fc_conf, update cfg */
-	cfg.rate = state.rate;
-	cfg.options = state.options;
-
 	/* update cfg with fc_conf */
 	switch (fc_conf->mode) {
 	case RTE_ETH_FC_FULL:
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 11/42] net/dpaa2: add new PMD API to check dpaa platform version
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (9 preceding siblings ...)
  2024-10-23 11:59           ` [v5 10/42] net/dpaa2: update DPNI link status method vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 12/42] bus/fslmc: improve BMAN buffer acquire vanshika.shukla
                             ` (31 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

This patch adds support to check the DPAA platform type from
applications.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
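Illustrative usage sketch (not part of the patch): an application can gate
DPAA2-specific calls on the new experimental API without touching PMD
internals.

#include <stdio.h>
#include <rte_pmd_dpaa2.h>

static void example_check_port(uint16_t port_id)
{
	if (rte_pmd_dpaa2_dev_is_dpaa2(port_id))
		printf("port %u is a DPAA2 port\n", port_id);
	else
		printf("port %u is not a DPAA2 port\n", port_id);
}
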
 drivers/net/dpaa2/dpaa2_ethdev.c  | 16 +++++++++++++---
 drivers/net/dpaa2/dpaa2_flow.c    |  5 ++---
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  4 ++++
 drivers/net/dpaa2/version.map     |  1 +
 4 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 0adebc0bf1..bd6a578e30 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2161,7 +2161,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	ret = dpni_get_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
 		DPAA2_PMD_ERR("Unable to get link cfg (err=%d)", ret);
-		return -1;
+		return ret;
 	}
 
 	/* Disable link before setting configuration */
@@ -2203,7 +2203,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	default:
 		DPAA2_PMD_ERR("Incorrect Flow control flag (%d)",
 			      fc_conf->mode);
-		return -1;
+		return -EINVAL;
 	}
 
 	ret = dpni_set_link_cfg(dpni, CMD_PRI_LOW, priv->token, &cfg);
@@ -2885,8 +2885,18 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
-int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
+int
+rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 {
+	struct rte_eth_dev *dev;
+
+	if (eth_id >= RTE_MAX_ETHPORTS)
+		return false;
+
+	dev = &rte_eth_devices[eth_id];
+	if (!dev->device)
+		return false;
+
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 54b17e97c0..77367aa392 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -3296,14 +3296,13 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 	if (idx >= 0) {
 		if (!rte_eth_dev_is_valid_port(idx))
 			return NULL;
+		if (!rte_pmd_dpaa2_dev_is_dpaa2(idx))
+			return NULL;
 		dest_dev = &rte_eth_devices[idx];
 	} else {
 		dest_dev = priv->eth_dev;
 	}
 
-	if (!dpaa2_dev_is_dpaa2(dest_dev))
-		return NULL;
-
 	return dest_dev;
 }
 
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index bebebcacdc..fc52a9218e 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -127,6 +127,10 @@ __rte_experimental
 uint32_t
 rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
 
+__rte_experimental
+int
+rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id);
+
 #if defined(RTE_LIBRTE_IEEE1588)
 __rte_experimental
 int
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 7323fc8869..233c6e6b2c 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -17,6 +17,7 @@ EXPERIMENTAL {
 	# added in 21.11
 	rte_pmd_dpaa2_get_tlu_hash;
 	# added in 24.11
+	rte_pmd_dpaa2_dev_is_dpaa2;
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
 	rte_pmd_dpaa2_mux_dump_counter;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 12/42] bus/fslmc: improve BMAN buffer acquire
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (10 preceding siblings ...)
  2024-10-23 11:59           ` [v5 11/42] net/dpaa2: add new PMD API to check dpaa platform version vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 13/42] bus/fslmc: get MC VFIO group FD directly vanshika.shukla
                             ` (30 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Ignore the reserved bits of the buffer count in the BMan acquire response.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
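Illustrative sketch (not part of the patch) of the idea behind the fix: only
bits [2:0] of the response 'num' field carry the number of acquired buffers,
so reserved bits must be masked off before the value is used.

#include <stdint.h>

#define BMAN_VALID_RSLT_NUM_MASK 0x7

static inline unsigned int bman_rslt_num(uint8_t raw_num)
{
	/* e.g. raw 0x83 (a reserved bit set) still means 3 valid buffers */
	return raw_num & BMAN_VALID_RSLT_NUM_MASK;
}
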
 drivers/bus/fslmc/qbman/qbman_portal.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 1f24cdce7e..3fdca9761d 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2020,2023-2024 NXP
  *
  */
 
@@ -42,6 +42,8 @@
 /* opaque token for static dequeues */
 #define QMAN_SDQCR_TOKEN    0xbb
 
+#define BMAN_VALID_RSLT_NUM_MASK 0x7
+
 enum qbman_sdqcr_dct {
 	qbman_sdqcr_dct_null = 0,
 	qbman_sdqcr_dct_prio_ics,
@@ -2628,7 +2630,7 @@ struct qbman_acquire_rslt {
 	uint16_t reserved;
 	uint8_t num;
 	uint8_t reserved2[3];
-	uint64_t buf[7];
+	uint64_t buf[BMAN_VALID_RSLT_NUM_MASK];
 };
 
 static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
@@ -2636,8 +2638,9 @@ static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
 {
 	struct qbman_acquire_desc *p;
 	struct qbman_acquire_rslt *r;
+	int num;
 
-	if (!num_buffers || (num_buffers > 7))
+	if (!num_buffers || (num_buffers > BMAN_VALID_RSLT_NUM_MASK))
 		return -EINVAL;
 
 	/* Start the management command */
@@ -2668,12 +2671,13 @@ static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
 		return -EIO;
 	}
 
-	QBMAN_BUG_ON(r->num > num_buffers);
+	num = r->num & BMAN_VALID_RSLT_NUM_MASK;
+	QBMAN_BUG_ON(num > num_buffers);
 
 	/* Copy the acquired buffers to the caller's array */
-	u64_from_le32_copy(buffers, &r->buf[0], r->num);
+	u64_from_le32_copy(buffers, &r->buf[0], num);
 
-	return (int)r->num;
+	return num;
 }
 
 static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
@@ -2681,8 +2685,9 @@ static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
 {
 	struct qbman_acquire_desc *p;
 	struct qbman_acquire_rslt *r;
+	int num;
 
-	if (!num_buffers || (num_buffers > 7))
+	if (!num_buffers || (num_buffers > BMAN_VALID_RSLT_NUM_MASK))
 		return -EINVAL;
 
 	/* Start the management command */
@@ -2713,12 +2718,13 @@ static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
 		return -EIO;
 	}
 
-	QBMAN_BUG_ON(r->num > num_buffers);
+	num = r->num & BMAN_VALID_RSLT_NUM_MASK;
+	QBMAN_BUG_ON(num > num_buffers);
 
 	/* Copy the acquired buffers to the caller's array */
-	u64_from_le32_copy(buffers, &r->buf[0], r->num);
+	u64_from_le32_copy(buffers, &r->buf[0], num);
 
-	return (int)r->num;
+	return num;
 }
 
 int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 13/42] bus/fslmc: get MC VFIO group FD directly
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (11 preceding siblings ...)
  2024-10-23 11:59           ` [v5 12/42] bus/fslmc: improve BMAN buffer acquire vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 14/42] bus/fslmc: enhance MC VFIO multiprocess support vanshika.shukla
                             ` (29 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Get the VFIO group fd directly from the file system instead of
from the RTE API to avoid conflicts with PCIe VFIO.
FSL MC VFIO should have its own logic which does NOT depend on
RTE VFIO.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
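Illustrative sketch (not part of the patch) of the primary-process path: the
group fd is opened straight from /dev/vfio instead of going through
rte_vfio_get_group_fd(), so the PCIe VFIO bookkeeping is never involved. The
helper name is hypothetical; "/dev/vfio/%d" matches the kernel's regular group
node naming.

#include <stdio.h>
#include <fcntl.h>

static int example_open_group(int iommu_group_num)
{
	char path[64];

	snprintf(path, sizeof(path), "/dev/vfio/%d", iommu_group_num);
	return open(path, O_RDWR);	/* negative fd on failure */
}
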
 drivers/bus/fslmc/fslmc_vfio.c | 88 ++++++++++++++++++++++++++--------
 drivers/bus/fslmc/meson.build  |  3 +-
 2 files changed, 71 insertions(+), 20 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index ecca593c34..54398c4643 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2021 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -30,6 +30,7 @@
 #include <rte_kvargs.h>
 #include <dev_driver.h>
 #include <rte_eal_memconfig.h>
+#include <eal_vfio.h>
 
 #include "private.h"
 #include "fslmc_vfio.h"
@@ -440,6 +441,59 @@ int rte_fslmc_vfio_dmamap(void)
 	return 0;
 }
 
+static int
+fslmc_vfio_open_group_fd(int iommu_group_num)
+{
+	int vfio_group_fd;
+	char filename[PATH_MAX];
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
+
+	/* if primary, try to open the group */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		/* try regular group format */
+		snprintf(filename, sizeof(filename),
+			VFIO_GROUP_FMT, iommu_group_num);
+		vfio_group_fd = open(filename, O_RDWR);
+		if (vfio_group_fd <= 0) {
+			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
+				filename, vfio_group_fd);
+		}
+
+		return vfio_group_fd;
+	}
+	/* if we're in a secondary process, request group fd from the primary
+	 * process via mp channel.
+	 */
+	p->req = SOCKET_REQ_GROUP;
+	p->group_num = iommu_group_num;
+	rte_strscpy(mp_req.name, EAL_VFIO_MP, sizeof(mp_req.name));
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_group_fd = -1;
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+	    mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		p = (struct vfio_mp_param *)mp_rep->param;
+		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+			vfio_group_fd = mp_rep->fds[0];
+		} else if (p->result == SOCKET_NO_FD) {
+			DPAA2_BUS_ERR("Bad VFIO group fd");
+			vfio_group_fd = 0;
+		}
+	}
+
+	free(mp_reply.msgs);
+	if (vfio_group_fd < 0) {
+		DPAA2_BUS_ERR("Cannot request group fd(%d)",
+			vfio_group_fd);
+	}
+	return vfio_group_fd;
+}
+
 static int
 fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		int *vfio_dev_fd, struct vfio_device_info *device_info)
@@ -455,7 +509,7 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		return -1;
 
 	/* get the actual group fd */
-	vfio_group_fd = rte_vfio_get_group_fd(iommu_group_no);
+	vfio_group_fd = vfio_group.fd;
 	if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT)
 		return -1;
 
@@ -891,6 +945,11 @@ fslmc_vfio_close_group(void)
 		}
 	}
 
+	if (vfio_group.fd > 0) {
+		close(vfio_group.fd);
+		vfio_group.fd = 0;
+	}
+
 	return 0;
 }
 
@@ -1081,7 +1140,6 @@ fslmc_vfio_setup_group(void)
 {
 	int groupid;
 	int ret;
-	int vfio_container_fd;
 	struct vfio_group_status status = { .argsz = sizeof(status) };
 
 	/* if already done once */
@@ -1100,16 +1158,9 @@ fslmc_vfio_setup_group(void)
 		return 0;
 	}
 
-	ret = rte_vfio_container_create();
-	if (ret < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return ret;
-	}
-	vfio_container_fd = ret;
-
 	/* Get the actual group fd */
-	ret = rte_vfio_container_group_bind(vfio_container_fd, groupid);
-	if (ret < 0)
+	ret = fslmc_vfio_open_group_fd(groupid);
+	if (ret <= 0)
 		return ret;
 	vfio_group.fd = ret;
 
@@ -1118,14 +1169,14 @@ fslmc_vfio_setup_group(void)
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO error getting group status");
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return ret;
 	}
 
 	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
 		DPAA2_BUS_ERR("VFIO group not viable");
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return -EPERM;
 	}
 	/* Since Group is VIABLE, Store the groupid */
@@ -1136,11 +1187,10 @@ fslmc_vfio_setup_group(void)
 		/* Now connect this IOMMU group to given container */
 		ret = vfio_connect_container();
 		if (ret) {
-			DPAA2_BUS_ERR(
-				"Error connecting container with groupid %d",
-				groupid);
+			DPAA2_BUS_ERR("vfio group(%d) connect failed(%d)",
+				groupid, ret);
 			close(vfio_group.fd);
-			rte_vfio_clear_group(vfio_group.fd);
+			vfio_group.fd = 0;
 			return ret;
 		}
 	}
@@ -1151,7 +1201,7 @@ fslmc_vfio_setup_group(void)
 		DPAA2_BUS_ERR("Error getting device %s fd from group %d",
 			      fslmc_container, vfio_group.groupid);
 		close(vfio_group.fd);
-		rte_vfio_clear_group(vfio_group.fd);
+		vfio_group.fd = 0;
 		return ret;
 	}
 	container_device_fd = ret;
diff --git a/drivers/bus/fslmc/meson.build b/drivers/bus/fslmc/meson.build
index 162ca286fe..70098ad778 100644
--- a/drivers/bus/fslmc/meson.build
+++ b/drivers/bus/fslmc/meson.build
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2018,2021 NXP
+# Copyright 2018-2023 NXP
 
 if not is_linux
     build = false
@@ -27,3 +27,4 @@ sources = files(
 )
 
 includes += include_directories('mc', 'qbman/include', 'portal')
+includes += include_directories('../../../lib/eal/linux')
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 14/42] bus/fslmc: enhance MC VFIO multiprocess support
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (12 preceding siblings ...)
  2024-10-23 11:59           ` [v5 13/42] bus/fslmc: get MC VFIO group FD directly vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-11-09 17:07             ` Thomas Monjalon
  2024-10-23 11:59           ` [v5 15/42] bus/fslmc: free VFIO group FD in case of add group failure vanshika.shukla
                             ` (28 subsequent siblings)
  42 siblings, 1 reply; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Anatoly Burakov; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

MC VFIO is not registered with RTE VFIO. The primary process registers
an MC VFIO mp action for secondary processes to request.
VFIO/container handlers are provided via CMSG.
The primary process is responsible for connecting the MC VFIO group
to the container.

In addition, the MC VFIO code is refactored according to the
container/group logic. In general, a VFIO container can support
multiple groups per process. Currently only a single MC group (dprc.x)
per process is supported, but logic is added to support connecting
multiple MC groups to the container.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
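Illustrative sketch (not part of the patch) of the container/group relation
the refactor models: one container fd per process, with each MC group fd
attached to it via the standard VFIO ioctl before the IOMMU type is set. The
helper name is hypothetical.

#include <sys/ioctl.h>
#include <linux/vfio.h>

static int example_connect_group(int container_fd, int group_fd)
{
	/* Attach the group to the container (per the VFIO kernel API). */
	return ioctl(group_fd, VFIO_GROUP_SET_CONTAINER, &container_fd);
}
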
 drivers/bus/fslmc/fslmc_bus.c  |  14 +-
 drivers/bus/fslmc/fslmc_vfio.c | 997 ++++++++++++++++++++++-----------
 drivers/bus/fslmc/fslmc_vfio.h |  35 +-
 drivers/bus/fslmc/version.map  |   1 +
 4 files changed, 693 insertions(+), 354 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 97473c278f..a966df1598 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -318,6 +318,7 @@ rte_fslmc_scan(void)
 	struct dirent *entry;
 	static int process_once;
 	int groupid;
+	char *group_name;
 
 	if (process_once) {
 		DPAA2_BUS_DEBUG("Fslmc bus already scanned. Not rescanning");
@@ -325,12 +326,19 @@ rte_fslmc_scan(void)
 	}
 	process_once = 1;
 
-	ret = fslmc_get_container_group(&groupid);
+	/* Now we only support a single group per process. */
+	group_name = getenv("DPRC");
+	if (!group_name) {
+		DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
+		return -EINVAL;
+	}
+
+	ret = fslmc_get_container_group(group_name, &groupid);
 	if (ret != 0)
 		goto scan_fail;
 
 	/* Scan devices on the group */
-	sprintf(fslmc_dirpath, "%s/%s", SYSFS_FSL_MC_DEVICES, fslmc_container);
+	sprintf(fslmc_dirpath, "%s/%s", SYSFS_FSL_MC_DEVICES, group_name);
 	dir = opendir(fslmc_dirpath);
 	if (!dir) {
 		DPAA2_BUS_ERR("Unable to open VFIO group directory");
@@ -338,7 +346,7 @@ rte_fslmc_scan(void)
 	}
 
 	/* Scan the DPRC container object */
-	ret = scan_one_fslmc_device(fslmc_container);
+	ret = scan_one_fslmc_device(group_name);
 	if (ret != 0) {
 		/* Error in parsing directory - exit gracefully */
 		goto scan_fail_cleanup;
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 54398c4643..63e84cb4d8 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2023 NXP
+ *   Copyright 2016-2024 NXP
  *
  */
 
@@ -40,14 +40,14 @@
 #include "portal/dpaa2_hw_pvt.h"
 #include "portal/dpaa2_hw_dpio.h"
 
-#define FSLMC_CONTAINER_MAX_LEN 8 /**< Of the format dprc.XX */
+#define FSLMC_VFIO_MP "fslmc_vfio_mp_sync"
 
-/* Number of VFIO containers & groups with in */
-static struct fslmc_vfio_group vfio_group;
-static struct fslmc_vfio_container vfio_container;
-static int container_device_fd;
-char *fslmc_container;
-static int fslmc_iommu_type;
+/* Container is composed by multiple groups, however,
+ * now each process only supports single group with in container.
+ */
+static struct fslmc_vfio_container s_vfio_container;
+/* Currently we only support single group/process. */
+const char *fslmc_group; /* dprc.x*/
 static uint32_t *msi_intr_vaddr;
 void *(*rte_mcp_ptr_list);
 
@@ -72,108 +72,545 @@ rte_fslmc_object_register(struct rte_dpaa2_object *object)
 	TAILQ_INSERT_TAIL(&dpaa2_obj_list, object, next);
 }
 
-int
-fslmc_get_container_group(int *groupid)
+static const char *
+fslmc_vfio_get_group_name(void)
 {
-	int ret;
-	char *container;
+	return fslmc_group;
+}
+
+static void
+fslmc_vfio_set_group_name(const char *group_name)
+{
+	fslmc_group = group_name;
+}
+
+static int
+fslmc_vfio_add_group(int vfio_group_fd,
+	int iommu_group_num, const char *group_name)
+{
+	struct fslmc_vfio_group *group;
+
+	group = rte_zmalloc(NULL, sizeof(struct fslmc_vfio_group), 0);
+	if (!group)
+		return -ENOMEM;
+	group->fd = vfio_group_fd;
+	group->groupid = iommu_group_num;
+	rte_strscpy(group->group_name, group_name, sizeof(group->group_name));
+	if (rte_vfio_noiommu_is_enabled() > 0)
+		group->iommu_type = RTE_VFIO_NOIOMMU;
+	else
+		group->iommu_type = VFIO_TYPE1_IOMMU;
+	LIST_INSERT_HEAD(&s_vfio_container.groups, group, next);
 
-	if (!fslmc_container) {
-		container = getenv("DPRC");
-		if (container == NULL) {
-			DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
-			return -EINVAL;
+	return 0;
+}
+
+static int
+fslmc_vfio_clear_group(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+	struct fslmc_vfio_device *dev;
+	int clear = 0;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			LIST_FOREACH(dev, &group->vfio_devices, next)
+				LIST_REMOVE(dev, next);
+
+			close(vfio_group_fd);
+			LIST_REMOVE(group, next);
+			rte_free(group);
+			clear = 1;
+
+			break;
 		}
+	}
 
-		if (strlen(container) >= FSLMC_CONTAINER_MAX_LEN) {
-			DPAA2_BUS_ERR("Invalid container name: %s", container);
-			return -1;
+	if (LIST_EMPTY(&s_vfio_container.groups)) {
+		if (s_vfio_container.fd > 0)
+			close(s_vfio_container.fd);
+
+		s_vfio_container.fd = -1;
+	}
+	if (clear)
+		return 0;
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_connect_container(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			group->connected = 1;
+
+			return 0;
+		}
+	}
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_container_connected(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			if (group->connected)
+				return 1;
+		}
+	}
+	return 0;
+}
+
+static int
+fslmc_vfio_iommu_type(int vfio_group_fd)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd)
+			return group->iommu_type;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_fd_by_name(const char *group_name)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (!strcmp(group->group_name, group_name))
+			return group->fd;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_fd_by_id(int group_id)
+{
+	struct fslmc_vfio_group *group;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->groupid == group_id)
+			return group->fd;
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_add_dev(int vfio_group_fd,
+	int dev_fd, const char *name)
+{
+	struct fslmc_vfio_group *group;
+	struct fslmc_vfio_device *dev;
+
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd) {
+			dev = rte_zmalloc(NULL,
+				sizeof(struct fslmc_vfio_device), 0);
+			dev->fd = dev_fd;
+			rte_strscpy(dev->dev_name, name, sizeof(dev->dev_name));
+			LIST_INSERT_HEAD(&group->vfio_devices, dev, next);
+			return 0;
 		}
+	}
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_group_remove_dev(int vfio_group_fd,
+	const char *name)
+{
+	struct fslmc_vfio_group *group = NULL;
+	struct fslmc_vfio_device *dev;
+	int removed = 0;
 
-		fslmc_container = strdup(container);
-		if (!fslmc_container) {
-			DPAA2_BUS_ERR("Mem alloc failure; Container name");
-			return -ENOMEM;
+	LIST_FOREACH(group, &s_vfio_container.groups, next) {
+		if (group->fd == vfio_group_fd)
+			break;
+	}
+
+	if (group) {
+		LIST_FOREACH(dev, &group->vfio_devices, next) {
+			if (!strcmp(dev->dev_name, name)) {
+				LIST_REMOVE(dev, next);
+				removed = 1;
+				break;
+			}
 		}
 	}
 
-	fslmc_iommu_type = (rte_vfio_noiommu_is_enabled() == 1) ?
-		RTE_VFIO_NOIOMMU : VFIO_TYPE1_IOMMU;
+	if (removed)
+		return 0;
+
+	return -ENODEV;
+}
+
+static int
+fslmc_vfio_container_fd(void)
+{
+	return s_vfio_container.fd;
+}
+
+static int
+fslmc_get_group_id(const char *group_name,
+	int *groupid)
+{
+	int ret;
 
 	/* get group number */
 	ret = rte_vfio_get_group_num(SYSFS_FSL_MC_DEVICES,
-				     fslmc_container, groupid);
+			group_name, groupid);
 	if (ret <= 0) {
-		DPAA2_BUS_ERR("Unable to find %s IOMMU group", fslmc_container);
-		return -1;
+		DPAA2_BUS_ERR("Unable to find %s IOMMU group", group_name);
+		if (ret < 0)
+			return ret;
+
+		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("Container: %s has VFIO iommu group id = %d",
-			fslmc_container, *groupid);
+	DPAA2_BUS_DEBUG("GROUP(%s) has VFIO iommu group id = %d",
+		group_name, *groupid);
 
 	return 0;
 }
 
 static int
-vfio_connect_container(void)
+fslmc_vfio_open_group_fd(const char *group_name)
 {
-	int fd, ret;
+	int vfio_group_fd;
+	char filename[PATH_MAX];
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
+	int iommu_group_num, ret;
 
-	if (vfio_container.used) {
-		DPAA2_BUS_DEBUG("No container available");
-		return -1;
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd > 0)
+		return vfio_group_fd;
+
+	ret = fslmc_get_group_id(group_name, &iommu_group_num);
+	if (ret)
+		return ret;
+	/* if primary, try to open the group */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		/* try regular group format */
+		snprintf(filename, sizeof(filename),
+			VFIO_GROUP_FMT, iommu_group_num);
+		vfio_group_fd = open(filename, O_RDWR);
+
+		goto add_vfio_group;
+	}
+	/* if we're in a secondary process, request group fd from the primary
+	 * process via mp channel.
+	 */
+	p->req = SOCKET_REQ_GROUP;
+	p->group_num = iommu_group_num;
+	rte_strscpy(mp_req.name, FSLMC_VFIO_MP, sizeof(mp_req.name));
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_group_fd = -1;
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+	    mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		p = (struct vfio_mp_param *)mp_rep->param;
+		if (p->result == SOCKET_OK && mp_rep->num_fds == 1)
+			vfio_group_fd = mp_rep->fds[0];
+		else if (p->result == SOCKET_NO_FD)
+			DPAA2_BUS_ERR("Bad VFIO group fd");
+	}
+
+	free(mp_reply.msgs);
+
+add_vfio_group:
+	if (vfio_group_fd < 0) {
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
+				filename, vfio_group_fd);
+		} else {
+			DPAA2_BUS_ERR("Cannot request group fd(%d)",
+				vfio_group_fd);
+		}
+	} else {
+		ret = fslmc_vfio_add_group(vfio_group_fd, iommu_group_num,
+			group_name);
+		if (ret)
+			return ret;
 	}
 
-	/* Try connecting to vfio container if already created */
-	if (!ioctl(vfio_group.fd, VFIO_GROUP_SET_CONTAINER,
-		&vfio_container.fd)) {
-		DPAA2_BUS_DEBUG(
-		    "Container pre-exists with FD[0x%x] for this group",
-		    vfio_container.fd);
-		vfio_group.container = &vfio_container;
+	return vfio_group_fd;
+}
+
+static int
+fslmc_vfio_check_extensions(int vfio_container_fd)
+{
+	int ret;
+	uint32_t idx, n_extensions = 0;
+	static const int type_id[] = {RTE_VFIO_TYPE1, RTE_VFIO_SPAPR,
+		RTE_VFIO_NOIOMMU};
+	static const char * const type_id_nm[] = {"Type 1",
+		"sPAPR", "No-IOMMU"};
+
+	for (idx = 0; idx < RTE_DIM(type_id); idx++) {
+		ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION,
+			type_id[idx]);
+		if (ret < 0) {
+			DPAA2_BUS_ERR("Could not get IOMMU type, error %i (%s)",
+				errno, strerror(errno));
+			close(vfio_container_fd);
+			return -errno;
+		} else if (ret == 1) {
+			/* we found a supported extension */
+			n_extensions++;
+		}
+		DPAA2_BUS_DEBUG("IOMMU type %d (%s) is %s",
+			type_id[idx], type_id_nm[idx],
+			ret ? "supported" : "not supported");
+	}
+
+	/* if we didn't find any supported IOMMU types, fail */
+	if (!n_extensions) {
+		close(vfio_container_fd);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static int
+fslmc_vfio_open_container_fd(void)
+{
+	int ret, vfio_container_fd;
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	struct vfio_mp_param *p = (void *)mp_req.param;
+
+	if (fslmc_vfio_container_fd() > 0)
+		return fslmc_vfio_container_fd();
+
+	/* if we're in a primary process, try to open the container */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR);
+		if (vfio_container_fd < 0) {
+			DPAA2_BUS_ERR("Cannot open VFIO container(%s), err(%d)",
+				VFIO_CONTAINER_PATH, vfio_container_fd);
+			ret = vfio_container_fd;
+			goto err_exit;
+		}
+
+		/* check VFIO API version */
+		ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION);
+		if (ret < 0) {
+			DPAA2_BUS_ERR("Could not get VFIO API version(%d)",
+				ret);
+		} else if (ret != VFIO_API_VERSION) {
+			DPAA2_BUS_ERR("Unsupported VFIO API version(%d)",
+				ret);
+			ret = -ENOTSUP;
+		}
+		if (ret < 0) {
+			close(vfio_container_fd);
+			goto err_exit;
+		}
+
+		ret = fslmc_vfio_check_extensions(vfio_container_fd);
+		if (ret) {
+			DPAA2_BUS_ERR("No supported IOMMU extensions found(%d)",
+				ret);
+			close(vfio_container_fd);
+			goto err_exit;
+		}
+
+		goto success_exit;
+	}
+	/*
+	 * if we're in a secondary process, request container fd from the
+	 * primary process via mp channel
+	 */
+	p->req = SOCKET_REQ_CONTAINER;
+	rte_strscpy(mp_req.name, FSLMC_VFIO_MP, sizeof(mp_req.name));
+	mp_req.len_param = sizeof(*p);
+	mp_req.num_fds = 0;
+
+	vfio_container_fd = -1;
+	ret = rte_mp_request_sync(&mp_req, &mp_reply, &ts);
+	if (ret)
+		goto err_exit;
+
+	if (mp_reply.nb_received != 1) {
+		ret = -EIO;
+		goto err_exit;
+	}
+
+	mp_rep = &mp_reply.msgs[0];
+	p = (void *)mp_rep->param;
+	if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
+		vfio_container_fd = mp_rep->fds[0];
+		free(mp_reply.msgs);
+	}
+
+success_exit:
+	s_vfio_container.fd = vfio_container_fd;
+
+	return vfio_container_fd;
+
+err_exit:
+	if (mp_reply.msgs)
+		free(mp_reply.msgs);
+	DPAA2_BUS_ERR("Cannot request container fd err(%d)", ret);
+	return ret;
+}
+
+int
+fslmc_get_container_group(const char *group_name,
+	int *groupid)
+{
+	int ret;
+
+	if (!group_name) {
+		DPAA2_BUS_ERR("No group name provided!");
+
+		return -EINVAL;
+	}
+	ret = fslmc_get_group_id(group_name, groupid);
+	if (ret)
+		return ret;
+
+	fslmc_vfio_set_group_name(group_name);
+
+	return 0;
+}
+
+static int
+fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
+	const void *peer)
+{
+	int fd = -1;
+	int ret;
+	struct rte_mp_msg reply;
+	struct vfio_mp_param *r = (void *)reply.param;
+	const struct vfio_mp_param *m = (const void *)msg->param;
+
+	if (msg->len_param != sizeof(*m)) {
+		DPAA2_BUS_ERR("fslmc vfio received invalid message!");
+		return -EINVAL;
+	}
+
+	memset(&reply, 0, sizeof(reply));
+
+	switch (m->req) {
+	case SOCKET_REQ_GROUP:
+		r->req = SOCKET_REQ_GROUP;
+		r->group_num = m->group_num;
+		fd = fslmc_vfio_group_fd_by_id(m->group_num);
+		if (fd < 0) {
+			r->result = SOCKET_ERR;
+		} else if (!fd) {
+			/* if group exists but isn't bound to VFIO driver */
+			r->result = SOCKET_NO_FD;
+		} else {
+			/* if group exists and is bound to VFIO driver */
+			r->result = SOCKET_OK;
+			reply.num_fds = 1;
+			reply.fds[0] = fd;
+		}
+		break;
+	case SOCKET_REQ_CONTAINER:
+		r->req = SOCKET_REQ_CONTAINER;
+		fd = fslmc_vfio_container_fd();
+		if (fd <= 0) {
+			r->result = SOCKET_ERR;
+		} else {
+			r->result = SOCKET_OK;
+			reply.num_fds = 1;
+			reply.fds[0] = fd;
+		}
+		break;
+	default:
+		DPAA2_BUS_ERR("fslmc vfio received invalid message(%08x)",
+			m->req);
+		return -ENOTSUP;
+	}
+
+	rte_strscpy(reply.name, FSLMC_VFIO_MP, sizeof(reply.name));
+	reply.len_param = sizeof(*r);
+	ret = rte_mp_reply(&reply, peer);
+
+	return ret;
+}
+
+static int
+fslmc_vfio_mp_sync_setup(void)
+{
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		ret = rte_mp_action_register(FSLMC_VFIO_MP,
+			fslmc_vfio_mp_primary);
+		if (ret && rte_errno != ENOTSUP)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int
+vfio_connect_container(int vfio_container_fd,
+	int vfio_group_fd)
+{
+	int ret;
+	int iommu_type;
+
+	if (fslmc_vfio_container_connected(vfio_group_fd)) {
+		DPAA2_BUS_WARN("VFIO FD(%d) has connected to container",
+			vfio_group_fd);
 		return 0;
 	}
 
-	/* Opens main vfio file descriptor which represents the "container" */
-	fd = rte_vfio_get_container_fd();
-	if (fd < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return -errno;
+	iommu_type = fslmc_vfio_iommu_type(vfio_group_fd);
+	if (iommu_type < 0) {
+		DPAA2_BUS_ERR("Failed to get iommu type(%d)",
+			iommu_type);
+
+		return iommu_type;
 	}
 
 	/* Check whether support for SMMU type IOMMU present or not */
-	if (ioctl(fd, VFIO_CHECK_EXTENSION, fslmc_iommu_type)) {
+	if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type)) {
 		/* Connect group to container */
-		ret = ioctl(vfio_group.fd, VFIO_GROUP_SET_CONTAINER, &fd);
+		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
+			&vfio_container_fd);
 		if (ret) {
 			DPAA2_BUS_ERR("Failed to setup group container");
-			close(fd);
 			return -errno;
 		}
 
-		ret = ioctl(fd, VFIO_SET_IOMMU, fslmc_iommu_type);
+		ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
 		if (ret) {
 			DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-			close(fd);
 			return -errno;
 		}
 	} else {
 		DPAA2_BUS_ERR("No supported IOMMU available");
-		close(fd);
 		return -EINVAL;
 	}
 
-	vfio_container.used = 1;
-	vfio_container.fd = fd;
-	vfio_container.group = &vfio_group;
-	vfio_group.container = &vfio_container;
-
-	return 0;
+	return fslmc_vfio_connect_container(vfio_group_fd);
 }
 
-static int vfio_map_irq_region(struct fslmc_vfio_group *group)
+static int vfio_map_irq_region(void)
 {
-	int ret;
+	int ret, fd;
 	unsigned long *vaddr = NULL;
 	struct vfio_iommu_type1_dma_map map = {
 		.argsz = sizeof(map),
@@ -182,9 +619,23 @@ static int vfio_map_irq_region(struct fslmc_vfio_group *group)
 		.iova = 0x6030000,
 		.size = 0x1000,
 	};
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (!fslmc_vfio_container_connected(fd)) {
+		DPAA2_BUS_ERR("Container is not connected");
+		return -EIO;
+	}
 
 	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
-		PROT_READ, MAP_SHARED, container_device_fd, 0x6030000);
+		PROT_READ, MAP_SHARED, fd, 0x6030000);
 	if (vaddr == MAP_FAILED) {
 		DPAA2_BUS_INFO("Unable to map region (errno = %d)", errno);
 		return -errno;
@@ -192,8 +643,8 @@ static int vfio_map_irq_region(struct fslmc_vfio_group *group)
 
 	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
 	map.vaddr = (unsigned long)vaddr;
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA, &map);
-	if (ret == 0)
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA, &map);
+	if (!ret)
 		return 0;
 
 	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
@@ -204,8 +655,8 @@ static int fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
 static int fslmc_unmap_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
 
 static void
-fslmc_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
-		void *arg __rte_unused)
+fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
+	size_t len, void *arg __rte_unused)
 {
 	struct rte_memseg_list *msl;
 	struct rte_memseg *ms;
@@ -262,44 +713,54 @@ fslmc_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
 }
 
 static int
-fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr __rte_unused, size_t len)
+fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr,
+	size_t len)
 {
-	struct fslmc_vfio_group *group;
 	struct vfio_iommu_type1_dma_map dma_map = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
 		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
 	};
-	int ret;
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
+	int ret, fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
 		return 0;
 	}
 
 	dma_map.size = len;
 	dma_map.vaddr = vaddr;
-
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
 	dma_map.iova = iovaddr;
-#else
-	dma_map.iova = dma_map.vaddr;
+
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+	if (vaddr != iovaddr) {
+		DPAA2_BUS_WARN("vaddr(0x%"PRIx64") != iovaddr(0x%"PRIx64")",
+			vaddr, iovaddr);
+	}
 #endif
 
 	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-
-	if (!group->container) {
+	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected ");
-		return -1;
+		return -EIO;
 	}
 
 	DPAA2_BUS_DEBUG("--> Map address: 0x%"PRIx64", size: %"PRIu64"",
 			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA, &dma_map);
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA,
+		&dma_map);
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO_IOMMU_MAP_DMA API(errno = %d)",
 				errno);
-		return -1;
+		return ret;
 	}
 
 	return 0;
@@ -308,14 +769,22 @@ fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr __rte_unused, size_t len)
 static int
 fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 {
-	struct fslmc_vfio_group *group;
 	struct vfio_iommu_type1_dma_unmap dma_unmap = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_unmap),
 		.flags = 0,
 	};
-	int ret;
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
+	int ret, fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (fd <= 0) {
+		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
+			__func__, fd);
+		if (fd < 0)
+			return fd;
+		return -rte_errno;
+	}
+	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
 		return 0;
 	}
@@ -324,16 +793,15 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 	dma_unmap.iova = vaddr;
 
 	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-
-	if (!group->container) {
+	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected ");
-		return -1;
+		return -EIO;
 	}
 
 	DPAA2_BUS_DEBUG("--> Unmap address: 0x%"PRIx64", size: %"PRIu64"",
 			(uint64_t)dma_unmap.iova, (uint64_t)dma_unmap.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap);
+	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_UNMAP_DMA,
+		&dma_unmap);
 	if (ret) {
 		DPAA2_BUS_ERR("VFIO_IOMMU_UNMAP_DMA API(errno = %d)",
 				errno);
@@ -367,41 +835,13 @@ fslmc_dmamap_seg(const struct rte_memseg_list *msl __rte_unused,
 int
 rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
 {
-	int ret;
-	struct fslmc_vfio_group *group;
-	struct vfio_iommu_type1_dma_map dma_map = {
-		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
-		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
-	};
-
-	if (fslmc_iommu_type == RTE_VFIO_NOIOMMU) {
-		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
-		return 0;
-	}
-
-	/* SET DMA MAP for IOMMU */
-	group = &vfio_group;
-	if (!group->container) {
-		DPAA2_BUS_ERR("Container is not connected");
-		return -1;
-	}
-
-	dma_map.size = size;
-	dma_map.vaddr = vaddr;
-	dma_map.iova = iova;
-
-	DPAA2_BUS_DEBUG("VFIOdmamap 0x%"PRIx64":0x%"PRIx64",size 0x%"PRIx64,
-			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.iova,
-			(uint64_t)dma_map.size);
-	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA,
-		    &dma_map);
-	if (ret) {
-		DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)",
-			errno);
-		return ret;
-	}
+	return fslmc_map_dma(vaddr, iova, size);
+}
 
-	return 0;
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
+{
+	return fslmc_unmap_dma(iova, 0, size);
 }
 
 int rte_fslmc_vfio_dmamap(void)
@@ -431,7 +871,7 @@ int rte_fslmc_vfio_dmamap(void)
 	 * the interrupt region to SMMU. This should be removed once the
 	 * support is added in the Kernel.
 	 */
-	vfio_map_irq_region(&vfio_group);
+	vfio_map_irq_region();
 
 	/* Existing segments have been mapped and memory callback for hotplug
 	 * has been installed.
@@ -442,149 +882,19 @@ int rte_fslmc_vfio_dmamap(void)
 }
 
 static int
-fslmc_vfio_open_group_fd(int iommu_group_num)
-{
-	int vfio_group_fd;
-	char filename[PATH_MAX];
-	struct rte_mp_msg mp_req, *mp_rep;
-	struct rte_mp_reply mp_reply = {0};
-	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
-	struct vfio_mp_param *p = (struct vfio_mp_param *)mp_req.param;
-
-	/* if primary, try to open the group */
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		/* try regular group format */
-		snprintf(filename, sizeof(filename),
-			VFIO_GROUP_FMT, iommu_group_num);
-		vfio_group_fd = open(filename, O_RDWR);
-		if (vfio_group_fd <= 0) {
-			DPAA2_BUS_ERR("Open VFIO group(%s) failed(%d)",
-				filename, vfio_group_fd);
-		}
-
-		return vfio_group_fd;
-	}
-	/* if we're in a secondary process, request group fd from the primary
-	 * process via mp channel.
-	 */
-	p->req = SOCKET_REQ_GROUP;
-	p->group_num = iommu_group_num;
-	rte_strscpy(mp_req.name, EAL_VFIO_MP, sizeof(mp_req.name));
-	mp_req.len_param = sizeof(*p);
-	mp_req.num_fds = 0;
-
-	vfio_group_fd = -1;
-	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
-	    mp_reply.nb_received == 1) {
-		mp_rep = &mp_reply.msgs[0];
-		p = (struct vfio_mp_param *)mp_rep->param;
-		if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
-			vfio_group_fd = mp_rep->fds[0];
-		} else if (p->result == SOCKET_NO_FD) {
-			DPAA2_BUS_ERR("Bad VFIO group fd");
-			vfio_group_fd = 0;
-		}
-	}
-
-	free(mp_reply.msgs);
-	if (vfio_group_fd < 0) {
-		DPAA2_BUS_ERR("Cannot request group fd(%d)",
-			vfio_group_fd);
-	}
-	return vfio_group_fd;
-}
-
-static int
-fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
-		int *vfio_dev_fd, struct vfio_device_info *device_info)
+fslmc_vfio_setup_device(const char *dev_addr,
+	int *vfio_dev_fd, struct vfio_device_info *device_info)
 {
 	struct vfio_group_status group_status = {
 			.argsz = sizeof(group_status)
 	};
-	int vfio_group_fd, vfio_container_fd, iommu_group_no, ret;
+	int vfio_group_fd, ret;
+	const char *group_name = fslmc_vfio_get_group_name();
 
-	/* get group number */
-	ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_no);
-	if (ret < 0)
-		return -1;
-
-	/* get the actual group fd */
-	vfio_group_fd = vfio_group.fd;
-	if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT)
-		return -1;
-
-	/*
-	 * if vfio_group_fd == -ENOENT, that means the device
-	 * isn't managed by VFIO
-	 */
-	if (vfio_group_fd == -ENOENT) {
-		DPAA2_BUS_WARN(" %s not managed by VFIO driver, skipping",
-				dev_addr);
-		return 1;
-	}
-
-	/* Opens main vfio file descriptor which represents the "container" */
-	vfio_container_fd = rte_vfio_get_container_fd();
-	if (vfio_container_fd < 0) {
-		DPAA2_BUS_ERR("Failed to open VFIO container");
-		return -errno;
-	}
-
-	/* check if the group is viable */
-	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status);
-	if (ret) {
-		DPAA2_BUS_ERR("  %s cannot get group status, "
-				"error %i (%s)", dev_addr,
-				errno, strerror(errno));
-		close(vfio_group_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
-	} else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
-		DPAA2_BUS_ERR("  %s VFIO group is not viable!", dev_addr);
-		close(vfio_group_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
-	}
-	/* At this point, we know that this group is viable (meaning,
-	 * all devices are either bound to VFIO or not bound to anything)
-	 */
-
-	/* check if group does not have a container yet */
-	if (!(group_status.flags & VFIO_GROUP_FLAGS_CONTAINER_SET)) {
-
-		/* add group to a container */
-		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
-				&vfio_container_fd);
-		if (ret) {
-			DPAA2_BUS_ERR("  %s cannot add VFIO group to container, "
-					"error %i (%s)", dev_addr,
-					errno, strerror(errno));
-			close(vfio_group_fd);
-			close(vfio_container_fd);
-			rte_vfio_clear_group(vfio_group_fd);
-			return -1;
-		}
-
-		/*
-		 * set an IOMMU type for container
-		 *
-		 */
-		if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION,
-			  fslmc_iommu_type)) {
-			ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU,
-				    fslmc_iommu_type);
-			if (ret) {
-				DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-				close(vfio_group_fd);
-				close(vfio_container_fd);
-				return -errno;
-			}
-		} else {
-			DPAA2_BUS_ERR("No supported IOMMU available");
-			close(vfio_group_fd);
-			close(vfio_container_fd);
-			return -EINVAL;
-		}
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (!fslmc_vfio_container_connected(vfio_group_fd)) {
+		DPAA2_BUS_ERR("Container is not connected");
+		return -EIO;
 	}
 
 	/* get a file descriptor for the device */
@@ -594,26 +904,21 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		 * the VFIO group or the container not having IOMMU configured.
 		 */
 
-		DPAA2_BUS_WARN("Getting a vfio_dev_fd for %s failed", dev_addr);
-		close(vfio_group_fd);
-		close(vfio_container_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
+		DPAA2_BUS_ERR("Getting a vfio_dev_fd for %s from %s failed",
+			dev_addr, group_name);
+		return -EIO;
 	}
 
 	/* test and setup the device */
 	ret = ioctl(*vfio_dev_fd, VFIO_DEVICE_GET_INFO, device_info);
 	if (ret) {
-		DPAA2_BUS_ERR("  %s cannot get device info, error %i (%s)",
-				dev_addr, errno, strerror(errno));
-		close(*vfio_dev_fd);
-		close(vfio_group_fd);
-		close(vfio_container_fd);
-		rte_vfio_clear_group(vfio_group_fd);
-		return -1;
+		DPAA2_BUS_ERR("%s cannot get device info err(%d)(%s)",
+			dev_addr, errno, strerror(errno));
+		return ret;
 	}
 
-	return 0;
+	return fslmc_vfio_group_add_dev(vfio_group_fd, *vfio_dev_fd,
+			dev_addr);
 }
 
 static intptr_t vfio_map_mcp_obj(const char *mcp_obj)
@@ -625,8 +930,7 @@ static intptr_t vfio_map_mcp_obj(const char *mcp_obj)
 	struct vfio_device_info d_info = { .argsz = sizeof(d_info) };
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info) };
 
-	fslmc_vfio_setup_device(SYSFS_FSL_MC_DEVICES, mcp_obj,
-			&mc_fd, &d_info);
+	fslmc_vfio_setup_device(mcp_obj, &mc_fd, &d_info);
 
 	/* getting device region info*/
 	ret = ioctl(mc_fd, VFIO_DEVICE_GET_REGION_INFO, &reg_info);
@@ -757,7 +1061,8 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 }
 
 static void
-fslmc_close_iodevices(struct rte_dpaa2_device *dev)
+fslmc_close_iodevices(struct rte_dpaa2_device *dev,
+	int vfio_fd)
 {
 	struct rte_dpaa2_object *object = NULL;
 	struct rte_dpaa2_driver *drv;
@@ -800,6 +1105,11 @@ fslmc_close_iodevices(struct rte_dpaa2_device *dev)
 		break;
 	}
 
+	ret = fslmc_vfio_group_remove_dev(vfio_fd, dev->device.name);
+	if (ret) {
+		DPAA2_BUS_ERR("Failed to remove %s from vfio",
+			dev->device.name);
+	}
 	DPAA2_BUS_LOG(DEBUG, "Device (%s) Closed",
 		      dev->device.name);
 }
@@ -811,17 +1121,21 @@ fslmc_close_iodevices(struct rte_dpaa2_device *dev)
 static int
 fslmc_process_iodevices(struct rte_dpaa2_device *dev)
 {
-	int dev_fd;
+	int dev_fd, ret;
 	struct vfio_device_info device_info = { .argsz = sizeof(device_info) };
 	struct rte_dpaa2_object *object = NULL;
 
-	fslmc_vfio_setup_device(SYSFS_FSL_MC_DEVICES, dev->device.name,
-			&dev_fd, &device_info);
+	ret = fslmc_vfio_setup_device(dev->device.name, &dev_fd,
+			&device_info);
+	if (ret)
+		return ret;
 
 	switch (dev->dev_type) {
 	case DPAA2_ETH:
-		rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
-					  device_info.num_irqs);
+		ret = rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
+				device_info.num_irqs);
+		if (ret)
+			return ret;
 		break;
 	case DPAA2_CON:
 	case DPAA2_IO:
@@ -913,6 +1227,10 @@ int
 fslmc_vfio_close_group(void)
 {
 	struct rte_dpaa2_device *dev, *dev_temp;
+	int vfio_group_fd;
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
 		if (dev->device.devargs &&
@@ -927,7 +1245,7 @@ fslmc_vfio_close_group(void)
 		case DPAA2_CRYPTO:
 		case DPAA2_QDMA:
 		case DPAA2_IO:
-			fslmc_close_iodevices(dev);
+			fslmc_close_iodevices(dev, vfio_group_fd);
 			break;
 		case DPAA2_CON:
 		case DPAA2_CI:
@@ -936,7 +1254,7 @@ fslmc_vfio_close_group(void)
 			if (rte_eal_process_type() == RTE_PROC_SECONDARY)
 				continue;
 
-			fslmc_close_iodevices(dev);
+			fslmc_close_iodevices(dev, vfio_group_fd);
 			break;
 		case DPAA2_DPRTC:
 		default:
@@ -945,10 +1263,7 @@ fslmc_vfio_close_group(void)
 		}
 	}
 
-	if (vfio_group.fd > 0) {
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
-	}
+	fslmc_vfio_clear_group(vfio_group_fd);
 
 	return 0;
 }
@@ -1138,75 +1453,85 @@ fslmc_vfio_process_group(void)
 int
 fslmc_vfio_setup_group(void)
 {
-	int groupid;
-	int ret;
+	int vfio_group_fd, vfio_container_fd, ret;
 	struct vfio_group_status status = { .argsz = sizeof(status) };
+	const char *group_name = fslmc_vfio_get_group_name();
+
+	/* MC VFIO setup entry */
+	vfio_container_fd = fslmc_vfio_container_fd();
+	if (vfio_container_fd <= 0) {
+		vfio_container_fd = fslmc_vfio_open_container_fd();
+		if (vfio_container_fd < 0) {
+			DPAA2_BUS_ERR("Failed to create MC VFIO container");
+			return vfio_container_fd;
+		}
+	}
 
-	/* if already done once */
-	if (container_device_fd)
-		return 0;
-
-	ret = fslmc_get_container_group(&groupid);
-	if (ret)
-		return ret;
-
-	/* In case this group was already opened, continue without any
-	 * processing.
-	 */
-	if (vfio_group.groupid == groupid) {
-		DPAA2_BUS_ERR("groupid already exists %d", groupid);
-		return 0;
+	if (!group_name) {
+		DPAA2_BUS_DEBUG("DPAA2: DPRC not available");
+		return -EINVAL;
 	}
 
-	/* Get the actual group fd */
-	ret = fslmc_vfio_open_group_fd(groupid);
-	if (ret <= 0)
-		return ret;
-	vfio_group.fd = ret;
+	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd < 0) {
+		vfio_group_fd = fslmc_vfio_open_group_fd(group_name);
+		if (vfio_group_fd < 0) {
+			DPAA2_BUS_ERR("open group name(%s) failed(%d)",
+				group_name, vfio_group_fd);
+			return -rte_errno;
+		}
+	}
 
 	/* Check group viability */
-	ret = ioctl(vfio_group.fd, VFIO_GROUP_GET_STATUS, &status);
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &status);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO error getting group status");
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		DPAA2_BUS_ERR("VFIO(%s:fd=%d) error getting group status(%d)",
+			group_name, vfio_group_fd, ret);
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return ret;
 	}
 
 	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
 		DPAA2_BUS_ERR("VFIO group not viable");
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return -EPERM;
 	}
-	/* Since Group is VIABLE, Store the groupid */
-	vfio_group.groupid = groupid;
 
 	/* check if group does not have a container yet */
 	if (!(status.flags & VFIO_GROUP_FLAGS_CONTAINER_SET)) {
 		/* Now connect this IOMMU group to given container */
-		ret = vfio_connect_container();
-		if (ret) {
-			DPAA2_BUS_ERR("vfio group(%d) connect failed(%d)",
-				groupid, ret);
-			close(vfio_group.fd);
-			vfio_group.fd = 0;
-			return ret;
-		}
+		ret = vfio_connect_container(vfio_container_fd,
+			vfio_group_fd);
+	} else {
+		/* Here is supposed in secondary process,
+		 * group has been set to container in primary process.
+		 */
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+			DPAA2_BUS_WARN("This group has been set container?");
+		ret = fslmc_vfio_connect_container(vfio_group_fd);
+	}
+	if (ret) {
+		DPAA2_BUS_ERR("vfio group connect failed(%d)", ret);
+		fslmc_vfio_clear_group(vfio_group_fd);
+		return ret;
 	}
 
 	/* Get Device information */
-	ret = ioctl(vfio_group.fd, VFIO_GROUP_GET_DEVICE_FD, fslmc_container);
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_DEVICE_FD, group_name);
 	if (ret < 0) {
-		DPAA2_BUS_ERR("Error getting device %s fd from group %d",
-			      fslmc_container, vfio_group.groupid);
-		close(vfio_group.fd);
-		vfio_group.fd = 0;
+		DPAA2_BUS_ERR("Error getting device %s fd", group_name);
+		fslmc_vfio_clear_group(vfio_group_fd);
+		return ret;
+	}
+
+	ret = fslmc_vfio_mp_sync_setup();
+	if (ret) {
+		DPAA2_BUS_ERR("VFIO MP sync setup failed!");
+		fslmc_vfio_clear_group(vfio_group_fd);
 		return ret;
 	}
-	container_device_fd = ret;
-	DPAA2_BUS_DEBUG("VFIO Container FD is [0x%X]",
-			container_device_fd);
+
+	DPAA2_BUS_DEBUG("VFIO GROUP FD is %d", vfio_group_fd);
 
 	return 0;
 }
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index b6677bdd18..1695b6c078 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2019-2020 NXP
+ *   Copyright 2016,2019-2023 NXP
  *
  */
 
@@ -20,26 +20,28 @@
 #define DPAA2_MC_DPBP_DEVID	10
 #define DPAA2_MC_DPCI_DEVID	11
 
-typedef struct fslmc_vfio_device {
+struct fslmc_vfio_device {
+	LIST_ENTRY(fslmc_vfio_device) next;
 	int fd; /* fslmc root container device ?? */
 	int index; /*index of child object */
+	char dev_name[64];
 	struct fslmc_vfio_device *child; /* Child object */
-} fslmc_vfio_device;
+};
 
-typedef struct fslmc_vfio_group {
+struct fslmc_vfio_group {
+	LIST_ENTRY(fslmc_vfio_group) next;
 	int fd; /* /dev/vfio/"groupid" */
 	int groupid;
-	struct fslmc_vfio_container *container;
-	int object_index;
-	struct fslmc_vfio_device *vfio_device;
-} fslmc_vfio_group;
+	int connected;
+	char group_name[64]; /* dprc.x*/
+	int iommu_type;
+	LIST_HEAD(, fslmc_vfio_device) vfio_devices;
+};
 
-typedef struct fslmc_vfio_container {
+struct fslmc_vfio_container {
 	int fd; /* /dev/vfio/vfio */
-	int used;
-	int index; /* index in group list */
-	struct fslmc_vfio_group *group;
-} fslmc_vfio_container;
+	LIST_HEAD(, fslmc_vfio_group) groups;
+};
 
 extern char *fslmc_container;
 
@@ -57,8 +59,11 @@ int fslmc_vfio_setup_group(void);
 int fslmc_vfio_process_group(void);
 int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
-int fslmc_get_container_group(int *gropuid);
+int fslmc_get_container_group(const char *group_name, int *gropuid);
 int rte_fslmc_vfio_dmamap(void);
-int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova,
+		uint64_t size);
+int rte_fslmc_vfio_mem_dmaunmap(uint64_t iova,
+		uint64_t size);
 
 #endif /* _FSLMC_VFIO_H_ */
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index df1143733d..b49bc0a62c 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -118,6 +118,7 @@ INTERNAL {
 	rte_fslmc_get_device_count;
 	rte_fslmc_object_register;
 	rte_global_active_dqs_list;
+	rte_fslmc_vfio_mem_dmaunmap;
 
 	local: *;
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 15/42] bus/fslmc: free VFIO group FD in case of add group failure
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (13 preceding siblings ...)
  2024-10-23 11:59           ` [v5 14/42] bus/fslmc: enhance MC VFIO multiprocess support vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 16/42] bus/fslmc: dynamic IOVA mode configuration vanshika.shukla
                             ` (27 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Free vfio_group_fd if adding the group fails, to avoid a resource leak.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 63e84cb4d8..3d466d3f1f 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -343,8 +343,10 @@ fslmc_vfio_open_group_fd(const char *group_name)
 	} else {
 		ret = fslmc_vfio_add_group(vfio_group_fd, iommu_group_num,
 			group_name);
-		if (ret)
+		if (ret) {
+			close(vfio_group_fd);
 			return ret;
+		}
 	}
 
 	return vfio_group_fd;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 16/42] bus/fslmc: dynamic IOVA mode configuration
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (14 preceding siblings ...)
  2024-10-23 11:59           ` [v5 15/42] bus/fslmc: free VFIO group FD in case of add group failure vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 17/42] bus/fslmc: remove VFIO IRQ mapping vanshika.shukla
                             ` (26 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh
  Cc: Jun Yang, Vanshika Shukla

From: Jun Yang <jun.yang@nxp.com>

IOVA mode should not be configured at build time with CFLAGS because:
1) The user can pass "--iova-mode" to configure IOVA at run time.
2) IOVA mode is determined by negotiation between multiple devices;
   EAL runs in VA mode only when all devices support VA mode.

Hence:
1) Remove the RTE_LIBRTE_DPAA2_USE_PHYS_IOVA cflag.
   Instead, use the rte_eal_iova_mode API to identify VA or PA mode
   (see the sketch below).
2) Support both memory IOMMU mapping and I/O IOMMU mapping (PCI space).
3) For memory IOMMU, in VA mode IOVA:VA = 1:1;
   in PA mode IOVA:VA = PA:VA. The mapping policy is determined by
   the EAL memory driver.
4) For I/O IOMMU, IOVA:VA is up to the I/O driver configuration.
   In general, it is aligned with the memory IOMMU mapping.
5) Memory and I/O IOVA tables are created and updated when the DMA
   mapping is set up, taking the place of the dpaax IOVA table.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
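A minimal sketch of point 1 above, using a hypothetical helper name
(rte_eal_iova_mode and rte_mem_virt2iova are existing EAL APIs; in VA
mode rte_mem_virt2iova already returns the VA, so the branch only
makes the policy explicit):

  #include <stdint.h>
  #include <rte_eal.h>
  #include <rte_memory.h>

  /* Hypothetical helper: resolve the IOVA for a virtual address at
   * run time instead of via a compile-time flag.
   */
  static rte_iova_t
  sketch_va_to_iova(void *va)
  {
  	if (rte_eal_iova_mode() == RTE_IOVA_VA)
  		return (rte_iova_t)(uintptr_t)va; /* VA mode: IOVA:VA = 1:1 */

  	/* PA mode: the EAL memory subsystem resolves VA to the backing PA */
  	return rte_mem_virt2iova(va);
  }
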
 drivers/bus/fslmc/bus_fslmc_driver.h     |  29 +-
 drivers/bus/fslmc/fslmc_bus.c            |  33 +-
 drivers/bus/fslmc/fslmc_vfio.c           | 655 ++++++++++++++++++-----
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c |   3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c |  10 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h |   3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h  | 111 ++--
 drivers/bus/fslmc/version.map            |   7 +-
 drivers/dma/dpaa2/dpaa2_qdma.c           |   1 +
 9 files changed, 601 insertions(+), 251 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index a3428fe28b..ba3774823b 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -33,9 +33,6 @@
 
 #include <fslmc_vfio.h>
 
-#include "portal/dpaa2_hw_pvt.h"
-#include "portal/dpaa2_hw_dpio.h"
-
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -149,6 +146,32 @@ struct rte_dpaa2_driver {
 	rte_dpaa2_remove_t remove;
 };
 
+int
+rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+__rte_internal
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size);
+__rte_internal
+uint64_t
+rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
+	uint64_t size);
+__rte_internal
+void *
+rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
+	uint64_t size);
+__rte_internal
+__rte_hot uint64_t
+rte_fslmc_mem_vaddr_to_iova(void *vaddr);
+__rte_internal
+__rte_hot void *
+rte_fslmc_mem_iova_to_vaddr(uint64_t iova);
+__rte_internal
+uint64_t
+rte_fslmc_io_vaddr_to_iova(void *vaddr);
+__rte_internal
+void *
+rte_fslmc_io_iova_to_vaddr(uint64_t iova);
+
 /**
  * Register a DPAA2 driver.
  *
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index a966df1598..107cc70833 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -27,7 +27,6 @@
 #define FSLMC_BUS_NAME	fslmc
 
 struct rte_fslmc_bus rte_fslmc_bus;
-uint8_t dpaa2_virt_mode;
 
 #define DPAA2_SEQN_DYNFIELD_NAME "dpaa2_seqn_dynfield"
 int dpaa2_seqn_dynfield_offset = -1;
@@ -457,22 +456,6 @@ rte_fslmc_probe(void)
 
 	probe_all = rte_fslmc_bus.bus.conf.scan_mode != RTE_BUS_SCAN_ALLOWLIST;
 
-	/* In case of PA, the FD addresses returned by qbman APIs are physical
-	 * addresses, which need conversion into equivalent VA address for
-	 * rte_mbuf. For that, a table (a serial array, in memory) is used to
-	 * increase translation efficiency.
-	 * This has to be done before probe as some device initialization
-	 * (during) probe allocate memory (dpaa2_sec) which needs to be pinned
-	 * to this table.
-	 *
-	 * Error is ignored as relevant logs are handled within dpaax and
-	 * handling for unavailable dpaax table too is transparent to caller.
-	 *
-	 * And, the IOVA table is only applicable in case of PA mode.
-	 */
-	if (rte_eal_iova_mode() == RTE_IOVA_PA)
-		dpaax_iova_table_populate();
-
 	TAILQ_FOREACH(dev, &rte_fslmc_bus.device_list, next) {
 		TAILQ_FOREACH(drv, &rte_fslmc_bus.driver_list, next) {
 			ret = rte_fslmc_match(drv, dev);
@@ -507,9 +490,6 @@ rte_fslmc_probe(void)
 		}
 	}
 
-	if (rte_eal_iova_mode() == RTE_IOVA_VA)
-		dpaa2_virt_mode = 1;
-
 	return 0;
 }
 
@@ -558,12 +538,6 @@ rte_fslmc_driver_register(struct rte_dpaa2_driver *driver)
 void
 rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver)
 {
-	/* Cleanup the PA->VA Translation table; From wherever this function
-	 * is called from.
-	 */
-	if (rte_eal_iova_mode() == RTE_IOVA_PA)
-		dpaax_iova_table_depopulate();
-
 	TAILQ_REMOVE(&rte_fslmc_bus.driver_list, driver, next);
 }
 
@@ -599,13 +573,12 @@ rte_dpaa2_get_iommu_class(void)
 	bool is_vfio_noiommu_enabled = 1;
 	bool has_iova_va;
 
+	if (rte_eal_iova_mode() == RTE_IOVA_PA)
+		return RTE_IOVA_PA;
+
 	if (TAILQ_EMPTY(&rte_fslmc_bus.device_list))
 		return RTE_IOVA_DC;
 
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-	return RTE_IOVA_PA;
-#endif
-
 	/* check if all devices on the bus support Virtual addressing or not */
 	has_iova_va = fslmc_all_device_support_iova();
 
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 3d466d3f1f..2bf0a7b835 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -19,6 +19,7 @@
 #include <libgen.h>
 #include <dirent.h>
 #include <sys/eventfd.h>
+#include <ctype.h>
 
 #include <eal_filesystem.h>
 #include <rte_mbuf.h>
@@ -47,9 +48,41 @@
  */
 static struct fslmc_vfio_container s_vfio_container;
 /* Currently we only support single group/process. */
-const char *fslmc_group; /* dprc.x*/
+static const char *fslmc_group; /* dprc.x*/
 static uint32_t *msi_intr_vaddr;
-void *(*rte_mcp_ptr_list);
+static void *(*rte_mcp_ptr_list);
+
+struct fslmc_dmaseg {
+	uint64_t vaddr;
+	uint64_t iova;
+	uint64_t size;
+
+	TAILQ_ENTRY(fslmc_dmaseg) next;
+};
+
+TAILQ_HEAD(fslmc_dmaseg_list, fslmc_dmaseg);
+
+struct fslmc_dmaseg_list fslmc_memsegs =
+		TAILQ_HEAD_INITIALIZER(fslmc_memsegs);
+struct fslmc_dmaseg_list fslmc_iosegs =
+		TAILQ_HEAD_INITIALIZER(fslmc_iosegs);
+
+static uint64_t fslmc_mem_va2iova = RTE_BAD_IOVA;
+static int fslmc_mem_map_num;
+
+struct fslmc_mem_param {
+	struct vfio_mp_param mp_param;
+	struct fslmc_dmaseg_list memsegs;
+	struct fslmc_dmaseg_list iosegs;
+	uint64_t mem_va2iova;
+	int mem_map_num;
+};
+
+enum {
+	FSLMC_VFIO_SOCKET_REQ_CONTAINER = 0x100,
+	FSLMC_VFIO_SOCKET_REQ_GROUP,
+	FSLMC_VFIO_SOCKET_REQ_MEM
+};
 
 void *
 dpaa2_get_mcp_ptr(int portal_idx)
@@ -63,6 +96,64 @@ dpaa2_get_mcp_ptr(int portal_idx)
 static struct rte_dpaa2_object_list dpaa2_obj_list =
 	TAILQ_HEAD_INITIALIZER(dpaa2_obj_list);
 
+static uint64_t
+fslmc_io_virt2phy(const void *virtaddr)
+{
+	FILE *fp = fopen("/proc/self/maps", "r");
+	char *line = NULL;
+	size_t linesz;
+	uint64_t start, end, phy;
+	const uint64_t va = (const uint64_t)virtaddr;
+	char tmp[1024];
+	int ret;
+
+	if (!fp)
+		return RTE_BAD_IOVA;
+	while (getdelim(&line, &linesz, '\n', fp) > 0) {
+		char *ptr = line;
+		int n;
+
+		/** Parse virtual address range.*/
+		n = 0;
+		while (*ptr && !isspace(*ptr)) {
+			tmp[n] = *ptr;
+			ptr++;
+			n++;
+		}
+		tmp[n] = 0;
+		ret = sscanf(tmp, "%" SCNx64 "-%" SCNx64, &start, &end);
+		if (ret != 2)
+			continue;
+		if (va < start || va >= end)
+			continue;
+
+		/** This virtual address is in this segment.*/
+		while (*ptr == ' ' || *ptr == 'r' ||
+			*ptr == 'w' || *ptr == 's' ||
+			*ptr == 'p' || *ptr == 'x' ||
+			*ptr == '-')
+			ptr++;
+
+		/** Extract phy address*/
+		n = 0;
+		while (*ptr && !isspace(*ptr)) {
+			tmp[n] = *ptr;
+			ptr++;
+			n++;
+		}
+		tmp[n] = 0;
+		phy = strtoul(tmp, 0, 16);
+		if (!phy)
+			continue;
+
+		fclose(fp);
+		return phy + va - start;
+	}
+
+	fclose(fp);
+	return RTE_BAD_IOVA;
+}
+
 /*register a fslmc bus based dpaa2 driver */
 void
 rte_fslmc_object_register(struct rte_dpaa2_object *object)
@@ -269,7 +360,7 @@ fslmc_get_group_id(const char *group_name,
 	ret = rte_vfio_get_group_num(SYSFS_FSL_MC_DEVICES,
 			group_name, groupid);
 	if (ret <= 0) {
-		DPAA2_BUS_ERR("Unable to find %s IOMMU group", group_name);
+		DPAA2_BUS_ERR("Find %s IOMMU group", group_name);
 		if (ret < 0)
 			return ret;
 
@@ -312,7 +403,7 @@ fslmc_vfio_open_group_fd(const char *group_name)
 	/* if we're in a secondary process, request group fd from the primary
 	 * process via mp channel.
 	 */
-	p->req = SOCKET_REQ_GROUP;
+	p->req = FSLMC_VFIO_SOCKET_REQ_GROUP;
 	p->group_num = iommu_group_num;
 	rte_strscpy(mp_req.name, FSLMC_VFIO_MP, sizeof(mp_req.name));
 	mp_req.len_param = sizeof(*p);
@@ -404,7 +495,7 @@ fslmc_vfio_open_container_fd(void)
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR);
 		if (vfio_container_fd < 0) {
-			DPAA2_BUS_ERR("Cannot open VFIO container(%s), err(%d)",
+			DPAA2_BUS_ERR("Open VFIO container(%s), err(%d)",
 				VFIO_CONTAINER_PATH, vfio_container_fd);
 			ret = vfio_container_fd;
 			goto err_exit;
@@ -413,7 +504,7 @@ fslmc_vfio_open_container_fd(void)
 		/* check VFIO API version */
 		ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION);
 		if (ret < 0) {
-			DPAA2_BUS_ERR("Could not get VFIO API version(%d)",
+			DPAA2_BUS_ERR("Get VFIO API version(%d)",
 				ret);
 		} else if (ret != VFIO_API_VERSION) {
 			DPAA2_BUS_ERR("Unsupported VFIO API version(%d)",
@@ -427,7 +518,7 @@ fslmc_vfio_open_container_fd(void)
 
 		ret = fslmc_vfio_check_extensions(vfio_container_fd);
 		if (ret) {
-			DPAA2_BUS_ERR("No supported IOMMU extensions found(%d)",
+			DPAA2_BUS_ERR("Unsupported IOMMU extensions found(%d)",
 				ret);
 			close(vfio_container_fd);
 			goto err_exit;
@@ -439,7 +530,7 @@ fslmc_vfio_open_container_fd(void)
 	 * if we're in a secondary process, request container fd from the
 	 * primary process via mp channel
 	 */
-	p->req = SOCKET_REQ_CONTAINER;
+	p->req = FSLMC_VFIO_SOCKET_REQ_CONTAINER;
 	rte_strscpy(mp_req.name, FSLMC_VFIO_MP, sizeof(mp_req.name));
 	mp_req.len_param = sizeof(*p);
 	mp_req.num_fds = 0;
@@ -469,7 +560,7 @@ fslmc_vfio_open_container_fd(void)
 err_exit:
 	if (mp_reply.msgs)
 		free(mp_reply.msgs);
-	DPAA2_BUS_ERR("Cannot request container fd err(%d)", ret);
+	DPAA2_BUS_ERR("Open container fd err(%d)", ret);
 	return ret;
 }
 
@@ -502,17 +593,19 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 	struct rte_mp_msg reply;
 	struct vfio_mp_param *r = (void *)reply.param;
 	const struct vfio_mp_param *m = (const void *)msg->param;
+	struct fslmc_mem_param *map;
 
 	if (msg->len_param != sizeof(*m)) {
-		DPAA2_BUS_ERR("fslmc vfio received invalid message!");
+		DPAA2_BUS_ERR("Invalid msg size(%d) for req(%d)",
+			msg->len_param, m->req);
 		return -EINVAL;
 	}
 
 	memset(&reply, 0, sizeof(reply));
 
 	switch (m->req) {
-	case SOCKET_REQ_GROUP:
-		r->req = SOCKET_REQ_GROUP;
+	case FSLMC_VFIO_SOCKET_REQ_GROUP:
+		r->req = FSLMC_VFIO_SOCKET_REQ_GROUP;
 		r->group_num = m->group_num;
 		fd = fslmc_vfio_group_fd_by_id(m->group_num);
 		if (fd < 0) {
@@ -526,9 +619,10 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 			reply.num_fds = 1;
 			reply.fds[0] = fd;
 		}
+		reply.len_param = sizeof(*r);
 		break;
-	case SOCKET_REQ_CONTAINER:
-		r->req = SOCKET_REQ_CONTAINER;
+	case FSLMC_VFIO_SOCKET_REQ_CONTAINER:
+		r->req = FSLMC_VFIO_SOCKET_REQ_CONTAINER;
 		fd = fslmc_vfio_container_fd();
 		if (fd <= 0) {
 			r->result = SOCKET_ERR;
@@ -537,20 +631,66 @@ fslmc_vfio_mp_primary(const struct rte_mp_msg *msg,
 			reply.num_fds = 1;
 			reply.fds[0] = fd;
 		}
+		reply.len_param = sizeof(*r);
+		break;
+	case FSLMC_VFIO_SOCKET_REQ_MEM:
+		map = (void *)reply.param;
+		r = &map->mp_param;
+		r->req = FSLMC_VFIO_SOCKET_REQ_MEM;
+		r->result = SOCKET_OK;
+		map->memsegs = fslmc_memsegs;
+		map->iosegs = fslmc_iosegs;
+		map->mem_va2iova = fslmc_mem_va2iova;
+		map->mem_map_num = fslmc_mem_map_num;
+		reply.len_param = sizeof(struct fslmc_mem_param);
 		break;
 	default:
-		DPAA2_BUS_ERR("fslmc vfio received invalid message(%08x)",
+		DPAA2_BUS_ERR("VFIO received invalid message(%08x)",
 			m->req);
 		return -ENOTSUP;
 	}
 
 	rte_strscpy(reply.name, FSLMC_VFIO_MP, sizeof(reply.name));
-	reply.len_param = sizeof(*r);
 	ret = rte_mp_reply(&reply, peer);
 
 	return ret;
 }
 
+static int
+fslmc_vfio_mp_sync_mem_req(void)
+{
+	struct rte_mp_msg mp_req, *mp_rep;
+	struct rte_mp_reply mp_reply = {0};
+	struct timespec ts = {.tv_sec = 5, .tv_nsec = 0};
+	int ret = 0;
+	struct vfio_mp_param *mp_param;
+	struct fslmc_mem_param *mem_rsp;
+
+	mp_param = (void *)mp_req.param;
+	memset(&mp_req, 0, sizeof(struct rte_mp_msg));
+	mp_param->req = FSLMC_VFIO_SOCKET_REQ_MEM;
+	rte_strscpy(mp_req.name, FSLMC_VFIO_MP, sizeof(mp_req.name));
+	mp_req.len_param = sizeof(struct vfio_mp_param);
+	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0 &&
+		mp_reply.nb_received == 1) {
+		mp_rep = &mp_reply.msgs[0];
+		mem_rsp = (struct fslmc_mem_param *)mp_rep->param;
+		if (mem_rsp->mp_param.result == SOCKET_OK) {
+			fslmc_memsegs = mem_rsp->memsegs;
+			fslmc_mem_va2iova = mem_rsp->mem_va2iova;
+			fslmc_mem_map_num = mem_rsp->mem_map_num;
+		} else {
+			DPAA2_BUS_ERR("Bad MEM SEG");
+			ret = -EINVAL;
+		}
+	} else {
+		ret = -EINVAL;
+	}
+	free(mp_reply.msgs);
+
+	return ret;
+}
+
 static int
 fslmc_vfio_mp_sync_setup(void)
 {
@@ -561,6 +701,10 @@ fslmc_vfio_mp_sync_setup(void)
 			fslmc_vfio_mp_primary);
 		if (ret && rte_errno != ENOTSUP)
 			return ret;
+	} else {
+		ret = fslmc_vfio_mp_sync_mem_req();
+		if (ret)
+			return ret;
 	}
 
 	return 0;
@@ -581,30 +725,34 @@ vfio_connect_container(int vfio_container_fd,
 
 	iommu_type = fslmc_vfio_iommu_type(vfio_group_fd);
 	if (iommu_type < 0) {
-		DPAA2_BUS_ERR("Failed to get iommu type(%d)",
-			iommu_type);
+		DPAA2_BUS_ERR("Get iommu type(%d)", iommu_type);
 
 		return iommu_type;
 	}
 
 	/* Check whether support for SMMU type IOMMU present or not */
-	if (ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type)) {
-		/* Connect group to container */
-		ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
+	ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, iommu_type);
+	if (ret <= 0) {
+		DPAA2_BUS_ERR("Unsupported IOMMU type(%d) ret(%d), err(%d)",
+			iommu_type, ret, -errno);
+		return -EINVAL;
+	}
+
+	ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
 			&vfio_container_fd);
-		if (ret) {
-			DPAA2_BUS_ERR("Failed to setup group container");
-			return -errno;
-		}
+	if (ret) {
+		DPAA2_BUS_ERR("Set group container ret(%d), err(%d)",
+			ret, -errno);
 
-		ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
-		if (ret) {
-			DPAA2_BUS_ERR("Failed to setup VFIO iommu");
-			return -errno;
-		}
-	} else {
-		DPAA2_BUS_ERR("No supported IOMMU available");
-		return -EINVAL;
+		return ret;
+	}
+
+	ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, iommu_type);
+	if (ret) {
+		DPAA2_BUS_ERR("Set iommu ret(%d), err(%d)",
+			ret, -errno);
+
+		return ret;
 	}
 
 	return fslmc_vfio_connect_container(vfio_group_fd);
@@ -625,11 +773,11 @@ static int vfio_map_irq_region(void)
 
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (!fslmc_vfio_container_connected(fd)) {
 		DPAA2_BUS_ERR("Container is not connected");
@@ -639,8 +787,8 @@ static int vfio_map_irq_region(void)
 	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
 		PROT_READ, MAP_SHARED, fd, 0x6030000);
 	if (vaddr == MAP_FAILED) {
-		DPAA2_BUS_INFO("Unable to map region (errno = %d)", errno);
-		return -errno;
+		DPAA2_BUS_ERR("Unable to map region (errno = %d)", errno);
+		return -ENOMEM;
 	}
 
 	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
@@ -650,141 +798,200 @@ static int vfio_map_irq_region(void)
 		return 0;
 
 	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
-	return -errno;
-}
-
-static int fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
-static int fslmc_unmap_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len);
-
-static void
-fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
-	size_t len, void *arg __rte_unused)
-{
-	struct rte_memseg_list *msl;
-	struct rte_memseg *ms;
-	size_t cur_len = 0, map_len = 0;
-	uint64_t virt_addr;
-	rte_iova_t iova_addr;
-	int ret;
-
-	msl = rte_mem_virt2memseg_list(addr);
-
-	while (cur_len < len) {
-		const void *va = RTE_PTR_ADD(addr, cur_len);
-
-		ms = rte_mem_virt2memseg(va, msl);
-		iova_addr = ms->iova;
-		virt_addr = ms->addr_64;
-		map_len = ms->len;
-
-		DPAA2_BUS_DEBUG("Request for %s, va=%p, "
-				"virt_addr=0x%" PRIx64 ", "
-				"iova=0x%" PRIx64 ", map_len=%zu",
-				type == RTE_MEM_EVENT_ALLOC ?
-					"alloc" : "dealloc",
-				va, virt_addr, iova_addr, map_len);
-
-		/* iova_addr may be set to RTE_BAD_IOVA */
-		if (iova_addr == RTE_BAD_IOVA) {
-			DPAA2_BUS_DEBUG("Segment has invalid iova, skipping");
-			cur_len += map_len;
-			continue;
-		}
-
-		if (type == RTE_MEM_EVENT_ALLOC)
-			ret = fslmc_map_dma(virt_addr, iova_addr, map_len);
-		else
-			ret = fslmc_unmap_dma(virt_addr, iova_addr, map_len);
-
-		if (ret != 0) {
-			DPAA2_BUS_ERR("DMA Mapping/Unmapping failed. "
-					"Map=%d, addr=%p, len=%zu, err:(%d)",
-					type, va, map_len, ret);
-			return;
-		}
-
-		cur_len += map_len;
-	}
-
-	if (type == RTE_MEM_EVENT_ALLOC)
-		DPAA2_BUS_DEBUG("Total Mapped: addr=%p, len=%zu",
-				addr, len);
-	else
-		DPAA2_BUS_DEBUG("Total Unmapped: addr=%p, len=%zu",
-				addr, len);
+	return ret;
 }
 
 static int
-fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr,
-	size_t len)
+fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len)
 {
 	struct vfio_iommu_type1_dma_map dma_map = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_map),
 		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
 	};
-	int ret, fd;
+	int ret, fd, is_io = 0;
 	const char *group_name = fslmc_vfio_get_group_name();
+	struct fslmc_dmaseg *dmaseg = NULL;
+	uint64_t phy = 0;
+
+	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
+		if (vaddr != iovaddr) {
+			DPAA2_BUS_ERR("IOVA:VA(%" PRIx64 " : %" PRIx64 ") %s",
+				iovaddr, vaddr,
+				"should be 1:1 for VA mode");
 
+			return -EINVAL;
+		}
+	}
+
+	phy = rte_mem_virt2phy((const void *)(uintptr_t)vaddr);
+	if (phy == RTE_BAD_IOVA) {
+		phy = fslmc_io_virt2phy((const void *)(uintptr_t)vaddr);
+		if (phy == RTE_BAD_IOVA)
+			return -ENOMEM;
+		is_io = 1;
+	} else if (fslmc_mem_va2iova != RTE_BAD_IOVA &&
+		fslmc_mem_va2iova != (iovaddr - vaddr)) {
+		DPAA2_BUS_WARN("Multiple MEM PA<->VA conversions.");
+	}
+	DPAA2_BUS_DEBUG("%s(%zu): VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+		is_io ? "DMA IO map size" : "DMA MEM map size",
+		len, vaddr, iovaddr, phy);
+
+	if (is_io)
+		goto io_mapping_check;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (!((vaddr + len) <= dmaseg->vaddr ||
+			(dmaseg->vaddr + dmaseg->size) <= vaddr)) {
+			DPAA2_BUS_ERR("MEM: New VA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				vaddr, vaddr + len);
+			DPAA2_BUS_ERR("MEM: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->vaddr,
+				dmaseg->vaddr + dmaseg->size);
+			return -EEXIST;
+		}
+		if (!((iovaddr + len) <= dmaseg->iova ||
+			(dmaseg->iova + dmaseg->size) <= iovaddr)) {
+			DPAA2_BUS_ERR("MEM: New IOVA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				iovaddr, iovaddr + len);
+			DPAA2_BUS_ERR("MEM: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->iova,
+				dmaseg->iova + dmaseg->size);
+			return -EEXIST;
+		}
+	}
+	goto start_mapping;
+
+io_mapping_check:
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if (!((vaddr + len) <= dmaseg->vaddr ||
+			(dmaseg->vaddr + dmaseg->size) <= vaddr)) {
+			DPAA2_BUS_ERR("IO: New VA Range (%" PRIx64 " ~ %" PRIx64 ")",
+				vaddr, vaddr + len);
+			DPAA2_BUS_ERR("IO: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->vaddr,
+				dmaseg->vaddr + dmaseg->size);
+			return -EEXIST;
+		}
+		if (!((iovaddr + len) <= dmaseg->iova ||
+			(dmaseg->iova + dmaseg->size) <= iovaddr)) {
+			DPAA2_BUS_ERR("IO: New IOVA Range(%" PRIx64 " ~ %" PRIx64 ")",
+				iovaddr, iovaddr + len);
+			DPAA2_BUS_ERR("IO: Overlap with (%" PRIx64 " ~ %" PRIx64 ")",
+				dmaseg->iova,
+				dmaseg->iova + dmaseg->size);
+			return -EEXIST;
+		}
+	}
+
+start_mapping:
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
-		return 0;
+		if (phy != iovaddr) {
+			DPAA2_BUS_ERR("IOVA should support with IOMMU");
+			return -EIO;
+		}
+		goto end_mapping;
 	}
 
 	dma_map.size = len;
 	dma_map.vaddr = vaddr;
 	dma_map.iova = iovaddr;
 
-#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-	if (vaddr != iovaddr) {
-		DPAA2_BUS_WARN("vaddr(0x%"PRIx64") != iovaddr(0x%"PRIx64")",
-			vaddr, iovaddr);
-	}
-#endif
-
 	/* SET DMA MAP for IOMMU */
 	if (!fslmc_vfio_container_connected(fd)) {
-		DPAA2_BUS_ERR("Container is not connected ");
+		DPAA2_BUS_ERR("Container is not connected");
 		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("--> Map address: 0x%"PRIx64", size: %"PRIu64"",
-			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.size);
 	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA,
 		&dma_map);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO_IOMMU_MAP_DMA API(errno = %d)",
-				errno);
+		DPAA2_BUS_ERR("%s(%d) VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+			is_io ? "DMA IO map err" : "DMA MEM map err",
+			errno, vaddr, iovaddr, phy);
 		return ret;
 	}
 
+end_mapping:
+	dmaseg = malloc(sizeof(struct fslmc_dmaseg));
+	if (!dmaseg) {
+		DPAA2_BUS_ERR("DMA segment malloc failed!");
+		return -ENOMEM;
+	}
+	dmaseg->vaddr = vaddr;
+	dmaseg->iova = iovaddr;
+	dmaseg->size = len;
+	if (is_io) {
+		TAILQ_INSERT_TAIL(&fslmc_iosegs, dmaseg, next);
+	} else {
+		fslmc_mem_map_num++;
+		if (fslmc_mem_map_num == 1)
+			fslmc_mem_va2iova = iovaddr - vaddr;
+		else
+			fslmc_mem_va2iova = RTE_BAD_IOVA;
+		TAILQ_INSERT_TAIL(&fslmc_memsegs, dmaseg, next);
+	}
+	DPAA2_BUS_LOG(NOTICE,
+		"%s(%zx): VA(%" PRIx64 "):IOVA(%" PRIx64 "):PHY(%" PRIx64 ")",
+		is_io ? "DMA I/O map size" : "DMA MEM map size",
+		len, vaddr, iovaddr, phy);
+
 	return 0;
 }
 
 static int
-fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
+fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr, size_t len)
 {
 	struct vfio_iommu_type1_dma_unmap dma_unmap = {
 		.argsz = sizeof(struct vfio_iommu_type1_dma_unmap),
 		.flags = 0,
 	};
-	int ret, fd;
+	int ret, fd, is_io = 0;
 	const char *group_name = fslmc_vfio_get_group_name();
+	struct fslmc_dmaseg *dmaseg = NULL;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (((vaddr && dmaseg->vaddr == vaddr) || !vaddr) &&
+			dmaseg->iova == iovaddr &&
+			dmaseg->size == len) {
+			is_io = 0;
+			break;
+		}
+	}
+
+	if (!dmaseg) {
+		TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+			if (((vaddr && dmaseg->vaddr == vaddr) || !vaddr) &&
+				dmaseg->iova == iovaddr &&
+				dmaseg->size == len) {
+				is_io = 1;
+				break;
+			}
+		}
+	}
+
+	if (!dmaseg) {
+		DPAA2_BUS_ERR("IOVA(%" PRIx64 ") with length(%zx) not mapped",
+			iovaddr, len);
+		return 0;
+	}
 
 	fd = fslmc_vfio_group_fd_by_name(group_name);
 	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s failed to open group fd(%d)",
-			__func__, fd);
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, fd);
 		if (fd < 0)
 			return fd;
-		return -rte_errno;
+		return -EIO;
 	}
 	if (fslmc_vfio_iommu_type(fd) == RTE_VFIO_NOIOMMU) {
 		DPAA2_BUS_DEBUG("Running in NOIOMMU mode");
@@ -792,7 +999,7 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 	}
 
 	dma_unmap.size = len;
-	dma_unmap.iova = vaddr;
+	dma_unmap.iova = iovaddr;
 
 	/* SET DMA MAP for IOMMU */
 	if (!fslmc_vfio_container_connected(fd)) {
@@ -800,19 +1007,164 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr __rte_unused, size_t len)
 		return -EIO;
 	}
 
-	DPAA2_BUS_DEBUG("--> Unmap address: 0x%"PRIx64", size: %"PRIu64"",
-			(uint64_t)dma_unmap.iova, (uint64_t)dma_unmap.size);
 	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_UNMAP_DMA,
 		&dma_unmap);
 	if (ret) {
-		DPAA2_BUS_ERR("VFIO_IOMMU_UNMAP_DMA API(errno = %d)",
-				errno);
-		return -1;
+		DPAA2_BUS_ERR("DMA un-map IOVA(%" PRIx64 " ~ %" PRIx64 ") err(%d)",
+			iovaddr, iovaddr + len, errno);
+		return ret;
 	}
 
+	if (is_io) {
+		TAILQ_REMOVE(&fslmc_iosegs, dmaseg, next);
+	} else {
+		TAILQ_REMOVE(&fslmc_memsegs, dmaseg, next);
+		fslmc_mem_map_num--;
+		if (TAILQ_EMPTY(&fslmc_memsegs))
+			fslmc_mem_va2iova = RTE_BAD_IOVA;
+	}
+
+	free(dmaseg);
+
 	return 0;
 }
 
+uint64_t
+rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
+	uint64_t size)
+{
+	struct fslmc_dmaseg *dmaseg;
+	uint64_t va;
+
+	va = (uint64_t)vaddr;
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (va >= dmaseg->vaddr &&
+			(va + size) < (dmaseg->vaddr + dmaseg->size)) {
+			return dmaseg->iova + va - dmaseg->vaddr;
+		}
+	}
+
+	return RTE_BAD_IOVA;
+}
+
+void *
+rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
+	uint64_t size)
+{
+	struct fslmc_dmaseg *dmaseg;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_memsegs, next) {
+		if (iova >= dmaseg->iova &&
+			(iova + size) < (dmaseg->iova + dmaseg->size))
+			return (void *)((uintptr_t)dmaseg->vaddr
+				+ (uintptr_t)(iova - dmaseg->iova));
+	}
+
+	return NULL;
+}
+
+__rte_hot uint64_t
+rte_fslmc_mem_vaddr_to_iova(void *vaddr)
+{
+	if (likely(fslmc_mem_va2iova != RTE_BAD_IOVA))
+		return (uint64_t)vaddr + fslmc_mem_va2iova;
+
+	return rte_fslmc_cold_mem_vaddr_to_iova(vaddr, 0);
+}
+
+__rte_hot void *
+rte_fslmc_mem_iova_to_vaddr(uint64_t iova)
+{
+	if (likely(fslmc_mem_va2iova != RTE_BAD_IOVA))
+		return (void *)((uintptr_t)iova - (uintptr_t)fslmc_mem_va2iova);
+
+	return rte_fslmc_cold_mem_iova_to_vaddr(iova, 0);
+}
+
+uint64_t
+rte_fslmc_io_vaddr_to_iova(void *vaddr)
+{
+	struct fslmc_dmaseg *dmaseg = NULL;
+	uint64_t va = (uint64_t)vaddr;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if ((va >= dmaseg->vaddr) &&
+			va < dmaseg->vaddr + dmaseg->size)
+			return dmaseg->iova + va - dmaseg->vaddr;
+	}
+
+	return RTE_BAD_IOVA;
+}
+
+void *
+rte_fslmc_io_iova_to_vaddr(uint64_t iova)
+{
+	struct fslmc_dmaseg *dmaseg = NULL;
+
+	TAILQ_FOREACH(dmaseg, &fslmc_iosegs, next) {
+		if ((iova >= dmaseg->iova) &&
+			iova < dmaseg->iova + dmaseg->size)
+			return (void *)((uintptr_t)dmaseg->vaddr
+				+ (uintptr_t)(iova - dmaseg->iova));
+	}
+
+	return NULL;
+}
+
+static void
+fslmc_memevent_cb(enum rte_mem_event type, const void *addr,
+	size_t len, void *arg __rte_unused)
+{
+	struct rte_memseg_list *msl;
+	struct rte_memseg *ms;
+	size_t cur_len = 0, map_len = 0;
+	uint64_t virt_addr;
+	rte_iova_t iova_addr;
+	int ret;
+
+	msl = rte_mem_virt2memseg_list(addr);
+
+	while (cur_len < len) {
+		const void *va = RTE_PTR_ADD(addr, cur_len);
+
+		ms = rte_mem_virt2memseg(va, msl);
+		iova_addr = ms->iova;
+		virt_addr = ms->addr_64;
+		map_len = ms->len;
+
+		DPAA2_BUS_DEBUG("%s, va=%p, virt=%" PRIx64 ", iova=%" PRIx64 ", len=%zu",
+			type == RTE_MEM_EVENT_ALLOC ? "alloc" : "dealloc",
+			va, virt_addr, iova_addr, map_len);
+
+		/* iova_addr may be set to RTE_BAD_IOVA */
+		if (iova_addr == RTE_BAD_IOVA) {
+			DPAA2_BUS_DEBUG("Segment has invalid iova, skipping");
+			cur_len += map_len;
+			continue;
+		}
+
+		if (type == RTE_MEM_EVENT_ALLOC)
+			ret = fslmc_map_dma(virt_addr, iova_addr, map_len);
+		else
+			ret = fslmc_unmap_dma(virt_addr, iova_addr, map_len);
+
+		if (ret != 0) {
+			DPAA2_BUS_ERR("%s: Map=%d, addr=%p, len=%zu, err:(%d)",
+				type == RTE_MEM_EVENT_ALLOC ?
+				"DMA Mapping failed. " :
+				"DMA Unmapping failed. ",
+				type, va, map_len, ret);
+			return;
+		}
+
+		cur_len += map_len;
+	}
+
+	DPAA2_BUS_DEBUG("Total %s: addr=%p, len=%zu",
+		type == RTE_MEM_EVENT_ALLOC ? "Mapped" : "Unmapped",
+		addr, len);
+}
+
 static int
 fslmc_dmamap_seg(const struct rte_memseg_list *msl __rte_unused,
 		const struct rte_memseg *ms, void *arg)
@@ -843,7 +1195,7 @@ rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
 int
 rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
 {
-	return fslmc_unmap_dma(iova, 0, size);
+	return fslmc_unmap_dma(0, iova, size);
 }
 
 int rte_fslmc_vfio_dmamap(void)
@@ -853,9 +1205,10 @@ int rte_fslmc_vfio_dmamap(void)
 	/* Lock before parsing and registering callback to memory subsystem */
 	rte_mcfg_mem_read_lock();
 
-	if (rte_memseg_walk(fslmc_dmamap_seg, &i) < 0) {
+	ret = rte_memseg_walk(fslmc_dmamap_seg, &i);
+	if (ret) {
 		rte_mcfg_mem_read_unlock();
-		return -1;
+		return ret;
 	}
 
 	ret = rte_mem_event_callback_register("fslmc_memevent_clb",
@@ -894,6 +1247,14 @@ fslmc_vfio_setup_device(const char *dev_addr,
 	const char *group_name = fslmc_vfio_get_group_name();
 
 	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd <= 0) {
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, vfio_group_fd);
+		if (vfio_group_fd < 0)
+			return vfio_group_fd;
+		return -EIO;
+	}
+
 	if (!fslmc_vfio_container_connected(vfio_group_fd)) {
 		DPAA2_BUS_ERR("Container is not connected");
 		return -EIO;
@@ -1002,8 +1363,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
 	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
 	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 	if (ret)
-		DPAA2_BUS_ERR(
-			"Error disabling dpaa2 interrupts for fd %d",
+		DPAA2_BUS_ERR("Error disabling dpaa2 interrupts for fd %d",
 			rte_intr_fd_get(intr_handle));
 
 	return ret;
@@ -1028,7 +1388,7 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 		if (ret < 0) {
 			DPAA2_BUS_ERR("Cannot get IRQ(%d) info, error %i (%s)",
 				      i, errno, strerror(errno));
-			return -1;
+			return ret;
 		}
 
 		/* if this vector cannot be used with eventfd,
@@ -1042,8 +1402,8 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 		fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
 		if (fd < 0) {
 			DPAA2_BUS_ERR("Cannot set up eventfd, error %i (%s)",
-				      errno, strerror(errno));
-			return -1;
+				errno, strerror(errno));
+			return fd;
 		}
 
 		if (rte_intr_fd_set(intr_handle, fd))
@@ -1059,7 +1419,7 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
 	}
 
 	/* if we're here, we haven't found a suitable interrupt vector */
-	return -1;
+	return -EIO;
 }
 
 static void
@@ -1233,6 +1593,13 @@ fslmc_vfio_close_group(void)
 	const char *group_name = fslmc_vfio_get_group_name();
 
 	vfio_group_fd = fslmc_vfio_group_fd_by_name(group_name);
+	if (vfio_group_fd <= 0) {
+		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
+			__func__, group_name, vfio_group_fd);
+		if (vfio_group_fd < 0)
+			return vfio_group_fd;
+		return -EIO;
+	}
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
 		if (dev->device.devargs &&
@@ -1324,7 +1691,7 @@ fslmc_vfio_process_group(void)
 				ret = fslmc_process_mcp(dev);
 				if (ret) {
 					DPAA2_BUS_ERR("Unable to map MC Portal");
-					return -1;
+					return ret;
 				}
 				found_mportal = 1;
 			}
@@ -1341,7 +1708,7 @@ fslmc_vfio_process_group(void)
 	/* Cannot continue if there is not even a single mportal */
 	if (!found_mportal) {
 		DPAA2_BUS_ERR("No MC Portal device found. Not continuing");
-		return -1;
+		return -EIO;
 	}
 
 	/* Search for DPRC device next as it updates endpoint of
@@ -1353,7 +1720,7 @@ fslmc_vfio_process_group(void)
 			ret = fslmc_process_iodevices(dev);
 			if (ret) {
 				DPAA2_BUS_ERR("Unable to process dprc");
-				return -1;
+				return ret;
 			}
 			TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
 		}
@@ -1410,7 +1777,7 @@ fslmc_vfio_process_group(void)
 			if (ret) {
 				DPAA2_BUS_DEBUG("Dev (%s) init failed",
 						dev->device.name);
-				return -1;
+				return ret;
 			}
 
 			break;
@@ -1434,7 +1801,7 @@ fslmc_vfio_process_group(void)
 			if (ret) {
 				DPAA2_BUS_DEBUG("Dev (%s) init failed",
 						dev->device.name);
-				return -1;
+				return ret;
 			}
 
 			break;
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index bc36607e64..85e4c16c03 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016,2020 NXP
+ *   Copyright 2016,2020-2023 NXP
  *
  */
 
@@ -28,7 +28,6 @@
 #include "portal/dpaa2_hw_pvt.h"
 #include "portal/dpaa2_hw_dpio.h"
 
-
 TAILQ_HEAD(dpbp_dev_list, dpaa2_dpbp_dev);
 static struct dpbp_dev_list dpbp_dev_list
 	= TAILQ_HEAD_INITIALIZER(dpbp_dev_list); /*!< DPBP device list */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index c3f6e24139..954d59d123 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -340,9 +340,8 @@ dpaa2_affine_qbman_swp(void)
 		}
 		RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
 
-		DPAA2_BUS_INFO(
-			"DPAA Portal=%p (%d) is affined to thread %" PRIu64,
-			dpio_dev, dpio_dev->index, tid);
+		DPAA2_BUS_DEBUG("Portal[%d] is affined to thread %" PRIu64,
+			dpio_dev->index, tid);
 	}
 	return 0;
 }
@@ -362,9 +361,8 @@ dpaa2_affine_qbman_ethrx_swp(void)
 		}
 		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
 
-		DPAA2_BUS_INFO(
-			"DPAA Portal=%p (%d) is affined for eth rx to thread %"
-			PRIu64, dpio_dev, dpio_dev->index, tid);
+		DPAA2_BUS_DEBUG("Portal_eth_rx[%d] is affined to thread %" PRIu64,
+			dpio_dev->index, tid);
 	}
 	return 0;
 }
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 7407f8d38d..328e1e788a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2023 NXP
  *
  */
 
@@ -12,6 +12,7 @@
 #include <mc/fsl_mc_sys.h>
 
 #include <rte_compat.h>
+#include <dpaa2_hw_pvt.h>
 
 struct dpaa2_io_portal_t {
 	struct dpaa2_dpio_dev *dpio_dev;
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 4c30e6db18..74a1a8b2fa 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -14,6 +14,7 @@
 
 #include <mc/fsl_mc_sys.h>
 #include <fsl_qbman_portal.h>
+#include <bus_fslmc_driver.h>
 
 #ifndef false
 #define false      0
@@ -80,6 +81,8 @@
 #define DPAA2_PACKET_LAYOUT_ALIGN	64 /*changing from 256 */
 
 #define DPAA2_DPCI_MAX_QUEUES 2
+#define DPAA2_INVALID_FLOW_ID 0xffff
+#define DPAA2_INVALID_CGID 0xff
 
 struct dpaa2_queue;
 
@@ -366,83 +369,63 @@ enum qbman_fd_format {
  */
 #define DPAA2_EQ_RESP_ALWAYS		1
 
-/* Various structures representing contiguous memory maps */
-struct dpaa2_memseg {
-	TAILQ_ENTRY(dpaa2_memseg) next;
-	char *vaddr;
-	rte_iova_t iova;
-	size_t len;
-};
-
-#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
-extern uint8_t dpaa2_virt_mode;
-static void *dpaa2_mem_ptov(phys_addr_t paddr) __rte_unused;
-
-static void *dpaa2_mem_ptov(phys_addr_t paddr)
+static inline uint64_t
+dpaa2_mem_va_to_iova(void *va)
 {
-	void *va;
-
-	if (dpaa2_virt_mode)
-		return (void *)(size_t)paddr;
-
-	va = (void *)dpaax_iova_table_get_va(paddr);
-	if (likely(va != NULL))
-		return va;
-
-	/* If not, Fallback to full memseg list searching */
-	va = rte_mem_iova2virt(paddr);
+	if (likely(rte_eal_iova_mode() == RTE_IOVA_VA))
+		return (uint64_t)va;
 
-	return va;
+	return rte_fslmc_mem_vaddr_to_iova(va);
 }
 
-static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr) __rte_unused;
-
-static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
+static inline void *
+dpaa2_mem_iova_to_va(uint64_t iova)
 {
-	const struct rte_memseg *memseg;
-
-	if (dpaa2_virt_mode)
-		return vaddr;
+	if (likely(rte_eal_iova_mode() == RTE_IOVA_VA))
+		return (void *)(uintptr_t)iova;
 
-	memseg = rte_mem_virt2memseg((void *)(uintptr_t)vaddr, NULL);
-	if (memseg)
-		return memseg->iova + RTE_PTR_DIFF(vaddr, memseg->addr);
-	return (size_t)NULL;
+	return rte_fslmc_mem_iova_to_vaddr(iova);
 }
 
-/**
- * When we are using Physical addresses as IO Virtual Addresses,
- * Need to call conversion routines dpaa2_mem_vtop & dpaa2_mem_ptov
- * wherever required.
- * These routines are called with help of below MACRO's
- */
-
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_iova)
-
-/**
- * macro to convert Virtual address to IOVA
- */
-#define DPAA2_VADDR_TO_IOVA(_vaddr) dpaa2_mem_vtop((size_t)(_vaddr))
-
-/**
- * macro to convert IOVA to Virtual address
- */
-#define DPAA2_IOVA_TO_VADDR(_iova) dpaa2_mem_ptov((size_t)(_iova))
-
-/**
- * macro to convert modify the memory containing IOVA to Virtual address
- */
+#define DPAA2_VADDR_TO_IOVA(_vaddr) \
+	dpaa2_mem_va_to_iova((void *)(uintptr_t)_vaddr)
+#define DPAA2_IOVA_TO_VADDR(_iova) \
+	dpaa2_mem_iova_to_va((uint64_t)_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type) \
-	{_mem = (_type)(dpaa2_mem_ptov((size_t)(_mem))); }
+	{_mem = (_type)DPAA2_IOVA_TO_VADDR(_mem); }
+
+#define DPAA2_VAMODE_VADDR_TO_IOVA(_vaddr) ((uint64_t)_vaddr)
+#define DPAA2_VAMODE_IOVA_TO_VADDR(_iova) ((void *)_iova)
+#define DPAA2_VAMODE_MODIFY_IOVA_TO_VADDR(_mem, _type) \
+	{_mem = (_type)(_mem); }
+
+#define DPAA2_PAMODE_VADDR_TO_IOVA(_vaddr) \
+	rte_fslmc_mem_vaddr_to_iova((void *)_vaddr)
+#define DPAA2_PAMODE_IOVA_TO_VADDR(_iova) \
+	rte_fslmc_mem_iova_to_vaddr((uint64_t)_iova)
+#define DPAA2_PAMODE_MODIFY_IOVA_TO_VADDR(_mem, _type) \
+	{_mem = (_type)rte_fslmc_mem_iova_to_vaddr(_mem); }
+
+static inline uint64_t
+dpaa2_mem_va_to_iova_check(void *va, uint64_t size)
+{
+	uint64_t iova = rte_fslmc_cold_mem_vaddr_to_iova(va, size);
 
-#else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
+	if (iova == RTE_BAD_IOVA)
+		return RTE_BAD_IOVA;
 
-#define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
-#define DPAA2_VADDR_TO_IOVA(_vaddr) (phys_addr_t)(_vaddr)
-#define DPAA2_IOVA_TO_VADDR(_iova) (void *)(_iova)
-#define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
+	/** Double check the iova is valid.*/
+	if (iova != rte_mem_virt2iova(va))
+		return RTE_BAD_IOVA;
+
+	return iova;
+}
 
-#endif /* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
+#define DPAA2_VADDR_TO_IOVA_AND_CHECK(_vaddr, size) \
+	dpaa2_mem_va_to_iova_check(_vaddr, size)
+#define DPAA2_IOVA_TO_VADDR_AND_CHECK(_iova, size) \
+	rte_fslmc_cold_mem_iova_to_vaddr(_iova, size)
 
 static inline
 int check_swp_active_dqs(uint16_t dpio_index)
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index b49bc0a62c..2c36895285 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -24,7 +24,6 @@ INTERNAL {
 	dpaa2_seqn_dynfield_offset;
 	dpaa2_seqn;
 	dpaa2_svr_family;
-	dpaa2_virt_mode;
 	dpbp_disable;
 	dpbp_enable;
 	dpbp_get_attributes;
@@ -119,6 +118,12 @@ INTERNAL {
 	rte_fslmc_object_register;
 	rte_global_active_dqs_list;
 	rte_fslmc_vfio_mem_dmaunmap;
+	rte_fslmc_cold_mem_vaddr_to_iova;
+	rte_fslmc_cold_mem_iova_to_vaddr;
+	rte_fslmc_mem_vaddr_to_iova;
+	rte_fslmc_mem_iova_to_vaddr;
+	rte_fslmc_io_vaddr_to_iova;
+	rte_fslmc_io_iova_to_vaddr;
 
 	local: *;
 };
diff --git a/drivers/dma/dpaa2/dpaa2_qdma.c b/drivers/dma/dpaa2/dpaa2_qdma.c
index 5780e49297..b2cf074c7d 100644
--- a/drivers/dma/dpaa2/dpaa2_qdma.c
+++ b/drivers/dma/dpaa2/dpaa2_qdma.c
@@ -10,6 +10,7 @@
 
 #include <mc/fsl_dpdmai.h>
 
+#include <dpaa2_hw_dpio.h>
 #include "rte_pmd_dpaa2_qdma.h"
 #include "dpaa2_qdma.h"
 #include "dpaa2_qdma_logs.h"
-- 
2.25.1
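
The conversion helpers in the fslmc_vfio.c and dpaa2_hw_pvt.h hunks above
keep a single cached VA->IOVA offset while exactly one memory segment is
mapped (fslmc_mem_va2iova) and fall back to walking the mapped-segment
list otherwise. A minimal standalone sketch of that fast/cold-path split,
using illustrative names (seg_list, va2iova_off, BAD_IOVA) rather than
the driver's own symbols:

#include <stdint.h>
#include <sys/queue.h>

#define BAD_IOVA ((uint64_t)-1)

struct dmaseg {
	TAILQ_ENTRY(dmaseg) next;
	uint64_t vaddr;
	uint64_t iova;
	uint64_t size;
};

TAILQ_HEAD(seg_head, dmaseg);
static struct seg_head seg_list = TAILQ_HEAD_INITIALIZER(seg_list);

/* Shared VA->IOVA offset, valid only while every mapped segment
 * uses the same offset; otherwise reset to BAD_IOVA.
 */
static uint64_t va2iova_off = BAD_IOVA;

/* Cold path: walk the segment list to translate one address. */
static uint64_t
cold_va_to_iova(uint64_t va)
{
	struct dmaseg *s;

	TAILQ_FOREACH(s, &seg_list, next) {
		if (va >= s->vaddr && va < s->vaddr + s->size)
			return s->iova + (va - s->vaddr);
	}

	return BAD_IOVA;
}

/* Hot path: a single addition when the cached offset is valid. */
static uint64_t
va_to_iova(uint64_t va)
{
	if (va2iova_off != BAD_IOVA)
		return va + va2iova_off;

	return cold_va_to_iova(va);
}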


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 17/42] bus/fslmc: remove VFIO IRQ mapping
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (15 preceding siblings ...)
  2024-10-23 11:59           ` [v5 16/42] bus/fslmc: dynamic IOVA mode configuration vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 18/42] bus/fslmc: create dpaa2 device with it's object vanshika.shukla
                             ` (25 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Remove unused GITS translator VFIO mapping.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/fslmc_vfio.c | 50 ----------------------------------
 1 file changed, 50 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 2bf0a7b835..9d913781ae 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -49,7 +49,6 @@
 static struct fslmc_vfio_container s_vfio_container;
 /* Currently we only support single group/process. */
 static const char *fslmc_group; /* dprc.x*/
-static uint32_t *msi_intr_vaddr;
 static void *(*rte_mcp_ptr_list);
 
 struct fslmc_dmaseg {
@@ -758,49 +757,6 @@ vfio_connect_container(int vfio_container_fd,
 	return fslmc_vfio_connect_container(vfio_group_fd);
 }
 
-static int vfio_map_irq_region(void)
-{
-	int ret, fd;
-	unsigned long *vaddr = NULL;
-	struct vfio_iommu_type1_dma_map map = {
-		.argsz = sizeof(map),
-		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
-		.vaddr = 0x6030000,
-		.iova = 0x6030000,
-		.size = 0x1000,
-	};
-	const char *group_name = fslmc_vfio_get_group_name();
-
-	fd = fslmc_vfio_group_fd_by_name(group_name);
-	if (fd <= 0) {
-		DPAA2_BUS_ERR("%s: Get fd by name(%s) failed(%d)",
-			__func__, group_name, fd);
-		if (fd < 0)
-			return fd;
-		return -EIO;
-	}
-	if (!fslmc_vfio_container_connected(fd)) {
-		DPAA2_BUS_ERR("Container is not connected");
-		return -EIO;
-	}
-
-	vaddr = (unsigned long *)mmap(NULL, 0x1000, PROT_WRITE |
-		PROT_READ, MAP_SHARED, fd, 0x6030000);
-	if (vaddr == MAP_FAILED) {
-		DPAA2_BUS_ERR("Unable to map region (errno = %d)", errno);
-		return -ENOMEM;
-	}
-
-	msi_intr_vaddr = (uint32_t *)((char *)(vaddr) + 64);
-	map.vaddr = (unsigned long)vaddr;
-	ret = ioctl(fslmc_vfio_container_fd(), VFIO_IOMMU_MAP_DMA, &map);
-	if (!ret)
-		return 0;
-
-	DPAA2_BUS_ERR("Unable to map DMA address (errno = %d)", errno);
-	return ret;
-}
-
 static int
 fslmc_map_dma(uint64_t vaddr, rte_iova_t iovaddr, size_t len)
 {
@@ -1222,12 +1178,6 @@ int rte_fslmc_vfio_dmamap(void)
 
 	DPAA2_BUS_DEBUG("Total %d segments found.", i);
 
-	/* TODO - This is a W.A. as VFIO currently does not add the mapping of
-	 * the interrupt region to SMMU. This should be removed once the
-	 * support is added in the Kernel.
-	 */
-	vfio_map_irq_region();
-
 	/* Existing segments have been mapped and memory callback for hotplug
 	 * has been installed.
 	 */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 18/42] bus/fslmc: create dpaa2 device with it's object
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (16 preceding siblings ...)
  2024-10-23 11:59           ` [v5 17/42] bus/fslmc: remove VFIO IRQ mapping vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 19/42] bus/fslmc: fix coverity issue vanshika.shukla
                             ` (24 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Create the dpaa2 device with the object itself instead of the object ID.
Assign each dpaa2 object to its container.
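
For reference, a hypothetical object driver's create callback under the
new signature would pull the ID from the device itself; this sketch only
assumes the declarations visible in this patch, and dummy_obj_create is
an illustrative name:

#include <bus_fslmc_driver.h>

static int
dummy_obj_create(int vdev_fd __rte_unused,
	struct vfio_device_info *obj_info __rte_unused,
	struct rte_dpaa2_device *obj)
{
	uint16_t object_id = obj->object_id;	/* was a parameter before */
	struct dpaa2_dprc_dev *parent = obj->container;	/* parent DPRC */

	(void)object_id;
	(void)parent;

	return 0;
}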

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     | 39 ++++++++++++------------
 drivers/bus/fslmc/fslmc_vfio.c           |  3 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c |  6 ++--
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c |  8 ++---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c |  6 ++--
 drivers/bus/fslmc/portal/dpaa2_hw_dprc.c |  8 +++--
 drivers/event/dpaa2/dpaa2_hw_dpcon.c     |  8 ++---
 drivers/net/dpaa2/dpaa2_mux.c            |  6 ++--
 drivers/net/dpaa2/dpaa2_ptp.c            |  8 ++---
 9 files changed, 47 insertions(+), 45 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index ba3774823b..777ab24c10 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -89,25 +89,6 @@ enum rte_dpaa2_dev_type {
 	DPAA2_DEVTYPE_MAX,
 };
 
-TAILQ_HEAD(rte_dpaa2_object_list, rte_dpaa2_object);
-
-typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
-				      struct vfio_device_info *obj_info,
-				      int object_id);
-
-typedef void (*rte_dpaa2_obj_close_t)(int object_id);
-
-/**
- * A structure describing a DPAA2 object.
- */
-struct rte_dpaa2_object {
-	TAILQ_ENTRY(rte_dpaa2_object) next; /**< Next in list. */
-	const char *name;                   /**< Name of Object. */
-	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
-	rte_dpaa2_obj_create_t create;
-	rte_dpaa2_obj_close_t close;
-};
-
 /**
  * A structure describing a DPAA2 device.
  */
@@ -123,6 +104,7 @@ struct rte_dpaa2_device {
 	enum rte_dpaa2_dev_type dev_type;   /**< Device Type */
 	uint16_t object_id;                 /**< DPAA2 Object ID */
 	enum rte_dpaa2_dev_type ep_dev_type;   /**< Endpoint Device Type */
+	struct dpaa2_dprc_dev *container;
 	uint16_t ep_object_id;                 /**< Endpoint DPAA2 Object ID */
 	char ep_name[RTE_DEV_NAME_MAX_LEN];
 	struct rte_intr_handle *intr_handle; /**< Interrupt handle */
@@ -130,10 +112,29 @@ struct rte_dpaa2_device {
 	char name[FSLMC_OBJECT_MAX_LEN];    /**< DPAA2 Object name*/
 };
 
+typedef int (*rte_dpaa2_obj_create_t)(int vdev_fd,
+				      struct vfio_device_info *obj_info,
+				      struct rte_dpaa2_device *dev);
+
+typedef void (*rte_dpaa2_obj_close_t)(int object_id);
+
 typedef int (*rte_dpaa2_probe_t)(struct rte_dpaa2_driver *dpaa2_drv,
 				 struct rte_dpaa2_device *dpaa2_dev);
 typedef int (*rte_dpaa2_remove_t)(struct rte_dpaa2_device *dpaa2_dev);
 
+TAILQ_HEAD(rte_dpaa2_object_list, rte_dpaa2_object);
+
+/**
+ * A structure describing a DPAA2 object.
+ */
+struct rte_dpaa2_object {
+	TAILQ_ENTRY(rte_dpaa2_object) next; /**< Next in list. */
+	const char *name;                   /**< Name of Object. */
+	enum rte_dpaa2_dev_type dev_type;   /**< Type of device */
+	rte_dpaa2_obj_create_t create;
+	rte_dpaa2_obj_close_t close;
+};
+
 /**
  * A structure describing a DPAA2 driver.
  */
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 9d913781ae..5b382d93e4 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1458,8 +1458,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
 	case DPAA2_DPRC:
 		TAILQ_FOREACH(object, &dpaa2_obj_list, next) {
 			if (dev->dev_type == object->dev_type)
-				object->create(dev_fd, &device_info,
-					       dev->object_id);
+				object->create(dev_fd, &device_info, dev);
 			else
 				continue;
 		}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index 85e4c16c03..0ca3b2b2e4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -47,11 +47,11 @@ static struct dpaa2_dpbp_dev *get_dpbp_from_id(uint32_t dpbp_id)
 
 static int
 dpaa2_create_dpbp_device(int vdev_fd __rte_unused,
-			 struct vfio_device_info *obj_info __rte_unused,
-			 int dpbp_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpbp_dev *dpbp_node;
-	int ret;
+	int ret, dpbp_id = obj->object_id;
 	static int register_once;
 
 	/* Allocate DPAA2 dpbp handle */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index 99f2147ccb..9d7108bfdc 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017,2020 NXP
+ *   Copyright 2017, 2020, 2023 NXP
  *
  */
 
@@ -45,15 +45,15 @@ static struct dpaa2_dpci_dev *get_dpci_from_id(uint32_t dpci_id)
 
 static int
 rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
-			     struct vfio_device_info *obj_info __rte_unused,
-			     int dpci_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpci_dev *dpci_node;
 	struct dpci_attr attr;
 	struct dpci_rx_queue_cfg rx_queue_cfg;
 	struct dpci_rx_queue_attr rx_attr;
 	struct dpci_tx_queue_attr tx_attr;
-	int ret, i;
+	int ret, i, dpci_id = obj->object_id;
 
 	/* Allocate DPAA2 dpci handle */
 	dpci_node = rte_malloc(NULL, sizeof(struct dpaa2_dpci_dev), 0);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 954d59d123..67d4c83e8c 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -399,14 +399,14 @@ dpaa2_close_dpio_device(int object_id)
 
 static int
 dpaa2_create_dpio_device(int vdev_fd,
-			 struct vfio_device_info *obj_info,
-			 int object_id)
+	struct vfio_device_info *obj_info,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info)};
 	struct qbman_swp_desc p_des;
 	struct dpio_attr attr;
-	int ret;
+	int ret, object_id = obj->object_id;
 
 	if (obj_info->num_regions < NUM_DPIO_REGIONS) {
 		DPAA2_BUS_ERR("Not sufficient number of DPIO regions");
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c b/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
index 65e2d799c3..a057cb1309 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dprc.c
@@ -23,13 +23,13 @@ static struct dprc_dev_list dprc_dev_list
 
 static int
 rte_dpaa2_create_dprc_device(int vdev_fd __rte_unused,
-			     struct vfio_device_info *obj_info __rte_unused,
-			     int dprc_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dprc_dev *dprc_node;
 	struct dprc_endpoint endpoint1, endpoint2;
 	struct rte_dpaa2_device *dev, *dev_tmp;
-	int ret;
+	int ret, dprc_id = obj->object_id;
 
 	/* Allocate DPAA2 dprc handle */
 	dprc_node = rte_malloc(NULL, sizeof(struct dpaa2_dprc_dev), 0);
@@ -50,6 +50,8 @@ rte_dpaa2_create_dprc_device(int vdev_fd __rte_unused,
 	}
 
 	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_tmp) {
+		/** DPRC is always created before its children are created. */
+		dev->container = dprc_node;
 		if (dev->dev_type == DPAA2_ETH) {
 			int link_state;
 
diff --git a/drivers/event/dpaa2/dpaa2_hw_dpcon.c b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
index 64b0136e24..ea5b0d4b85 100644
--- a/drivers/event/dpaa2/dpaa2_hw_dpcon.c
+++ b/drivers/event/dpaa2/dpaa2_hw_dpcon.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017,2020 NXP
+ *   Copyright 2017, 2020, 2023 NXP
  *
  */
 
@@ -45,12 +45,12 @@ static struct dpaa2_dpcon_dev *get_dpcon_from_id(uint32_t dpcon_id)
 
 static int
 rte_dpaa2_create_dpcon_device(int dev_fd __rte_unused,
-			      struct vfio_device_info *obj_info __rte_unused,
-			      int dpcon_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpcon_dev *dpcon_node;
 	struct dpcon_attr attr;
-	int ret;
+	int ret, dpcon_id = obj->object_id;
 
 	/* Allocate DPAA2 dpcon handle */
 	dpcon_node = rte_malloc(NULL, sizeof(struct dpaa2_dpcon_dev), 0);
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 3693f4b62e..f4b8d481af 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -374,12 +374,12 @@ rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if)
 
 static int
 dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
-			   struct vfio_device_info *obj_info __rte_unused,
-			   int dpdmux_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev;
 	struct dpdmux_attr attr;
-	int ret;
+	int ret, dpdmux_id = obj->object_id;
 	uint16_t maj_ver;
 	uint16_t min_ver;
 	uint8_t skip_reset_flags;
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index c08aa0f3bf..751e558c73 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2019 NXP
+ * Copyright 2019, 2023 NXP
  */
 
 #include <sys/queue.h>
@@ -134,11 +134,11 @@ int dpaa2_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
 #if defined(RTE_LIBRTE_IEEE1588)
 static int
 dpaa2_create_dprtc_device(int vdev_fd __rte_unused,
-			   struct vfio_device_info *obj_info __rte_unused,
-			   int dprtc_id)
+	struct vfio_device_info *obj_info __rte_unused,
+	struct rte_dpaa2_device *obj)
 {
 	struct dprtc_attr attr;
-	int ret;
+	int ret, dprtc_id = obj->object_id;
 
 	PMD_INIT_FUNC_TRACE();
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 19/42] bus/fslmc: fix coverity issue
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (17 preceding siblings ...)
  2024-10-23 11:59           ` [v5 18/42] bus/fslmc: create dpaa2 device with it's object vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 20/42] bus/fslmc: change qbman eq desc from d to desc vanshika.shukla
                             ` (23 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Nipun Gupta, Roy Pledge,
	Youri Querry
  Cc: stable, Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Fix issues reported by the NXP internal Coverity scan.
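
The pattern being fixed dereferences the management-command response
before NULL-checking it, and the check itself tests the caller's output
pointer, which is never NULL. A reduced before/after illustration, with
mc_complete() standing in for qbman_swp_mc_complete():

#include <errno.h>

struct query_rslt { unsigned char verb; };

struct query_rslt *mc_complete(void);	/* may return NULL */

/* Before: NULL response dereferenced, wrong pointer checked. */
int
query_broken(struct query_rslt *r)
{
	*r = *(struct query_rslt *)mc_complete();
	if (!r)
		return -EIO;

	return 0;
}

/* After: validate the response first, copy only afterwards. */
int
query_fixed(struct query_rslt *r)
{
	struct query_rslt *rslt = mc_complete();

	if (!rslt)
		return -EIO;
	*r = *rslt;

	return 0;
}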

Fixes: 64f131a82fbe ("bus/fslmc: add qbman debug")
Cc: hemant.agrawal@nxp.com
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 49 +++++++++++++++++----------
 1 file changed, 32 insertions(+), 17 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index eea06988ff..0e471ec3fd 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2020,2022 NXP
  */
 
 #include "compat.h"
@@ -37,6 +37,7 @@ int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
 		   struct qbman_bp_query_rslt *r)
 {
 	struct qbman_bp_query_desc *p;
+	struct qbman_bp_query_rslt *bp_query_rslt;
 
 	/* Start the management command */
 	p = (struct qbman_bp_query_desc *)qbman_swp_mc_start(s);
@@ -47,14 +48,16 @@ int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
 	p->bpid = bpid;
 
 	/* Complete the management command */
-	*r = *(struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s, p,
-						 QBMAN_BP_QUERY);
-	if (!r) {
+	bp_query_rslt = (struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s,
+						p, QBMAN_BP_QUERY);
+	if (!bp_query_rslt) {
 		pr_err("qbman: Query BPID %d failed, no response\n",
 			bpid);
 		return -EIO;
 	}
 
+	*r = *bp_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_BP_QUERY);
 
@@ -202,20 +205,23 @@ int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
 		   struct qbman_fq_query_rslt *r)
 {
 	struct qbman_fq_query_desc *p;
+	struct qbman_fq_query_rslt *fq_query_rslt;
 
 	p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->fqid = fqid;
-	*r = *(struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s, p,
-					  QBMAN_FQ_QUERY);
-	if (!r) {
+	fq_query_rslt = (struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s,
+					p, QBMAN_FQ_QUERY);
+	if (!fq_query_rslt) {
 		pr_err("qbman: Query FQID %d failed, no response\n",
 			fqid);
 		return -EIO;
 	}
 
+	*r = *fq_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY);
 
@@ -398,20 +404,23 @@ int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
 		    struct qbman_cgr_query_rslt *r)
 {
 	struct qbman_cgr_query_desc *p;
+	struct qbman_cgr_query_rslt *cgr_query_rslt;
 
 	p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->cgid = cgid;
-	*r = *(struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_CGR_QUERY);
-	if (!r) {
+	cgr_query_rslt = (struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s,
+					p, QBMAN_CGR_QUERY);
+	if (!cgr_query_rslt) {
 		pr_err("qbman: Query CGID %d failed, no response\n",
 			cgid);
 		return -EIO;
 	}
 
+	*r = *cgr_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_CGR_QUERY);
 
@@ -473,20 +482,23 @@ int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
 			struct qbman_wred_query_rslt *r)
 {
 	struct qbman_cgr_query_desc *p;
+	struct qbman_wred_query_rslt *wred_query_rslt;
 
 	p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->cgid = cgid;
-	*r = *(struct qbman_wred_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_WRED_QUERY);
-	if (!r) {
+	wred_query_rslt = (struct qbman_wred_query_rslt *)qbman_swp_mc_complete(
+					s, p, QBMAN_WRED_QUERY);
+	if (!wred_query_rslt) {
 		pr_err("qbman: Query CGID WRED %d failed, no response\n",
 			cgid);
 		return -EIO;
 	}
 
+	*r = *wred_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WRED_QUERY);
 
@@ -527,7 +539,7 @@ void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
 	if (mn == 0)
 		*maxth = ma;
 	else
-		*maxth = ((ma+256) * (1<<(mn-1)));
+		*maxth = ((uint64_t)(ma+256) * (1<<(mn-1)));
 
 	if (step_s == 0)
 		*minth = *maxth - step_i;
@@ -630,6 +642,7 @@ int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
 		       struct qbman_wqchan_query_rslt *r)
 {
 	struct qbman_wqchan_query_desc *p;
+	struct qbman_wqchan_query_rslt *wqchan_query_rslt;
 
 	/* Start the management command */
 	p = (struct qbman_wqchan_query_desc *)qbman_swp_mc_start(s);
@@ -640,14 +653,16 @@ int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
 	p->chid = chanid;
 
 	/* Complete the management command */
-	*r = *(struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(s, p,
-							QBMAN_WQ_QUERY);
-	if (!r) {
+	wqchan_query_rslt = (struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(
+						s, p, QBMAN_WQ_QUERY);
+	if (!wqchan_query_rslt) {
 		pr_err("qbman: Query WQ Channel %d failed, no response\n",
 			chanid);
 		return -EIO;
 	}
 
+	*r = *wqchan_query_rslt;
+
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WQ_QUERY);
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 20/42] bus/fslmc: change qbman eq desc from d to desc
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (18 preceding siblings ...)
  2024-10-23 11:59           ` [v5 19/42] bus/fslmc: fix coverity issue vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 21/42] bus/fslmc: introduce VFIO DMA mapping API for fslmc vanshika.shukla
                             ` (22 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Rename the qbman_eq_desc pointer from 'd' to 'desc' to avoid redefining
a variable of the same name.
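
A contrived illustration of the shadowing hazard behind the rename; the
struct and function names here are stand-ins, not the qbman code itself:

struct eq_desc { unsigned int dca; };

static void
enqueue_multiple(struct eq_desc *d, void *ring_slot, int set_dca)
{
	if (set_dca) {
		/* The inner pointer used to be named 'd' as well,
		 * shadowing the parameter above; 'desc' keeps the two
		 * descriptors distinct.
		 */
		struct eq_desc *desc = (struct eq_desc *)ring_slot;

		desc->dca = 1;
	}
	(void)d;
}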

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 3fdca9761d..5d0cedc136 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1008,9 +1008,9 @@ static int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
 				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
 		p[0] = cl[0] | s->eqcr.pi_vb;
 		if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
-			struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+			struct qbman_eq_desc *desc = (struct qbman_eq_desc *)p;
 
-			d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+			desc->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
 				((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
 		}
 		eqcr_pi++;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 21/42] bus/fslmc: introduce VFIO DMA mapping API for fslmc
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (19 preceding siblings ...)
  2024-10-23 11:59           ` [v5 20/42] bus/fslmc: change qbman eq desc from d to desc vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 22/42] net/dpaa2: change miss flow ID macro name vanshika.shukla
                             ` (21 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Declare rte_fslmc_vfio_mem_dmamap and rte_fslmc_vfio_mem_dmaunmap
in bus_fslmc_driver.h for external usage.
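
With the declarations public, a driver outside the bus (such as the
mempool driver touched below) can map a region it manages itself. A
minimal sketch of such a caller, assuming only the two prototypes added
here; map_private_region is an illustrative name:

#include <bus_fslmc_driver.h>

/* Map one privately managed region for DMA, use it, then unmap.
 * 'va', 'iova' and 'len' must describe memory the caller owns.
 */
static int
map_private_region(uint64_t va, uint64_t iova, uint64_t len)
{
	int ret;

	ret = rte_fslmc_vfio_mem_dmamap(va, iova, len);
	if (ret)
		return ret;

	/* ... issue DMA against the mapped region ... */

	return rte_fslmc_vfio_mem_dmaunmap(iova, len);
}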

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/bus/fslmc/bus_fslmc_driver.h     | 7 ++++++-
 drivers/bus/fslmc/fslmc_bus.c            | 2 +-
 drivers/bus/fslmc/fslmc_vfio.c           | 3 ++-
 drivers/bus/fslmc/fslmc_vfio.h           | 7 +------
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 2 +-
 5 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/bus/fslmc/bus_fslmc_driver.h b/drivers/bus/fslmc/bus_fslmc_driver.h
index 777ab24c10..1d4ce4785f 100644
--- a/drivers/bus/fslmc/bus_fslmc_driver.h
+++ b/drivers/bus/fslmc/bus_fslmc_driver.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2016,2021 NXP
+ *   Copyright 2016,2021-2023 NXP
  *
  */
 
@@ -135,6 +135,11 @@ struct rte_dpaa2_object {
 	rte_dpaa2_obj_close_t close;
 };
 
+int
+rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
+int
+rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size);
+
 /**
  * A structure describing a DPAA2 driver.
  */
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 107cc70833..fda0a4206d 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -438,7 +438,7 @@ rte_fslmc_probe(void)
 	 * install callback handler.
 	 */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		ret = rte_fslmc_vfio_dmamap();
+		ret = fslmc_vfio_dmamap();
 		if (ret) {
 			DPAA2_BUS_ERR("Unable to DMA map existing VAs: (%d)",
 				      ret);
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 5b382d93e4..b9fa1a30b5 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -1154,7 +1154,8 @@ rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
 	return fslmc_unmap_dma(0, iova, size);
 }
 
-int rte_fslmc_vfio_dmamap(void)
+int
+fslmc_vfio_dmamap(void)
 {
 	int i = 0, ret;
 
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 1695b6c078..815970ec38 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -60,10 +60,5 @@ int fslmc_vfio_process_group(void);
 int fslmc_vfio_close_group(void);
 char *fslmc_get_container(void);
 int fslmc_get_container_group(const char *group_name, int *gropuid);
-int rte_fslmc_vfio_dmamap(void);
-int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova,
-		uint64_t size);
-int rte_fslmc_vfio_mem_dmaunmap(uint64_t iova,
-		uint64_t size);
-
+int fslmc_vfio_dmamap(void);
 #endif /* _FSLMC_VFIO_H_ */
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 886fb7fbb0..c054988513 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -23,7 +23,7 @@
 #include <dev_driver.h>
 #include "rte_dpaa2_mempool.h"
 
-#include "fslmc_vfio.h"
+#include <bus_fslmc_driver.h>
 #include <fslmc_logs.h>
 #include <mc/fsl_dpbp.h>
 #include <portal/dpaa2_hw_pvt.h>
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 22/42] net/dpaa2: change miss flow ID macro name
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (20 preceding siblings ...)
  2024-10-23 11:59           ` [v5 21/42] bus/fslmc: introduce VFIO DMA mapping API for fslmc vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 23/42] net/dpaa2: flow API refactor vanshika.shukla
                             ` (20 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Stop using the DPNI_FS_MISS_DROP macro as the default miss flow ID,
since the macro name conflicts with the enum value. Set the default
miss flow ID to 0 instead.
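
The default of 0 can still be overridden through the
DPAA2_FLOW_CONTROL_MISS_FLOW environment variable, as the hunk below
shows. A reduced sketch of that override-and-validate step, where
dist_queues stands in for the per-port distribution queue count:

#include <stdint.h>
#include <stdlib.h>

static uint16_t miss_flow_id;	/* default miss flow id is 0 */

static int
resolve_miss_flow_id(uint16_t dist_queues)
{
	const char *env = getenv("DPAA2_FLOW_CONTROL_MISS_FLOW");

	if (env)
		miss_flow_id = (uint16_t)atoi(env);

	/* Reject ids beyond the distribution queue range. */
	if (miss_flow_id >= dist_queues)
		return -1;

	return 0;
}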

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 77367aa392..b7f1f974c6 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -30,8 +30,7 @@
 int mc_l4_port_identification;
 
 static char *dpaa2_flow_control_log;
-static uint16_t dpaa2_flow_miss_flow_id =
-	DPNI_FS_MISS_DROP;
+static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
 #define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
 
@@ -3990,7 +3989,7 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
 		struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 		dpaa2_flow_miss_flow_id =
-			atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
+			(uint16_t)atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
 		if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
 			DPAA2_PMD_ERR(
 				"The missed flow ID %d exceeds the max flow ID %d",
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 23/42] net/dpaa2: flow API refactor
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (21 preceding siblings ...)
  2024-10-23 11:59           ` [v5 22/42] net/dpaa2: change miss flow ID macro name vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-11-09 19:01             ` Thomas Monjalon
  2024-10-23 11:59           ` [v5 24/42] net/dpaa2: dump Rx parser result vanshika.shukla
                             ` (19 subsequent siblings)
  42 siblings, 1 reply; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

1) Gather redundant code with the same logic from various protocol
   handlers into common functions.
2) struct dpaa2_key_profile describes each extract's offset and size
   within the rule, which makes it easy to insert a new extract before
   the IP address extract (see the sketch after this list).
3) The IP address profile describes the IPv4/IPv6 address extracts
   located at the end of the rule.
4) The L4 ports profile describes the positions and offsets of the
   ports within the rule.
5) Once the extracts of the QoS/FS table are updated, go through all
   the existing flows of this table to update the rule data.
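
A rough sketch of the bookkeeping behind point 2: per-extract offsets
are cumulative, so inserting an extract ahead of the trailing IP address
extract only means shifting the later offsets. The structure below is a
simplification of dpaa2_key_profile; MAX_EXTRACTS and profile_insert are
illustrative:

#include <stdint.h>

#define MAX_EXTRACTS 8	/* stands in for DPKG_MAX_NUM_OF_EXTRACTS */

struct key_profile {
	uint8_t num;
	uint8_t key_offset[MAX_EXTRACTS];
	uint8_t key_size[MAX_EXTRACTS];
};

/* Insert an extract of 'size' bytes at position 'pos', pushing the
 * offset of every later extract (e.g. the IP address extract kept at
 * the end of the rule) up by 'size'.
 */
static int
profile_insert(struct key_profile *kp, uint8_t pos, uint8_t size)
{
	uint8_t new_off;
	int i;

	if (kp->num >= MAX_EXTRACTS || pos > kp->num)
		return -1;

	/* New slot starts where the previous extract ends. */
	new_off = pos ? kp->key_offset[pos - 1] + kp->key_size[pos - 1] : 0;

	for (i = kp->num; i > pos; i--) {
		kp->key_offset[i] = kp->key_offset[i - 1] + size;
		kp->key_size[i] = kp->key_size[i - 1];
	}

	kp->key_offset[pos] = new_off;
	kp->key_size[pos] = size;
	kp->num++;

	return 0;
}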

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c |   27 +-
 drivers/net/dpaa2/dpaa2_ethdev.h |   90 +-
 drivers/net/dpaa2/dpaa2_flow.c   | 4839 ++++++++++++------------------
 3 files changed, 2030 insertions(+), 2926 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index bd6a578e30..e55de5b614 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2808,39 +2808,20 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	/* Init fields w.r.t. classification */
 	memset(&priv->extract.qos_key_extract, 0,
 		sizeof(struct dpaa2_key_extract));
-	priv->extract.qos_extract_param = (size_t)rte_malloc(NULL, 256, 64);
+	priv->extract.qos_extract_param = rte_malloc(NULL, 256, 64);
 	if (!priv->extract.qos_extract_param) {
-		DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow "
-			    " classification ", ret);
+		DPAA2_PMD_ERR("Memory alloc failed");
 		goto init_err;
 	}
-	priv->extract.qos_key_extract.key_info.ipv4_src_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv4_dst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv6_src_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	priv->extract.qos_key_extract.key_info.ipv6_dst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
 
 	for (i = 0; i < MAX_TCS; i++) {
 		memset(&priv->extract.tc_key_extract[i], 0,
 			sizeof(struct dpaa2_key_extract));
-		priv->extract.tc_extract_param[i] =
-			(size_t)rte_malloc(NULL, 256, 64);
+		priv->extract.tc_extract_param[i] = rte_malloc(NULL, 256, 64);
 		if (!priv->extract.tc_extract_param[i]) {
-			DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow classification",
-				     ret);
+			DPAA2_PMD_ERR("Memory alloc failed");
 			goto init_err;
 		}
-		priv->extract.tc_key_extract[i].key_info.ipv4_src_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv4_dst_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv6_src_offset =
-			IP_ADDRESS_OFFSET_INVALID;
-		priv->extract.tc_key_extract[i].key_info.ipv6_dst_offset =
-			IP_ADDRESS_OFFSET_INVALID;
 	}
 
 	ret = dpni_set_max_frame_length(dpni_dev, CMD_PRI_LOW, priv->token,
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 6625afaba3..ea1c1b5117 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -145,14 +145,6 @@ extern bool dpaa2_enable_ts[];
 extern uint64_t dpaa2_timestamp_rx_dynflag;
 extern int dpaa2_timestamp_dynfield_offset;
 
-#define DPAA2_QOS_TABLE_RECONFIGURE	1
-#define DPAA2_FS_TABLE_RECONFIGURE	2
-
-#define DPAA2_QOS_TABLE_IPADDR_EXTRACT 4
-#define DPAA2_FS_TABLE_IPADDR_EXTRACT 8
-
-#define DPAA2_FLOW_MAX_KEY_SIZE		16
-
 /* Externally defined */
 extern const struct rte_flow_ops dpaa2_flow_ops;
 
@@ -160,29 +152,85 @@ extern const struct rte_tm_ops dpaa2_tm_ops;
 
 extern bool dpaa2_enable_err_queue;
 
-#define IP_ADDRESS_OFFSET_INVALID (-1)
+struct ipv4_sd_addr_extract_rule {
+	uint32_t ipv4_src;
+	uint32_t ipv4_dst;
+};
+
+struct ipv6_sd_addr_extract_rule {
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+};
 
-struct dpaa2_key_info {
+struct ipv4_ds_addr_extract_rule {
+	uint32_t ipv4_dst;
+	uint32_t ipv4_src;
+};
+
+struct ipv6_ds_addr_extract_rule {
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+union ip_addr_extract_rule {
+	struct ipv4_sd_addr_extract_rule ipv4_sd_addr;
+	struct ipv6_sd_addr_extract_rule ipv6_sd_addr;
+	struct ipv4_ds_addr_extract_rule ipv4_ds_addr;
+	struct ipv6_ds_addr_extract_rule ipv6_ds_addr;
+};
+
+union ip_src_addr_extract_rule {
+	uint32_t ipv4_src;
+	uint8_t ipv6_src[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+union ip_dst_addr_extract_rule {
+	uint32_t ipv4_dst;
+	uint8_t ipv6_dst[NH_FLD_IPV6_ADDR_SIZE];
+};
+
+enum ip_addr_extract_type {
+	IP_NONE_ADDR_EXTRACT,
+	IP_SRC_EXTRACT,
+	IP_DST_EXTRACT,
+	IP_SRC_DST_EXTRACT,
+	IP_DST_SRC_EXTRACT
+};
+
+struct key_prot_field {
+	enum net_prot prot;
+	uint32_t key_field;
+};
+
+struct dpaa2_key_profile {
+	uint8_t num;
 	uint8_t key_offset[DPKG_MAX_NUM_OF_EXTRACTS];
 	uint8_t key_size[DPKG_MAX_NUM_OF_EXTRACTS];
-	/* Special for IP address. */
-	int ipv4_src_offset;
-	int ipv4_dst_offset;
-	int ipv6_src_offset;
-	int ipv6_dst_offset;
-	uint8_t key_total_size;
+
+	enum ip_addr_extract_type ip_addr_type;
+	uint8_t ip_addr_extract_pos;
+	uint8_t ip_addr_extract_off;
+
+	uint8_t l4_src_port_present;
+	uint8_t l4_src_port_pos;
+	uint8_t l4_src_port_offset;
+	uint8_t l4_dst_port_present;
+	uint8_t l4_dst_port_pos;
+	uint8_t l4_dst_port_offset;
+	struct key_prot_field prot_field[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint16_t key_max_size;
 };
 
 struct dpaa2_key_extract {
 	struct dpkg_profile_cfg dpkg;
-	struct dpaa2_key_info key_info;
+	struct dpaa2_key_profile key_profile;
 };
 
 struct extract_s {
 	struct dpaa2_key_extract qos_key_extract;
 	struct dpaa2_key_extract tc_key_extract[MAX_TCS];
-	uint64_t qos_extract_param;
-	uint64_t tc_extract_param[MAX_TCS];
+	uint8_t *qos_extract_param;
+	uint8_t *tc_extract_param[MAX_TCS];
 };
 
 struct dpaa2_dev_priv {
@@ -233,7 +281,8 @@ struct dpaa2_dev_priv {
 	/* Stores correction offset for one step timestamping */
 	uint16_t ptp_correction_offset;
 
-	LIST_HEAD(, rte_flow) flows; /**< Configured flow rule handles. */
+	struct dpaa2_dev_flow *curr;
+	LIST_HEAD(, dpaa2_dev_flow) flows;
 	LIST_HEAD(nodes, dpaa2_tm_node) nodes;
 	LIST_HEAD(shaper_profiles, dpaa2_tm_shaper_profile) shaper_profiles;
 };
@@ -292,7 +341,6 @@ uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
 void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci, struct dpaa2_queue *dpaa2_q);
 void dpaa2_flow_clean(struct rte_eth_dev *dev);
 uint16_t dpaa2_dev_tx_conf(void *queue)  __rte_unused;
-int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev);
 
 int dpaa2_timesync_enable(struct rte_eth_dev *dev);
 int dpaa2_timesync_disable(struct rte_eth_dev *dev);
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index b7f1f974c6..9e03ad5401 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2022 NXP
  */
 
 #include <sys/queue.h>
@@ -27,41 +27,40 @@
  * MC/WRIOP are not able to identify
  * the l4 protocol with l4 ports.
  */
-int mc_l4_port_identification;
+static int mc_l4_port_identification;
 
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
-#define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
-
-enum flow_rule_ipaddr_type {
-	FLOW_NONE_IPADDR,
-	FLOW_IPV4_ADDR,
-	FLOW_IPV6_ADDR
+enum dpaa2_flow_entry_size {
+	DPAA2_FLOW_ENTRY_MIN_SIZE = (DPNI_MAX_KEY_SIZE / 2),
+	DPAA2_FLOW_ENTRY_MAX_SIZE = DPNI_MAX_KEY_SIZE
 };
 
-struct flow_rule_ipaddr {
-	enum flow_rule_ipaddr_type ipaddr_type;
-	int qos_ipsrc_offset;
-	int qos_ipdst_offset;
-	int fs_ipsrc_offset;
-	int fs_ipdst_offset;
+enum dpaa2_flow_dist_type {
+	DPAA2_FLOW_QOS_TYPE = 1 << 0,
+	DPAA2_FLOW_FS_TYPE = 1 << 1
 };
 
-struct rte_flow {
-	LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
+#define DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT	16
+#define DPAA2_FLOW_MAX_KEY_SIZE			16
+
+struct dpaa2_dev_flow {
+	LIST_ENTRY(dpaa2_dev_flow) next;
 	struct dpni_rule_cfg qos_rule;
+	uint8_t *qos_key_addr;
+	uint8_t *qos_mask_addr;
+	uint16_t qos_rule_size;
 	struct dpni_rule_cfg fs_rule;
 	uint8_t qos_real_key_size;
 	uint8_t fs_real_key_size;
+	uint8_t *fs_key_addr;
+	uint8_t *fs_mask_addr;
+	uint16_t fs_rule_size;
 	uint8_t tc_id; /** Traffic Class ID. */
 	uint8_t tc_index; /** index within this Traffic Class. */
-	enum rte_flow_action_type action;
-	/* Special for IP address to specify the offset
-	 * in key/mask.
-	 */
-	struct flow_rule_ipaddr ipaddr_rule;
-	struct dpni_fs_action_cfg action_cfg;
+	enum rte_flow_action_type action_type;
+	struct dpni_fs_action_cfg fs_action_cfg;
 };
 
 static const
@@ -94,9 +93,6 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
 	RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
 };
 
-/* Max of enum rte_flow_item_type + 1, for both IPv4 and IPv6*/
-#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1)
-
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
 	.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
@@ -151,11 +147,12 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
-
 #endif
 
-static inline void dpaa2_prot_field_string(
-	enum net_prot prot, uint32_t field,
+#define DPAA2_FLOW_DUMP printf
+
+static inline void
+dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 	char *string)
 {
 	if (!dpaa2_flow_control_log)
@@ -230,60 +227,84 @@ static inline void dpaa2_prot_field_string(
 	}
 }
 
-static inline void dpaa2_flow_qos_table_extracts_log(
-	const struct dpaa2_dev_priv *priv, FILE *f)
+static inline void
+dpaa2_flow_qos_extracts_log(const struct dpaa2_dev_priv *priv)
 {
 	int idx;
 	char string[32];
+	const struct dpkg_profile_cfg *dpkg =
+		&priv->extract.qos_key_extract.dpkg;
+	const struct dpkg_extract *extract;
+	enum dpkg_extract_type type;
+	enum net_prot prot;
+	uint32_t field;
 
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "Setup QoS table: number of extracts: %d\r\n",
-			priv->extract.qos_key_extract.dpkg.num_extracts);
-	for (idx = 0; idx < priv->extract.qos_key_extract.dpkg.num_extracts;
-		idx++) {
-		dpaa2_prot_field_string(priv->extract.qos_key_extract.dpkg
-			.extracts[idx].extract.from_hdr.prot,
-			priv->extract.qos_key_extract.dpkg.extracts[idx]
-			.extract.from_hdr.field,
-			string);
-		fprintf(f, "%s", string);
-		if ((idx + 1) < priv->extract.qos_key_extract.dpkg.num_extracts)
-			fprintf(f, " / ");
-	}
-	fprintf(f, "\r\n");
+	DPAA2_FLOW_DUMP("QoS table: %d extracts\r\n",
+		dpkg->num_extracts);
+	for (idx = 0; idx < dpkg->num_extracts; idx++) {
+		extract = &dpkg->extracts[idx];
+		type = extract->type;
+		if (type == DPKG_EXTRACT_FROM_HDR) {
+			prot = extract->extract.from_hdr.prot;
+			field = extract->extract.from_hdr.field;
+			dpaa2_prot_field_string(prot, field,
+				string);
+		} else if (type == DPKG_EXTRACT_FROM_DATA) {
+			snprintf(string, sizeof(string),
+				"raw offset/len: %d/%d",
+				extract->extract.from_data.offset,
+				extract->extract.from_data.size);
+		}
+		DPAA2_FLOW_DUMP("%s", string);
+		if ((idx + 1) < dpkg->num_extracts)
+			DPAA2_FLOW_DUMP(" / ");
+	}
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_fs_table_extracts_log(
-	const struct dpaa2_dev_priv *priv, int tc_id, FILE *f)
+static inline void
+dpaa2_flow_fs_extracts_log(const struct dpaa2_dev_priv *priv,
+	int tc_id)
 {
 	int idx;
 	char string[32];
+	const struct dpkg_profile_cfg *dpkg =
+		&priv->extract.tc_key_extract[tc_id].dpkg;
+	const struct dpkg_extract *extract;
+	enum dpkg_extract_type type;
+	enum net_prot prot;
+	uint32_t field;
 
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "Setup FS table: number of extracts of TC[%d]: %d\r\n",
-			tc_id, priv->extract.tc_key_extract[tc_id]
-			.dpkg.num_extracts);
-	for (idx = 0; idx < priv->extract.tc_key_extract[tc_id]
-		.dpkg.num_extracts; idx++) {
-		dpaa2_prot_field_string(priv->extract.tc_key_extract[tc_id]
-			.dpkg.extracts[idx].extract.from_hdr.prot,
-			priv->extract.tc_key_extract[tc_id].dpkg.extracts[idx]
-			.extract.from_hdr.field,
-			string);
-		fprintf(f, "%s", string);
-		if ((idx + 1) < priv->extract.tc_key_extract[tc_id]
-			.dpkg.num_extracts)
-			fprintf(f, " / ");
-	}
-	fprintf(f, "\r\n");
+	DPAA2_FLOW_DUMP("FS table: %d extracts in TC[%d]\r\n",
+		dpkg->num_extracts, tc_id);
+	for (idx = 0; idx < dpkg->num_extracts; idx++) {
+		extract = &dpkg->extracts[idx];
+		type = extract->type;
+		if (type == DPKG_EXTRACT_FROM_HDR) {
+			prot = extract->extract.from_hdr.prot;
+			field = extract->extract.from_hdr.field;
+			dpaa2_prot_field_string(prot, field,
+				string);
+		} else if (type == DPKG_EXTRACT_FROM_DATA) {
+			snprintf(string, sizeof(string),
+				"raw offset/len: %d/%d",
+				extract->extract.from_data.offset,
+				extract->extract.from_data.size);
+		}
+		DPAA2_FLOW_DUMP("%s", string);
+		if ((idx + 1) < dpkg->num_extracts)
+			DPAA2_FLOW_DUMP(" / ");
+	}
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_qos_entry_log(
-	const char *log_info, const struct rte_flow *flow, int qos_index, FILE *f)
+static inline void
+dpaa2_flow_qos_entry_log(const char *log_info,
+	const struct dpaa2_dev_flow *flow, int qos_index)
 {
 	int idx;
 	uint8_t *key, *mask;
@@ -291,27 +312,34 @@ static inline void dpaa2_flow_qos_entry_log(
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "\r\n%s QoS entry[%d] for TC[%d], extracts size is %d\r\n",
-		log_info, qos_index, flow->tc_id, flow->qos_real_key_size);
-
-	key = (uint8_t *)(size_t)flow->qos_rule.key_iova;
-	mask = (uint8_t *)(size_t)flow->qos_rule.mask_iova;
+	if (qos_index >= 0) {
+		DPAA2_FLOW_DUMP("%s QoS entry[%d](size %d/%d) for TC[%d]\r\n",
+			log_info, qos_index, flow->qos_rule_size,
+			flow->qos_rule.key_size,
+			flow->tc_id);
+	} else {
+		DPAA2_FLOW_DUMP("%s QoS entry(size %d/%d) for TC[%d]\r\n",
+			log_info, flow->qos_rule_size,
+			flow->qos_rule.key_size,
+			flow->tc_id);
+	}
 
-	fprintf(f, "key:\r\n");
-	for (idx = 0; idx < flow->qos_real_key_size; idx++)
-		fprintf(f, "%02x ", key[idx]);
+	key = flow->qos_key_addr;
+	mask = flow->qos_mask_addr;
 
-	fprintf(f, "\r\nmask:\r\n");
-	for (idx = 0; idx < flow->qos_real_key_size; idx++)
-		fprintf(f, "%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("key:\r\n");
+	for (idx = 0; idx < flow->qos_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", key[idx]);
 
-	fprintf(f, "\r\n%s QoS ipsrc: %d, ipdst: %d\r\n", log_info,
-		flow->ipaddr_rule.qos_ipsrc_offset,
-		flow->ipaddr_rule.qos_ipdst_offset);
+	DPAA2_FLOW_DUMP("\r\nmask:\r\n");
+	for (idx = 0; idx < flow->qos_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("\r\n");
 }
 
-static inline void dpaa2_flow_fs_entry_log(
-	const char *log_info, const struct rte_flow *flow, FILE *f)
+static inline void
+dpaa2_flow_fs_entry_log(const char *log_info,
+	const struct dpaa2_dev_flow *flow)
 {
 	int idx;
 	uint8_t *key, *mask;
@@ -319,187 +347,432 @@ static inline void dpaa2_flow_fs_entry_log(
 	if (!dpaa2_flow_control_log)
 		return;
 
-	fprintf(f, "\r\n%s FS/TC entry[%d] of TC[%d], extracts size is %d\r\n",
-		log_info, flow->tc_index, flow->tc_id, flow->fs_real_key_size);
+	DPAA2_FLOW_DUMP("%s FS/TC entry[%d](size %d/%d) of TC[%d]\r\n",
+		log_info, flow->tc_index,
+		flow->fs_rule_size, flow->fs_rule.key_size,
+		flow->tc_id);
+
+	key = flow->fs_key_addr;
+	mask = flow->fs_mask_addr;
+
+	DPAA2_FLOW_DUMP("key:\r\n");
+	for (idx = 0; idx < flow->fs_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", key[idx]);
+
+	DPAA2_FLOW_DUMP("\r\nmask:\r\n");
+	for (idx = 0; idx < flow->fs_rule_size; idx++)
+		DPAA2_FLOW_DUMP("%02x ", mask[idx]);
+	DPAA2_FLOW_DUMP("\r\n");
+}
 
-	key = (uint8_t *)(size_t)flow->fs_rule.key_iova;
-	mask = (uint8_t *)(size_t)flow->fs_rule.mask_iova;
+static int
+dpaa2_flow_ip_address_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_IPV4 &&
+		(field == NH_FLD_IPV4_SRC_IP ||
+		field == NH_FLD_IPV4_DST_IP))
+		return true;
+	else if (prot == NET_PROT_IPV6 &&
+		(field == NH_FLD_IPV6_SRC_IP ||
+		field == NH_FLD_IPV6_DST_IP))
+		return true;
+	else if (prot == NET_PROT_IP &&
+		(field == NH_FLD_IP_SRC ||
+		field == NH_FLD_IP_DST))
+		return true;
 
-	fprintf(f, "key:\r\n");
-	for (idx = 0; idx < flow->fs_real_key_size; idx++)
-		fprintf(f, "%02x ", key[idx]);
+	return false;
+}
 
-	fprintf(f, "\r\nmask:\r\n");
-	for (idx = 0; idx < flow->fs_real_key_size; idx++)
-		fprintf(f, "%02x ", mask[idx]);
+static int
+dpaa2_flow_l4_src_port_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_TCP &&
+		field == NH_FLD_TCP_PORT_SRC)
+		return true;
+	else if (prot == NET_PROT_UDP &&
+		field == NH_FLD_UDP_PORT_SRC)
+		return true;
+	else if (prot == NET_PROT_SCTP &&
+		field == NH_FLD_SCTP_PORT_SRC)
+		return true;
+
+	return false;
+}
 
-	fprintf(f, "\r\n%s FS ipsrc: %d, ipdst: %d\r\n", log_info,
-		flow->ipaddr_rule.fs_ipsrc_offset,
-		flow->ipaddr_rule.fs_ipdst_offset);
+static int
+dpaa2_flow_l4_dst_port_extract(enum net_prot prot,
+	uint32_t field)
+{
+	if (prot == NET_PROT_TCP &&
+		field == NH_FLD_TCP_PORT_DST)
+		return true;
+	else if (prot == NET_PROT_UDP &&
+		field == NH_FLD_UDP_PORT_DST)
+		return true;
+	else if (prot == NET_PROT_SCTP &&
+		field == NH_FLD_SCTP_PORT_DST)
+		return true;
+
+	return false;
 }
 
-static inline void dpaa2_flow_extract_key_set(
-	struct dpaa2_key_info *key_info, int index, uint8_t size)
+static int
+dpaa2_flow_add_qos_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow)
 {
-	key_info->key_size[index] = size;
-	if (index > 0) {
-		key_info->key_offset[index] =
-			key_info->key_offset[index - 1] +
-			key_info->key_size[index - 1];
-	} else {
-		key_info->key_offset[index] = 0;
+	uint16_t qos_index;
+	int ret;
+	struct fsl_mc_io *dpni = priv->hw;
+
+	if (priv->num_rx_tc <= 1 &&
+		flow->action_type != RTE_FLOW_ACTION_TYPE_RSS) {
+		DPAA2_PMD_WARN("No QoS Table for FS");
+		return -EINVAL;
 	}
-	key_info->key_total_size += size;
+
+	/* A QoS entry is only effective when multiple TCs are in use. */
+	qos_index = flow->tc_id * priv->fs_entries + flow->tc_index;
+	if (qos_index >= priv->qos_entries) {
+		DPAA2_PMD_ERR("QoS table full(%d >= %d)",
+			qos_index, priv->qos_entries);
+		return -EINVAL;
+	}
+
+	dpaa2_flow_qos_entry_log("Start add", flow, qos_index);
+
+	ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
+			priv->token, &flow->qos_rule,
+			flow->tc_id, qos_index,
+			0, 0);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("Add entry(%d) to table(%d) failed",
+			qos_index, flow->tc_id);
+		return ret;
+	}
+
+	return 0;
 }
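The QoS table is shared by all traffic classes, so the index packs
(tc_id, tc_index) into one flat space. A worked example, assuming
priv->fs_entries == 16:

    /* tc_id = 2, tc_index = 3, fs_entries = 16 (assumed value):
     * qos_index = 2 * 16 + 3 = 35, which must be < priv->qos_entries.
     */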
 
-static int dpaa2_flow_extract_add(
-	struct dpaa2_key_extract *key_extract,
-	enum net_prot prot,
-	uint32_t field, uint8_t field_size)
+static int
+dpaa2_flow_add_fs_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow)
 {
-	int index, ip_src = -1, ip_dst = -1;
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
+	int ret;
+	struct fsl_mc_io *dpni = priv->hw;
 
-	if (dpkg->num_extracts >=
-		DPKG_MAX_NUM_OF_EXTRACTS) {
-		DPAA2_PMD_WARN("Number of extracts overflows");
-		return -1;
+	if (flow->tc_index >= priv->fs_entries) {
+		DPAA2_PMD_ERR("FS table full(%d >= %d)",
+			flow->tc_index, priv->fs_entries);
+		return -EINVAL;
 	}
-	/* Before reorder, the IP SRC and IP DST are already last
-	 * extract(s).
-	 */
-	for (index = 0; index < dpkg->num_extracts; index++) {
-		if (dpkg->extracts[index].extract.from_hdr.prot ==
-			NET_PROT_IP) {
-			if (dpkg->extracts[index].extract.from_hdr.field ==
-				NH_FLD_IP_SRC) {
-				ip_src = index;
-			}
-			if (dpkg->extracts[index].extract.from_hdr.field ==
-				NH_FLD_IP_DST) {
-				ip_dst = index;
+
+	dpaa2_flow_fs_entry_log("Start add", flow);
+
+	ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
+			priv->token, flow->tc_id,
+			flow->tc_index, &flow->fs_rule,
+			&flow->fs_action_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("Add rule(%d) to FS table(%d) failed",
+			flow->tc_index, flow->tc_id);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_flow_rule_insert_hole(struct dpaa2_dev_flow *flow,
+	int offset, int size,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int end;
+
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		end = flow->qos_rule_size;
+		if (end > offset) {
+			memmove(flow->qos_key_addr + offset + size,
+					flow->qos_key_addr + offset,
+					end - offset);
+			memset(flow->qos_key_addr + offset,
+					0, size);
+
+			memmove(flow->qos_mask_addr + offset + size,
+					flow->qos_mask_addr + offset,
+					end - offset);
+			memset(flow->qos_mask_addr + offset,
+					0, size);
+		}
+		flow->qos_rule_size += size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		end = flow->fs_rule_size;
+		if (end > offset) {
+			memmove(flow->fs_key_addr + offset + size,
+					flow->fs_key_addr + offset,
+					end - offset);
+			memset(flow->fs_key_addr + offset,
+					0, size);
+
+			memmove(flow->fs_mask_addr + offset + size,
+					flow->fs_mask_addr + offset,
+					end - offset);
+			memset(flow->fs_mask_addr + offset,
+					0, size);
+		}
+		flow->fs_rule_size += size;
+	}
+
+	return 0;
+}
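As a standalone sketch (illustrative names, not part of the patch), the
hole insertion above is equivalent to opening a zeroed gap in a byte
array, which is what keeps existing rules aligned when a new extract
lands in front of the IP address section:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Open a zeroed hole of 'size' bytes at 'offset' in a buffer whose
     * current payload length is 'len'; capacity must allow len + size.
     */
    static void
    insert_hole(uint8_t *buf, int len, int offset, int size)
    {
        if (len > offset) {
            memmove(buf + offset + size, buf + offset, len - offset);
            memset(buf + offset, 0, size);
        }
    }

    int
    main(void)
    {
        uint8_t key[8] = { 0xaa, 0xbb, 0xcc, 0xdd };
        int i;

        insert_hole(key, 4, 2, 2);
        /* Prints "aa bb 00 00 cc dd ": the old tail bytes moved back
         * to make room for a new 2-byte extract at offset 2.
         */
        for (i = 0; i < 6; i++)
            printf("%02x ", key[i]);
        printf("\n");
        return 0;
    }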
+
+static int
+dpaa2_flow_rule_add_all(struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type,
+	uint16_t entry_size, uint8_t tc_id)
+{
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	int ret;
+
+	while (curr) {
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			if (priv->num_rx_tc > 1 ||
+				curr->action_type ==
+				RTE_FLOW_ACTION_TYPE_RSS) {
+				curr->qos_rule.key_size = entry_size;
+				ret = dpaa2_flow_add_qos_rule(priv, curr);
+				if (ret)
+					return ret;
 			}
 		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE &&
+			curr->tc_id == tc_id) {
+			curr->fs_rule.key_size = entry_size;
+			ret = dpaa2_flow_add_fs_rule(priv, curr);
+			if (ret)
+				return ret;
+		}
+		curr = LIST_NEXT(curr, next);
 	}
 
-	if (ip_src >= 0)
-		RTE_ASSERT((ip_src + 2) >= dpkg->num_extracts);
+	return 0;
+}
 
-	if (ip_dst >= 0)
-		RTE_ASSERT((ip_dst + 2) >= dpkg->num_extracts);
+static int
+dpaa2_flow_qos_rule_insert_hole(struct dpaa2_dev_priv *priv,
+	int offset, int size)
+{
+	struct dpaa2_dev_flow *curr;
+	int ret;
 
-	if (prot == NET_PROT_IP &&
-		(field == NH_FLD_IP_SRC ||
-		field == NH_FLD_IP_DST)) {
-		index = dpkg->num_extracts;
+	curr = priv->curr;
+	if (!curr) {
+		DPAA2_PMD_ERR("No current QoS flow to insert hole into.");
+		return -EINVAL;
 	} else {
-		if (ip_src >= 0 && ip_dst >= 0)
-			index = dpkg->num_extracts - 2;
-		else if (ip_src >= 0 || ip_dst >= 0)
-			index = dpkg->num_extracts - 1;
-		else
-			index = dpkg->num_extracts;
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	dpkg->extracts[index].type =	DPKG_EXTRACT_FROM_HDR;
-	dpkg->extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-	dpkg->extracts[index].extract.from_hdr.prot = prot;
-	dpkg->extracts[index].extract.from_hdr.field = field;
-	if (prot == NET_PROT_IP &&
-		(field == NH_FLD_IP_SRC ||
-		field == NH_FLD_IP_DST)) {
-		dpaa2_flow_extract_key_set(key_info, index, 0);
+	curr = LIST_FIRST(&priv->flows);
+	while (curr) {
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		curr = LIST_NEXT(curr, next);
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_flow_fs_rule_insert_hole(struct dpaa2_dev_priv *priv,
+	int offset, int size, int tc_id)
+{
+	struct dpaa2_dev_flow *curr;
+	int ret;
+
+	curr = priv->curr;
+	if (!curr || curr->tc_id != tc_id) {
+		DPAA2_PMD_ERR("No current FS flow of this TC to insert hole into.");
+		return -EINVAL;
 	} else {
-		dpaa2_flow_extract_key_set(key_info, index, field_size);
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	if (prot == NET_PROT_IP) {
-		if (field == NH_FLD_IP_SRC) {
-			if (key_info->ipv4_dst_offset >= 0) {
-				key_info->ipv4_src_offset =
-					key_info->ipv4_dst_offset +
-					NH_FLD_IPV4_ADDR_SIZE;
-			} else {
-				key_info->ipv4_src_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-			if (key_info->ipv6_dst_offset >= 0) {
-				key_info->ipv6_src_offset =
-					key_info->ipv6_dst_offset +
-					NH_FLD_IPV6_ADDR_SIZE;
-			} else {
-				key_info->ipv6_src_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-		} else if (field == NH_FLD_IP_DST) {
-			if (key_info->ipv4_src_offset >= 0) {
-				key_info->ipv4_dst_offset =
-					key_info->ipv4_src_offset +
-					NH_FLD_IPV4_ADDR_SIZE;
-			} else {
-				key_info->ipv4_dst_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
-			if (key_info->ipv6_src_offset >= 0) {
-				key_info->ipv6_dst_offset =
-					key_info->ipv6_src_offset +
-					NH_FLD_IPV6_ADDR_SIZE;
-			} else {
-				key_info->ipv6_dst_offset =
-					key_info->key_offset[index - 1] +
-						key_info->key_size[index - 1];
-			}
+	curr = LIST_FIRST(&priv->flows);
+
+	while (curr) {
+		if (curr->tc_id != tc_id) {
+			curr = LIST_NEXT(curr, next);
+			continue;
 		}
+		ret = dpaa2_flow_rule_insert_hole(curr, offset, size,
+				DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		curr = LIST_NEXT(curr, next);
 	}
 
-	if (index == dpkg->num_extracts) {
-		dpkg->num_extracts++;
-		return 0;
+	return 0;
+}
+
+/* Keep the IPv4/IPv6 address extracts at the tail of the key.
+ * Current MC/WRIOP only supports the generic IP extract, whose address
+ * size is not fixed, so the IP addresses must stay at the end of the
+ * extracts; otherwise the positions of the extracts following them
+ * could not be identified. Each new extract is therefore inserted
+ * before the IP address section, shifting the addresses back.
+ */
+static int
+dpaa2_flow_key_profile_advance(enum net_prot prot,
+	uint32_t field, uint8_t field_size,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s is only for non-IP address extracts",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off += field_size;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, field_size);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, field_size, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = field_size;
+	key_profile->prot_field[pos].prot = prot;
+	key_profile->prot_field[pos].key_field = field;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	if (dpaa2_flow_l4_src_port_extract(prot, field)) {
+		key_profile->l4_src_port_present = 1;
+		key_profile->l4_src_port_pos = pos;
+		key_profile->l4_src_port_offset =
+			key_profile->key_offset[pos];
+	} else if (dpaa2_flow_l4_dst_port_extract(prot, field)) {
+		key_profile->l4_dst_port_present = 1;
+		key_profile->l4_dst_port_pos = pos;
+		key_profile->l4_dst_port_offset =
+			key_profile->key_offset[pos];
+	}
+	key_profile->key_max_size += field_size;
+
+	return pos;
+}
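A worked layout example for the reordering (sizes illustrative): suppose
the profile already holds a 2-byte ETH_TYPE extract at offset 0 and an
IPv4 source address section at offset 2. Adding the 1-byte IP_PROTO
extract punches a 1-byte hole at the old address offset and shifts the
address section back:

    /* before:  | ETH_TYPE 0..1 | IPV4_SRC 2..5 |
     * add IP_PROTO (1 byte):
     * after:   | ETH_TYPE 0..1 | IP_PROTO 2 | IPV4_SRC 3..6 |
     *
     * ip_addr_extract_pos: 1 -> 2, ip_addr_extract_off: 2 -> 3, and
     * dpaa2_flow_qos/fs_rule_insert_hole() shifts the key/mask bytes
     * of every existing rule to match.
     */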
+
+static int
+dpaa2_flow_extract_add_hdr(enum net_prot prot,
+	uint32_t field, uint8_t field_size,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s is only for non-IP address extracts",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
 	}
 
-	if (ip_src >= 0) {
-		ip_src++;
-		dpkg->extracts[ip_src].type =
-			DPKG_EXTRACT_FROM_HDR;
-		dpkg->extracts[ip_src].extract.from_hdr.type =
-			DPKG_FULL_FIELD;
-		dpkg->extracts[ip_src].extract.from_hdr.prot =
-			NET_PROT_IP;
-		dpkg->extracts[ip_src].extract.from_hdr.field =
-			NH_FLD_IP_SRC;
-		dpaa2_flow_extract_key_set(key_info, ip_src, 0);
-		key_info->ipv4_src_offset += field_size;
-		key_info->ipv6_src_offset += field_size;
-	}
-	if (ip_dst >= 0) {
-		ip_dst++;
-		dpkg->extracts[ip_dst].type =
-			DPKG_EXTRACT_FROM_HDR;
-		dpkg->extracts[ip_dst].extract.from_hdr.type =
-			DPKG_FULL_FIELD;
-		dpkg->extracts[ip_dst].extract.from_hdr.prot =
-			NET_PROT_IP;
-		dpkg->extracts[ip_dst].extract.from_hdr.field =
-			NH_FLD_IP_DST;
-		dpaa2_flow_extract_key_set(key_info, ip_dst, 0);
-		key_info->ipv4_dst_offset += field_size;
-		key_info->ipv6_dst_offset += field_size;
+	pos = dpaa2_flow_key_profile_advance(prot,
+			field, field_size, priv,
+			dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last position, so IP address extract(s) must follow. */
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
 	}
 
+	extracts[pos].type = DPKG_EXTRACT_FROM_HDR;
+	extracts[pos].extract.from_hdr.prot = prot;
+	extracts[pos].extract.from_hdr.type = DPKG_FULL_FIELD;
+	extracts[pos].extract.from_hdr.field = field;
+
 	dpkg->num_extracts++;
 
 	return 0;
 }
 
-static int dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
-				      int size)
+static int
+dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
+	int size)
 {
 	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
+	struct dpaa2_key_profile *key_info = &key_extract->key_profile;
 	int last_extract_size, index;
 
 	if (dpkg->num_extracts != 0 && dpkg->extracts[0].type !=
@@ -527,83 +800,58 @@ static int dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
 			DPAA2_FLOW_MAX_KEY_SIZE * index;
 	}
 
-	key_info->key_total_size = size;
+	key_info->key_max_size = size;
 	return 0;
 }
 
-/* Protocol discrimination.
- * Discriminate IPv4/IPv6/vLan by Eth type.
- * Discriminate UDP/TCP/ICMP by next proto of IP.
- */
 static inline int
-dpaa2_flow_proto_discrimination_extract(
-	struct dpaa2_key_extract *key_extract,
-	enum rte_flow_item_type type)
+dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t key_field)
 {
-	if (type == RTE_FLOW_ITEM_TYPE_ETH) {
-		return dpaa2_flow_extract_add(
-				key_extract, NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				sizeof(rte_be16_t));
-	} else if (type == (enum rte_flow_item_type)
-		DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
-		return dpaa2_flow_extract_add(
-				key_extract, NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				NH_FLD_IP_PROTO_SIZE);
-	}
-
-	return -1;
-}
+	int pos;
+	struct key_prot_field *prot_field;
 
-static inline int dpaa2_flow_extract_search(
-	struct dpkg_profile_cfg *dpkg,
-	enum net_prot prot, uint32_t field)
-{
-	int i;
+	if (dpaa2_flow_ip_address_extract(prot, key_field)) {
+		DPAA2_PMD_ERR("%s is only for non-IP address extracts",
+			__func__);
+		return -EINVAL;
+	}
 
-	for (i = 0; i < dpkg->num_extracts; i++) {
-		if (dpkg->extracts[i].extract.from_hdr.prot == prot &&
-			dpkg->extracts[i].extract.from_hdr.field == field) {
-			return i;
+	prot_field = key_profile->prot_field;
+	for (pos = 0; pos < key_profile->num; pos++) {
+		if (prot_field[pos].prot == prot &&
+			prot_field[pos].key_field == key_field) {
+			return pos;
 		}
 	}
 
-	return -1;
+	if (dpaa2_flow_l4_src_port_extract(prot, key_field)) {
+		if (key_profile->l4_src_port_present)
+			return key_profile->l4_src_port_pos;
+	} else if (dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
+		if (key_profile->l4_dst_port_present)
+			return key_profile->l4_dst_port_pos;
+	}
+
+	return -ENXIO;
 }
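A hedged usage sketch: TCP/UDP/SCTP ports share one extract slot per
direction, so once one protocol's port extract is present, a lookup for
another protocol's port resolves to the same position:

    /* Assuming NH_FLD_TCP_PORT_SRC was added to the profile first: */
    int pos_tcp = dpaa2_flow_extract_search(key_profile,
            NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
    int pos_udp = dpaa2_flow_extract_search(key_profile,
            NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
    /* pos_udp == pos_tcp == l4_src_port_pos: a UDP rule reuses the
     * TCP extract's key offset instead of adding a new extract.
     */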
 
-static inline int dpaa2_flow_extract_key_offset(
-	struct dpaa2_key_extract *key_extract,
-	enum net_prot prot, uint32_t field)
+static inline int
+dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t key_field)
 {
 	int i;
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_info *key_info = &key_extract->key_info;
 
-	if (prot == NET_PROT_IPV4 ||
-		prot == NET_PROT_IPV6)
-		i = dpaa2_flow_extract_search(dpkg, NET_PROT_IP, field);
+	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
+
+	if (i >= 0)
+		return key_profile->key_offset[i];
 	else
-		i = dpaa2_flow_extract_search(dpkg, prot, field);
-
-	if (i >= 0) {
-		if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_SRC)
-			return key_info->ipv4_src_offset;
-		else if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_DST)
-			return key_info->ipv4_dst_offset;
-		else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_SRC)
-			return key_info->ipv6_src_offset;
-		else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_DST)
-			return key_info->ipv6_dst_offset;
-		else
-			return key_info->key_offset[i];
-	} else {
-		return -1;
-	}
+		return i;
 }
 
-struct proto_discrimination {
-	enum rte_flow_item_type type;
+struct prev_proto_field_id {
+	enum net_prot prot;
 	union {
 		rte_be16_t eth_type;
 		uint8_t ip_proto;
@@ -611,103 +859,134 @@ struct proto_discrimination {
 };
 
 static int
-dpaa2_flow_proto_discrimination_rule(
-	struct dpaa2_dev_priv *priv, struct rte_flow *flow,
-	struct proto_discrimination proto, int group)
+dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct prev_proto_field_id *prev_proto,
+	int group,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	enum net_prot prot;
-	uint32_t field;
 	int offset;
-	size_t key_iova;
-	size_t mask_iova;
+	uint8_t *key_addr;
+	uint8_t *mask_addr;
+	uint32_t field = 0;
 	rte_be16_t eth_type;
 	uint8_t ip_proto;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
 
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		prot = NET_PROT_ETH;
+	if (prev_proto->prot == NET_PROT_ETH) {
 		field = NH_FLD_ETH_TYPE;
-	} else if (proto.type == DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
-		prot = NET_PROT_IP;
+	} else if (prev_proto->prot == NET_PROT_IP) {
 		field = NH_FLD_IP_PROTO;
 	} else {
-		DPAA2_PMD_ERR(
-			"Only Eth and IP support to discriminate next proto.");
-		return -1;
-	}
-
-	offset = dpaa2_flow_extract_key_offset(&priv->extract.qos_key_extract,
-			prot, field);
-	if (offset < 0) {
-		DPAA2_PMD_ERR("QoS prot %d field %d extract failed",
-				prot, field);
-		return -1;
-	}
-	key_iova = flow->qos_rule.key_iova + offset;
-	mask_iova = flow->qos_rule.mask_iova + offset;
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		eth_type = proto.eth_type;
-		memcpy((void *)key_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-		eth_type = 0xffff;
-		memcpy((void *)mask_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-	} else {
-		ip_proto = proto.ip_proto;
-		memcpy((void *)key_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-		ip_proto = 0xff;
-		memcpy((void *)mask_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-	}
-
-	offset = dpaa2_flow_extract_key_offset(
-			&priv->extract.tc_key_extract[group],
-			prot, field);
-	if (offset < 0) {
-		DPAA2_PMD_ERR("FS prot %d field %d extract failed",
-				prot, field);
-		return -1;
+		DPAA2_PMD_ERR("Prev proto(%d) not supported!",
+			prev_proto->prot);
+		return -EINVAL;
 	}
-	key_iova = flow->fs_rule.key_iova + offset;
-	mask_iova = flow->fs_rule.mask_iova + offset;
 
-	if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
-		eth_type = proto.eth_type;
-		memcpy((void *)key_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-		eth_type = 0xffff;
-		memcpy((void *)mask_iova, (const void *)(&eth_type),
-			sizeof(rte_be16_t));
-	} else {
-		ip_proto = proto.ip_proto;
-		memcpy((void *)key_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
-		ip_proto = 0xff;
-		memcpy((void *)mask_iova, (const void *)(&ip_proto),
-			sizeof(uint8_t));
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		key_extract = &priv->extract.qos_key_extract;
+		key_profile = &key_extract->key_profile;
+
+		offset = dpaa2_flow_extract_key_offset(key_profile,
+				prev_proto->prot, field);
+		if (offset < 0) {
+			DPAA2_PMD_ERR("%s QoS key extract failed", __func__);
+			return -EINVAL;
+		}
+		key_addr = flow->qos_key_addr + offset;
+		mask_addr = flow->qos_mask_addr + offset;
+		if (prev_proto->prot == NET_PROT_ETH) {
+			eth_type = prev_proto->eth_type;
+			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
+			eth_type = 0xffff;
+			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
+			flow->qos_rule_size += sizeof(rte_be16_t);
+		} else if (prev_proto->prot == NET_PROT_IP) {
+			ip_proto = prev_proto->ip_proto;
+			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
+			ip_proto = 0xff;
+			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
+			flow->qos_rule_size += sizeof(uint8_t);
+		} else {
+			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
+				prev_proto->prot);
+			return -EINVAL;
+		}
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		key_extract = &priv->extract.tc_key_extract[group];
+		key_profile = &key_extract->key_profile;
+
+		offset = dpaa2_flow_extract_key_offset(key_profile,
+				prev_proto->prot, field);
+		if (offset < 0) {
+			DPAA2_PMD_ERR("%s TC[%d] key extract failed",
+				__func__, group);
+			return -EINVAL;
+		}
+		key_addr = flow->fs_key_addr + offset;
+		mask_addr = flow->fs_mask_addr + offset;
+
+		if (prev_proto->prot == NET_PROT_ETH) {
+			eth_type = prev_proto->eth_type;
+			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
+			eth_type = 0xffff;
+			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
+			flow->fs_rule_size += sizeof(rte_be16_t);
+		} else if (prev_proto->prot == NET_PROT_IP) {
+			ip_proto = prev_proto->ip_proto;
+			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
+			ip_proto = 0xff;
+			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
+			flow->fs_rule_size += sizeof(uint8_t);
+		} else {
+			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
+				prev_proto->prot);
+			return -EINVAL;
+		}
 	}
 
 	return 0;
 }
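For example (a sketch built from the patch's own types, error handling
elided): discriminating IPv4 over Ethernet fills the ETH_TYPE slot of
both tables with a fully masked 0x0800:

    struct prev_proto_field_id prev = {
        .prot = NET_PROT_ETH,
        .eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4),
    };
    int ret = dpaa2_flow_prev_proto_rule(priv, flow, &prev, group,
            DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE);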
 
 static inline int
-dpaa2_flow_rule_data_set(
-	struct dpaa2_key_extract *key_extract,
-	struct dpni_rule_cfg *rule,
-	enum net_prot prot, uint32_t field,
-	const void *key, const void *mask, int size)
+dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	enum net_prot prot, uint32_t field, int size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int offset = dpaa2_flow_extract_key_offset(key_extract,
-				prot, field);
+	int offset;
 
+	if (dpaa2_flow_ip_address_extract(prot, field)) {
+		DPAA2_PMD_ERR("%s is only for non-IP address extracts",
+			__func__);
+		return -EINVAL;
+	}
+
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			prot, field);
 	if (offset < 0) {
-		DPAA2_PMD_ERR("prot %d, field %d extract failed",
+		DPAA2_PMD_ERR("P(%d)/F(%d) does not exist!",
 			prot, field);
-		return -1;
+		return -EINVAL;
 	}
 
-	memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
-	memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, size);
+		memcpy((flow->qos_mask_addr + offset), mask, size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size = offset + size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, size);
+		memcpy((flow->fs_mask_addr + offset), mask, size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size = offset + size;
+	}
 
 	return 0;
 }
@@ -724,145 +1003,13 @@ dpaa2_flow_rule_data_set_raw(struct dpni_rule_cfg *rule,
 	return 0;
 }
 
-static inline int
-_dpaa2_flow_rule_move_ipaddr_tail(
-	struct dpaa2_key_extract *key_extract,
-	struct dpni_rule_cfg *rule, int src_offset,
-	uint32_t field, bool ipv4)
-{
-	size_t key_src;
-	size_t mask_src;
-	size_t key_dst;
-	size_t mask_dst;
-	int dst_offset, len;
-	enum net_prot prot;
-	char tmp[NH_FLD_IPV6_ADDR_SIZE];
-
-	if (field != NH_FLD_IP_SRC &&
-		field != NH_FLD_IP_DST) {
-		DPAA2_PMD_ERR("Field of IP addr reorder must be IP SRC/DST");
-		return -1;
-	}
-	if (ipv4)
-		prot = NET_PROT_IPV4;
-	else
-		prot = NET_PROT_IPV6;
-	dst_offset = dpaa2_flow_extract_key_offset(key_extract,
-				prot, field);
-	if (dst_offset < 0) {
-		DPAA2_PMD_ERR("Field %d reorder extract failed", field);
-		return -1;
-	}
-	key_src = rule->key_iova + src_offset;
-	mask_src = rule->mask_iova + src_offset;
-	key_dst = rule->key_iova + dst_offset;
-	mask_dst = rule->mask_iova + dst_offset;
-	if (ipv4)
-		len = sizeof(rte_be32_t);
-	else
-		len = NH_FLD_IPV6_ADDR_SIZE;
-
-	memcpy(tmp, (char *)key_src, len);
-	memset((char *)key_src, 0, len);
-	memcpy((char *)key_dst, tmp, len);
-
-	memcpy(tmp, (char *)mask_src, len);
-	memset((char *)mask_src, 0, len);
-	memcpy((char *)mask_dst, tmp, len);
-
-	return 0;
-}
-
-static inline int
-dpaa2_flow_rule_move_ipaddr_tail(
-	struct rte_flow *flow, struct dpaa2_dev_priv *priv,
-	int fs_group)
+static int
+dpaa2_flow_extract_support(const uint8_t *mask_src,
+	enum rte_flow_item_type type)
 {
-	int ret;
-	enum net_prot prot;
-
-	if (flow->ipaddr_rule.ipaddr_type == FLOW_NONE_IPADDR)
-		return 0;
-
-	if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR)
-		prot = NET_PROT_IPV4;
-	else
-		prot = NET_PROT_IPV6;
-
-	if (flow->ipaddr_rule.qos_ipsrc_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				flow->ipaddr_rule.qos_ipsrc_offset,
-				NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS src address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_SRC);
-	}
-
-	if (flow->ipaddr_rule.qos_ipdst_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				flow->ipaddr_rule.qos_ipdst_offset,
-				NH_FLD_IP_DST, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS dst address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_DST);
-	}
-
-	if (flow->ipaddr_rule.fs_ipsrc_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.tc_key_extract[fs_group],
-				&flow->fs_rule,
-				flow->ipaddr_rule.fs_ipsrc_offset,
-				NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("FS src address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.fs_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[fs_group],
-				prot, NH_FLD_IP_SRC);
-	}
-	if (flow->ipaddr_rule.fs_ipdst_offset >= 0) {
-		ret = _dpaa2_flow_rule_move_ipaddr_tail(
-				&priv->extract.tc_key_extract[fs_group],
-				&flow->fs_rule,
-				flow->ipaddr_rule.fs_ipdst_offset,
-				NH_FLD_IP_DST, prot == NET_PROT_IPV4);
-		if (ret) {
-			DPAA2_PMD_ERR("FS dst address reorder failed");
-			return -1;
-		}
-		flow->ipaddr_rule.fs_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[fs_group],
-				prot, NH_FLD_IP_DST);
-	}
-
-	return 0;
-}
-
-static int
-dpaa2_flow_extract_support(
-	const uint8_t *mask_src,
-	enum rte_flow_item_type type)
-{
-	char mask[64];
-	int i, size = 0;
-	const char *mask_support = 0;
+	char mask[64];
+	int i, size = 0;
+	const char *mask_support = 0;
 
 	switch (type) {
 	case RTE_FLOW_ITEM_TYPE_ETH:
@@ -902,7 +1049,7 @@ dpaa2_flow_extract_support(
 		size = sizeof(struct rte_flow_item_gre);
 		break;
 	default:
-		return -1;
+		return -EINVAL;
 	}
 
 	memcpy(mask, mask_support, size);
@@ -917,491 +1064,444 @@ dpaa2_flow_extract_support(
 }
 
 static int
-dpaa2_configure_flow_eth(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct prev_proto_field_id *prev_prot,
+	enum dpaa2_flow_dist_type dist_type,
+	int group, int *recfg)
 {
-	int index, ret;
-	int local_cfg = 0;
-	uint32_t group;
-	const struct rte_flow_item_eth *spec, *mask;
-
-	/* TODO: Currently upper bound of range parameter is not implemented */
-	const struct rte_flow_item_eth *last __rte_unused;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
-
-	group = attr->group;
-
-	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_eth *)pattern->spec;
-	last    = (const struct rte_flow_item_eth *)pattern->last;
-	mask    = (const struct rte_flow_item_eth *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_eth_mask);
-	if (!spec) {
-		/* Don't care any field of eth header,
-		 * only care eth protocol.
-		 */
-		DPAA2_PMD_WARN("No pattern spec for Eth flow, just skip");
-		return 0;
-	}
-
-	/* Get traffic class index and flow id to be configured */
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
-
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
-		DPAA2_PMD_WARN("Extract field(s) of ethernet not support.");
-
-		return -1;
-	}
-
-	if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_SA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_SA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_SA failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_SA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_SA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_SA failed.");
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	int ret, index, local_cfg = 0, size = 0;
+	struct dpaa2_key_extract *extract;
+	struct dpaa2_key_profile *key_profile;
+	enum net_prot prot = prev_prot->prot;
+	uint32_t key_field = 0;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH_SA rule set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_SA,
-				&spec->hdr.src_addr.addr_bytes,
-				&mask->hdr.src_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_SA,
-				&spec->hdr.src_addr.addr_bytes,
-				&mask->hdr.src_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
-			return -1;
-		}
+	if (prot == NET_PROT_ETH) {
+		key_field = NH_FLD_ETH_TYPE;
+		size = sizeof(rte_be16_t);
+	} else if (prot == NET_PROT_IP || prot == NET_PROT_IPV4 ||
+		prot == NET_PROT_IPV6) {
+		prot = NET_PROT_IP;
+		key_field = NH_FLD_IP_PROTO;
+		size = sizeof(uint8_t);
+	} else {
+		DPAA2_PMD_ERR("Invalid Prev prot(%d)", prot);
+		return -EINVAL;
 	}
 
-	if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_DA);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_DA,
-					RTE_ETHER_ADDR_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_DA failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		extract = &priv->extract.qos_key_extract;
+		key_profile = &extract->key_profile;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_DA);
+		index = dpaa2_flow_extract_search(key_profile,
+				prot, key_field);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_DA,
-					RTE_ETHER_ADDR_LEN);
+			ret = dpaa2_flow_extract_add_hdr(prot,
+					key_field, size, priv,
+					DPAA2_FLOW_QOS_TYPE, group,
+					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_DA failed.");
+				DPAA2_PMD_ERR("QoS prev extract add failed");
 
-				return -1;
+				return -EINVAL;
 			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH DA rule set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_DA,
-				&spec->hdr.dst_addr.addr_bytes,
-				&mask->hdr.dst_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
-			return -1;
+			local_cfg |= DPAA2_FLOW_QOS_TYPE;
 		}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_DA,
-				&spec->hdr.dst_addr.addr_bytes,
-				&mask->hdr.dst_addr.addr_bytes,
-				sizeof(struct rte_ether_addr));
+		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
-			return -1;
+			DPAA2_PMD_ERR("QoS prev rule set failed");
+			return -EINVAL;
 		}
 	}
 
-	if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ETH, NH_FLD_ETH_TYPE,
-					RTE_ETHER_TYPE_LEN);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ETH_TYPE failed.");
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		extract = &priv->extract.tc_key_extract[group];
+		key_profile = &extract->key_profile;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
+		index = dpaa2_flow_extract_search(key_profile,
+				prot, key_field);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ETH, NH_FLD_ETH_TYPE,
-					RTE_ETHER_TYPE_LEN);
+			ret = dpaa2_flow_extract_add_hdr(prot,
+					key_field, size, priv,
+					DPAA2_FLOW_FS_TYPE, group,
+					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ETH_TYPE failed.");
+				DPAA2_PMD_ERR("FS[%d] prev extract add failed",
+					group);
 
-				return -1;
+				return -EINVAL;
 			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ETH TYPE rule set failed");
-				return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				&spec->hdr.ether_type,
-				&mask->hdr.ether_type,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
-			return -1;
+			local_cfg |= DPAA2_FLOW_FS_TYPE;
 		}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ETH,
-				NH_FLD_ETH_TYPE,
-				&spec->hdr.ether_type,
-				&mask->hdr.ether_type,
-				sizeof(rte_be16_t));
+		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
-			return -1;
+			DPAA2_PMD_ERR("FS[%d] prev rule set failed",
+				group);
+			return -EINVAL;
 		}
 	}
 
-	(*device_configured) |= local_cfg;
+	if (recfg)
+		*recfg = local_cfg;
 
 	return 0;
 }
 
 static int
-dpaa2_configure_flow_vlan(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
+	enum net_prot prot, uint32_t field,
+	const void *key, const void *mask, int size,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int index, ret;
-	int local_cfg = 0;
-	uint32_t group;
-	const struct rte_flow_item_vlan *spec, *mask;
-
-	const struct rte_flow_item_vlan *last __rte_unused;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-
-	group = attr->group;
+	int index, ret, local_cfg = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
 
-	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_vlan *)pattern->spec;
-	last    = (const struct rte_flow_item_vlan *)pattern->last;
-	mask    = (const struct rte_flow_item_vlan *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_vlan_mask);
+	if (dpaa2_flow_ip_address_extract(prot, field))
+		return -EINVAL;
 
-	/* Get traffic class index and flow id to be configured */
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
 
-	if (!spec) {
-		/* Don't care any field of vlan header,
-		 * only care vlan protocol.
-		 */
-		/* Eth type is actually used for vLan classification.
-		 */
-		struct proto_discrimination proto;
+	key_profile = &key_extract->key_profile;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-						&priv->extract.qos_key_extract,
-						RTE_FLOW_ITEM_TYPE_ETH);
-			if (ret) {
-				DPAA2_PMD_ERR(
-				"QoS Ext ETH_TYPE to discriminate vLan failed");
+	index = dpaa2_flow_extract_search(key_profile,
+			prot, field);
+	if (index < 0) {
+		ret = dpaa2_flow_extract_add_hdr(prot,
+				field, size, priv,
+				dist_type, tc_id, NULL);
+		if (ret) {
+			DPAA2_PMD_ERR("Extract P(%d)/F(%d) add failed",
+				prot, field);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+			return ret;
 		}
+		local_cfg |= dist_type;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ETH, NH_FLD_ETH_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					RTE_FLOW_ITEM_TYPE_ETH);
-			if (ret) {
-				DPAA2_PMD_ERR(
-				"FS Ext ETH_TYPE to discriminate vLan failed.");
+	ret = dpaa2_flow_hdr_rule_data_set(flow, key_profile,
+			prot, field, size, key, mask, dist_type);
+	if (ret) {
+		DPAA2_PMD_ERR("P(%d)/F(%d) rule data set failed",
+			prot, field);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		return ret;
+	}
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"Move ipaddr before vLan discrimination set failed");
-			return -1;
-		}
+	if (recfg)
+		*recfg |= local_cfg;
 
-		proto.type = RTE_FLOW_ITEM_TYPE_ETH;
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("vLan discrimination rule set failed");
-			return -1;
-		}
+	return 0;
+}
 
-		(*device_configured) |= local_cfg;
+static int
+dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
+	enum net_prot prot, uint32_t field,
+	const void *key, const void *mask, int size,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int local_cfg = 0, num, ipaddr_extract_len = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
+	struct dpkg_profile_cfg *dpkg;
+	uint8_t *key_addr, *mask_addr;
+	union ip_addr_extract_rule *ip_addr_data;
+	union ip_addr_extract_rule *ip_addr_mask;
+	enum net_prot orig_prot;
+	uint32_t orig_field;
+
+	if (prot != NET_PROT_IPV4 && prot != NET_PROT_IPV6)
+		return -EINVAL;
 
-		return 0;
+	if (prot == NET_PROT_IPV4 && field != NH_FLD_IPV4_SRC_IP &&
+		field != NH_FLD_IPV4_DST_IP) {
+		return -EINVAL;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VLAN)) {
-		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
-
-		return -1;
+	if (prot == NET_PROT_IPV6 && field != NH_FLD_IPV6_SRC_IP &&
+		field != NH_FLD_IPV6_DST_IP) {
+		return -EINVAL;
 	}
 
-	if (!mask->hdr.vlan_tci)
-		return 0;
-
-	index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_VLAN, NH_FLD_VLAN_TCI);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-						&priv->extract.qos_key_extract,
-						NET_PROT_VLAN,
-						NH_FLD_VLAN_TCI,
-						sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract add VLAN_TCI failed.");
+	orig_prot = prot;
+	orig_field = field;
 
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+	if (prot == NET_PROT_IPV4 &&
+		field == NH_FLD_IPV4_SRC_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_SRC;
+	} else if (prot == NET_PROT_IPV4 &&
+		field == NH_FLD_IPV4_DST_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_DST;
+	} else if (prot == NET_PROT_IPV6 &&
+		field == NH_FLD_IPV6_SRC_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_SRC;
+	} else if (prot == NET_PROT_IPV6 &&
+		field == NH_FLD_IPV6_DST_IP) {
+		prot = NET_PROT_IP;
+		field = NH_FLD_IP_DST;
+	} else {
+		DPAA2_PMD_ERR("Invalid P(%d)/F(%d) to extract IP address",
+			prot, field);
+		return -EINVAL;
 	}
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_VLAN, NH_FLD_VLAN_TCI);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.tc_key_extract[group],
-				NET_PROT_VLAN,
-				NH_FLD_VLAN_TCI,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract add VLAN_TCI failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		key_extract = &priv->extract.qos_key_extract;
+		key_profile = &key_extract->key_profile;
+		dpkg = &key_extract->dpkg;
+		num = key_profile->num;
+		key_addr = flow->qos_key_addr;
+		mask_addr = flow->qos_mask_addr;
+	} else {
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+		key_profile = &key_extract->key_profile;
+		dpkg = &key_extract->dpkg;
+		num = key_profile->num;
+		key_addr = flow->fs_key_addr;
+		mask_addr = flow->fs_mask_addr;
 	}
 
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before VLAN TCI rule set failed");
-		return -1;
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_VLAN,
-				NH_FLD_VLAN_TCI,
-				&spec->hdr.vlan_tci,
-				&mask->hdr.vlan_tci,
-				sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
-		return -1;
+	if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT) {
+		if (field == NH_FLD_IP_SRC)
+			key_profile->ip_addr_type = IP_SRC_EXTRACT;
+		else
+			key_profile->ip_addr_type = IP_DST_EXTRACT;
+		ipaddr_extract_len = size;
+
+		key_profile->ip_addr_extract_pos = num;
+		if (num > 0) {
+			key_profile->ip_addr_extract_off =
+				key_profile->key_offset[num - 1] +
+				key_profile->key_size[num - 1];
+		} else {
+			key_profile->ip_addr_extract_off = 0;
+		}
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	} else if (key_profile->ip_addr_type == IP_SRC_EXTRACT) {
+		if (field == NH_FLD_IP_SRC) {
+			ipaddr_extract_len = size;
+			goto rule_configure;
+		}
+		key_profile->ip_addr_type = IP_SRC_DST_EXTRACT;
+		ipaddr_extract_len = size * 2;
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	} else if (key_profile->ip_addr_type == IP_DST_EXTRACT) {
+		if (field == NH_FLD_IP_DST) {
+			ipaddr_extract_len = size;
+			goto rule_configure;
+		}
+		key_profile->ip_addr_type = IP_DST_SRC_EXTRACT;
+		ipaddr_extract_len = size * 2;
+		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
+	}
+	key_profile->num++;
+
+	dpkg->extracts[num].extract.from_hdr.prot = prot;
+	dpkg->extracts[num].extract.from_hdr.field = field;
+	dpkg->extracts[num].extract.from_hdr.type = DPKG_FULL_FIELD;
+	dpkg->num_extracts++;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		local_cfg = DPAA2_FLOW_QOS_TYPE;
+	else
+		local_cfg = DPAA2_FLOW_FS_TYPE;
+
+rule_configure:
+	key_addr += key_profile->ip_addr_extract_off;
+	ip_addr_data = (union ip_addr_extract_rule *)key_addr;
+	mask_addr += key_profile->ip_addr_extract_off;
+	ip_addr_mask = (union ip_addr_extract_rule *)mask_addr;
+
+	if (orig_prot == NET_PROT_IPV4 &&
+		orig_field == NH_FLD_IPV4_SRC_IP) {
+		if (key_profile->ip_addr_type == IP_SRC_EXTRACT ||
+			key_profile->ip_addr_type == IP_SRC_DST_EXTRACT) {
+			memcpy(&ip_addr_data->ipv4_sd_addr.ipv4_src,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_sd_addr.ipv4_src,
+				mask, size);
+		} else {
+			memcpy(&ip_addr_data->ipv4_ds_addr.ipv4_src,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_ds_addr.ipv4_src,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV4 &&
+		orig_field == NH_FLD_IPV4_DST_IP) {
+		if (key_profile->ip_addr_type == IP_DST_EXTRACT ||
+			key_profile->ip_addr_type == IP_DST_SRC_EXTRACT) {
+			memcpy(&ip_addr_data->ipv4_ds_addr.ipv4_dst,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_ds_addr.ipv4_dst,
+				mask, size);
+		} else {
+			memcpy(&ip_addr_data->ipv4_sd_addr.ipv4_dst,
+				key, size);
+			memcpy(&ip_addr_mask->ipv4_sd_addr.ipv4_dst,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV6 &&
+		orig_field == NH_FLD_IPV6_SRC_IP) {
+		if (key_profile->ip_addr_type == IP_SRC_EXTRACT ||
+			key_profile->ip_addr_type == IP_SRC_DST_EXTRACT) {
+			memcpy(ip_addr_data->ipv6_sd_addr.ipv6_src,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_sd_addr.ipv6_src,
+				mask, size);
+		} else {
+			memcpy(ip_addr_data->ipv6_ds_addr.ipv6_src,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_ds_addr.ipv6_src,
+				mask, size);
+		}
+	} else if (orig_prot == NET_PROT_IPV6 &&
+		orig_field == NH_FLD_IPV6_DST_IP) {
+		if (key_profile->ip_addr_type == IP_DST_EXTRACT ||
+			key_profile->ip_addr_type == IP_DST_SRC_EXTRACT) {
+			memcpy(ip_addr_data->ipv6_ds_addr.ipv6_dst,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_ds_addr.ipv6_dst,
+				mask, size);
+		} else {
+			memcpy(ip_addr_data->ipv6_sd_addr.ipv6_dst,
+				key, size);
+			memcpy(ip_addr_mask->ipv6_sd_addr.ipv6_dst,
+				mask, size);
+		}
 	}
 
-	ret = dpaa2_flow_rule_data_set(
-			&priv->extract.tc_key_extract[group],
-			&flow->fs_rule,
-			NET_PROT_VLAN,
-			NH_FLD_VLAN_TCI,
-			&spec->hdr.vlan_tci,
-			&mask->hdr.vlan_tci,
-			sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
-		return -1;
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		flow->qos_rule_size =
+			key_profile->ip_addr_extract_off + ipaddr_extract_len;
+	} else {
+		flow->fs_rule_size =
+			key_profile->ip_addr_extract_off + ipaddr_extract_len;
 	}
 
-	(*device_configured) |= local_cfg;
+	if (recfg)
+		*recfg |= local_cfg;
 
 	return 0;
 }
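
For reference, the rule size computed above is simply the offset of the
tail-placed IP address region plus the bytes actually extracted; keeping
that region at the tail (and reserving NH_FLD_IPV6_ADDR_SIZE per address)
lets IPv4 (4-byte) and IPv6 (16-byte) rules share one key layout. A
standalone sketch of that arithmetic (values are illustrative, not taken
from the driver):

    #include <stdint.h>
    #include <stdio.h>

    #define IPV4_ADDR_LEN 4

    int main(void)
    {
        /* Offset of the reserved IP-address region within the key
         * (example value; tracked as ip_addr_extract_off above). */
        uint16_t ip_addr_extract_off = 6;

        /* Single-address vs. src+dst extract, mirroring
         * IP_SRC_EXTRACT vs. IP_SRC_DST_EXTRACT. */
        uint16_t one  = ip_addr_extract_off + IPV4_ADDR_LEN;
        uint16_t pair = ip_addr_extract_off + 2 * IPV4_ADDR_LEN;

        printf("rule size, src only: %u\n", one);  /* 10 */
        printf("rule size, src+dst:  %u\n", pair); /* 14 */
        return 0;
    }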
 
 static int
-dpaa2_configure_flow_ip_discrimation(
-	struct dpaa2_dev_priv *priv, struct rte_flow *flow,
+dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
-	int *local_cfg,	int *device_configured,
-	uint32_t group)
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	struct proto_discrimination proto;
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_eth *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.qos_key_extract.dpkg,
-			NET_PROT_ETH, NH_FLD_ETH_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.qos_key_extract,
-				RTE_FLOW_ITEM_TYPE_ETH);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"QoS Extract ETH_TYPE to discriminate IP failed.");
-			return -1;
-		}
-		(*local_cfg) |= DPAA2_QOS_TABLE_RECONFIGURE;
-	}
+	group = attr->group;
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_ETH, NH_FLD_ETH_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				RTE_FLOW_ITEM_TYPE_ETH);
-		if (ret) {
-			DPAA2_PMD_ERR(
-			"FS Extract ETH_TYPE to discriminate IP failed.");
-			return -1;
-		}
-		(*local_cfg) |= DPAA2_FS_TABLE_RECONFIGURE;
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+			pattern->mask : &dpaa2_flow_item_eth_mask;
+	if (!spec) {
+		DPAA2_PMD_WARN("No pattern spec for Eth flow");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before IP discrimination set failed");
-		return -1;
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH)) {
+		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
+
+		return -EINVAL;
 	}
 
-	proto.type = RTE_FLOW_ITEM_TYPE_ETH;
-	if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4)
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
-	else
-		proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	ret = dpaa2_flow_proto_discrimination_rule(priv, flow, proto, group);
-	if (ret) {
-		DPAA2_PMD_ERR("IP discrimination rule set failed");
-		return -1;
+	if (memcmp((const char *)&mask->src,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_SA, &spec->src.addr_bytes,
+			&mask->src.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_SA, &spec->src.addr_bytes,
+			&mask->src.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->dst,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_DA, &spec->dst.addr_bytes,
+			&mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_DA, &spec->dst.addr_bytes,
+			&mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->type,
+		zero_cmp, sizeof(rte_be16_t))) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_TYPE, &spec->type,
+			&mask->type, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ETH,
+			NH_FLD_ETH_TYPE, &spec->type,
+			&mask->type, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
-	(*device_configured) |= (*local_cfg);
+	(*device_configured) |= local_cfg;
 
 	return 0;
 }
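
To exercise this ETH path from an application, a minimal sketch
(hypothetical helper name; port id, queue index and MAC are assumed
values) that matches a source MAC and steers hits to queue 0:

    #include <rte_flow.h>

    static struct rte_flow *
    add_eth_src_rule(uint16_t port_id, struct rte_flow_error *err)
    {
        struct rte_flow_attr attr = { .ingress = 1, .group = 0 };
        struct rte_flow_item_eth eth_spec = {
            .src.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
        };
        struct rte_flow_item_eth eth_mask = {
            .src.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
        };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH,
              .spec = &eth_spec, .mask = &eth_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        /* Non-zero mask bytes select which header fields the PMD
         * extracts; zero-mask fields are skipped, as above. */
        return rte_flow_create(port_id, &attr, pattern, actions, err);
    }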
 
-
 static int
-dpaa2_configure_flow_generic_ip(
-	struct rte_flow *flow,
+dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
@@ -1409,419 +1509,338 @@ dpaa2_configure_flow_generic_ip(
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
-	const struct rte_flow_item_ipv4 *spec_ipv4 = 0,
-		*mask_ipv4 = 0;
-	const struct rte_flow_item_ipv6 *spec_ipv6 = 0,
-		*mask_ipv6 = 0;
-	const void *key, *mask;
-	enum net_prot prot;
-
+	const struct rte_flow_item_vlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
-	int size;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
-		spec_ipv4 = (const struct rte_flow_item_ipv4 *)pattern->spec;
-		mask_ipv4 = (const struct rte_flow_item_ipv4 *)
-			(pattern->mask ? pattern->mask :
-					&dpaa2_flow_item_ipv4_mask);
-	} else {
-		spec_ipv6 = (const struct rte_flow_item_ipv6 *)pattern->spec;
-		mask_ipv6 = (const struct rte_flow_item_ipv6 *)
-			(pattern->mask ? pattern->mask :
-					&dpaa2_flow_item_ipv6_mask);
-	}
+	spec = pattern->spec;
+	mask = pattern->mask ? pattern->mask : &dpaa2_flow_item_vlan_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	ret = dpaa2_configure_flow_ip_discrimation(priv,
-			flow, pattern, &local_cfg,
-			device_configured, group);
-	if (ret) {
-		DPAA2_PMD_ERR("IP discrimination failed!");
-		return -1;
+	if (!spec) {
+		struct prev_proto_field_id prev_proto;
+
+		prev_proto.prot = NET_PROT_ETH;
+		prev_proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_proto,
+				DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+				       RTE_FLOW_ITEM_TYPE_VLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
+		return -EINVAL;
 	}
 
-	if (!spec_ipv4 && !spec_ipv6)
+	if (!mask->tci)
 		return 0;
 
-	if (mask_ipv4) {
-		if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
-			RTE_FLOW_ITEM_TYPE_IPV4)) {
-			DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
+					      NH_FLD_VLAN_TCI, &spec->tci,
+					      &mask->tci, sizeof(rte_be16_t),
+					      priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
 
-			return -1;
-		}
-	}
-
-	if (mask_ipv6) {
-		if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
-			RTE_FLOW_ITEM_TYPE_IPV6)) {
-			DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
-
-			return -1;
-		}
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
+					      NH_FLD_VLAN_TCI, &spec->tci,
+					      &mask->tci, sizeof(rte_be16_t),
+					      priv, group, &local_cfg,
+					      DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
 
-	if (mask_ipv4 && (mask_ipv4->hdr.src_addr ||
-		mask_ipv4->hdr.dst_addr)) {
-		flow->ipaddr_rule.ipaddr_type = FLOW_IPV4_ADDR;
-	} else if (mask_ipv6 &&
-		(memcmp(&mask_ipv6->hdr.src_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE) ||
-		memcmp(&mask_ipv6->hdr.dst_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		flow->ipaddr_rule.ipaddr_type = FLOW_IPV6_ADDR;
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.src_addr) ||
-		(mask_ipv6 &&
-			memcmp(&mask_ipv6->hdr.src_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_IP,
-					NH_FLD_IP_SRC,
-					0);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_SRC failed.");
+	(*device_configured) |= local_cfg;
+	return 0;
+}
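
A sketch of the VLAN item this path consumes (TCI value and mask are
example values): a 0x0fff mask matches on VLAN ID only. When the spec is
NULL, the driver instead identifies VLAN traffic by the preceding
EtherType (0x8100), as in the !spec branch above.

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    struct rte_flow_item_vlan vlan_spec = { .tci = RTE_BE16(100) };
    struct rte_flow_item_vlan vlan_mask = { .tci = RTE_BE16(0x0fff) };
    struct rte_flow_item item = {
        .type = RTE_FLOW_ITEM_TYPE_VLAN,
        .spec = &vlan_spec,
        .mask = &vlan_mask,
    };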
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+static int
+dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
+			  const struct rte_flow_attr *attr,
+			  const struct rte_flow_item *pattern,
+			  const struct rte_flow_action actions[] __rte_unused,
+			  struct rte_flow_error *error __rte_unused,
+			  int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ipv4 *spec_ipv4 = NULL, *mask_ipv4 = NULL;
+	const void *key, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	int size;
+	struct prev_proto_field_id prev_prot;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_SRC,
-					0);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_SRC failed.");
+	group = attr->group;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	/* Parse pattern list to get the matching parameters */
+	spec_ipv4 = pattern->spec;
+	mask_ipv4 = pattern->mask ?
+		    pattern->mask : &dpaa2_flow_item_ipv4_mask;
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.src_addr;
-		else
-			key = &spec_ipv6->hdr.src_addr;
-		if (mask_ipv4) {
-			mask = &mask_ipv4->hdr.src_addr;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-			prot = NET_PROT_IPV4;
-		} else {
-			mask = &mask_ipv6->hdr.src_addr;
-			size = NH_FLD_IPV6_ADDR_SIZE;
-			prot = NET_PROT_IPV6;
-		}
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				prot, NH_FLD_IP_SRC,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_SRC rule data set failed");
-			return -1;
-		}
+	prev_prot.prot = NET_PROT_ETH;
+	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				prot, NH_FLD_IP_SRC,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_SRC rule data set failed");
-			return -1;
-		}
+	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE, group,
+			&local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("IPv4 identification failed!");
+		return ret;
+	}
 
-		flow->ipaddr_rule.qos_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_SRC);
-		flow->ipaddr_rule.fs_ipsrc_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[group],
-				prot, NH_FLD_IP_SRC);
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.dst_addr) ||
-		(mask_ipv6 &&
-			memcmp(&mask_ipv6->hdr.dst_addr,
-				zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_DST);
-		if (index < 0) {
-			if (mask_ipv4)
-				size = NH_FLD_IPV4_ADDR_SIZE;
-			else
-				size = NH_FLD_IPV6_ADDR_SIZE;
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_IP,
-					NH_FLD_IP_DST,
-					size);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
+	if (!spec_ipv4)
+		return 0;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
+				       RTE_FLOW_ITEM_TYPE_IPV4)) {
+		DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
+		return -EINVAL;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_DST);
-		if (index < 0) {
-			if (mask_ipv4)
-				size = NH_FLD_IPV4_ADDR_SIZE;
-			else
-				size = NH_FLD_IPV6_ADDR_SIZE;
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_DST,
-					size);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_DST failed.");
+	if (mask_ipv4->hdr.src_addr) {
+		key = &spec_ipv4->hdr.src_addr;
+		mask = &mask_ipv4->hdr.src_addr;
+		size = sizeof(rte_be32_t);
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv4->hdr.dst_addr) {
+		key = &spec_ipv4->hdr.dst_addr;
+		mask = &mask_ipv4->hdr.dst_addr;
+		size = sizeof(rte_be32_t);
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
+							 NH_FLD_IPV4_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv4->hdr.next_proto_id) {
+		key = &spec_ipv4->hdr.next_proto_id;
+		mask = &mask_ipv4->hdr.next_proto_id;
+		size = sizeof(uint8_t);
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	(*device_configured) |= local_cfg;
+	return 0;
+}
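
A sketch of an IPv4 item for this path (addresses are example values):
the destination address lands in the tail IP-address region, while the
protocol byte goes through the plain header-extract path.

    #include <netinet/in.h>
    #include <rte_flow.h>
    #include <rte_ip.h>

    struct rte_flow_item_ipv4 ip4_spec = {
        .hdr.dst_addr = RTE_BE32(RTE_IPV4(192, 0, 2, 1)),
        .hdr.next_proto_id = IPPROTO_UDP,
    };
    struct rte_flow_item_ipv4 ip4_mask = {
        .hdr.dst_addr = RTE_BE32(0xffffffff), /* exact-match address */
        .hdr.next_proto_id = 0xff,
    };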
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.dst_addr;
-		else
-			key = &spec_ipv6->hdr.dst_addr;
-		if (mask_ipv4) {
-			mask = &mask_ipv4->hdr.dst_addr;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-			prot = NET_PROT_IPV4;
-		} else {
-			mask = &mask_ipv6->hdr.dst_addr;
-			size = NH_FLD_IPV6_ADDR_SIZE;
-			prot = NET_PROT_IPV6;
-		}
+static int
+dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
+			  const struct rte_flow_attr *attr,
+			  const struct rte_flow_item *pattern,
+			  const struct rte_flow_action actions[] __rte_unused,
+			  struct rte_flow_error *error __rte_unused,
+			  int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ipv6 *spec_ipv6 = NULL, *mask_ipv6 = NULL;
+	const void *key, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
+	int size;
+	struct prev_proto_field_id prev_prot;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				prot, NH_FLD_IP_DST,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_DST rule data set failed");
-			return -1;
-		}
+	group = attr->group;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				prot, NH_FLD_IP_DST,
-				key,	mask, size);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_DST rule data set failed");
-			return -1;
-		}
-		flow->ipaddr_rule.qos_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.qos_key_extract,
-				prot, NH_FLD_IP_DST);
-		flow->ipaddr_rule.fs_ipdst_offset =
-			dpaa2_flow_extract_key_offset(
-				&priv->extract.tc_key_extract[group],
-				prot, NH_FLD_IP_DST);
-	}
-
-	if ((mask_ipv4 && mask_ipv4->hdr.next_proto_id) ||
-		(mask_ipv6 && mask_ipv6->hdr.proto)) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-				&priv->extract.qos_key_extract,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				NH_FLD_IP_PROTO_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
+	/* Parse pattern list to get the matching parameters */
+	spec_ipv6 = pattern->spec;
+	mask_ipv6 = pattern->mask ? pattern->mask : &dpaa2_flow_item_ipv6_mask;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_IP,
-					NH_FLD_IP_PROTO,
-					NH_FLD_IP_PROTO_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add IP_DST failed.");
+	prev_prot.prot = NET_PROT_ETH;
+	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("IPv6 identification failed!");
+		return ret;
+	}
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr after NH_FLD_IP_PROTO rule set failed");
-			return -1;
-		}
+	if (!spec_ipv6)
+		return 0;
 
-		if (spec_ipv4)
-			key = &spec_ipv4->hdr.next_proto_id;
-		else
-			key = &spec_ipv6->hdr.proto;
-		if (mask_ipv4)
-			mask = &mask_ipv4->hdr.next_proto_id;
-		else
-			mask = &mask_ipv6->hdr.proto;
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				key,	mask, NH_FLD_IP_PROTO_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_IP_PROTO rule data set failed");
-			return -1;
-		}
+	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
+				       RTE_FLOW_ITEM_TYPE_IPV6)) {
+		DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
+		return -EINVAL;
+	}
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_IP,
-				NH_FLD_IP_PROTO,
-				key,	mask, NH_FLD_IP_PROTO_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_IP_PROTO rule data set failed");
-			return -1;
-		}
+	if (memcmp((const char *)&mask_ipv6->hdr.src_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
+		key = &spec_ipv6->hdr.src_addr;
+		mask = &mask_ipv6->hdr.src_addr;
+		size = NH_FLD_IPV6_ADDR_SIZE;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_SRC_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask_ipv6->hdr.dst_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
+		key = &spec_ipv6->hdr.dst_addr;
+		mask = &mask_ipv6->hdr.dst_addr;
+		size = NH_FLD_IPV6_ADDR_SIZE;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
+							 NH_FLD_IPV6_DST_IP,
+							 key, mask, size, priv,
+							 group, &local_cfg,
+							 DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask_ipv6->hdr.proto) {
+		key = &spec_ipv6->hdr.proto;
+		mask = &mask_ipv6->hdr.proto;
+		size = sizeof(uint8_t);
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
+						      NH_FLD_IP_PROTO, key,
+						      mask, size, priv, group,
+						      &local_cfg,
+						      DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
-
 	return 0;
 }
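
A sketch of an IPv6 item (prefix is an example value), assuming the
array-based rte_ipv6_hdr address layout current at the time of this
series: masking the first 8 bytes matches a /64 source prefix, and only
a non-zero mask range triggers the address extract, per the memcmp()
checks above.

    #include <rte_flow.h>

    struct rte_flow_item_ipv6 ip6_spec = {
        .hdr.src_addr = { 0x20, 0x01, 0x0d, 0xb8 }, /* 2001:db8::/64 */
    };
    struct rte_flow_item_ipv6 ip6_mask = {
        .hdr.src_addr = { 0xff, 0xff, 0xff, 0xff,
                          0xff, 0xff, 0xff, 0xff },
    };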
 
 static int
-dpaa2_configure_flow_icmp(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_icmp *spec, *mask;
-
-	const struct rte_flow_item_icmp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_icmp *)pattern->spec;
-	last    = (const struct rte_flow_item_icmp *)pattern->last;
-	mask    = (const struct rte_flow_item_icmp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_icmp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_icmp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		/* Don't care any field of ICMP header,
-		 * only care ICMP protocol.
-		 * Example: flow create 0 ingress pattern icmp /
-		 */
 		/* Next proto of generic IP is actually used
 		 * for ICMP identification.
+		 * Example: flow create 0 ingress pattern icmp
 		 */
-		struct proto_discrimination proto;
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate ICMP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate ICMP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before ICMP discrimination set failed");
-			return -1;
-		}
+		struct prev_proto_field_id prev_proto;
 
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_ICMP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("ICMP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_ICMP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
-
 		return 0;
 	}
 
@@ -1829,145 +1848,39 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_ICMP)) {
 		DPAA2_PMD_WARN("Extract field(s) of ICMP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.icmp_type) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_TYPE,
-					NH_FLD_ICMP_TYPE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ICMP_TYPE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_TYPE,
-					NH_FLD_ICMP_TYPE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ICMP_TYPE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before ICMP TYPE set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_TYPE, &spec->hdr.icmp_type,
+			&mask->hdr.icmp_type, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_TYPE,
-				&spec->hdr.icmp_type,
-				&mask->hdr.icmp_type,
-				NH_FLD_ICMP_TYPE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ICMP_TYPE rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_TYPE,
-				&spec->hdr.icmp_type,
-				&mask->hdr.icmp_type,
-				NH_FLD_ICMP_TYPE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ICMP_TYPE rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_TYPE, &spec->hdr.icmp_type,
+			&mask->hdr.icmp_type, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.icmp_code) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_CODE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_CODE,
-					NH_FLD_ICMP_CODE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add ICMP_CODE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_ICMP, NH_FLD_ICMP_CODE);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_ICMP,
-					NH_FLD_ICMP_CODE,
-					NH_FLD_ICMP_CODE_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add ICMP_CODE failed.");
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_CODE, &spec->hdr.icmp_code,
+			&mask->hdr.icmp_code, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr after ICMP CODE set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_CODE,
-				&spec->hdr.icmp_code,
-				&mask->hdr.icmp_code,
-				NH_FLD_ICMP_CODE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS NH_FLD_ICMP_CODE rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_ICMP,
-				NH_FLD_ICMP_CODE,
-				&spec->hdr.icmp_code,
-				&mask->hdr.icmp_code,
-				NH_FLD_ICMP_CODE_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR("FS NH_FLD_ICMP_CODE rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_ICMP,
+			NH_FLD_ICMP_CODE, &spec->hdr.icmp_code,
+			&mask->hdr.icmp_code, sizeof(uint8_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -1976,84 +1889,41 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
 }
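
A sketch of an ICMP item for this path (type value is an example):
match echo request (type 8). With a NULL spec, only the IP
next-protocol (IPPROTO_ICMP) is pinned, as in the !spec branch above.

    #include <rte_flow.h>

    struct rte_flow_item_icmp icmp_spec = { .hdr.icmp_type = 8 };
    struct rte_flow_item_icmp icmp_mask = { .hdr.icmp_type = 0xff };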
 
 static int
-dpaa2_configure_flow_udp(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_udp *spec, *mask;
-
-	const struct rte_flow_item_udp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_udp *)pattern->spec;
-	last    = (const struct rte_flow_item_udp *)pattern->last;
-	mask    = (const struct rte_flow_item_udp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_udp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_udp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate UDP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate UDP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before UDP discrimination set failed");
-			return -1;
-		}
+		struct prev_proto_field_id prev_proto;
 
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_UDP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("UDP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_UDP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2065,149 +1935,40 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_UDP)) {
 		DPAA2_PMD_WARN("Extract field(s) of UDP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add UDP_SRC failed.");
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_SRC,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add UDP_SRC failed.");
+	if (mask->hdr.dst_port) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before UDP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_UDP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_UDP_PORT_SRC rule data set failed");
-			return -1;
-		}
-	}
-
-	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_DST,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add UDP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_UDP,
-					NH_FLD_UDP_PORT_DST,
-					NH_FLD_UDP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add UDP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before UDP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_UDP_PORT_DST rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_UDP,
-				NH_FLD_UDP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_UDP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_UDP_PORT_DST rule data set failed");
-			return -1;
-		}
-	}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_UDP,
+			NH_FLD_UDP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
 
 	(*device_configured) |= local_cfg;
 
@@ -2215,84 +1976,41 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
 }
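
A sketch of a UDP item (port is an example value, VXLAN's 4789): when
the MC firmware cannot identify L4 ports (!mc_l4_port_identification),
the driver first pins IPPROTO_UDP via the previous-protocol rule before
matching the port itself.

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    struct rte_flow_item_udp udp_spec = { .hdr.dst_port = RTE_BE16(4789) };
    struct rte_flow_item_udp udp_mask = { .hdr.dst_port = RTE_BE16(0xffff) };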
 
 static int
-dpaa2_configure_flow_tcp(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_tcp *spec, *mask;
-
-	const struct rte_flow_item_tcp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_tcp *)pattern->spec;
-	last    = (const struct rte_flow_item_tcp *)pattern->last;
-	mask    = (const struct rte_flow_item_tcp *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_tcp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_tcp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate TCP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-				&priv->extract.tc_key_extract[group],
-				DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate TCP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before TCP discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_TCP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("TCP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_TCP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2304,149 +2022,39 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
 		RTE_FLOW_ITEM_TYPE_TCP)) {
 		DPAA2_PMD_WARN("Extract field(s) of TCP not support.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_SRC,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add TCP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_SRC,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add TCP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before TCP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_TCP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_TCP_PORT_SRC rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_DST,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add TCP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_TCP,
-					NH_FLD_TCP_PORT_DST,
-					NH_FLD_TCP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add TCP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before TCP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_TCP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_TCP,
-				NH_FLD_TCP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_TCP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_TCP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_TCP,
+			NH_FLD_TCP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -2455,85 +2063,41 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_sctp(struct rte_flow *flow,
-			  struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_sctp *spec, *mask;
-
-	const struct rte_flow_item_sctp *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_sctp *)pattern->spec;
-	last    = (const struct rte_flow_item_sctp *)pattern->last;
-	mask    = (const struct rte_flow_item_sctp *)
-			(pattern->mask ? pattern->mask :
-				&dpaa2_flow_item_sctp_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_sctp_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec || !mc_l4_port_identification) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate SCTP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate SCTP failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_SCTP;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("SCTP discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_SCTP;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
@@ -2549,145 +2113,35 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
 	}
 
 	if (mask->hdr.src_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_SRC,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add SCTP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_SRC,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add SCTP_SRC failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP_PORT_SRC set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_SCTP_PORT_SRC rule data set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_SRC,
-				&spec->hdr.src_port,
-				&mask->hdr.src_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_SCTP_PORT_SRC rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_SRC, &spec->hdr.src_port,
+			&mask->hdr.src_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	if (mask->hdr.dst_port) {
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.qos_key_extract,
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_DST,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("QoS Extract add SCTP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
-		if (index < 0) {
-			ret = dpaa2_flow_extract_add(
-					&priv->extract.tc_key_extract[group],
-					NET_PROT_SCTP,
-					NH_FLD_SCTP_PORT_DST,
-					NH_FLD_SCTP_PORT_SIZE);
-			if (ret) {
-				DPAA2_PMD_ERR("FS Extract add SCTP_DST failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move ipaddr before SCTP_PORT_DST set failed");
-			return -1;
-		}
-
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"QoS NH_FLD_SCTP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
 
-		ret = dpaa2_flow_rule_data_set(
-				&priv->extract.tc_key_extract[group],
-				&flow->fs_rule,
-				NET_PROT_SCTP,
-				NH_FLD_SCTP_PORT_DST,
-				&spec->hdr.dst_port,
-				&mask->hdr.dst_port,
-				NH_FLD_SCTP_PORT_SIZE);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"FS NH_FLD_SCTP_PORT_DST rule data set failed");
-			return -1;
-		}
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_SCTP,
+			NH_FLD_SCTP_PORT_DST, &spec->hdr.dst_port,
+			&mask->hdr.dst_port, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
 	}
 
 	(*device_configured) |= local_cfg;
@@ -2696,88 +2150,46 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_gre(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
-	int index, ret;
-	int local_cfg = 0;
+	int ret, local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_gre *spec, *mask;
-
-	const struct rte_flow_item_gre *last __rte_unused;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	group = attr->group;
 
 	/* Parse pattern list to get the matching parameters */
-	spec    = (const struct rte_flow_item_gre *)pattern->spec;
-	last    = (const struct rte_flow_item_gre *)pattern->last;
-	mask    = (const struct rte_flow_item_gre *)
-		(pattern->mask ? pattern->mask : &dpaa2_flow_item_gre_mask);
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_gre_mask;
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct proto_discrimination proto;
+		struct prev_proto_field_id prev_proto;
 
-		index = dpaa2_flow_extract_search(
-				&priv->extract.qos_key_extract.dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.qos_key_extract,
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"QoS Extract IP protocol to discriminate GRE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-		}
-
-		index = dpaa2_flow_extract_search(
-				&priv->extract.tc_key_extract[group].dpkg,
-				NET_PROT_IP, NH_FLD_IP_PROTO);
-		if (index < 0) {
-			ret = dpaa2_flow_proto_discrimination_extract(
-					&priv->extract.tc_key_extract[group],
-					DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
-			if (ret) {
-				DPAA2_PMD_ERR(
-					"FS Extract IP protocol to discriminate GRE failed.");
-
-				return -1;
-			}
-			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-		}
-
-		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-		if (ret) {
-			DPAA2_PMD_ERR(
-				"Move IP addr before GRE discrimination set failed");
-			return -1;
-		}
-
-		proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
-		proto.ip_proto = IPPROTO_GRE;
-		ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
-							proto, group);
-		if (ret) {
-			DPAA2_PMD_ERR("GRE discrimination rule set failed");
-			return -1;
-		}
+		prev_proto.prot = NET_PROT_IP;
+		prev_proto.ip_proto = IPPROTO_GRE;
+		ret = dpaa2_flow_identify_by_prev_prot(priv,
+			flow, &prev_proto,
+			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
 
 		(*device_configured) |= local_cfg;
 
-		return 0;
+		if (!spec)
+			return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2790,74 +2202,19 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 	if (!mask->protocol)
 		return 0;
 
-	index = dpaa2_flow_extract_search(
-			&priv->extract.qos_key_extract.dpkg,
-			NET_PROT_GRE, NH_FLD_GRE_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.qos_key_extract,
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract add GRE_TYPE failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-	}
-
-	index = dpaa2_flow_extract_search(
-			&priv->extract.tc_key_extract[group].dpkg,
-			NET_PROT_GRE, NH_FLD_GRE_TYPE);
-	if (index < 0) {
-		ret = dpaa2_flow_extract_add(
-				&priv->extract.tc_key_extract[group],
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				sizeof(rte_be16_t));
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract add GRE_TYPE failed.");
-
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-	}
-
-	ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"Move ipaddr before GRE_TYPE set failed");
-		return -1;
-	}
-
-	ret = dpaa2_flow_rule_data_set(
-				&priv->extract.qos_key_extract,
-				&flow->qos_rule,
-				NET_PROT_GRE,
-				NH_FLD_GRE_TYPE,
-				&spec->protocol,
-				&mask->protocol,
-				sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"QoS NH_FLD_GRE_TYPE rule data set failed");
-		return -1;
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GRE,
+			NH_FLD_GRE_TYPE, &spec->protocol,
+			&mask->protocol, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_rule_data_set(
-			&priv->extract.tc_key_extract[group],
-			&flow->fs_rule,
-			NET_PROT_GRE,
-			NH_FLD_GRE_TYPE,
-			&spec->protocol,
-			&mask->protocol,
-			sizeof(rte_be16_t));
-	if (ret) {
-		DPAA2_PMD_ERR(
-			"FS NH_FLD_GRE_TYPE rule data set failed");
-		return -1;
-	}
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GRE,
+			NH_FLD_GRE_TYPE, &spec->protocol,
+			&mask->protocol, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
 
 	(*device_configured) |= local_cfg;
 
@@ -2865,404 +2222,109 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_raw(struct rte_flow *flow,
-			 struct rte_eth_dev *dev,
-			 const struct rte_flow_attr *attr,
-			 const struct rte_flow_item *pattern,
-			 const struct rte_flow_action actions[] __rte_unused,
-			 struct rte_flow_error *error __rte_unused,
-			 int *device_configured)
+dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_raw *spec = pattern->spec;
 	const struct rte_flow_item_raw *mask = pattern->mask;
 	int prev_key_size =
-		priv->extract.qos_key_extract.key_info.key_total_size;
+		priv->extract.qos_key_extract.key_profile.key_max_size;
 	int local_cfg = 0, ret;
 	uint32_t group;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
-		DPAA2_PMD_ERR("spec or mask not present.");
-		return -EINVAL;
-	}
-	/* Only supports non-relative with offset 0 */
-	if (spec->relative || spec->offset != 0 ||
-	    spec->search || spec->limit) {
-		DPAA2_PMD_ERR("relative and non zero offset not supported.");
-		return -EINVAL;
-	}
-	/* Spec len and mask len should be same */
-	if (spec->length != mask->length) {
-		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
-		return -EINVAL;
-	}
-
-	/* Get traffic class index and flow id to be configured */
-	group = attr->group;
-	flow->tc_id = group;
-	flow->tc_index = attr->priority;
-
-	if (prev_key_size <= spec->length) {
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
-						 spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
-
-		ret = dpaa2_flow_extract_add_raw(
-					&priv->extract.tc_key_extract[group],
-					spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
-	}
-
-	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
-					   mask->pattern, spec->length);
-	if (ret) {
-		DPAA2_PMD_ERR("QoS RAW rule data set failed");
-		return -1;
-	}
-
-	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
-					   mask->pattern, spec->length);
-	if (ret) {
-		DPAA2_PMD_ERR("FS RAW rule data set failed");
-		return -1;
-	}
-
-	(*device_configured) |= local_cfg;
-
-	return 0;
-}
-
-static inline int
-dpaa2_fs_action_supported(enum rte_flow_action_type action)
-{
-	int i;
-
-	for (i = 0; i < (int)(sizeof(dpaa2_supported_fs_action_type) /
-					sizeof(enum rte_flow_action_type)); i++) {
-		if (action == dpaa2_supported_fs_action_type[i])
-			return 1;
-	}
-
-	return 0;
-}
-/* The existing QoS/FS entry with IP address(es)
- * needs update after
- * new extract(s) are inserted before IP
- * address(es) extract(s).
- */
-static int
-dpaa2_flow_entry_update(
-	struct dpaa2_dev_priv *priv, uint8_t tc_id)
-{
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
-	int ret;
-	int qos_ipsrc_offset = -1, qos_ipdst_offset = -1;
-	int fs_ipsrc_offset = -1, fs_ipdst_offset = -1;
-	struct dpaa2_key_extract *qos_key_extract =
-		&priv->extract.qos_key_extract;
-	struct dpaa2_key_extract *tc_key_extract =
-		&priv->extract.tc_key_extract[tc_id];
-	char ipsrc_key[NH_FLD_IPV6_ADDR_SIZE];
-	char ipdst_key[NH_FLD_IPV6_ADDR_SIZE];
-	char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE];
-	char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE];
-	int extend = -1, extend1, size = -1;
-	uint16_t qos_index;
-
-	while (curr) {
-		if (curr->ipaddr_rule.ipaddr_type ==
-			FLOW_NONE_IPADDR) {
-			curr = LIST_NEXT(curr, next);
-			continue;
-		}
-
-		if (curr->ipaddr_rule.ipaddr_type ==
-			FLOW_IPV4_ADDR) {
-			qos_ipsrc_offset =
-				qos_key_extract->key_info.ipv4_src_offset;
-			qos_ipdst_offset =
-				qos_key_extract->key_info.ipv4_dst_offset;
-			fs_ipsrc_offset =
-				tc_key_extract->key_info.ipv4_src_offset;
-			fs_ipdst_offset =
-				tc_key_extract->key_info.ipv4_dst_offset;
-			size = NH_FLD_IPV4_ADDR_SIZE;
-		} else {
-			qos_ipsrc_offset =
-				qos_key_extract->key_info.ipv6_src_offset;
-			qos_ipdst_offset =
-				qos_key_extract->key_info.ipv6_dst_offset;
-			fs_ipsrc_offset =
-				tc_key_extract->key_info.ipv6_src_offset;
-			fs_ipdst_offset =
-				tc_key_extract->key_info.ipv6_dst_offset;
-			size = NH_FLD_IPV6_ADDR_SIZE;
-		}
-
-		qos_index = curr->tc_id * priv->fs_entries +
-			curr->tc_index;
-
-		dpaa2_flow_qos_entry_log("Before update", curr, qos_index, stdout);
-
-		if (priv->num_rx_tc > 1) {
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
-					priv->token, &curr->qos_rule);
-			if (ret) {
-				DPAA2_PMD_ERR("Qos entry remove failed.");
-				return -1;
-			}
-		}
-
-		extend = -1;
-
-		if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
-			RTE_ASSERT(qos_ipsrc_offset >=
-				curr->ipaddr_rule.qos_ipsrc_offset);
-			extend1 = qos_ipsrc_offset -
-				curr->ipaddr_rule.qos_ipsrc_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-
-			memcpy(ipsrc_key,
-				(char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				0, size);
-
-			memcpy(ipsrc_mask,
-				(char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				0, size);
-
-			curr->ipaddr_rule.qos_ipsrc_offset = qos_ipsrc_offset;
-		}
-
-		if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
-			RTE_ASSERT(qos_ipdst_offset >=
-				curr->ipaddr_rule.qos_ipdst_offset);
-			extend1 = qos_ipdst_offset -
-				curr->ipaddr_rule.qos_ipdst_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-
-			memcpy(ipdst_key,
-				(char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				0, size);
-
-			memcpy(ipdst_mask,
-				(char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				0, size);
-
-			curr->ipaddr_rule.qos_ipdst_offset = qos_ipdst_offset;
-		}
-
-		if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-			memcpy((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				ipsrc_key,
-				size);
-			memcpy((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipsrc_offset,
-				ipsrc_mask,
-				size);
-		}
-		if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
-			RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
-				(size == NH_FLD_IPV6_ADDR_SIZE));
-			memcpy((char *)(size_t)curr->qos_rule.key_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				ipdst_key,
-				size);
-			memcpy((char *)(size_t)curr->qos_rule.mask_iova +
-				curr->ipaddr_rule.qos_ipdst_offset,
-				ipdst_mask,
-				size);
-		}
-
-		if (extend >= 0)
-			curr->qos_real_key_size += extend;
-
-		curr->qos_rule.key_size = FIXED_ENTRY_SIZE;
-
-		dpaa2_flow_qos_entry_log("Start update", curr, qos_index, stdout);
-
-		if (priv->num_rx_tc > 1) {
-			ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
-					priv->token, &curr->qos_rule,
-					curr->tc_id, qos_index,
-					0, 0);
-			if (ret) {
-				DPAA2_PMD_ERR("Qos entry update failed.");
-				return -1;
-			}
-		}
-
-		if (!dpaa2_fs_action_supported(curr->action)) {
-			curr = LIST_NEXT(curr, next);
-			continue;
-		}
+		DPAA2_PMD_ERR("spec or mask not present.");
+		return -EINVAL;
+	}
+	/* Only supports non-relative with offset 0 */
+	if (spec->relative || spec->offset != 0 ||
+	    spec->search || spec->limit) {
+		DPAA2_PMD_ERR("relative and non zero offset not supported.");
+		return -EINVAL;
+	}
+	/* Spec len and mask len should be same */
+	if (spec->length != mask->length) {
+		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
+		return -EINVAL;
+	}
 
-		dpaa2_flow_fs_entry_log("Before update", curr, stdout);
-		extend = -1;
+	/* Get traffic class index and flow id to be configured */
+	group = attr->group;
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
 
-		ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW,
-				priv->token, curr->tc_id, &curr->fs_rule);
+	if (prev_key_size <= spec->length) {
+		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
+						 spec->length);
 		if (ret) {
-			DPAA2_PMD_ERR("FS entry remove failed.");
+			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
 			return -1;
 		}
+		local_cfg |= DPAA2_FLOW_QOS_TYPE;
 
-		if (curr->ipaddr_rule.fs_ipsrc_offset >= 0 &&
-			tc_id == curr->tc_id) {
-			RTE_ASSERT(fs_ipsrc_offset >=
-				curr->ipaddr_rule.fs_ipsrc_offset);
-			extend1 = fs_ipsrc_offset -
-				curr->ipaddr_rule.fs_ipsrc_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			memcpy(ipsrc_key,
-				(char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				0, size);
-
-			memcpy(ipsrc_mask,
-				(char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				0, size);
-
-			curr->ipaddr_rule.fs_ipsrc_offset = fs_ipsrc_offset;
+		ret = dpaa2_flow_extract_add_raw(&priv->extract.tc_key_extract[group],
+					spec->length);
+		if (ret) {
+			DPAA2_PMD_ERR("FS Extract RAW add failed.");
+			return -1;
 		}
+		local_cfg |= DPAA2_FLOW_FS_TYPE;
+	}
 
-		if (curr->ipaddr_rule.fs_ipdst_offset >= 0 &&
-			tc_id == curr->tc_id) {
-			RTE_ASSERT(fs_ipdst_offset >=
-				curr->ipaddr_rule.fs_ipdst_offset);
-			extend1 = fs_ipdst_offset -
-				curr->ipaddr_rule.fs_ipdst_offset;
-			if (extend >= 0)
-				RTE_ASSERT(extend == extend1);
-			else
-				extend = extend1;
-
-			memcpy(ipdst_key,
-				(char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				0, size);
-
-			memcpy(ipdst_mask,
-				(char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				size);
-			memset((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				0, size);
-
-			curr->ipaddr_rule.fs_ipdst_offset = fs_ipdst_offset;
-		}
+	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
+					   mask->pattern, spec->length);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS RAW rule data set failed");
+		return -1;
+	}
 
-		if (curr->ipaddr_rule.fs_ipsrc_offset >= 0) {
-			memcpy((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				ipsrc_key,
-				size);
-			memcpy((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipsrc_offset,
-				ipsrc_mask,
-				size);
-		}
-		if (curr->ipaddr_rule.fs_ipdst_offset >= 0) {
-			memcpy((char *)(size_t)curr->fs_rule.key_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				ipdst_key,
-				size);
-			memcpy((char *)(size_t)curr->fs_rule.mask_iova +
-				curr->ipaddr_rule.fs_ipdst_offset,
-				ipdst_mask,
-				size);
-		}
+	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
+					   mask->pattern, spec->length);
+	if (ret) {
+		DPAA2_PMD_ERR("FS RAW rule data set failed");
+		return -1;
+	}
 
-		if (extend >= 0)
-			curr->fs_real_key_size += extend;
-		curr->fs_rule.key_size = FIXED_ENTRY_SIZE;
+	(*device_configured) |= local_cfg;
 
-		dpaa2_flow_fs_entry_log("Start update", curr, stdout);
+	return 0;
+}
 
-		ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
-				priv->token, curr->tc_id, curr->tc_index,
-				&curr->fs_rule, &curr->action_cfg);
-		if (ret) {
-			DPAA2_PMD_ERR("FS entry update failed.");
-			return -1;
-		}
+static inline int
+dpaa2_fs_action_supported(enum rte_flow_action_type action)
+{
+	int i;
+	int action_num = sizeof(dpaa2_supported_fs_action_type) /
+		sizeof(enum rte_flow_action_type);
 
-		curr = LIST_NEXT(curr, next);
+	for (i = 0; i < action_num; i++) {
+		if (action == dpaa2_supported_fs_action_type[i])
+			return true;
 	}
 
-	return 0;
+	return false;
 }
 
 static inline int
-dpaa2_flow_verify_attr(
-	struct dpaa2_dev_priv *priv,
+dpaa2_flow_verify_attr(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr)
 {
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
 
 	while (curr) {
 		if (curr->tc_id == attr->group &&
 			curr->tc_index == attr->priority) {
-			DPAA2_PMD_ERR(
-				"Flow with group %d and priority %d already exists.",
+			DPAA2_PMD_ERR("Flow(TC[%d].entry[%d] exists",
 				attr->group, attr->priority);
 
-			return -1;
+			return -EINVAL;
 		}
 		curr = LIST_NEXT(curr, next);
 	}
@@ -3275,18 +2337,16 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_action *action)
 {
 	const struct rte_flow_action_port_id *port_id;
+	const struct rte_flow_action_ethdev *ethdev;
 	int idx = -1;
 	struct rte_eth_dev *dest_dev;
 
 	if (action->type == RTE_FLOW_ACTION_TYPE_PORT_ID) {
-		port_id = (const struct rte_flow_action_port_id *)
-					action->conf;
+		port_id = action->conf;
 		if (!port_id->original)
 			idx = port_id->id;
 	} else if (action->type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT) {
-		const struct rte_flow_action_ethdev *ethdev;
-
-		ethdev = (const struct rte_flow_action_ethdev *)action->conf;
+		ethdev = action->conf;
 		idx = ethdev->port_id;
 	} else {
 		return NULL;
@@ -3306,8 +2366,7 @@ dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
 }
 
 static inline int
-dpaa2_flow_verify_action(
-	struct dpaa2_dev_priv *priv,
+dpaa2_flow_verify_action(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_action actions[])
 {
@@ -3319,15 +2378,14 @@ dpaa2_flow_verify_action(
 	while (!end_of_list) {
 		switch (actions[j].type) {
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
-			dest_queue = (const struct rte_flow_action_queue *)
-					(actions[j].conf);
+			dest_queue = actions[j].conf;
 			rxq = priv->rx_vq[dest_queue->index];
 			if (attr->group != rxq->tc_index) {
-				DPAA2_PMD_ERR(
-					"RXQ[%d] does not belong to the group %d",
-					dest_queue->index, attr->group);
+				DPAA2_PMD_ERR("FSQ(%d.%d) not in TC[%d]",
+					rxq->tc_index, rxq->flow_id,
+					attr->group);
 
-				return -1;
+				return -ENOTSUP;
 			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
@@ -3341,20 +2399,17 @@ dpaa2_flow_verify_action(
 			rss_conf = (const struct rte_flow_action_rss *)
 					(actions[j].conf);
 			if (rss_conf->queue_num > priv->dist_queues) {
-				DPAA2_PMD_ERR(
-					"RSS number exceeds the distribution size");
+				DPAA2_PMD_ERR("RSS number too large");
 				return -ENOTSUP;
 			}
 			for (i = 0; i < (int)rss_conf->queue_num; i++) {
 				if (rss_conf->queue[i] >= priv->nb_rx_queues) {
-					DPAA2_PMD_ERR(
-						"RSS queue index exceeds the number of RXQs");
+					DPAA2_PMD_ERR("RSS queue not in range");
 					return -ENOTSUP;
 				}
 				rxq = priv->rx_vq[rss_conf->queue[i]];
 				if (rxq->tc_index != attr->group) {
-					DPAA2_PMD_ERR(
-						"Queue/Group combination are not supported");
+					DPAA2_PMD_ERR("RSS queue not in group");
 					return -ENOTSUP;
 				}
 			}
@@ -3374,28 +2429,248 @@ dpaa2_flow_verify_action(
 }
 
 static int
-dpaa2_generic_flow_set(struct rte_flow *flow,
-		       struct rte_eth_dev *dev,
-		       const struct rte_flow_attr *attr,
-		       const struct rte_flow_item pattern[],
-		       const struct rte_flow_action actions[],
-		       struct rte_flow_error *error)
+dpaa2_configure_flow_fs_action(struct dpaa2_dev_priv *priv,
+	struct dpaa2_dev_flow *flow,
+	const struct rte_flow_action *rte_action)
 {
+	struct rte_eth_dev *dest_dev;
+	struct dpaa2_dev_priv *dest_priv;
 	const struct rte_flow_action_queue *dest_queue;
+	struct dpaa2_queue *dest_q;
+
+	memset(&flow->fs_action_cfg, 0,
+		sizeof(struct dpni_fs_action_cfg));
+	flow->action_type = rte_action->type;
+
+	if (flow->action_type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		dest_queue = rte_action->conf;
+		dest_q = priv->rx_vq[dest_queue->index];
+		flow->fs_action_cfg.flow_id = dest_q->flow_id;
+	} else if (flow->action_type == RTE_FLOW_ACTION_TYPE_PORT_ID ||
+		   flow->action_type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT) {
+		dest_dev = dpaa2_flow_redirect_dev(priv, rte_action);
+		if (!dest_dev) {
+			DPAA2_PMD_ERR("Invalid device to redirect");
+			return -EINVAL;
+		}
+
+		dest_priv = dest_dev->data->dev_private;
+		dest_q = dest_priv->tx_vq[0];
+		flow->fs_action_cfg.options =
+			DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
+		flow->fs_action_cfg.redirect_obj_token =
+			dest_priv->token;
+		flow->fs_action_cfg.flow_id = dest_q->flow_id;
+	}
+
+	return 0;
+}
+
+static inline uint16_t
+dpaa2_flow_entry_size(uint16_t key_max_size)
+{
+	if (key_max_size > DPAA2_FLOW_ENTRY_MAX_SIZE) {
+		DPAA2_PMD_ERR("Key size(%d) > max(%d)",
+			key_max_size,
+			DPAA2_FLOW_ENTRY_MAX_SIZE);
+
+		return 0;
+	}
+
+	if (key_max_size > DPAA2_FLOW_ENTRY_MIN_SIZE)
+		return DPAA2_FLOW_ENTRY_MAX_SIZE;
+
+	/* Current MC only supports fixed entry size (56). */
+	return DPAA2_FLOW_ENTRY_MAX_SIZE;
+}
+
+static inline int
+dpaa2_flow_clear_fs_table(struct dpaa2_dev_priv *priv,
+	uint8_t tc_id)
+{
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	int need_clear = 0, ret;
+	struct fsl_mc_io *dpni = priv->hw;
+
+	while (curr) {
+		if (curr->tc_id == tc_id) {
+			need_clear = 1;
+			break;
+		}
+		curr = LIST_NEXT(curr, next);
+	}
+
+	if (need_clear) {
+		ret = dpni_clear_fs_entries(dpni, CMD_PRI_LOW,
+				priv->token, tc_id);
+		if (ret) {
+			DPAA2_PMD_ERR("TC[%d] clear failed", tc_id);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+dpaa2_configure_fs_rss_table(struct dpaa2_dev_priv *priv,
+	uint8_t tc_id, uint16_t dist_size, int rss_dist)
+{
+	struct dpaa2_key_extract *tc_extract;
+	uint8_t *key_cfg_buf;
+	uint64_t key_cfg_iova;
+	int ret;
+	struct dpni_rx_dist_cfg tc_cfg;
+	struct fsl_mc_io *dpni = priv->hw;
+	uint16_t entry_size;
+	uint16_t key_max_size;
+
+	ret = dpaa2_flow_clear_fs_table(priv, tc_id);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] clear failed", tc_id);
+		return ret;
+	}
+
+	tc_extract = &priv->extract.tc_key_extract[tc_id];
+	key_cfg_buf = priv->extract.tc_extract_param[tc_id];
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+
+	key_max_size = tc_extract->key_profile.key_max_size;
+	entry_size = dpaa2_flow_entry_size(key_max_size);
+
+	dpaa2_flow_fs_extracts_log(priv, tc_id);
+	ret = dpkg_prepare_key_cfg(&tc_extract->dpkg,
+			key_cfg_buf);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] prepare key failed", tc_id);
+		return ret;
+	}
+
+	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
+	tc_cfg.dist_size = dist_size;
+	tc_cfg.key_cfg_iova = key_cfg_iova;
+	if (rss_dist)
+		tc_cfg.enable = true;
+	else
+		tc_cfg.enable = false;
+	tc_cfg.tc = tc_id;
+	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
+			priv->token, &tc_cfg);
+	if (ret < 0) {
+		if (rss_dist) {
+			DPAA2_PMD_ERR("RSS TC[%d] set failed",
+				tc_id);
+		} else {
+			DPAA2_PMD_ERR("FS TC[%d] hash disable failed",
+				tc_id);
+		}
+
+		return ret;
+	}
+
+	if (rss_dist)
+		return 0;
+
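+	/* FS distribution: enable the table and steer table misses to
+	 * the configured miss flow.
+	 */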
+	tc_cfg.enable = true;
+	tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
+	ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
+			priv->token, &tc_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("TC[%d] FS configured failed", tc_id);
+		return ret;
+	}
+
+	ret = dpaa2_flow_rule_add_all(priv, DPAA2_FLOW_FS_TYPE,
+			entry_size, tc_id);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
+	int rss_dist)
+{
+	struct dpaa2_key_extract *qos_extract;
+	uint8_t *key_cfg_buf;
+	uint64_t key_cfg_iova;
+	int ret;
+	struct dpni_qos_tbl_cfg qos_cfg;
+	struct fsl_mc_io *dpni = priv->hw;
+	uint16_t entry_size;
+	uint16_t key_max_size;
+
+	if (!rss_dist && priv->num_rx_tc <= 1) {
+		/* QoS table is effective for FS with multiple TCs or RSS. */
+		return 0;
+	}
+
+	if (LIST_FIRST(&priv->flows)) {
+		ret = dpni_clear_qos_table(dpni, CMD_PRI_LOW,
+				priv->token);
+		if (ret < 0) {
+			DPAA2_PMD_ERR("QoS table clear failed");
+			return ret;
+		}
+	}
+
+	qos_extract = &priv->extract.qos_key_extract;
+	key_cfg_buf = priv->extract.qos_extract_param;
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+
+	key_max_size = qos_extract->key_profile.key_max_size;
+	entry_size = dpaa2_flow_entry_size(key_max_size);
+
+	dpaa2_flow_qos_extracts_log(priv);
+
+	ret = dpkg_prepare_key_cfg(&qos_extract->dpkg,
+			key_cfg_buf);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("QoS prepare extract failed");
+		return ret;
+	}
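+	/* On a QoS table miss: RSS distribution drops the frame, while
+	 * FS distribution falls back to the default TC (0).
+	 */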
+	memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
+	qos_cfg.keep_entries = true;
+	qos_cfg.key_cfg_iova = key_cfg_iova;
+	if (rss_dist) {
+		qos_cfg.discard_on_miss = true;
+	} else {
+		qos_cfg.discard_on_miss = false;
+		qos_cfg.default_tc = 0;
+	}
+
+	ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
+			priv->token, &qos_cfg);
+	if (ret < 0) {
+		DPAA2_PMD_ERR("QoS table set failed");
+		return ret;
+	}
+
+	ret = dpaa2_flow_rule_add_all(priv, DPAA2_FLOW_QOS_TYPE,
+			entry_size, 0);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item pattern[],
+	const struct rte_flow_action actions[],
+	struct rte_flow_error *error)
+{
 	const struct rte_flow_action_rss *rss_conf;
 	int is_keycfg_configured = 0, end_of_list = 0;
 	int ret = 0, i = 0, j = 0;
-	struct dpni_rx_dist_cfg tc_cfg;
-	struct dpni_qos_tbl_cfg qos_cfg;
-	struct dpni_fs_action_cfg action;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct dpaa2_queue *dest_q;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
-	size_t param;
-	struct rte_flow *curr = LIST_FIRST(&priv->flows);
-	uint16_t qos_index;
-	struct rte_eth_dev *dest_dev;
-	struct dpaa2_dev_priv *dest_priv;
+	struct dpaa2_dev_flow *curr = LIST_FIRST(&priv->flows);
+	uint16_t dist_size, key_size;
+	struct dpaa2_key_extract *qos_key_extract;
+	struct dpaa2_key_extract *tc_key_extract;
 
 	ret = dpaa2_flow_verify_attr(priv, attr);
 	if (ret)
@@ -3413,7 +2688,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("ETH flow configuration failed!");
+				DPAA2_PMD_ERR("ETH flow config failed!");
 				return ret;
 			}
 			break;
@@ -3422,17 +2697,25 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("vLan flow configuration failed!");
+				DPAA2_PMD_ERR("vLan flow config failed!");
 				return ret;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
+			ret = dpaa2_configure_flow_ipv4(flow,
+					dev, attr, &pattern[i], actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("IPV4 flow config failed!");
+				return ret;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
-			ret = dpaa2_configure_flow_generic_ip(flow,
+			ret = dpaa2_configure_flow_ipv6(flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("IP flow configuration failed!");
+				DPAA2_PMD_ERR("IPV6 flow config failed!");
 				return ret;
 			}
 			break;
@@ -3441,7 +2724,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("ICMP flow configuration failed!");
+				DPAA2_PMD_ERR("ICMP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3450,7 +2733,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("UDP flow configuration failed!");
+				DPAA2_PMD_ERR("UDP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3459,7 +2742,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("TCP flow configuration failed!");
+				DPAA2_PMD_ERR("TCP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3468,7 +2751,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("SCTP flow configuration failed!");
+				DPAA2_PMD_ERR("SCTP flow config failed!");
 				return ret;
 			}
 			break;
@@ -3477,17 +2760,17 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					dev, attr, &pattern[i], actions, error,
 					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("GRE flow configuration failed!");
+				DPAA2_PMD_ERR("GRE flow config failed!");
 				return ret;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow,
-						       dev, attr, &pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					dev, attr, &pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
-				DPAA2_PMD_ERR("RAW flow configuration failed!");
+				DPAA2_PMD_ERR("RAW flow config failed!");
 				return ret;
 			}
 			break;
@@ -3502,6 +2785,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 		i++;
 	}
 
+	qos_key_extract = &priv->extract.qos_key_extract;
+	key_size = qos_key_extract->key_profile.key_max_size;
+	flow->qos_rule.key_size = dpaa2_flow_entry_size(key_size);
+
+	tc_key_extract = &priv->extract.tc_key_extract[flow->tc_id];
+	key_size = tc_key_extract->key_profile.key_max_size;
+	flow->fs_rule.key_size = dpaa2_flow_entry_size(key_size);
+
 	/* Let's parse action on matching traffic */
 	end_of_list = 0;
 	while (!end_of_list) {
@@ -3509,150 +2800,33 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
 		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
 		case RTE_FLOW_ACTION_TYPE_PORT_ID:
-			memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
-			flow->action = actions[j].type;
-
-			if (actions[j].type == RTE_FLOW_ACTION_TYPE_QUEUE) {
-				dest_queue = (const struct rte_flow_action_queue *)
-								(actions[j].conf);
-				dest_q = priv->rx_vq[dest_queue->index];
-				action.flow_id = dest_q->flow_id;
-			} else {
-				dest_dev = dpaa2_flow_redirect_dev(priv,
-								   &actions[j]);
-				if (!dest_dev) {
-					DPAA2_PMD_ERR("Invalid destination device to redirect!");
-					return -1;
-				}
-
-				dest_priv = dest_dev->data->dev_private;
-				dest_q = dest_priv->tx_vq[0];
-				action.options =
-						DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
-				action.redirect_obj_token = dest_priv->token;
-				action.flow_id = dest_q->flow_id;
-			}
+			ret = dpaa2_configure_flow_fs_action(priv, flow,
+							     &actions[j]);
+			if (ret)
+				return ret;
 
 			/* Configure FS table first*/
-			if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
-				dpaa2_flow_fs_table_extracts_log(priv,
-							flow->tc_id, stdout);
-				if (dpkg_prepare_key_cfg(
-				&priv->extract.tc_key_extract[flow->tc_id].dpkg,
-				(uint8_t *)(size_t)priv->extract
-				.tc_extract_param[flow->tc_id]) < 0) {
-					DPAA2_PMD_ERR(
-					"Unable to prepare extract parameters");
-					return -1;
-				}
-
-				memset(&tc_cfg, 0,
-					sizeof(struct dpni_rx_dist_cfg));
-				tc_cfg.dist_size = priv->nb_rx_queues / priv->num_rx_tc;
-				tc_cfg.key_cfg_iova =
-					(uint64_t)priv->extract.tc_extract_param[flow->tc_id];
-				tc_cfg.tc = flow->tc_id;
-				tc_cfg.enable = false;
-				ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
-						priv->token, &tc_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"TC hash cannot be disabled.(%d)",
-						ret);
-					return -1;
-				}
-				tc_cfg.enable = true;
-				tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
-				ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
-							 priv->token, &tc_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"TC distribution cannot be configured.(%d)",
-						ret);
-					return -1;
-				}
+			dist_size = priv->nb_rx_queues / priv->num_rx_tc;
+			if (is_keycfg_configured & DPAA2_FLOW_FS_TYPE) {
+				ret = dpaa2_configure_fs_rss_table(priv,
+								   flow->tc_id,
+								   dist_size,
+								   false);
+				if (ret)
+					return ret;
 			}
 
 			/* Configure QoS table then.*/
-			if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
-				dpaa2_flow_qos_table_extracts_log(priv, stdout);
-				if (dpkg_prepare_key_cfg(
-					&priv->extract.qos_key_extract.dpkg,
-					(uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
-					DPAA2_PMD_ERR(
-						"Unable to prepare extract parameters");
-					return -1;
-				}
-
-				memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
-				qos_cfg.discard_on_miss = false;
-				qos_cfg.default_tc = 0;
-				qos_cfg.keep_entries = true;
-				qos_cfg.key_cfg_iova =
-					(size_t)priv->extract.qos_extract_param;
-				/* QoS table is effective for multiple TCs. */
-				if (priv->num_rx_tc > 1) {
-					ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
-						priv->token, &qos_cfg);
-					if (ret < 0) {
-						DPAA2_PMD_ERR(
-						"RSS QoS table can not be configured(%d)",
-							ret);
-						return -1;
-					}
-				}
-			}
-
-			flow->qos_real_key_size = priv->extract
-				.qos_key_extract.key_info.key_total_size;
-			if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) {
-				if (flow->ipaddr_rule.qos_ipdst_offset >=
-					flow->ipaddr_rule.qos_ipsrc_offset) {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipdst_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				} else {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipsrc_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				}
-			} else if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV6_ADDR) {
-				if (flow->ipaddr_rule.qos_ipdst_offset >=
-					flow->ipaddr_rule.qos_ipsrc_offset) {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipdst_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				} else {
-					flow->qos_real_key_size =
-						flow->ipaddr_rule.qos_ipsrc_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				}
+			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
+				ret = dpaa2_configure_qos_table(priv, false);
+				if (ret)
+					return ret;
 			}
 
-			/* QoS entry added is only effective for multiple TCs.*/
 			if (priv->num_rx_tc > 1) {
-				qos_index = flow->tc_id * priv->fs_entries +
-					flow->tc_index;
-				if (qos_index >= priv->qos_entries) {
-					DPAA2_PMD_ERR("QoS table with %d entries full",
-						priv->qos_entries);
-					return -1;
-				}
-				flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
-
-				dpaa2_flow_qos_entry_log("Start add", flow,
-							qos_index, stdout);
-
-				ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
-						priv->token, &flow->qos_rule,
-						flow->tc_id, qos_index,
-						0, 0);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"Error in adding entry to QoS table(%d)", ret);
+				ret = dpaa2_flow_add_qos_rule(priv, flow);
+				if (ret)
 					return ret;
-				}
 			}
 
 			if (flow->tc_index >= priv->fs_entries) {
@@ -3661,140 +2835,47 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 				return -1;
 			}
 
-			flow->fs_real_key_size =
-				priv->extract.tc_key_extract[flow->tc_id]
-				.key_info.key_total_size;
-
-			if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV4_ADDR) {
-				if (flow->ipaddr_rule.fs_ipdst_offset >=
-					flow->ipaddr_rule.fs_ipsrc_offset) {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipdst_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				} else {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipsrc_offset +
-						NH_FLD_IPV4_ADDR_SIZE;
-				}
-			} else if (flow->ipaddr_rule.ipaddr_type ==
-				FLOW_IPV6_ADDR) {
-				if (flow->ipaddr_rule.fs_ipdst_offset >=
-					flow->ipaddr_rule.fs_ipsrc_offset) {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipdst_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				} else {
-					flow->fs_real_key_size =
-						flow->ipaddr_rule.fs_ipsrc_offset +
-						NH_FLD_IPV6_ADDR_SIZE;
-				}
-			}
-
-			flow->fs_rule.key_size = FIXED_ENTRY_SIZE;
-
-			dpaa2_flow_fs_entry_log("Start add", flow, stdout);
-
-			ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token,
-						flow->tc_id, flow->tc_index,
-						&flow->fs_rule, &action);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"Error in adding entry to FS table(%d)", ret);
+			ret = dpaa2_flow_add_fs_rule(priv, flow);
+			if (ret)
 				return ret;
-			}
-			memcpy(&flow->action_cfg, &action,
-				sizeof(struct dpni_fs_action_cfg));
+
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
-			rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf);
+			rss_conf = actions[j].conf;
+			flow->action_type = RTE_FLOW_ACTION_TYPE_RSS;
 
-			flow->action = RTE_FLOW_ACTION_TYPE_RSS;
 			ret = dpaa2_distset_to_dpkg_profile_cfg(rss_conf->types,
-					&priv->extract.tc_key_extract[flow->tc_id].dpkg);
+					&tc_key_extract->dpkg);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"unable to set flow distribution.please check queue config");
+				DPAA2_PMD_ERR("TC[%d] distset RSS failed",
+					      flow->tc_id);
 				return ret;
 			}
 
-			/* Allocate DMA'ble memory to write the rules */
-			param = (size_t)rte_malloc(NULL, 256, 64);
-			if (!param) {
-				DPAA2_PMD_ERR("Memory allocation failure");
-				return -1;
-			}
-
-			if (dpkg_prepare_key_cfg(
-				&priv->extract.tc_key_extract[flow->tc_id].dpkg,
-				(uint8_t *)param) < 0) {
-				DPAA2_PMD_ERR(
-				"Unable to prepare extract parameters");
-				rte_free((void *)param);
-				return -1;
-			}
-
-			memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
-			tc_cfg.dist_size = rss_conf->queue_num;
-			tc_cfg.key_cfg_iova = (size_t)param;
-			tc_cfg.enable = true;
-			tc_cfg.tc = flow->tc_id;
-			ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
-						 priv->token, &tc_cfg);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"RSS TC table cannot be configured: %d",
-					ret);
-				rte_free((void *)param);
-				return -1;
+			dist_size = rss_conf->queue_num;
+			if (is_keycfg_configured & DPAA2_FLOW_FS_TYPE) {
+				ret = dpaa2_configure_fs_rss_table(priv,
+								   flow->tc_id,
+								   dist_size,
+								   true);
+				if (ret)
+					return ret;
 			}
 
-			rte_free((void *)param);
-			if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
-				if (dpkg_prepare_key_cfg(
-					&priv->extract.qos_key_extract.dpkg,
-					(uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
-					DPAA2_PMD_ERR(
-					"Unable to prepare extract parameters");
-					return -1;
-				}
-				memset(&qos_cfg, 0,
-					sizeof(struct dpni_qos_tbl_cfg));
-				qos_cfg.discard_on_miss = true;
-				qos_cfg.keep_entries = true;
-				qos_cfg.key_cfg_iova =
-					(size_t)priv->extract.qos_extract_param;
-				ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
-							 priv->token, &qos_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-					"RSS QoS dist can't be configured-%d",
-					ret);
-					return -1;
-				}
+			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
+				ret = dpaa2_configure_qos_table(priv, true);
+				if (ret)
+					return ret;
 			}
 
-			/* Add Rule into QoS table */
-			qos_index = flow->tc_id * priv->fs_entries +
-				flow->tc_index;
-			if (qos_index >= priv->qos_entries) {
-				DPAA2_PMD_ERR("QoS table with %d entries full",
-					priv->qos_entries);
-				return -1;
-			}
+			ret = dpaa2_flow_add_qos_rule(priv, flow);
+			if (ret)
+				return ret;
 
-			flow->qos_real_key_size =
-			  priv->extract.qos_key_extract.key_info.key_total_size;
-			flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
-			ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-						&flow->qos_rule, flow->tc_id,
-						qos_index, 0, 0);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-				"Error in entry addition in QoS table(%d)",
-				ret);
+			ret = dpaa2_flow_add_fs_rule(priv, flow);
+			if (ret)
 				return ret;
-			}
+
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
@@ -3808,16 +2889,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 	}
 
 	if (!ret) {
-		if (is_keycfg_configured &
-			(DPAA2_QOS_TABLE_RECONFIGURE |
-			DPAA2_FS_TABLE_RECONFIGURE)) {
-			ret = dpaa2_flow_entry_update(priv, flow->tc_id);
-			if (ret) {
-				DPAA2_PMD_ERR("Flow entry update failed.");
-
-				return -1;
-			}
-		}
 		/* New rules are inserted. */
 		if (!curr) {
 			LIST_INSERT_HEAD(&priv->flows, flow, next);
@@ -3832,7 +2903,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 
 static inline int
 dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
-		      const struct rte_flow_attr *attr)
+	const struct rte_flow_attr *attr)
 {
 	int ret = 0;
 
@@ -3906,18 +2977,18 @@ dpaa2_dev_verify_actions(const struct rte_flow_action actions[])
 	}
 	for (j = 0; actions[j].type != RTE_FLOW_ACTION_TYPE_END; j++) {
 		if (actions[j].type != RTE_FLOW_ACTION_TYPE_DROP &&
-				!actions[j].conf)
+		    !actions[j].conf)
 			ret = -EINVAL;
 	}
 	return ret;
 }
 
-static
-int dpaa2_flow_validate(struct rte_eth_dev *dev,
-			const struct rte_flow_attr *flow_attr,
-			const struct rte_flow_item pattern[],
-			const struct rte_flow_action actions[],
-			struct rte_flow_error *error)
+static int
+dpaa2_flow_validate(struct rte_eth_dev *dev,
+	const struct rte_flow_attr *flow_attr,
+	const struct rte_flow_item pattern[],
+	const struct rte_flow_action actions[],
+	struct rte_flow_error *error)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpni_attr dpni_attr;
@@ -3971,127 +3042,128 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static
-struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
-				   const struct rte_flow_attr *attr,
-				   const struct rte_flow_item pattern[],
-				   const struct rte_flow_action actions[],
-				   struct rte_flow_error *error)
+static struct rte_flow *
+dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+		  const struct rte_flow_item pattern[],
+		  const struct rte_flow_action actions[],
+		  struct rte_flow_error *error)
 {
-	struct rte_flow *flow = NULL;
-	size_t key_iova = 0, mask_iova = 0;
+	struct dpaa2_dev_flow *flow = NULL;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int ret;
 
 	dpaa2_flow_control_log =
 		getenv("DPAA2_FLOW_CONTROL_LOG");
 
 	if (getenv("DPAA2_FLOW_CONTROL_MISS_FLOW")) {
-		struct dpaa2_dev_priv *priv = dev->data->dev_private;
-
 		dpaa2_flow_miss_flow_id =
 			(uint16_t)atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
 		if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
-			DPAA2_PMD_ERR(
-				"The missed flow ID %d exceeds the max flow ID %d",
-				dpaa2_flow_miss_flow_id,
-				priv->dist_queues - 1);
+			DPAA2_PMD_ERR("Missed flow ID %d >= dist size(%d)",
+				      dpaa2_flow_miss_flow_id,
+				      priv->dist_queues);
 			return NULL;
 		}
 	}
 
-	flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
+	flow = rte_zmalloc(NULL, sizeof(struct dpaa2_dev_flow),
+			   RTE_CACHE_LINE_SIZE);
 	if (!flow) {
 		DPAA2_PMD_ERR("Failure to allocate memory for flow");
 		goto mem_failure;
 	}
-	/* Allocate DMA'ble memory to write the rules */
-	key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!key_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+
+	/* Allocate DMA'ble memory to write the qos rules */
+	flow->qos_key_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->qos_key_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!mask_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+	flow->qos_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->qos_key_addr);
+
+	flow->qos_mask_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->qos_mask_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
+	flow->qos_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->qos_mask_addr);
 
-	flow->qos_rule.key_iova = key_iova;
-	flow->qos_rule.mask_iova = mask_iova;
-
-	/* Allocate DMA'ble memory to write the rules */
-	key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!key_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+	/* Allocate DMA'ble memory to write the FS rules */
+	flow->fs_key_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->fs_key_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
-	if (!mask_iova) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+	flow->fs_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->fs_key_addr);
+
+	flow->fs_mask_addr = rte_zmalloc(NULL, 256, 64);
+	if (!flow->fs_mask_addr) {
+		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
+	flow->fs_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->fs_mask_addr);
 
-	flow->fs_rule.key_iova = key_iova;
-	flow->fs_rule.mask_iova = mask_iova;
-
-	flow->ipaddr_rule.ipaddr_type = FLOW_NONE_IPADDR;
-	flow->ipaddr_rule.qos_ipsrc_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.qos_ipdst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.fs_ipsrc_offset =
-		IP_ADDRESS_OFFSET_INVALID;
-	flow->ipaddr_rule.fs_ipdst_offset =
-		IP_ADDRESS_OFFSET_INVALID;
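+	/* Remember the flow under construction; it is cleared again once
+	 * the generic flow setup below completes or fails.
+	 */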
+	priv->curr = flow;
 
-	ret = dpaa2_generic_flow_set(flow, dev, attr, pattern,
-			actions, error);
+	ret = dpaa2_generic_flow_set(flow, dev, attr, pattern, actions, error);
 	if (ret < 0) {
 		if (error && error->type > RTE_FLOW_ERROR_TYPE_ACTION)
 			rte_flow_error_set(error, EPERM,
-					RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					attr, "unknown");
-		DPAA2_PMD_ERR("Failure to create flow, return code (%d)", ret);
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   attr, "unknown");
+		DPAA2_PMD_ERR("Create flow failed (%d)", ret);
 		goto creation_error;
 	}
 
-	return flow;
+	priv->curr = NULL;
+	return (struct rte_flow *)flow;
+
 mem_failure:
-	rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-			   NULL, "memory alloc");
+	rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "memory alloc");
+
 creation_error:
-	rte_free((void *)flow);
-	rte_free((void *)key_iova);
-	rte_free((void *)mask_iova);
+	if (flow) {
+		if (flow->qos_key_addr)
+			rte_free(flow->qos_key_addr);
+		if (flow->qos_mask_addr)
+			rte_free(flow->qos_mask_addr);
+		if (flow->fs_key_addr)
+			rte_free(flow->fs_key_addr);
+		if (flow->fs_mask_addr)
+			rte_free(flow->fs_mask_addr);
+		rte_free(flow);
+	}
+	priv->curr = NULL;
 
 	return NULL;
 }
 
-static
-int dpaa2_flow_destroy(struct rte_eth_dev *dev,
-		       struct rte_flow *flow,
-		       struct rte_flow_error *error)
+static int
+dpaa2_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *_flow,
+		   struct rte_flow_error *error)
 {
 	int ret = 0;
+	struct dpaa2_dev_flow *flow;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
+	struct fsl_mc_io *dpni = priv->hw;
 
-	switch (flow->action) {
+	flow = (struct dpaa2_dev_flow *)_flow;
+
+	switch (flow->action_type) {
 	case RTE_FLOW_ACTION_TYPE_QUEUE:
 	case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
 	case RTE_FLOW_ACTION_TYPE_PORT_ID:
 		if (priv->num_rx_tc > 1) {
 			/* Remove entry from QoS table first */
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-					&flow->qos_rule);
+			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+						    priv->token,
+						    &flow->qos_rule);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"Error in removing entry from QoS table(%d)", ret);
+				DPAA2_PMD_ERR("Remove FS QoS entry failed");
+				dpaa2_flow_qos_entry_log("Delete failed", flow,
+							 -1);
 				goto error;
 			}
 		}
@@ -4100,34 +3172,37 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
 		ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW, priv->token,
 					   flow->tc_id, &flow->fs_rule);
 		if (ret < 0) {
-			DPAA2_PMD_ERR(
-				"Error in removing entry from FS table(%d)", ret);
+			DPAA2_PMD_ERR("Remove entry from FS[%d] failed",
+				      flow->tc_id);
 			goto error;
 		}
 		break;
 	case RTE_FLOW_ACTION_TYPE_RSS:
 		if (priv->num_rx_tc > 1) {
-			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-					&flow->qos_rule);
+			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+						    priv->token,
+						    &flow->qos_rule);
 			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"Error in entry addition in QoS table(%d)", ret);
+				DPAA2_PMD_ERR("Remove RSS QoS entry failed");
 				goto error;
 			}
 		}
 		break;
 	default:
-		DPAA2_PMD_ERR(
-		"Action type (%d) is not supported", flow->action);
+		DPAA2_PMD_ERR("Action(%d) not supported", flow->action_type);
 		ret = -ENOTSUP;
 		break;
 	}
 
 	LIST_REMOVE(flow, next);
-	rte_free((void *)(size_t)flow->qos_rule.key_iova);
-	rte_free((void *)(size_t)flow->qos_rule.mask_iova);
-	rte_free((void *)(size_t)flow->fs_rule.key_iova);
-	rte_free((void *)(size_t)flow->fs_rule.mask_iova);
+	if (flow->qos_key_addr)
+		rte_free(flow->qos_key_addr);
+	if (flow->qos_mask_addr)
+		rte_free(flow->qos_mask_addr);
+	if (flow->fs_key_addr)
+		rte_free(flow->fs_key_addr);
+	if (flow->fs_mask_addr)
+		rte_free(flow->fs_mask_addr);
 	/* Now free the flow */
 	rte_free(flow);
 
@@ -4152,12 +3227,12 @@ dpaa2_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct rte_flow *flow = LIST_FIRST(&priv->flows);
+	struct dpaa2_dev_flow *flow = LIST_FIRST(&priv->flows);
 
 	while (flow) {
-		struct rte_flow *next = LIST_NEXT(flow, next);
+		struct dpaa2_dev_flow *next = LIST_NEXT(flow, next);
 
-		dpaa2_flow_destroy(dev, flow, error);
+		dpaa2_flow_destroy(dev, (struct rte_flow *)flow, error);
 		flow = next;
 	}
 	return 0;
@@ -4165,10 +3240,10 @@ dpaa2_flow_flush(struct rte_eth_dev *dev,
 
 static int
 dpaa2_flow_query(struct rte_eth_dev *dev __rte_unused,
-		struct rte_flow *flow __rte_unused,
-		const struct rte_flow_action *actions __rte_unused,
-		void *data __rte_unused,
-		struct rte_flow_error *error __rte_unused)
+	struct rte_flow *_flow __rte_unused,
+	const struct rte_flow_action *actions __rte_unused,
+	void *data __rte_unused,
+	struct rte_flow_error *error __rte_unused)
 {
 	return 0;
 }
@@ -4185,11 +3260,11 @@ dpaa2_flow_query(struct rte_eth_dev *dev __rte_unused,
 void
 dpaa2_flow_clean(struct rte_eth_dev *dev)
 {
-	struct rte_flow *flow;
+	struct dpaa2_dev_flow *flow;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
 	while ((flow = LIST_FIRST(&priv->flows)))
-		dpaa2_flow_destroy(dev, flow, NULL);
+		dpaa2_flow_destroy(dev, (struct rte_flow *)flow, NULL);
 }
 
 const struct rte_flow_ops dpaa2_flow_ops = {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 24/42] net/dpaa2: dump Rx parser result
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (22 preceding siblings ...)
  2024-10-23 11:59           ` [v5 23/42] net/dpaa2: flow API refactor vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 25/42] net/dpaa2: enhancement of raw flow extract vanshika.shukla
                             ` (18 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Setting "export DPAA2_PRINT_RX_PARSER_RESULT=1" dumps the Rx parser
result and the frame attribute flags generated by the hardware
parser and the soft parser.
The parser results are converted to big endian as described in the
RM. The areas set by the soft parser are dumped as well.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c     |   5 +
 drivers/net/dpaa2/dpaa2_ethdev.h     |  90 ++++++++++
 drivers/net/dpaa2/dpaa2_parse_dump.h | 248 +++++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_rxtx.c       |   7 +
 4 files changed, 350 insertions(+)
 create mode 100644 drivers/net/dpaa2/dpaa2_parse_dump.h
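
A minimal way to exercise the dump on a DPAA2 board (a sketch for
illustration only, not part of the patch; the dprc.2 container name
is an assumption and depends on the board's resource setup):

  export DPRC=dprc.2
  export DPAA2_PRINT_RX_PARSER_RESULT=1
  ./dpdk-testpmd -- -i
  testpmd> set fwd rxonly
  testpmd> start

Received frames should then print their annotation: the FAF bits
that are set, and the VXLAN inner header fields when a VXLAN frame
is recognized.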

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index e55de5b614..187b648799 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -75,6 +75,8 @@ int dpaa2_timestamp_dynfield_offset = -1;
 /* Enable error queue */
 bool dpaa2_enable_err_queue;
 
+bool dpaa2_print_parser_result;
+
 #define MAX_NB_RX_DESC		11264
 int total_nb_rx_desc;
 
@@ -2730,6 +2732,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		DPAA2_PMD_INFO("Enable error queue");
 	}
 
+	if (getenv("DPAA2_PRINT_RX_PARSER_RESULT"))
+		dpaa2_print_parser_result = 1;
+
 	/* Allocate memory for hardware structure for queues */
 	ret = dpaa2_alloc_rx_tx_queues(eth_dev);
 	if (ret) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index ea1c1b5117..c864859b3f 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -19,6 +19,8 @@
 #include <mc/fsl_dpni.h>
 #include <mc/fsl_mc_sys.h>
 
+#include "base/dpaa2_hw_dpni_annot.h"
+
 #define DPAA2_MIN_RX_BUF_SIZE 512
 #define DPAA2_MAX_RX_PKT_LEN  10240 /*WRIOP support*/
 #define NET_DPAA2_PMD_DRIVER_NAME net_dpaa2
@@ -152,6 +154,88 @@ extern const struct rte_tm_ops dpaa2_tm_ops;
 
 extern bool dpaa2_enable_err_queue;
 
+extern bool dpaa2_print_parser_result;
+
+#define DPAA2_FAPR_SIZE \
+	(sizeof(struct dpaa2_annot_hdr) - \
+	offsetof(struct dpaa2_annot_hdr, word3))
+
+#define DPAA2_PR_NXTHDR_OFFSET 0
+
+#define DPAA2_FAFE_PSR_OFFSET 2
+#define DPAA2_FAFE_PSR_SIZE 2
+
+#define DPAA2_FAF_PSR_OFFSET 4
+#define DPAA2_FAF_PSR_SIZE 12
+
+#define DPAA2_FAF_TOTAL_SIZE \
+	(DPAA2_FAFE_PSR_SIZE + DPAA2_FAF_PSR_SIZE)
+
+/* Just the most popular frame attribute flags (FAF) here. */
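+/* Bit positions include DPAA2_FAFE_PSR_SIZE * 8 because the 2-byte
+ * FAFE area precedes the FAF field in the parse result.
+ */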
+enum dpaa2_rx_faf_offset {
+	/* Set by SP start*/
+	FAFE_VXLAN_IN_VLAN_FRAM = 0,
+	FAFE_VXLAN_IN_IPV4_FRAM = 1,
+	FAFE_VXLAN_IN_IPV6_FRAM = 2,
+	FAFE_VXLAN_IN_UDP_FRAM = 3,
+	FAFE_VXLAN_IN_TCP_FRAM = 4,
+	/* Set by SP end*/
+
+	FAF_GTP_PRIMED_FRAM = 1 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_PTP_FRAM = 3 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_VXLAN_FRAM = 4 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ETH_FRAM = 10 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_LLC_SNAP_FRAM = 18 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_VLAN_FRAM = 21 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_PPPOE_PPP_FRAM = 25 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_MPLS_FRAM = 27 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ARP_FRAM = 30 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPV4_FRAM = 34 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPV6_FRAM = 42 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IP_FRAM = 48 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ICMP_FRAM = 57 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IGMP_FRAM = 58 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_GRE_FRAM = 65 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_UDP_FRAM = 70 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_TCP_FRAM = 72 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_FRAM = 77 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_ESP_FRAM = 78 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IPSEC_AH_FRAM = 79 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_SCTP_FRAM = 81 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_DCCP_FRAM = 83 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_GTP_FRAM = 87 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_ESP_FRAM = 89 + DPAA2_FAFE_PSR_SIZE * 8,
+};
+
+#define DPAA2_PR_ETH_OFF_OFFSET 19
+#define DPAA2_PR_TCI_OFF_OFFSET 21
+#define DPAA2_PR_LAST_ETYPE_OFFSET 23
+#define DPAA2_PR_L3_OFF_OFFSET 27
+#define DPAA2_PR_L4_OFF_OFFSET 30
+#define DPAA2_PR_L5_OFF_OFFSET 31
+#define DPAA2_PR_NXTHDR_OFF_OFFSET 34
+
+/* Set by SP for vxlan distribution start*/
+#define DPAA2_VXLAN_IN_TCI_OFFSET 16
+
+#define DPAA2_VXLAN_IN_DADDR0_OFFSET 20
+#define DPAA2_VXLAN_IN_DADDR1_OFFSET 22
+#define DPAA2_VXLAN_IN_DADDR2_OFFSET 24
+#define DPAA2_VXLAN_IN_DADDR3_OFFSET 25
+#define DPAA2_VXLAN_IN_DADDR4_OFFSET 26
+#define DPAA2_VXLAN_IN_DADDR5_OFFSET 28
+
+#define DPAA2_VXLAN_IN_SADDR0_OFFSET 29
+#define DPAA2_VXLAN_IN_SADDR1_OFFSET 32
+#define DPAA2_VXLAN_IN_SADDR2_OFFSET 33
+#define DPAA2_VXLAN_IN_SADDR3_OFFSET 35
+#define DPAA2_VXLAN_IN_SADDR4_OFFSET 41
+#define DPAA2_VXLAN_IN_SADDR5_OFFSET 42
+
+#define DPAA2_VXLAN_VNI_OFFSET 43
+#define DPAA2_VXLAN_IN_TYPE_OFFSET 46
+/* Set by SP for vxlan distribution end*/
+
 struct ipv4_sd_addr_extract_rule {
 	uint32_t ipv4_src;
 	uint32_t ipv4_dst;
@@ -197,7 +281,13 @@ enum ip_addr_extract_type {
 	IP_DST_SRC_EXTRACT
 };
 
+enum key_prot_type {
+	DPAA2_NET_PROT_KEY,
+	DPAA2_FAF_KEY
+};
+
 struct key_prot_field {
+	enum key_prot_type type;
 	enum net_prot prot;
 	uint32_t key_field;
 };
diff --git a/drivers/net/dpaa2/dpaa2_parse_dump.h b/drivers/net/dpaa2/dpaa2_parse_dump.h
new file mode 100644
index 0000000000..f1cdc003de
--- /dev/null
+++ b/drivers/net/dpaa2/dpaa2_parse_dump.h
@@ -0,0 +1,248 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ *   Copyright 2022 NXP
+ *
+ */
+
+#ifndef _DPAA2_PARSE_DUMP_H
+#define _DPAA2_PARSE_DUMP_H
+
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_pmd_dpaa2.h>
+
+#include <dpaa2_hw_pvt.h>
+#include "dpaa2_tm.h"
+
+#include <mc/fsl_dpni.h>
+#include <mc/fsl_mc_sys.h>
+
+#include "base/dpaa2_hw_dpni_annot.h"
+
+#define DPAA2_PR_PRINT printf
+
+struct dpaa2_faf_bit_info {
+	const char *name;
+	int position;
+};
+
+struct dpaa2_fapr_field_info {
+	const char *name;
+	uint16_t value;
+};
+
+struct dpaa2_fapr_array {
+	union {
+		uint64_t pr_64[DPAA2_FAPR_SIZE / 8];
+		uint8_t pr[DPAA2_FAPR_SIZE];
+	};
+};
+
+#define NEXT_HEADER_NAME "Next Header"
+#define ETH_OFF_NAME "ETH OFFSET"
+#define VLAN_TCI_OFF_NAME "VLAN TCI OFFSET"
+#define LAST_ENTRY_OFF_NAME "LAST ETYPE Offset"
+#define L3_OFF_NAME "L3 Offset"
+#define L4_OFF_NAME "L4 Offset"
+#define L5_OFF_NAME "L5 Offset"
+#define NEXT_HEADER_OFF_NAME "Next Header Offset"
+
+static const
+struct dpaa2_fapr_field_info support_dump_fields[] = {
+	{
+		.name = NEXT_HEADER_NAME,
+	},
+	{
+		.name = ETH_OFF_NAME,
+	},
+	{
+		.name = VLAN_TCI_OFF_NAME,
+	},
+	{
+		.name = LAST_ENTRY_OFF_NAME,
+	},
+	{
+		.name = L3_OFF_NAME,
+	},
+	{
+		.name = L4_OFF_NAME,
+	},
+	{
+		.name = L5_OFF_NAME,
+	},
+	{
+		.name = NEXT_HEADER_OFF_NAME,
+	}
+};
+
+static inline void
+dpaa2_print_faf(struct dpaa2_fapr_array *fapr)
+{
+	const int faf_bit_len = DPAA2_FAF_TOTAL_SIZE * 8;
+	struct dpaa2_faf_bit_info faf_bits[faf_bit_len];
+	int i, byte_pos, bit_pos, vxlan = 0, vxlan_vlan = 0;
+	struct rte_ether_hdr vxlan_in_eth;
+	uint16_t vxlan_vlan_tci;
+
+	for (i = 0; i < faf_bit_len; i++) {
+		faf_bits[i].position = i;
+		if (i == FAFE_VXLAN_IN_VLAN_FRAM)
+			faf_bits[i].name = "VXLAN VLAN Present";
+		else if (i == FAFE_VXLAN_IN_IPV4_FRAM)
+			faf_bits[i].name = "VXLAN IPV4 Present";
+		else if (i == FAFE_VXLAN_IN_IPV6_FRAM)
+			faf_bits[i].name = "VXLAN IPV6 Present";
+		else if (i == FAFE_VXLAN_IN_UDP_FRAM)
+			faf_bits[i].name = "VXLAN UDP Present";
+		else if (i == FAFE_VXLAN_IN_TCP_FRAM)
+			faf_bits[i].name = "VXLAN TCP Present";
+		else if (i == FAF_VXLAN_FRAM)
+			faf_bits[i].name = "VXLAN Present";
+		else if (i == FAF_ETH_FRAM)
+			faf_bits[i].name = "Ethernet MAC Present";
+		else if (i == FAF_VLAN_FRAM)
+			faf_bits[i].name = "VLAN 1 Present";
+		else if (i == FAF_IPV4_FRAM)
+			faf_bits[i].name = "IPv4 1 Present";
+		else if (i == FAF_IPV6_FRAM)
+			faf_bits[i].name = "IPv6 1 Present";
+		else if (i == FAF_UDP_FRAM)
+			faf_bits[i].name = "UDP Present";
+		else if (i == FAF_TCP_FRAM)
+			faf_bits[i].name = "TCP Present";
+		else
+			faf_bits[i].name = "Check RM for this unusual frame";
+	}
+
+	DPAA2_PR_PRINT("Frame Annotation Flags:\r\n");
+	for (i = 0; i < faf_bit_len; i++) {
+		byte_pos = i / 8 + DPAA2_FAFE_PSR_OFFSET;
+		bit_pos = i % 8;
+		if (fapr->pr[byte_pos] & (1 << (7 - bit_pos))) {
+			DPAA2_PR_PRINT("FAF bit %d : %s\r\n",
+				faf_bits[i].position, faf_bits[i].name);
+			if (i == FAF_VXLAN_FRAM)
+				vxlan = 1;
+		}
+	}
+
+	if (vxlan) {
+		vxlan_in_eth.dst_addr.addr_bytes[0] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR0_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[1] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR1_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[2] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR2_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[3] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR3_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[4] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR4_OFFSET];
+		vxlan_in_eth.dst_addr.addr_bytes[5] =
+			fapr->pr[DPAA2_VXLAN_IN_DADDR5_OFFSET];
+
+		vxlan_in_eth.src_addr.addr_bytes[0] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR0_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[1] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR1_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[2] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR2_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[3] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR3_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[4] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR4_OFFSET];
+		vxlan_in_eth.src_addr.addr_bytes[5] =
+			fapr->pr[DPAA2_VXLAN_IN_SADDR5_OFFSET];
+
+		vxlan_in_eth.ether_type =
+			fapr->pr[DPAA2_VXLAN_IN_TYPE_OFFSET];
+		vxlan_in_eth.ether_type =
+			vxlan_in_eth.ether_type << 8;
+		vxlan_in_eth.ether_type |=
+			fapr->pr[DPAA2_VXLAN_IN_TYPE_OFFSET + 1];
+
+		if (vxlan_in_eth.ether_type == RTE_ETHER_TYPE_VLAN)
+			vxlan_vlan = 1;
+		DPAA2_PR_PRINT("VXLAN inner eth:\r\n");
+		DPAA2_PR_PRINT("dst addr: ");
+		for (i = 0; i < RTE_ETHER_ADDR_LEN; i++) {
+			if (i != 0)
+				DPAA2_PR_PRINT(":");
+			DPAA2_PR_PRINT("%02x",
+				vxlan_in_eth.dst_addr.addr_bytes[i]);
+		}
+		DPAA2_PR_PRINT("\r\n");
+		DPAA2_PR_PRINT("src addr: ");
+		for (i = 0; i < RTE_ETHER_ADDR_LEN; i++) {
+			if (i != 0)
+				DPAA2_PR_PRINT(":");
+			DPAA2_PR_PRINT("%02x",
+				vxlan_in_eth.src_addr.addr_bytes[i]);
+		}
+		DPAA2_PR_PRINT("\r\n");
+		DPAA2_PR_PRINT("type: 0x%04x\r\n",
+			vxlan_in_eth.ether_type);
+		if (vxlan_vlan) {
+			vxlan_vlan_tci = fapr->pr[DPAA2_VXLAN_IN_TCI_OFFSET];
+			vxlan_vlan_tci = vxlan_vlan_tci << 8;
+			vxlan_vlan_tci |=
+				fapr->pr[DPAA2_VXLAN_IN_TCI_OFFSET + 1];
+
+			DPAA2_PR_PRINT("vlan tci: 0x%04x\r\n",
+				vxlan_vlan_tci);
+		}
+	}
+}
+
+static inline void
+dpaa2_print_parse_result(struct dpaa2_annot_hdr *annotation)
+{
+	struct dpaa2_fapr_array fapr;
+	struct dpaa2_fapr_field_info
+		fapr_fields[sizeof(support_dump_fields) /
+		sizeof(struct dpaa2_fapr_field_info)];
+	uint64_t len, i;
+
+	memcpy(&fapr, &annotation->word3, DPAA2_FAPR_SIZE);
+	for (i = 0; i < (DPAA2_FAPR_SIZE / 8); i++)
+		fapr.pr_64[i] = rte_cpu_to_be_64(fapr.pr_64[i]);
+
+	memcpy(fapr_fields, support_dump_fields,
+		sizeof(support_dump_fields));
+
+	for (i = 0;
+		i < sizeof(fapr_fields) /
+		sizeof(struct dpaa2_fapr_field_info);
+		i++) {
+		if (!strcmp(fapr_fields[i].name, NEXT_HEADER_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_NXTHDR_OFFSET];
+			fapr_fields[i].value = fapr_fields[i].value << 8;
+			fapr_fields[i].value |=
+				fapr.pr[DPAA2_PR_NXTHDR_OFFSET + 1];
+		} else if (!strcmp(fapr_fields[i].name, ETH_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_ETH_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, VLAN_TCI_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_TCI_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, LAST_ENTRY_OFF_NAME)) {
+			fapr_fields[i].value =
+				fapr.pr[DPAA2_PR_LAST_ETYPE_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L3_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L3_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L4_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L4_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, L5_OFF_NAME)) {
+			fapr_fields[i].value = fapr.pr[DPAA2_PR_L5_OFF_OFFSET];
+		} else if (!strcmp(fapr_fields[i].name, NEXT_HEADER_OFF_NAME)) {
+			fapr_fields[i].value =
+				fapr.pr[DPAA2_PR_NXTHDR_OFF_OFFSET];
+		}
+	}
+
+	len = sizeof(fapr_fields) / sizeof(struct dpaa2_fapr_field_info);
+	DPAA2_PR_PRINT("Parse Result:\r\n");
+	for (i = 0; i < len; i++) {
+		DPAA2_PR_PRINT("%21s : 0x%02x\r\n",
+			fapr_fields[i].name, fapr_fields[i].value);
+	}
+	dpaa2_print_faf(&fapr);
+}
+
+#endif
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 92e9dd40dc..71b2b4a427 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -25,6 +25,7 @@
 #include "dpaa2_pmd_logs.h"
 #include "dpaa2_ethdev.h"
 #include "base/dpaa2_hw_dpni_annot.h"
+#include "dpaa2_parse_dump.h"
 
 static inline uint32_t __rte_hot
 dpaa2_dev_rx_parse_slow(struct rte_mbuf *mbuf,
@@ -57,6 +58,9 @@ dpaa2_dev_rx_parse_new(struct rte_mbuf *m, const struct qbman_fd *fd,
 	struct dpaa2_annot_hdr *annotation =
 			(struct dpaa2_annot_hdr *)hw_annot_addr;
 
+	if (unlikely(dpaa2_print_parser_result))
+		dpaa2_print_parse_result(annotation);
+
 	m->packet_type = RTE_PTYPE_UNKNOWN;
 	switch (frc) {
 	case DPAA2_PKT_TYPE_ETHER:
@@ -252,6 +256,9 @@ dpaa2_dev_rx_parse(struct rte_mbuf *mbuf, void *hw_annot_addr)
 	else
 		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
+	if (unlikely(dpaa2_print_parser_result))
+		dpaa2_print_parse_result(annotation);
+
 	if (dpaa2_enable_ts[mbuf->port]) {
 		*dpaa2_timestamp_dynfield(mbuf) = annotation->word2;
 		mbuf->ol_flags |= dpaa2_timestamp_rx_dynflag;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 25/42] net/dpaa2: enhancement of raw flow extract
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (23 preceding siblings ...)
  2024-10-23 11:59           ` [v5 24/42] net/dpaa2: dump Rx parser result vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 26/42] net/dpaa2: frame attribute flags parser vanshika.shukla
                             ` (17 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support combining RAW extracts with header extracts.
A RAW extract can start from any absolute offset.
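
Example (illustrative only; the offset and pattern here are
placeholders, not values taken from this patch):
flow create 0 ingress pattern raw offset is 14 pattern is ab / end actions queue index 2 / end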

TBD: relative offset support.
To support a relative offset from a previous L3 protocol item,
the extracts must identify whether the frame is VLAN or non-VLAN.

To support a relative offset from a previous L4 protocol item,
the extracts must identify whether the frame is VLAN/IPv4,
VLAN/IPv6, non-VLAN/IPv4 or non-VLAN/IPv6.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h |  10 +
 drivers/net/dpaa2/dpaa2_flow.c   | 385 ++++++++++++++++++++++++++-----
 2 files changed, 340 insertions(+), 55 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index c864859b3f..8f548467a4 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -292,6 +292,11 @@ struct key_prot_field {
 	uint32_t key_field;
 };
 
+struct dpaa2_raw_region {
+	uint8_t raw_start;
+	uint8_t raw_size;
+};
+
 struct dpaa2_key_profile {
 	uint8_t num;
 	uint8_t key_offset[DPKG_MAX_NUM_OF_EXTRACTS];
@@ -301,6 +306,10 @@ struct dpaa2_key_profile {
 	uint8_t ip_addr_extract_pos;
 	uint8_t ip_addr_extract_off;
 
+	uint8_t raw_extract_pos;
+	uint8_t raw_extract_off;
+	uint8_t raw_extract_num;
+
 	uint8_t l4_src_port_present;
 	uint8_t l4_src_port_pos;
 	uint8_t l4_src_port_offset;
@@ -309,6 +318,7 @@ struct dpaa2_key_profile {
 	uint8_t l4_dst_port_offset;
 	struct key_prot_field prot_field[DPKG_MAX_NUM_OF_EXTRACTS];
 	uint16_t key_max_size;
+	struct dpaa2_raw_region raw_region;
 };
 
 struct dpaa2_key_extract {
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 9e03ad5401..a66edf78bc 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -768,42 +768,272 @@ dpaa2_flow_extract_add_hdr(enum net_prot prot,
 }
 
 static int
-dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
-	int size)
+dpaa2_flow_extract_new_raw(struct dpaa2_dev_priv *priv,
+	int offset, int size,
+	enum dpaa2_flow_dist_type dist_type, int tc_id)
 {
-	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
-	struct dpaa2_key_profile *key_info = &key_extract->key_profile;
-	int last_extract_size, index;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpaa2_key_profile *key_profile;
+	int last_extract_size, index, pos, item_size;
+	uint8_t num_extracts;
+	uint32_t field;
 
-	if (dpkg->num_extracts != 0 && dpkg->extracts[0].type !=
-	    DPKG_EXTRACT_FROM_DATA) {
-		DPAA2_PMD_WARN("RAW extract cannot be combined with others");
-		return -1;
-	}
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	key_profile = &key_extract->key_profile;
+
+	key_profile->raw_region.raw_start = 0;
+	key_profile->raw_region.raw_size = 0;
 
 	last_extract_size = (size % DPAA2_FLOW_MAX_KEY_SIZE);
-	dpkg->num_extracts = (size / DPAA2_FLOW_MAX_KEY_SIZE);
+	num_extracts = (size / DPAA2_FLOW_MAX_KEY_SIZE);
 	if (last_extract_size)
-		dpkg->num_extracts++;
+		num_extracts++;
 	else
 		last_extract_size = DPAA2_FLOW_MAX_KEY_SIZE;
 
-	for (index = 0; index < dpkg->num_extracts; index++) {
-		dpkg->extracts[index].type = DPKG_EXTRACT_FROM_DATA;
-		if (index == dpkg->num_extracts - 1)
-			dpkg->extracts[index].extract.from_data.size =
-				last_extract_size;
+	for (index = 0; index < num_extracts; index++) {
+		if (index == num_extracts - 1)
+			item_size = last_extract_size;
 		else
-			dpkg->extracts[index].extract.from_data.size =
-				DPAA2_FLOW_MAX_KEY_SIZE;
-		dpkg->extracts[index].extract.from_data.offset =
-			DPAA2_FLOW_MAX_KEY_SIZE * index;
+			item_size = DPAA2_FLOW_MAX_KEY_SIZE;
+		field = offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+		field |= item_size;
+
+		pos = dpaa2_flow_key_profile_advance(NET_PROT_PAYLOAD,
+				field, item_size, priv, dist_type,
+				tc_id, NULL);
+		if (pos < 0)
+			return pos;
+
+		dpkg->extracts[pos].type = DPKG_EXTRACT_FROM_DATA;
+		dpkg->extracts[pos].extract.from_data.size = item_size;
+		dpkg->extracts[pos].extract.from_data.offset = offset;
+
+		if (index == 0) {
+			key_profile->raw_extract_pos = pos;
+			key_profile->raw_extract_off =
+				key_profile->key_offset[pos];
+			key_profile->raw_region.raw_start = offset;
+		}
+		key_profile->raw_extract_num++;
+		key_profile->raw_region.raw_size +=
+			key_profile->key_size[pos];
+
+		offset += item_size;
+		dpkg->num_extracts++;
 	}
 
-	key_info->key_max_size = size;
 	return 0;
 }
 
+static int
+dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
+	int offset, int size, enum dpaa2_flow_dist_type dist_type,
+	int tc_id, int *recfg)
+{
+	struct dpaa2_key_profile *key_profile;
+	struct dpaa2_raw_region *raw_region;
+	int end = offset + size, ret = 0, extract_extended, sz_extend;
+	int start_cmp, end_cmp, new_size, index, pos, end_pos;
+	int last_extract_size, item_size, num_extracts, bk_num = 0;
+	struct dpkg_extract extract_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint8_t key_offset_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	uint8_t key_size_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	struct key_prot_field prot_field_bk[DPKG_MAX_NUM_OF_EXTRACTS];
+	struct dpaa2_raw_region raw_hole;
+	struct dpkg_profile_cfg *dpkg;
+	enum net_prot prot;
+	uint32_t field;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+		dpkg = &priv->extract.qos_key_extract.dpkg;
+	} else {
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+		dpkg = &priv->extract.tc_key_extract[tc_id].dpkg;
+	}
+
+	raw_region = &key_profile->raw_region;
+	if (!raw_region->raw_size) {
+		/* New RAW region */
+		ret = dpaa2_flow_extract_new_raw(priv, offset, size,
+			dist_type, tc_id);
+		if (!ret && recfg)
+			(*recfg) |= dist_type;
+
+		return ret;
+	}
+	start_cmp = raw_region->raw_start;
+	end_cmp = raw_region->raw_start + raw_region->raw_size;
+
+	if (offset >= start_cmp && end <= end_cmp)
+		return 0;
+
+	sz_extend = 0;
+	new_size = raw_region->raw_size;
+	if (offset < start_cmp) {
+		sz_extend += start_cmp - offset;
+		new_size += (start_cmp - offset);
+	}
+	if (end > end_cmp) {
+		sz_extend += end - end_cmp;
+		new_size += (end - end_cmp);
+	}
+
+	last_extract_size = (new_size % DPAA2_FLOW_MAX_KEY_SIZE);
+	num_extracts = (new_size / DPAA2_FLOW_MAX_KEY_SIZE);
+	if (last_extract_size)
+		num_extracts++;
+	else
+		last_extract_size = DPAA2_FLOW_MAX_KEY_SIZE;
+
+	if ((key_profile->num + num_extracts -
+		key_profile->raw_extract_num) >=
+		DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("%s Failed to expand raw extracts",
+			__func__);
+		return -EINVAL;
+	}
+
+	if (offset < start_cmp) {
+		raw_hole.raw_start = key_profile->raw_extract_off;
+		raw_hole.raw_size = start_cmp - offset;
+		raw_region->raw_start = offset;
+		raw_region->raw_size += start_cmp - offset;
+
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size);
+			if (ret)
+				return ret;
+		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE) {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size, tc_id);
+			if (ret)
+				return ret;
+		}
+	}
+
+	if (end > end_cmp) {
+		raw_hole.raw_start =
+			key_profile->raw_extract_off +
+			raw_region->raw_size;
+		raw_hole.raw_size = end - end_cmp;
+		raw_region->raw_size += end - end_cmp;
+
+		if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size);
+			if (ret)
+				return ret;
+		}
+		if (dist_type & DPAA2_FLOW_FS_TYPE) {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+					raw_hole.raw_start,
+					raw_hole.raw_size, tc_id);
+			if (ret)
+				return ret;
+		}
+	}
+
+	end_pos = key_profile->raw_extract_pos +
+		key_profile->raw_extract_num;
+	if (key_profile->num > end_pos) {
+		bk_num = key_profile->num - end_pos;
+		memcpy(extract_bk, &dpkg->extracts[end_pos],
+			bk_num * sizeof(struct dpkg_extract));
+		memcpy(key_offset_bk, &key_profile->key_offset[end_pos],
+			bk_num * sizeof(uint8_t));
+		memcpy(key_size_bk, &key_profile->key_size[end_pos],
+			bk_num * sizeof(uint8_t));
+		memcpy(prot_field_bk, &key_profile->prot_field[end_pos],
+			bk_num * sizeof(struct key_prot_field));
+
+		for (index = 0; index < bk_num; index++) {
+			key_offset_bk[index] += sz_extend;
+			prot = prot_field_bk[index].prot;
+			field = prot_field_bk[index].key_field;
+			if (dpaa2_flow_l4_src_port_extract(prot,
+				field)) {
+				key_profile->l4_src_port_present = 1;
+				key_profile->l4_src_port_pos = end_pos + index;
+				key_profile->l4_src_port_offset =
+					key_offset_bk[index];
+			} else if (dpaa2_flow_l4_dst_port_extract(prot,
+				field)) {
+				key_profile->l4_dst_port_present = 1;
+				key_profile->l4_dst_port_pos = end_pos + index;
+				key_profile->l4_dst_port_offset =
+					key_offset_bk[index];
+			}
+		}
+	}
+
+	pos = key_profile->raw_extract_pos;
+
+	for (index = 0; index < num_extracts; index++) {
+		if (index == num_extracts - 1)
+			item_size = last_extract_size;
+		else
+			item_size = DPAA2_FLOW_MAX_KEY_SIZE;
+		field = offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+		field |= item_size;
+
+		if (pos > 0) {
+			key_profile->key_offset[pos] =
+				key_profile->key_offset[pos - 1] +
+				key_profile->key_size[pos - 1];
+		} else {
+			key_profile->key_offset[pos] = 0;
+		}
+		key_profile->key_size[pos] = item_size;
+		key_profile->prot_field[pos].prot = NET_PROT_PAYLOAD;
+		key_profile->prot_field[pos].key_field = field;
+
+		dpkg->extracts[pos].type = DPKG_EXTRACT_FROM_DATA;
+		dpkg->extracts[pos].extract.from_data.size = item_size;
+		dpkg->extracts[pos].extract.from_data.offset = offset;
+		offset += item_size;
+		pos++;
+	}
+
+	if (bk_num) {
+		memcpy(&dpkg->extracts[pos], extract_bk,
+			bk_num * sizeof(struct dpkg_extract));
+		memcpy(&key_profile->key_offset[end_pos],
+			key_offset_bk, bk_num * sizeof(uint8_t));
+		memcpy(&key_profile->key_size[end_pos],
+			key_size_bk, bk_num * sizeof(uint8_t));
+		memcpy(&key_profile->prot_field[end_pos],
+			prot_field_bk, bk_num * sizeof(struct key_prot_field));
+	}
+
+	extract_extended = num_extracts - key_profile->raw_extract_num;
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		key_profile->ip_addr_extract_pos += extract_extended;
+		key_profile->ip_addr_extract_off += sz_extend;
+	}
+	key_profile->raw_extract_num = num_extracts;
+	key_profile->num += extract_extended;
+	key_profile->key_max_size += sz_extend;
+
+	dpkg->num_extracts += extract_extended;
+	if (!ret && recfg)
+		(*recfg) |= dist_type;
+
+	return ret;
+}
+
 static inline int
 dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 	enum net_prot prot, uint32_t key_field)
@@ -843,7 +1073,6 @@ dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
 	int i;
 
 	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
-
 	if (i >= 0)
 		return key_profile->key_offset[i];
 	else
@@ -992,13 +1221,37 @@ dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 }
 
 static inline int
-dpaa2_flow_rule_data_set_raw(struct dpni_rule_cfg *rule,
-			     const void *key, const void *mask, int size)
+dpaa2_flow_raw_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	uint32_t extract_offset, int size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
 {
-	int offset = 0;
+	int extract_size = size > DPAA2_FLOW_MAX_KEY_SIZE ?
+		DPAA2_FLOW_MAX_KEY_SIZE : size;
+	int offset, field;
+
+	field = extract_offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
+	field |= extract_size;
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			NET_PROT_PAYLOAD, field);
+	if (offset < 0) {
+		DPAA2_PMD_ERR("offset(%d)/size(%d) raw extract failed",
+			extract_offset, size);
+		return -EINVAL;
+	}
 
-	memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
-	memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, size);
+		memcpy((flow->qos_mask_addr + offset), mask, size);
+		flow->qos_rule_size = offset + size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, size);
+		memcpy((flow->fs_mask_addr + offset), mask, size);
+		flow->fs_rule_size = offset + size;
+	}
 
 	return 0;
 }
@@ -2233,22 +2486,36 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_raw *spec = pattern->spec;
 	const struct rte_flow_item_raw *mask = pattern->mask;
-	int prev_key_size =
-		priv->extract.qos_key_extract.key_profile.key_max_size;
 	int local_cfg = 0, ret;
 	uint32_t group;
+	struct dpaa2_key_extract *qos_key_extract;
+	struct dpaa2_key_extract *tc_key_extract;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
 		DPAA2_PMD_ERR("spec or mask not present.");
 		return -EINVAL;
 	}
-	/* Only supports non-relative with offset 0 */
-	if (spec->relative || spec->offset != 0 ||
-	    spec->search || spec->limit) {
-		DPAA2_PMD_ERR("relative and non zero offset not supported.");
+
+	if (spec->relative) {
+		/* TBD: relative offset support.
+		 * To support a relative offset from a previous L3 protocol
+		 * item, the extracts must identify whether the frame is
+		 * VLAN or non-VLAN.
+		 *
+		 * To support a relative offset from a previous L4 protocol
+		 * item, the extracts must identify whether the frame is
+		 * VLAN/IPv4, VLAN/IPv6, non-VLAN/IPv4 or non-VLAN/IPv6.
+		 */
+		DPAA2_PMD_ERR("relative not supported.");
+		return -EINVAL;
+	}
+
+	if (spec->search) {
+		DPAA2_PMD_ERR("search not supported.");
 		return -EINVAL;
 	}
+
 	/* Spec len and mask len should be same */
 	if (spec->length != mask->length) {
 		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
@@ -2260,36 +2527,44 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (prev_key_size <= spec->length) {
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
-						 spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FLOW_QOS_TYPE;
+	qos_key_extract = &priv->extract.qos_key_extract;
+	tc_key_extract = &priv->extract.tc_key_extract[group];
 
-		ret = dpaa2_flow_extract_add_raw(&priv->extract.tc_key_extract[group],
-					spec->length);
-		if (ret) {
-			DPAA2_PMD_ERR("FS Extract RAW add failed.");
-			return -1;
-		}
-		local_cfg |= DPAA2_FLOW_FS_TYPE;
+	ret = dpaa2_flow_extract_add_raw(priv,
+			spec->offset, spec->length,
+			DPAA2_FLOW_QOS_TYPE, 0, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS Extract RAW add failed.");
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
-					   mask->pattern, spec->length);
+	ret = dpaa2_flow_extract_add_raw(priv,
+			spec->offset, spec->length,
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret) {
+		DPAA2_PMD_ERR("FS[%d] Extract RAW add failed.",
+			group);
+		return -EINVAL;
+	}
+
+	ret = dpaa2_flow_raw_rule_data_set(flow,
+			&qos_key_extract->key_profile,
+			spec->offset, spec->length,
+			spec->pattern, mask->pattern,
+			DPAA2_FLOW_QOS_TYPE);
 	if (ret) {
 		DPAA2_PMD_ERR("QoS RAW rule data set failed");
-		return -1;
+		return -EINVAL;
 	}
 
-	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
-					   mask->pattern, spec->length);
+	ret = dpaa2_flow_raw_rule_data_set(flow,
+			&tc_key_extract->key_profile,
+			spec->offset, spec->length,
+			spec->pattern, mask->pattern,
+			DPAA2_FLOW_FS_TYPE);
 	if (ret) {
 		DPAA2_PMD_ERR("FS RAW rule data set failed");
-		return -1;
+		return -EINVAL;
 	}
 
 	(*device_configured) |= local_cfg;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 26/42] net/dpaa2: frame attribute flags parser
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (24 preceding siblings ...)
  2024-10-23 11:59           ` [v5 25/42] net/dpaa2: enhancement of raw flow extract vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 27/42] net/dpaa2: add VXLAN distribution support vanshika.shukla
                             ` (16 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Use FAF (frame attribute flags) parser extracts to identify the
protocol type, instead of extracting the next-protocol field of the
previous protocol. The FAF extracts start from offset 2 so that the
user-defined flags, which will be used for soft protocol
distribution, are included.
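
For example, a pattern item given without a spec is now matched by
the protocol-presence FAF bit alone (command is illustrative):
flow create 0 ingress pattern udp / end actions queue index 3 / end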

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 475 +++++++++++++++++++--------------
 1 file changed, 273 insertions(+), 202 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index a66edf78bc..4c80efeff7 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -22,13 +22,6 @@
 #include <dpaa2_ethdev.h>
 #include <dpaa2_pmd_logs.h>
 
-/* Workaround to discriminate the UDP/TCP/SCTP
- * with next protocol of l3.
- * MC/WRIOP are not able to identify
- * the l4 protocol with l4 ports.
- */
-static int mc_l4_port_identification;
-
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
 
@@ -256,6 +249,10 @@ dpaa2_flow_qos_extracts_log(const struct dpaa2_dev_priv *priv)
 			sprintf(string, "raw offset/len: %d/%d",
 				extract->extract.from_data.offset,
 				extract->extract.from_data.size);
+		} else if (type == DPKG_EXTRACT_FROM_PARSE) {
+			sprintf(string, "parse offset/len: %d/%d",
+				extract->extract.from_parse.offset,
+				extract->extract.from_parse.size);
 		}
 		DPAA2_FLOW_DUMP("%s", string);
 		if ((idx + 1) < dpkg->num_extracts)
@@ -294,6 +291,10 @@ dpaa2_flow_fs_extracts_log(const struct dpaa2_dev_priv *priv,
 			sprintf(string, "raw offset/len: %d/%d",
 				extract->extract.from_data.offset,
 				extract->extract.from_data.size);
+		} else if (type == DPKG_EXTRACT_FROM_PARSE) {
+			sprintf(string, "parse offset/len: %d/%d",
+				extract->extract.from_parse.offset,
+				extract->extract.from_parse.size);
 		}
 		DPAA2_FLOW_DUMP("%s", string);
 		if ((idx + 1) < dpkg->num_extracts)
@@ -627,6 +628,66 @@ dpaa2_flow_fs_rule_insert_hole(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_faf_advance(struct dpaa2_dev_priv *priv,
+	int faf_byte, enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off++;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, 1);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, 1, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = 1;
+	key_profile->prot_field[pos].type = DPAA2_FAF_KEY;
+	key_profile->prot_field[pos].key_field = faf_byte;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	key_profile->key_max_size++;
+
+	return pos;
+}
+
 /* Move IPv4/IPv6 addresses to fill new extract previous IP address.
  * Current MC/WRIOP only support generic IP extract but IP address
  * is not fixed, so we have to put them at end of extracts, otherwise,
@@ -688,6 +749,7 @@ dpaa2_flow_key_profile_advance(enum net_prot prot,
 	}
 
 	key_profile->key_size[pos] = field_size;
+	key_profile->prot_field[pos].type = DPAA2_NET_PROT_KEY;
 	key_profile->prot_field[pos].prot = prot;
 	key_profile->prot_field[pos].key_field = field;
 	key_profile->num++;
@@ -711,6 +773,55 @@ dpaa2_flow_key_profile_advance(enum net_prot prot,
 	return pos;
 }
 
+static int
+dpaa2_flow_faf_add_hdr(int faf_byte,
+	struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i, offset;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	pos = dpaa2_flow_faf_advance(priv,
+			faf_byte, dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last pos; must have IP address extract. */
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
+	}
+
+	offset = DPAA2_FAFE_PSR_OFFSET + faf_byte;
+
+	extracts[pos].type = DPKG_EXTRACT_FROM_PARSE;
+	extracts[pos].extract.from_parse.offset = offset;
+	extracts[pos].extract.from_parse.size = 1;
+
+	dpkg->num_extracts++;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_extract_add_hdr(enum net_prot prot,
 	uint32_t field, uint8_t field_size,
@@ -997,6 +1108,7 @@ dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
 			key_profile->key_offset[pos] = 0;
 		}
 		key_profile->key_size[pos] = item_size;
+		key_profile->prot_field[pos].type = DPAA2_NET_PROT_KEY;
 		key_profile->prot_field[pos].prot = NET_PROT_PAYLOAD;
 		key_profile->prot_field[pos].key_field = field;
 
@@ -1036,7 +1148,7 @@ dpaa2_flow_extract_add_raw(struct dpaa2_dev_priv *priv,
 
 static inline int
 dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
-	enum net_prot prot, uint32_t key_field)
+	enum key_prot_type type, enum net_prot prot, uint32_t key_field)
 {
 	int pos;
 	struct key_prot_field *prot_field;
@@ -1049,16 +1161,23 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 
 	prot_field = key_profile->prot_field;
 	for (pos = 0; pos < key_profile->num; pos++) {
-		if (prot_field[pos].prot == prot &&
-			prot_field[pos].key_field == key_field) {
+		if (type == DPAA2_NET_PROT_KEY &&
+			prot_field[pos].prot == prot &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
+			return pos;
+		else if (type == DPAA2_FAF_KEY &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
 			return pos;
-		}
 	}
 
-	if (dpaa2_flow_l4_src_port_extract(prot, key_field)) {
+	if (type == DPAA2_NET_PROT_KEY &&
+		dpaa2_flow_l4_src_port_extract(prot, key_field)) {
 		if (key_profile->l4_src_port_present)
 			return key_profile->l4_src_port_pos;
-	} else if (dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
+	} else if (type == DPAA2_NET_PROT_KEY &&
+		dpaa2_flow_l4_dst_port_extract(prot, key_field)) {
 		if (key_profile->l4_dst_port_present)
 			return key_profile->l4_dst_port_pos;
 	}
@@ -1068,80 +1187,53 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 
 static inline int
 dpaa2_flow_extract_key_offset(struct dpaa2_key_profile *key_profile,
-	enum net_prot prot, uint32_t key_field)
+	enum key_prot_type type, enum net_prot prot, uint32_t key_field)
 {
 	int i;
 
-	i = dpaa2_flow_extract_search(key_profile, prot, key_field);
+	i = dpaa2_flow_extract_search(key_profile, type, prot, key_field);
 	if (i >= 0)
 		return key_profile->key_offset[i];
 	else
 		return i;
 }
 
-struct prev_proto_field_id {
-	enum net_prot prot;
-	union {
-		rte_be16_t eth_type;
-		uint8_t ip_proto;
-	};
-};
-
 static int
-dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
+dpaa2_flow_faf_add_rule(struct dpaa2_dev_priv *priv,
 	struct dpaa2_dev_flow *flow,
-	const struct prev_proto_field_id *prev_proto,
+	enum dpaa2_rx_faf_offset faf_bit_off,
 	int group,
 	enum dpaa2_flow_dist_type dist_type)
 {
 	int offset;
 	uint8_t *key_addr;
 	uint8_t *mask_addr;
-	uint32_t field = 0;
-	rte_be16_t eth_type;
-	uint8_t ip_proto;
 	struct dpaa2_key_extract *key_extract;
 	struct dpaa2_key_profile *key_profile;
+	uint8_t faf_byte = faf_bit_off / 8;
+	uint8_t faf_bit_in_byte = faf_bit_off % 8;
 
-	if (prev_proto->prot == NET_PROT_ETH) {
-		field = NH_FLD_ETH_TYPE;
-	} else if (prev_proto->prot == NET_PROT_IP) {
-		field = NH_FLD_IP_PROTO;
-	} else {
-		DPAA2_PMD_ERR("Prev proto(%d) not support!",
-			prev_proto->prot);
-		return -EINVAL;
-	}
+	faf_bit_in_byte = 7 - faf_bit_in_byte;
 
 	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
 		key_extract = &priv->extract.qos_key_extract;
 		key_profile = &key_extract->key_profile;
 
 		offset = dpaa2_flow_extract_key_offset(key_profile,
-				prev_proto->prot, field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (offset < 0) {
 			DPAA2_PMD_ERR("%s QoS key extract failed", __func__);
 			return -EINVAL;
 		}
 		key_addr = flow->qos_key_addr + offset;
 		mask_addr = flow->qos_mask_addr + offset;
-		if (prev_proto->prot == NET_PROT_ETH) {
-			eth_type = prev_proto->eth_type;
-			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
-			eth_type = 0xffff;
-			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
-			flow->qos_rule_size += sizeof(rte_be16_t);
-		} else if (prev_proto->prot == NET_PROT_IP) {
-			ip_proto = prev_proto->ip_proto;
-			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
-			ip_proto = 0xff;
-			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
-			flow->qos_rule_size += sizeof(uint8_t);
-		} else {
-			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
-				prev_proto->prot);
-			return -EINVAL;
-		}
+
+		if (!(*key_addr) &&
+			key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size++;
+
+		*key_addr |=  (1 << faf_bit_in_byte);
+		*mask_addr |=  (1 << faf_bit_in_byte);
 	}
 
 	if (dist_type & DPAA2_FLOW_FS_TYPE) {
@@ -1149,7 +1241,7 @@ dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
 		key_profile = &key_extract->key_profile;
 
 		offset = dpaa2_flow_extract_key_offset(key_profile,
-				prev_proto->prot, field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (offset < 0) {
 			DPAA2_PMD_ERR("%s TC[%d] key extract failed",
 				__func__, group);
@@ -1158,23 +1250,12 @@ dpaa2_flow_prev_proto_rule(struct dpaa2_dev_priv *priv,
 		key_addr = flow->fs_key_addr + offset;
 		mask_addr = flow->fs_mask_addr + offset;
 
-		if (prev_proto->prot == NET_PROT_ETH) {
-			eth_type = prev_proto->eth_type;
-			memcpy(key_addr, &eth_type, sizeof(rte_be16_t));
-			eth_type = 0xffff;
-			memcpy(mask_addr, &eth_type, sizeof(rte_be16_t));
-			flow->fs_rule_size += sizeof(rte_be16_t);
-		} else if (prev_proto->prot == NET_PROT_IP) {
-			ip_proto = prev_proto->ip_proto;
-			memcpy(key_addr, &ip_proto, sizeof(uint8_t));
-			ip_proto = 0xff;
-			memcpy(mask_addr, &ip_proto, sizeof(uint8_t));
-			flow->fs_rule_size += sizeof(uint8_t);
-		} else {
-			DPAA2_PMD_ERR("Invalid Prev proto(%d)",
-				prev_proto->prot);
-			return -EINVAL;
-		}
+		if (!(*key_addr) &&
+			key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size++;
+
+		*key_addr |=  (1 << faf_bit_in_byte);
+		*mask_addr |=  (1 << faf_bit_in_byte);
 	}
 
 	return 0;
@@ -1196,7 +1277,7 @@ dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 	}
 
 	offset = dpaa2_flow_extract_key_offset(key_profile,
-			prot, field);
+			DPAA2_NET_PROT_KEY, prot, field);
 	if (offset < 0) {
 		DPAA2_PMD_ERR("P(%d)/F(%d) does not exist!",
 			prot, field);
@@ -1234,7 +1315,7 @@ dpaa2_flow_raw_rule_data_set(struct dpaa2_dev_flow *flow,
 	field = extract_offset << DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT;
 	field |= extract_size;
 	offset = dpaa2_flow_extract_key_offset(key_profile,
-			NET_PROT_PAYLOAD, field);
+			DPAA2_NET_PROT_KEY, NET_PROT_PAYLOAD, field);
 	if (offset < 0) {
 		DPAA2_PMD_ERR("offset(%d)/size(%d) raw extract failed",
 			extract_offset, size);
@@ -1317,60 +1398,39 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 }
 
 static int
-dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
+dpaa2_flow_identify_by_faf(struct dpaa2_dev_priv *priv,
 	struct dpaa2_dev_flow *flow,
-	const struct prev_proto_field_id *prev_prot,
+	enum dpaa2_rx_faf_offset faf_off,
 	enum dpaa2_flow_dist_type dist_type,
 	int group, int *recfg)
 {
-	int ret, index, local_cfg = 0, size = 0;
+	int ret, index, local_cfg = 0;
 	struct dpaa2_key_extract *extract;
 	struct dpaa2_key_profile *key_profile;
-	enum net_prot prot = prev_prot->prot;
-	uint32_t key_field = 0;
-
-	if (prot == NET_PROT_ETH) {
-		key_field = NH_FLD_ETH_TYPE;
-		size = sizeof(rte_be16_t);
-	} else if (prot == NET_PROT_IP) {
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else if (prot == NET_PROT_IPV4) {
-		prot = NET_PROT_IP;
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else if (prot == NET_PROT_IPV6) {
-		prot = NET_PROT_IP;
-		key_field = NH_FLD_IP_PROTO;
-		size = sizeof(uint8_t);
-	} else {
-		DPAA2_PMD_ERR("Invalid Prev prot(%d)", prot);
-		return -EINVAL;
-	}
+	uint8_t faf_byte = faf_off / 8;
 
 	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
 		extract = &priv->extract.qos_key_extract;
 		key_profile = &extract->key_profile;
 
 		index = dpaa2_flow_extract_search(key_profile,
-				prot, key_field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add_hdr(prot,
-					key_field, size, priv,
-					DPAA2_FLOW_QOS_TYPE, group,
+			ret = dpaa2_flow_faf_add_hdr(faf_byte,
+					priv, DPAA2_FLOW_QOS_TYPE, group,
 					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("QOS prev extract add failed");
+				DPAA2_PMD_ERR("QOS faf extract add failed");
 
 				return -EINVAL;
 			}
 			local_cfg |= DPAA2_FLOW_QOS_TYPE;
 		}
 
-		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+		ret = dpaa2_flow_faf_add_rule(priv, flow, faf_off, group,
 				DPAA2_FLOW_QOS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("QoS prev rule set failed");
+			DPAA2_PMD_ERR("QoS faf rule set failed");
 			return -EINVAL;
 		}
 	}
@@ -1380,14 +1440,13 @@ dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
 		key_profile = &extract->key_profile;
 
 		index = dpaa2_flow_extract_search(key_profile,
-				prot, key_field);
+				DPAA2_FAF_KEY, NET_PROT_NONE, faf_byte);
 		if (index < 0) {
-			ret = dpaa2_flow_extract_add_hdr(prot,
-					key_field, size, priv,
-					DPAA2_FLOW_FS_TYPE, group,
+			ret = dpaa2_flow_faf_add_hdr(faf_byte,
+					priv, DPAA2_FLOW_FS_TYPE, group,
 					NULL);
 			if (ret) {
-				DPAA2_PMD_ERR("FS[%d] prev extract add failed",
+				DPAA2_PMD_ERR("FS[%d] faf extract add failed",
 					group);
 
 				return -EINVAL;
@@ -1395,17 +1454,17 @@ dpaa2_flow_identify_by_prev_prot(struct dpaa2_dev_priv *priv,
 			local_cfg |= DPAA2_FLOW_FS_TYPE;
 		}
 
-		ret = dpaa2_flow_prev_proto_rule(priv, flow, prev_prot, group,
+		ret = dpaa2_flow_faf_add_rule(priv, flow, faf_off, group,
 				DPAA2_FLOW_FS_TYPE);
 		if (ret) {
-			DPAA2_PMD_ERR("FS[%d] prev rule set failed",
+			DPAA2_PMD_ERR("FS[%d] faf rule set failed",
 				group);
 			return -EINVAL;
 		}
 	}
 
 	if (recfg)
-		*recfg = local_cfg;
+		*recfg |= local_cfg;
 
 	return 0;
 }
@@ -1432,7 +1491,7 @@ dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
 	key_profile = &key_extract->key_profile;
 
 	index = dpaa2_flow_extract_search(key_profile,
-			prot, field);
+			DPAA2_NET_PROT_KEY, prot, field);
 	if (index < 0) {
 		ret = dpaa2_flow_extract_add_hdr(prot,
 				field, size, priv,
@@ -1571,6 +1630,7 @@ dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
 		key_profile->key_max_size += NH_FLD_IPV6_ADDR_SIZE;
 	}
 	key_profile->num++;
+	key_profile->prot_field[num].type = DPAA2_NET_PROT_KEY;
 
 	dpkg->extracts[num].extract.from_hdr.prot = prot;
 	dpkg->extracts[num].extract.from_hdr.field = field;
@@ -1681,15 +1741,28 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 	spec = pattern->spec;
 	mask = pattern->mask ?
 			pattern->mask : &dpaa2_flow_item_eth_mask;
-	if (!spec) {
-		DPAA2_PMD_WARN("No pattern spec for Eth flow");
-		return -EINVAL;
-	}
 
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ETH_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ETH_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
 		RTE_FLOW_ITEM_TYPE_ETH)) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
@@ -1778,15 +1851,18 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_ETH;
-		prev_proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
-		ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_proto,
-				DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-				group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
 		if (ret)
 			return ret;
+
 		(*device_configured) |= local_cfg;
 		return 0;
 	}
@@ -1833,7 +1909,6 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	const void *key, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int size;
-	struct prev_proto_field_id prev_prot;
 
 	group = attr->group;
 
@@ -1846,19 +1921,21 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	prev_prot.prot = NET_PROT_ETH;
-	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
+					 DPAA2_FLOW_QOS_TYPE, group,
+					 &local_cfg);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE, group,
-			&local_cfg);
-	if (ret) {
-		DPAA2_PMD_ERR("IPv4 identification failed!");
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
+					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret)
 		return ret;
-	}
 
-	if (!spec_ipv4)
+	if (!spec_ipv4) {
+		(*device_configured) |= local_cfg;
 		return 0;
+	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
 				       RTE_FLOW_ITEM_TYPE_IPV4)) {
@@ -1950,7 +2027,6 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
 	int size;
-	struct prev_proto_field_id prev_prot;
 
 	group = attr->group;
 
@@ -1962,19 +2038,21 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	prev_prot.prot = NET_PROT_ETH;
-	prev_prot.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
+					 DPAA2_FLOW_QOS_TYPE, group,
+					 &local_cfg);
+	if (ret)
+		return ret;
 
-	ret = dpaa2_flow_identify_by_prev_prot(priv, flow, &prev_prot,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
-	if (ret) {
-		DPAA2_PMD_ERR("IPv6 identification failed!");
+	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
+					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+	if (ret)
 		return ret;
-	}
 
-	if (!spec_ipv6)
+	if (!spec_ipv6) {
+		(*device_configured) |= local_cfg;
 		return 0;
+	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
 				       RTE_FLOW_ITEM_TYPE_IPV6)) {
@@ -2078,18 +2156,15 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		/* Next proto of Generical IP is actually used
-		 * for ICMP identification.
-		 * Example: flow create 0 ingress pattern icmp
-		 */
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ICMP_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_ICMP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_ICMP_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
 		if (ret)
 			return ret;
 
@@ -2166,22 +2241,21 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_UDP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_UDP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_UDP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2253,22 +2327,21 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_TCP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_TCP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_TCP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2340,22 +2413,21 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
-	if (!spec || !mc_l4_port_identification) {
-		struct prev_proto_field_id prev_proto;
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_SCTP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_SCTP;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_SCTP_FRAM, DPAA2_FLOW_FS_TYPE,
 			group, &local_cfg);
-		if (ret)
-			return ret;
+	if (ret)
+		return ret;
 
+	if (!spec) {
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
@@ -2428,21 +2500,20 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	flow->tc_index = attr->priority;
 
 	if (!spec) {
-		struct prev_proto_field_id prev_proto;
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GRE_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
 
-		prev_proto.prot = NET_PROT_IP;
-		prev_proto.ip_proto = IPPROTO_GRE;
-		ret = dpaa2_flow_identify_by_prev_prot(priv,
-			flow, &prev_proto,
-			DPAA2_FLOW_QOS_TYPE | DPAA2_FLOW_FS_TYPE,
-			group, &local_cfg);
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GRE_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
 		if (ret)
 			return ret;
 
 		(*device_configured) |= local_cfg;
-
-		if (!spec)
-			return 0;
+		return 0;
 	}
 
 	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 27/42] net/dpaa2: add VXLAN distribution support
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (25 preceding siblings ...)
  2024-10-23 11:59           ` [v5 26/42] net/dpaa2: frame attribute flags parser vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 28/42] net/dpaa2: protocol inside tunnel distribution vanshika.shukla
                             ` (15 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Extract fields from the VXLAN header for distribution.
The VXLAN header is stored by the soft parser code in the
soft parser context, located at offset 43 of the parser results:

<assign-variable name="$softparsectx[0:3]" value="vxlan.vnid"/>

The VXLAN protocol is identified by the VXLAN bit of the frame
attribute flags. Parser-result extracts are added to support this
functionality.

Example:
flow create 0 ingress pattern vxlan / end actions pf / queue index 4 / end
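
Matching on the VNI uses the parser-result extract added here, for
example (the VNI value is illustrative):
flow create 0 ingress pattern vxlan vni is 42 / end actions queue index 4 / end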

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h |   6 +-
 drivers/net/dpaa2/dpaa2_flow.c   | 313 +++++++++++++++++++++++++++++++
 2 files changed, 318 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 8f548467a4..aeddcfdfa9 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -282,8 +282,12 @@ enum ip_addr_extract_type {
 };
 
 enum key_prot_type {
+	/* HW extracts from standard protocol fields */
 	DPAA2_NET_PROT_KEY,
-	DPAA2_FAF_KEY
+	/* HW extracts from the FAF area of the parse results */
+	DPAA2_FAF_KEY,
+	/* HW extracts from parse results other than the FAF area */
+	DPAA2_PR_KEY
 };
 
 struct key_prot_field {
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 4c80efeff7..3530417a29 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -38,6 +38,8 @@ enum dpaa2_flow_dist_type {
 #define DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT	16
 #define DPAA2_FLOW_MAX_KEY_SIZE			16
 
+#define VXLAN_HF_VNI 0x08
+
 struct dpaa2_dev_flow {
 	LIST_ENTRY(dpaa2_dev_flow) next;
 	struct dpni_rule_cfg qos_rule;
@@ -140,6 +142,11 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
+
+static const struct rte_flow_item_vxlan dpaa2_flow_item_vxlan_mask = {
+	.flags = 0xff,
+	.vni = "\xff\xff\xff",
+};
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -688,6 +695,68 @@ dpaa2_flow_faf_advance(struct dpaa2_dev_priv *priv,
 	return pos;
 }
 
+static int
+dpaa2_flow_pr_advance(struct dpaa2_dev_priv *priv,
+	uint32_t pr_offset, uint32_t pr_size,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int offset, ret;
+	struct dpaa2_key_profile *key_profile;
+	int num, pos;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_profile = &priv->extract.qos_key_extract.key_profile;
+	else
+		key_profile = &priv->extract.tc_key_extract[tc_id].key_profile;
+
+	num = key_profile->num;
+
+	if (num >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	if (key_profile->ip_addr_type != IP_NONE_ADDR_EXTRACT) {
+		offset = key_profile->ip_addr_extract_off;
+		pos = key_profile->ip_addr_extract_pos;
+		key_profile->ip_addr_extract_pos++;
+		key_profile->ip_addr_extract_off += pr_size;
+		if (dist_type == DPAA2_FLOW_QOS_TYPE) {
+			ret = dpaa2_flow_qos_rule_insert_hole(priv,
+					offset, pr_size);
+		} else {
+			ret = dpaa2_flow_fs_rule_insert_hole(priv,
+				offset, pr_size, tc_id);
+		}
+		if (ret)
+			return ret;
+	} else {
+		pos = num;
+	}
+
+	if (pos > 0) {
+		key_profile->key_offset[pos] =
+			key_profile->key_offset[pos - 1] +
+			key_profile->key_size[pos - 1];
+	} else {
+		key_profile->key_offset[pos] = 0;
+	}
+
+	key_profile->key_size[pos] = pr_size;
+	key_profile->prot_field[pos].type = DPAA2_PR_KEY;
+	key_profile->prot_field[pos].key_field =
+		(pr_offset << 16) | pr_size;
+	key_profile->num++;
+
+	if (insert_offset)
+		*insert_offset = key_profile->key_offset[pos];
+
+	key_profile->key_max_size += pr_size;
+
+	return pos;
+}
+
 /* Move IPv4/IPv6 addresses to fill new extract previous IP address.
  * Current MC/WRIOP only support generic IP extract but IP address
  * is not fixed, so we have to put them at end of extracts, otherwise,
@@ -822,6 +891,59 @@ dpaa2_flow_faf_add_hdr(int faf_byte,
 	return 0;
 }
 
+static int
+dpaa2_flow_pr_add_hdr(uint32_t pr_offset,
+	uint32_t pr_size, struct dpaa2_dev_priv *priv,
+	enum dpaa2_flow_dist_type dist_type, int tc_id,
+	int *insert_offset)
+{
+	int pos, i;
+	struct dpaa2_key_extract *key_extract;
+	struct dpkg_profile_cfg *dpkg;
+	struct dpkg_extract *extracts;
+
+	if ((pr_offset + pr_size) > DPAA2_FAPR_SIZE) {
+		DPAA2_PMD_ERR("PR extracts(%d:%d) overflow",
+			pr_offset, pr_size);
+		return -EINVAL;
+	}
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	dpkg = &key_extract->dpkg;
+	extracts = dpkg->extracts;
+
+	if (dpkg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+		DPAA2_PMD_ERR("Number of extracts overflows");
+		return -EINVAL;
+	}
+
+	pos = dpaa2_flow_pr_advance(priv,
+			pr_offset, pr_size, dist_type, tc_id,
+			insert_offset);
+	if (pos < 0)
+		return pos;
+
+	if (pos != dpkg->num_extracts) {
+		/* Not the last pos; must have IP address extract. */
+		for (i = dpkg->num_extracts - 1; i >= pos; i--) {
+			memcpy(&extracts[i + 1],
+				&extracts[i], sizeof(struct dpkg_extract));
+		}
+	}
+
+	extracts[pos].type = DPKG_EXTRACT_FROM_PARSE;
+	extracts[pos].extract.from_parse.offset = pr_offset;
+	extracts[pos].extract.from_parse.size = pr_size;
+
+	dpkg->num_extracts++;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_extract_add_hdr(enum net_prot prot,
 	uint32_t field, uint8_t field_size,
@@ -1170,6 +1292,10 @@ dpaa2_flow_extract_search(struct dpaa2_key_profile *key_profile,
 			prot_field[pos].key_field == key_field &&
 			prot_field[pos].type == type)
 			return pos;
+		else if (type == DPAA2_PR_KEY &&
+			prot_field[pos].key_field == key_field &&
+			prot_field[pos].type == type)
+			return pos;
 	}
 
 	if (type == DPAA2_NET_PROT_KEY &&
@@ -1261,6 +1387,41 @@ dpaa2_flow_faf_add_rule(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static inline int
+dpaa2_flow_pr_rule_data_set(struct dpaa2_dev_flow *flow,
+	struct dpaa2_key_profile *key_profile,
+	uint32_t pr_offset, uint32_t pr_size,
+	const void *key, const void *mask,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int offset;
+	uint32_t pr_field = pr_offset << 16 | pr_size;
+
+	offset = dpaa2_flow_extract_key_offset(key_profile,
+			DPAA2_PR_KEY, NET_PROT_NONE, pr_field);
+	if (offset < 0) {
+		DPAA2_PMD_ERR("PR off(%d)/size(%d) does not exist!",
+			pr_offset, pr_size);
+		return -EINVAL;
+	}
+
+	if (dist_type & DPAA2_FLOW_QOS_TYPE) {
+		memcpy((flow->qos_key_addr + offset), key, pr_size);
+		memcpy((flow->qos_mask_addr + offset), mask, pr_size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->qos_rule_size = offset + pr_size;
+	}
+
+	if (dist_type & DPAA2_FLOW_FS_TYPE) {
+		memcpy((flow->fs_key_addr + offset), key, pr_size);
+		memcpy((flow->fs_mask_addr + offset), mask, pr_size);
+		if (key_profile->ip_addr_type == IP_NONE_ADDR_EXTRACT)
+			flow->fs_rule_size = offset + pr_size;
+	}
+
+	return 0;
+}
+
 static inline int
 dpaa2_flow_hdr_rule_data_set(struct dpaa2_dev_flow *flow,
 	struct dpaa2_key_profile *key_profile,
@@ -1382,6 +1543,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_gre_mask;
 		size = sizeof(struct rte_flow_item_gre);
 		break;
+	case RTE_FLOW_ITEM_TYPE_VXLAN:
+		mask_support = (const char *)&dpaa2_flow_item_vxlan_mask;
+		size = sizeof(struct rte_flow_item_vxlan);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -1469,6 +1634,55 @@ dpaa2_flow_identify_by_faf(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_add_pr_extract_rule(struct dpaa2_dev_flow *flow,
+	uint32_t pr_offset, uint32_t pr_size,
+	const void *key, const void *mask,
+	struct dpaa2_dev_priv *priv, int tc_id, int *recfg,
+	enum dpaa2_flow_dist_type dist_type)
+{
+	int index, ret, local_cfg = 0;
+	struct dpaa2_key_extract *key_extract;
+	struct dpaa2_key_profile *key_profile;
+	uint32_t pr_field = pr_offset << 16 | pr_size;
+
+	if (dist_type == DPAA2_FLOW_QOS_TYPE)
+		key_extract = &priv->extract.qos_key_extract;
+	else
+		key_extract = &priv->extract.tc_key_extract[tc_id];
+
+	key_profile = &key_extract->key_profile;
+
+	index = dpaa2_flow_extract_search(key_profile,
+			DPAA2_PR_KEY, NET_PROT_NONE, pr_field);
+	if (index < 0) {
+		ret = dpaa2_flow_pr_add_hdr(pr_offset,
+				pr_size, priv,
+				dist_type, tc_id, NULL);
+		if (ret) {
+			DPAA2_PMD_ERR("PR add off(%d)/size(%d) failed",
+				pr_offset, pr_size);
+
+			return ret;
+		}
+		local_cfg |= dist_type;
+	}
+
+	ret = dpaa2_flow_pr_rule_data_set(flow, key_profile,
+			pr_offset, pr_size, key, mask, dist_type);
+	if (ret) {
+		DPAA2_PMD_ERR("PR off(%d)/size(%d) rule data set failed",
+			pr_offset, pr_size);
+
+		return ret;
+	}
+
+	if (recfg)
+		*recfg |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_flow_add_hdr_extract_rule(struct dpaa2_dev_flow *flow,
 	enum net_prot prot, uint32_t field,
@@ -2545,6 +2759,90 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item *pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_vxlan *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_vxlan_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_VXLAN_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_VXLAN_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VXLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of VXLAN not support.");
+
+		return -1;
+	}
+
+	if (mask->flags) {
+		if (spec->flags != VXLAN_HF_VNI) {
+			DPAA2_PMD_ERR("vxlan flag(0x%02x) must be 0x%02x.",
+				spec->flags, VXLAN_HF_VNI);
+			return -EINVAL;
+		}
+		if (mask->flags != 0xff) {
+			DPAA2_PMD_ERR("Not support to extract vxlan flag.");
+			return -EINVAL;
+		}
+	}
+
+	if (mask->vni[0] || mask->vni[1] || mask->vni[2]) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_VNI_OFFSET,
+			sizeof(mask->vni), spec->vni,
+			mask->vni,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_VNI_OFFSET,
+			sizeof(mask->vni), spec->vni,
+			mask->vni,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -2760,6 +3058,9 @@ dpaa2_flow_verify_action(struct dpaa2_dev_priv *priv,
 				}
 			}
 
+			break;
+		case RTE_FLOW_ACTION_TYPE_PF:
+			/* Skip this action; it must be accepted for VXLAN flows. */
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
@@ -3110,6 +3411,15 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				return ret;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+			ret = dpaa2_configure_flow_vxlan(flow,
+					dev, attr, &pattern[i], actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("VXLAN flow config failed!");
+				return ret;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow,
 					dev, attr, &pattern[i],
@@ -3222,6 +3532,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			if (ret)
 				return ret;
 
+			break;
+		case RTE_FLOW_ACTION_TYPE_PF:
+			/* Skip this action; it must be accepted for VXLAN flows. */
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
 			end_of_list = 1;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 28/42] net/dpaa2: protocol inside tunnel distribution
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (26 preceding siblings ...)
  2024-10-23 11:59           ` [v5 27/42] net/dpaa2: add VXLAN distribution support vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 29/42] net/dpaa2: eCPRI support by parser result vanshika.shukla
                             ` (14 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Control flows by protocols inside a tunnel.
The tunnel flow items applied by the application are ordered from
outer to inner; the inner items start after the tunnel item,
such as vxlan, GRE etc.

For example:
flow create 0 ingress pattern ipv4 / vxlan / ipv6 / end
	actions pf / queue index 2 / end

The items following the tunnel item are tagged as "inner".
The inner items are extracted from the parser results, which are set
by the soft parser.
So far only the vxlan tunnel is supported. Limited by the soft parser
area, only the ethernet header and vlan header inside the tunnel can
be used for flow distribution. IPv4, IPv6, UDP and TCP inside the
tunnel can be detected through user-defined FAF bits set by the soft
parser for flow distribution.
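
As a further example, distributing on the inner ethernet destination
address could look like the following rule (the address value is
illustrative only):
flow create 0 ingress pattern ipv4 / vxlan / eth dst is 02:00:00:00:00:01 / end
	actions pf / queue index 1 / end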

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 587 +++++++++++++++++++++++++++++----
 1 file changed, 519 insertions(+), 68 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 3530417a29..d02859fea7 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -58,6 +58,11 @@ struct dpaa2_dev_flow {
 	struct dpni_fs_action_cfg fs_action_cfg;
 };
 
+struct rte_dpaa2_flow_item {
+	struct rte_flow_item generic_item;
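+	/* Set for items that follow a tunnel item such as vxlan. */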
+	int in_tunnel;
+};
+
 static const
 enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_END,
@@ -1935,10 +1940,203 @@ dpaa2_flow_add_ipaddr_extract_rule(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+dpaa2_configure_flow_tunnel_eth(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_eth *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+			pattern->mask : &dpaa2_flow_item_eth_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec)
+		return 0;
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH)) {
+		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
+
+		return -EINVAL;
+	}
+
+	if (memcmp((const char *)&mask->src,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		/*SRC[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR0_OFFSET,
+			1, &spec->src.addr_bytes[0],
+			&mask->src.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[1:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR1_OFFSET,
+			2, &spec->src.addr_bytes[1],
+			&mask->src.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[3:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR3_OFFSET,
+			1, &spec->src.addr_bytes[3],
+			&mask->src.addr_bytes[3],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[4:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR4_OFFSET,
+			2, &spec->src.addr_bytes[4],
+			&mask->src.addr_bytes[4],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		/*SRC[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR0_OFFSET,
+			1, &spec->src.addr_bytes[0],
+			&mask->src.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[1:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR1_OFFSET,
+			2, &spec->src.addr_bytes[1],
+			&mask->src.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[3:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR3_OFFSET,
+			1, &spec->src.addr_bytes[3],
+			&mask->src.addr_bytes[3],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*SRC[4:2]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_SADDR4_OFFSET,
+			2, &spec->src.addr_bytes[4],
+			&mask->src.addr_bytes[4],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->dst,
+		zero_cmp, RTE_ETHER_ADDR_LEN)) {
+		/*DST[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR0_OFFSET,
+			1, &spec->dst.addr_bytes[0],
+			&mask->dst.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[1:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR1_OFFSET,
+			1, &spec->dst.addr_bytes[1],
+			&mask->dst.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[2:3]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR2_OFFSET,
+			3, &spec->dst.addr_bytes[2],
+			&mask->dst.addr_bytes[2],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[5:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR5_OFFSET,
+			1, &spec->dst.addr_bytes[5],
+			&mask->dst.addr_bytes[5],
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		/*DST[0:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR0_OFFSET,
+			1, &spec->dst.addr_bytes[0],
+			&mask->dst.addr_bytes[0],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[1:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR1_OFFSET,
+			1, &spec->dst.addr_bytes[1],
+			&mask->dst.addr_bytes[1],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[2:3]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR2_OFFSET,
+			3, &spec->dst.addr_bytes[2],
+			&mask->dst.addr_bytes[2],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+		/*DST[5:1]*/
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_DADDR5_OFFSET,
+			1, &spec->dst.addr_bytes[5],
+			&mask->dst.addr_bytes[5],
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (memcmp((const char *)&mask->type,
+		zero_cmp, sizeof(rte_be16_t))) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TYPE_OFFSET,
+			sizeof(rte_be16_t), &spec->type, &mask->type,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TYPE_OFFSET,
+			sizeof(rte_be16_t), &spec->type, &mask->type,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -1948,6 +2146,13 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 	const struct rte_flow_item_eth *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	if (dpaa2_pattern->in_tunnel) {
+		return dpaa2_configure_flow_tunnel_eth(flow,
+				dev, attr, pattern, device_configured);
+	}
 
 	group = attr->group;
 
@@ -2041,10 +2246,81 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
+dpaa2_configure_flow_tunnel_vlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
 	const struct rte_flow_item *pattern,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_vlan *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_vlan_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAFE_VXLAN_IN_VLAN_FRAM,
+				DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAFE_VXLAN_IN_VLAN_FRAM,
+				DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VLAN)) {
+		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
+
+		return -EINVAL;
+	}
+
+	if (!mask->tci)
+		return 0;
+
+	ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TCI_OFFSET,
+			sizeof(rte_be16_t), &spec->tci, &mask->tci,
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_add_pr_extract_rule(flow,
+			DPAA2_VXLAN_IN_TCI_OFFSET,
+			sizeof(rte_be16_t), &spec->tci, &mask->tci,
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2053,6 +2329,13 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_vlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	if (dpaa2_pattern->in_tunnel) {
+		return dpaa2_configure_flow_tunnel_vlan(flow,
+				dev, attr, pattern, device_configured);
+	}
 
 	group = attr->group;
 
@@ -2112,7 +2395,7 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 static int
 dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
+			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
 			  const struct rte_flow_action actions[] __rte_unused,
 			  struct rte_flow_error *error __rte_unused,
 			  int *device_configured)
@@ -2123,6 +2406,7 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	const void *key, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int size;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2131,6 +2415,26 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	mask_ipv4 = pattern->mask ?
 		    pattern->mask : &dpaa2_flow_item_ipv4_mask;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec_ipv4) {
+			DPAA2_PMD_ERR("Tunnel-IPv4 distribution not support");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV4_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV4_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	/* Get traffic class index and flow id to be configured */
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
@@ -2229,7 +2533,7 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 static int
 dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 			  const struct rte_flow_attr *attr,
-			  const struct rte_flow_item *pattern,
+			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
 			  const struct rte_flow_action actions[] __rte_unused,
 			  struct rte_flow_error *error __rte_unused,
 			  int *device_configured)
@@ -2241,6 +2545,7 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
 	int size;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2252,6 +2557,26 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec_ipv6) {
+			DPAA2_PMD_ERR("Tunnel-IPv6 distribution not support");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV6_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_IPV6_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
 					 DPAA2_FLOW_QOS_TYPE, group,
 					 &local_cfg);
@@ -2348,7 +2673,7 @@ static int
 dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2357,6 +2682,7 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_icmp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2369,6 +2695,11 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ICMP distribution not support");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_ICMP_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2434,7 +2765,7 @@ static int
 dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2443,6 +2774,7 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_udp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2455,6 +2787,26 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec) {
+			DPAA2_PMD_ERR("Tunnel-UDP distribution not support");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_UDP_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_UDP_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_UDP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2520,7 +2872,7 @@ static int
 dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2529,6 +2881,7 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_tcp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2541,6 +2894,26 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		if (spec) {
+			DPAA2_PMD_ERR("Tunnel-TCP distribution not support");
+			return -ENOTSUP;
+		}
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_TCP_FRAM,
+						 DPAA2_FLOW_QOS_TYPE, group,
+						 &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+						 FAFE_VXLAN_IN_TCP_FRAM,
+						 DPAA2_FLOW_FS_TYPE, group,
+						 &local_cfg);
+		return ret;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_TCP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2606,7 +2979,7 @@ static int
 dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2615,6 +2988,7 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_sctp *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2627,6 +3001,11 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-SCTP distribution not support");
+		return -ENOTSUP;
+	}
+
 	ret = dpaa2_flow_identify_by_faf(priv, flow,
 			FAF_SCTP_FRAM, DPAA2_FLOW_QOS_TYPE,
 			group, &local_cfg);
@@ -2692,7 +3071,7 @@ static int
 dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2701,6 +3080,7 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_gre *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2713,6 +3093,11 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-GRE distribution not support");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_GRE_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2763,7 +3148,7 @@ static int
 dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
@@ -2772,6 +3157,7 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	uint32_t group;
 	const struct rte_flow_item_vxlan *spec, *mask;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
 
 	group = attr->group;
 
@@ -2784,6 +3170,11 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	flow->tc_id = group;
 	flow->tc_index = attr->priority;
 
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-VXLAN distribution not support");
+		return -ENOTSUP;
+	}
+
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
 				FAF_VXLAN_FRAM, DPAA2_FLOW_QOS_TYPE,
@@ -2847,18 +3238,19 @@ static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
 	const struct rte_flow_attr *attr,
-	const struct rte_flow_item *pattern,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
 	const struct rte_flow_action actions[] __rte_unused,
 	struct rte_flow_error *error __rte_unused,
 	int *device_configured)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	const struct rte_flow_item_raw *spec = pattern->spec;
-	const struct rte_flow_item_raw *mask = pattern->mask;
 	int local_cfg = 0, ret;
 	uint32_t group;
 	struct dpaa2_key_extract *qos_key_extract;
 	struct dpaa2_key_extract *tc_key_extract;
+	const struct rte_flow_item *pattern = &dpaa2_pattern->generic_item;
+	const struct rte_flow_item_raw *spec = pattern->spec;
+	const struct rte_flow_item_raw *mask = pattern->mask;
 
 	/* Need both spec and mask */
 	if (!spec || !mask) {
@@ -3302,6 +3694,45 @@ dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
 	return 0;
 }
 
+static int
+dpaa2_flow_item_convert(const struct rte_flow_item pattern[],
+			struct rte_dpaa2_flow_item **dpaa2_pattern)
+{
+	struct rte_dpaa2_flow_item *new_pattern;
+	int num = 0, tunnel_start = 0;
+
+	/* Count the items preceding the END terminator. */
+	while (pattern[num].type != RTE_FLOW_ITEM_TYPE_END)
+		num++;
+
+	/* Allocate one extra entry for the END terminator written below. */
+	new_pattern = rte_malloc(NULL,
+			sizeof(struct rte_dpaa2_flow_item) * (num + 1),
+			RTE_CACHE_LINE_SIZE);
+	if (!new_pattern) {
+		DPAA2_PMD_ERR("Failed to alloc %d flow items", num);
+		return -ENOMEM;
+	}
+
+	num = 0;
+	while (pattern[num].type != RTE_FLOW_ITEM_TYPE_END) {
+		memcpy(&new_pattern[num].generic_item, &pattern[num],
+		       sizeof(struct rte_flow_item));
+		new_pattern[num].in_tunnel = 0;
+
+		if (pattern[num].type == RTE_FLOW_ITEM_TYPE_VXLAN)
+			tunnel_start = 1;
+		else if (tunnel_start)
+			new_pattern[num].in_tunnel = 1;
+		num++;
+	}
+
+	new_pattern[num].generic_item.type = RTE_FLOW_ITEM_TYPE_END;
+	*dpaa2_pattern = new_pattern;
+
+	return 0;
+}
+
 static int
 dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3318,6 +3749,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	uint16_t dist_size, key_size;
 	struct dpaa2_key_extract *qos_key_extract;
 	struct dpaa2_key_extract *tc_key_extract;
+	struct rte_dpaa2_flow_item *dpaa2_pattern = NULL;
 
 	ret = dpaa2_flow_verify_attr(priv, attr);
 	if (ret)
@@ -3327,107 +3759,121 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 	if (ret)
 		return ret;
 
+	ret = dpaa2_flow_item_convert(pattern, &dpaa2_pattern);
+	if (ret)
+		return ret;
+
 	/* Parse pattern list to get the matching parameters */
 	while (!end_of_list) {
 		switch (pattern[i].type) {
 		case RTE_FLOW_ITEM_TYPE_ETH:
-			ret = dpaa2_configure_flow_eth(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_eth(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ETH flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_VLAN:
-			ret = dpaa2_configure_flow_vlan(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_vlan(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("vLan flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
-			ret = dpaa2_configure_flow_ipv4(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_ipv4(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV4 flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
-			ret = dpaa2_configure_flow_ipv6(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_ipv6(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV6 flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_ICMP:
-			ret = dpaa2_configure_flow_icmp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_icmp(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ICMP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_UDP:
-			ret = dpaa2_configure_flow_udp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_udp(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("UDP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
-			ret = dpaa2_configure_flow_tcp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_tcp(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("TCP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_SCTP:
-			ret = dpaa2_configure_flow_sctp(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_sctp(flow, dev, attr,
+							&dpaa2_pattern[i],
+							actions, error,
+							&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("SCTP flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
-			ret = dpaa2_configure_flow_gre(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_gre(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("GRE flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = dpaa2_configure_flow_vxlan(flow,
-					dev, attr, &pattern[i], actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_vxlan(flow, dev, attr,
+							 &dpaa2_pattern[i],
+							 actions, error,
+							 &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("VXLAN flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
-			ret = dpaa2_configure_flow_raw(flow,
-					dev, attr, &pattern[i],
-					actions, error,
-					&is_keycfg_configured);
+			ret = dpaa2_configure_flow_raw(flow, dev, attr,
+						       &dpaa2_pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("RAW flow config failed!");
-				return ret;
+				goto end_flow_set;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_END:
@@ -3459,7 +3905,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			ret = dpaa2_configure_flow_fs_action(priv, flow,
 							     &actions[j]);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			/* Configure FS table first*/
 			dist_size = priv->nb_rx_queues / priv->num_rx_tc;
@@ -3469,20 +3915,20 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 								   dist_size,
 								   false);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			/* Configure QoS table then.*/
 			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
 				ret = dpaa2_configure_qos_table(priv, false);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (priv->num_rx_tc > 1) {
 				ret = dpaa2_flow_add_qos_rule(priv, flow);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (flow->tc_index >= priv->fs_entries) {
@@ -3493,7 +3939,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 
 			ret = dpaa2_flow_add_fs_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
@@ -3505,7 +3951,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			if (ret < 0) {
 				DPAA2_PMD_ERR("TC[%d] distset RSS failed",
 					      flow->tc_id);
-				return ret;
+				goto end_flow_set;
 			}
 
 			dist_size = rss_conf->queue_num;
@@ -3515,22 +3961,22 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 								   dist_size,
 								   true);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			if (is_keycfg_configured & DPAA2_FLOW_QOS_TYPE) {
 				ret = dpaa2_configure_qos_table(priv, true);
 				if (ret)
-					return ret;
+					goto end_flow_set;
 			}
 
 			ret = dpaa2_flow_add_qos_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			ret = dpaa2_flow_add_fs_rule(priv, flow);
 			if (ret)
-				return ret;
+				goto end_flow_set;
 
 			break;
 		case RTE_FLOW_ACTION_TYPE_PF:
@@ -3547,6 +3993,7 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 		j++;
 	}
 
+end_flow_set:
 	if (!ret) {
 		/* New rules are inserted. */
 		if (!curr) {
@@ -3557,6 +4004,10 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			LIST_INSERT_AFTER(curr, flow, next);
 		}
 	}
+
+	if (dpaa2_pattern)
+		rte_free(dpaa2_pattern);
+
 	return ret;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 29/42] net/dpaa2: eCPRI support by parser result
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (27 preceding siblings ...)
  2024-10-23 11:59           ` [v5 28/42] net/dpaa2: protocol inside tunnel distribution vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 30/42] net/dpaa2: add GTP flow support vanshika.shukla
                             ` (13 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

The soft parser extracts the ECPRI header and message into specified
areas of the parser result.
Flows are classified according to the ECPRI extracts taken from the
parser result.
This implementation supports ECPRI over ethernet/vlan/UDP and various
type/message combinations.
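
For example, a flow matching type-0 IQ data messages on their PC ID
could look like the following testpmd rule (the values are
illustrative only):
flow create 0 ingress pattern ecpri common type is iq_data pc_id is 0x1234 / end
	actions queue index 2 / end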

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h |  18 ++
 drivers/net/dpaa2/dpaa2_flow.c   | 348 ++++++++++++++++++++++++++++++-
 2 files changed, 365 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index aeddcfdfa9..eaa653d266 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -179,6 +179,8 @@ enum dpaa2_rx_faf_offset {
 	FAFE_VXLAN_IN_IPV6_FRAM = 2,
 	FAFE_VXLAN_IN_UDP_FRAM = 3,
 	FAFE_VXLAN_IN_TCP_FRAM = 4,
+
+	FAFE_ECPRI_FRAM = 7,
 	/* Set by SP end*/
 
 	FAF_GTP_PRIMED_FRAM = 1 + DPAA2_FAFE_PSR_SIZE * 8,
@@ -207,6 +209,17 @@ enum dpaa2_rx_faf_offset {
 	FAF_ESP_FRAM = 89 + DPAA2_FAFE_PSR_SIZE * 8,
 };
 
+enum dpaa2_ecpri_fafe_type {
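+	/* Bit 0 is the FAFE eCPRI frame flag (8 - FAFE_ECPRI_FRAM);
+	 * bits 1-3 carry the eCPRI message type.
+	 */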
+	ECPRI_FAFE_TYPE_0 = (8 - FAFE_ECPRI_FRAM),
+	ECPRI_FAFE_TYPE_1 = (8 - FAFE_ECPRI_FRAM) | (1 << 1),
+	ECPRI_FAFE_TYPE_2 = (8 - FAFE_ECPRI_FRAM) | (2 << 1),
+	ECPRI_FAFE_TYPE_3 = (8 - FAFE_ECPRI_FRAM) | (3 << 1),
+	ECPRI_FAFE_TYPE_4 = (8 - FAFE_ECPRI_FRAM) | (4 << 1),
+	ECPRI_FAFE_TYPE_5 = (8 - FAFE_ECPRI_FRAM) | (5 << 1),
+	ECPRI_FAFE_TYPE_6 = (8 - FAFE_ECPRI_FRAM) | (6 << 1),
+	ECPRI_FAFE_TYPE_7 = (8 - FAFE_ECPRI_FRAM) | (7 << 1)
+};
+
 #define DPAA2_PR_ETH_OFF_OFFSET 19
 #define DPAA2_PR_TCI_OFF_OFFSET 21
 #define DPAA2_PR_LAST_ETYPE_OFFSET 23
@@ -236,6 +249,11 @@ enum dpaa2_rx_faf_offset {
 #define DPAA2_VXLAN_IN_TYPE_OFFSET 46
 /* Set by SP for vxlan distribution end*/
 
+/* ECPRI shares SP context with VXLAN*/
+#define DPAA2_ECPRI_MSG_OFFSET DPAA2_VXLAN_VNI_OFFSET
+
+#define DPAA2_ECPRI_MAX_EXTRACT_NB 8
+
 struct ipv4_sd_addr_extract_rule {
 	uint32_t ipv4_src;
 	uint32_t ipv4_dst;
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index d02859fea7..0fdf8f14b8 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -152,6 +152,13 @@ static const struct rte_flow_item_vxlan dpaa2_flow_item_vxlan_mask = {
 	.flags = 0xff,
 	.vni = "\xff\xff\xff",
 };
+
+static const struct rte_flow_item_ecpri dpaa2_flow_item_ecpri_mask = {
+	.hdr.common.type = 0xff,
+	.hdr.dummy[0] = RTE_BE32(0xffffffff),
+	.hdr.dummy[1] = RTE_BE32(0xffffffff),
+	.hdr.dummy[2] = RTE_BE32(0xffffffff),
+};
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -1552,6 +1559,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_vxlan_mask;
 		size = sizeof(struct rte_flow_item_vxlan);
 		break;
+	case RTE_FLOW_ITEM_TYPE_ECPRI:
+		mask_support = (const char *)&dpaa2_flow_item_ecpri_mask;
+		size = sizeof(struct rte_flow_item_ecpri);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -3234,6 +3245,330 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ecpri *spec, *mask;
+	struct rte_flow_item_ecpri local_mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+	uint8_t extract_nb = 0, i;
+	uint64_t rule_data[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint64_t mask_data[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint8_t extract_size[DPAA2_ECPRI_MAX_EXTRACT_NB];
+	uint8_t extract_off[DPAA2_ECPRI_MAX_EXTRACT_NB];
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	if (pattern->mask) {
+		memcpy(&local_mask, pattern->mask,
+			sizeof(struct rte_flow_item_ecpri));
+		local_mask.hdr.common.u32 =
+			rte_be_to_cpu_32(local_mask.hdr.common.u32);
+		mask = &local_mask;
+	} else {
+		mask = &dpaa2_flow_item_ecpri_mask;
+	}
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ECPRI distribution not support");
+		return -ENOTSUP;
+	}
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAFE_ECPRI_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAFE_ECPRI_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ECPRI)) {
+		DPAA2_PMD_WARN("Extract field(s) of ECPRI not support.");
+
+		return -1;
+	}
+
+	if (mask->hdr.common.type != 0xff) {
+		DPAA2_PMD_WARN("ECPRI header type not specified.");
+
+		return -1;
+	}
+
+	if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_IQ_DATA) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_0;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type0.pc_id) {
+			rule_data[extract_nb] = spec->hdr.type0.pc_id;
+			mask_data[extract_nb] = mask->hdr.type0.pc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_iq_data, pc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type0.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type0.seq_id;
+			mask_data[extract_nb] = mask->hdr.type0.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_iq_data, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_BIT_SEQ) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_1;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type1.pc_id) {
+			rule_data[extract_nb] = spec->hdr.type1.pc_id;
+			mask_data[extract_nb] = mask->hdr.type1.pc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_bit_seq, pc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type1.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type1.seq_id;
+			mask_data[extract_nb] = mask->hdr.type1.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_bit_seq, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RTC_CTRL) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_2;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type2.rtc_id) {
+			rule_data[extract_nb] = spec->hdr.type2.rtc_id;
+			mask_data[extract_nb] = mask->hdr.type2.rtc_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_rtc_ctrl, rtc_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type2.seq_id) {
+			rule_data[extract_nb] = spec->hdr.type2.seq_id;
+			mask_data[extract_nb] = mask->hdr.type2.seq_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_rtc_ctrl, seq_id);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_GEN_DATA) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_3;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type3.pc_id || mask->hdr.type3.seq_id)
+			DPAA2_PMD_WARN("Extract type3 msg not support.");
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RM_ACC) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_4;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type4.rma_id) {
+			rule_data[extract_nb] = spec->hdr.type4.rma_id;
+			mask_data[extract_nb] = mask->hdr.type4.rma_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET + 0;
+				/* The compiler cannot take the address of a
+				 * bit-field, so
+				 * offsetof(struct rte_ecpri_msg_rm_access,
+				 * rma_id) cannot be used; the offset is
+				 * hard-coded instead.
+				 */
+			extract_nb++;
+		}
+		if (mask->hdr.type4.ele_id) {
+			rule_data[extract_nb] = spec->hdr.type4.ele_id;
+			mask_data[extract_nb] = mask->hdr.type4.ele_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET + 2;
+				/* Likewise,
+				 * offsetof(struct rte_ecpri_msg_rm_access,
+				 * ele_id) cannot be used on a bit-field;
+				 * the offset is hard-coded instead.
+				 */
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_DLY_MSR) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_5;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type5.msr_id) {
+			rule_data[extract_nb] = spec->hdr.type5.msr_id;
+			mask_data[extract_nb] = mask->hdr.type5.msr_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_delay_measure,
+					msr_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type5.act_type) {
+			rule_data[extract_nb] = spec->hdr.type5.act_type;
+			mask_data[extract_nb] = mask->hdr.type5.act_type;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_delay_measure,
+					act_type);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_RMT_RST) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_6;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type6.rst_id) {
+			rule_data[extract_nb] = spec->hdr.type6.rst_id;
+			mask_data[extract_nb] = mask->hdr.type6.rst_id;
+			extract_size[extract_nb] = sizeof(rte_be16_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_remote_reset,
+					rst_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type6.rst_op) {
+			rule_data[extract_nb] = spec->hdr.type6.rst_op;
+			mask_data[extract_nb] = mask->hdr.type6.rst_op;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_remote_reset,
+					rst_op);
+			extract_nb++;
+		}
+	} else if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_EVT_IND) {
+		rule_data[extract_nb] = ECPRI_FAFE_TYPE_7;
+		mask_data[extract_nb] = 0xff;
+		extract_size[extract_nb] = sizeof(uint8_t);
+		extract_off[extract_nb] = DPAA2_FAFE_PSR_OFFSET;
+		extract_nb++;
+
+		if (mask->hdr.type7.evt_id) {
+			rule_data[extract_nb] = spec->hdr.type7.evt_id;
+			mask_data[extract_nb] = mask->hdr.type7.evt_id;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					evt_id);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.evt_type) {
+			rule_data[extract_nb] = spec->hdr.type7.evt_type;
+			mask_data[extract_nb] = mask->hdr.type7.evt_type;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					evt_type);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.seq) {
+			rule_data[extract_nb] = spec->hdr.type7.seq;
+			mask_data[extract_nb] = mask->hdr.type7.seq;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					seq);
+			extract_nb++;
+		}
+		if (mask->hdr.type7.number) {
+			rule_data[extract_nb] = spec->hdr.type7.number;
+			mask_data[extract_nb] = mask->hdr.type7.number;
+			extract_size[extract_nb] = sizeof(uint8_t);
+			extract_off[extract_nb] =
+				DPAA2_ECPRI_MSG_OFFSET +
+				offsetof(struct rte_ecpri_msg_event_ind,
+					number);
+			extract_nb++;
+		}
+	} else {
+		DPAA2_PMD_ERR("Invalid ecpri header type(%d)",
+				spec->hdr.common.type);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < extract_nb; i++) {
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			extract_off[i],
+			extract_size[i], &rule_data[i], &mask_data[i],
+			priv, group,
+			device_configured,
+			DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_pr_extract_rule(flow,
+			extract_off[i],
+			extract_size[i], &rule_data[i], &mask_data[i],
+			priv, group,
+			device_configured,
+			DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3866,6 +4201,16 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_ECPRI:
+			ret = dpaa2_configure_flow_ecpri(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("ECPRI flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow, dev, attr,
 						       &dpaa2_pattern[i],
@@ -3880,7 +4225,8 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			end_of_list = 1;
 			break; /*End of List*/
 		default:
-			DPAA2_PMD_ERR("Invalid action type");
+			DPAA2_PMD_ERR("Invalid flow item[%d] type(%d)",
+				i, pattern[i].type);
 			ret = -ENOTSUP;
 			break;
 		}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 30/42] net/dpaa2: add GTP flow support
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (28 preceding siblings ...)
  2024-10-23 11:59           ` [v5 29/42] net/dpaa2: eCPRI support by parser result vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 31/42] net/dpaa2: check if Soft parser is loaded vanshika.shukla
                             ` (12 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Configure GTP flows to support RSS and FS.
Check the FAF bits of the parser result to identify GTP frames.
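
For example, a flow steering a GTP tunnel by TEID could look like the
following testpmd rule (the values are illustrative only):
flow create 0 ingress pattern gtp teid is 0x1234 / end
	actions queue index 3 / end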

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 172 ++++++++++++++++++++++++++-------
 1 file changed, 138 insertions(+), 34 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 0fdf8f14b8..c7c3681005 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -37,7 +37,7 @@ enum dpaa2_flow_dist_type {
 
 #define DPAA2_FLOW_RAW_OFFSET_FIELD_SHIFT	16
 #define DPAA2_FLOW_MAX_KEY_SIZE			16
-
+#define DPAA2_PROT_FIELD_STRING_SIZE		16
 #define VXLAN_HF_VNI 0x08
 
 struct dpaa2_dev_flow {
@@ -75,6 +75,7 @@ enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_TCP,
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
+	RTE_FLOW_ITEM_TYPE_GTP
 };
 
 static const
@@ -159,6 +160,11 @@ static const struct rte_flow_item_ecpri dpaa2_flow_item_ecpri_mask = {
 	.hdr.dummy[1] = RTE_BE32(0xffffffff),
 	.hdr.dummy[2] = RTE_BE32(0xffffffff),
 };
+
+static const struct rte_flow_item_gtp dpaa2_flow_item_gtp_mask = {
+	.teid = RTE_BE32(0xffffffff),
+};
+
 #endif
 
 #define DPAA2_FLOW_DUMP printf
@@ -234,6 +240,12 @@ dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 			strcat(string, ".type");
 		else
 			strcat(string, ".unknown field");
+	} else if (prot == NET_PROT_GTP) {
+		rte_strscpy(string, "gtp", DPAA2_PROT_FIELD_STRING_SIZE);
+		if (field == NH_FLD_GTP_TEID)
+			strcat(string, ".teid");
+		else
+			strcat(string, ".unknown field");
 	} else {
 		strcpy(string, "unknown protocol");
 	}
@@ -1563,6 +1575,10 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_ecpri_mask;
 		size = sizeof(struct rte_flow_item_ecpri);
 		break;
+	case RTE_FLOW_ITEM_TYPE_GTP:
+		mask_support = (const char *)&dpaa2_flow_item_gtp_mask;
+		size = sizeof(struct rte_flow_item_gtp);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -3569,6 +3585,84 @@ dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_gtp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_gtp *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_gtp_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-GTP distribution not support");
+		return -ENOTSUP;
+	}
+
+	if (!spec) {
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GTP_FRAM, DPAA2_FLOW_QOS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_identify_by_faf(priv, flow,
+				FAF_GTP_FRAM, DPAA2_FLOW_FS_TYPE,
+				group, &local_cfg);
+		if (ret)
+			return ret;
+
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	if (dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GTP)) {
+		DPAA2_PMD_WARN("Extract field(s) of GTP not support.");
+
+		return -1;
+	}
+
+	if (!mask->teid)
+		return 0;
+
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GTP,
+			NH_FLD_GTP_TEID, &spec->teid,
+			&mask->teid, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_GTP,
+			NH_FLD_GTP_TEID, &spec->teid,
+			&mask->teid, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+	if (ret)
+		return ret;
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -4103,9 +4197,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 		switch (pattern[i].type) {
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			ret = dpaa2_configure_flow_eth(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ETH flow config failed!");
 				goto end_flow_set;
@@ -4113,9 +4207,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_VLAN:
 			ret = dpaa2_configure_flow_vlan(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("vLan flow config failed!");
 				goto end_flow_set;
@@ -4123,9 +4217,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			ret = dpaa2_configure_flow_ipv4(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV4 flow config failed!");
 				goto end_flow_set;
@@ -4133,9 +4227,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
 			ret = dpaa2_configure_flow_ipv6(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("IPV6 flow config failed!");
 				goto end_flow_set;
@@ -4143,9 +4237,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_ICMP:
 			ret = dpaa2_configure_flow_icmp(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("ICMP flow config failed!");
 				goto end_flow_set;
@@ -4153,9 +4247,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			ret = dpaa2_configure_flow_udp(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("UDP flow config failed!");
 				goto end_flow_set;
@@ -4163,9 +4257,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			ret = dpaa2_configure_flow_tcp(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("TCP flow config failed!");
 				goto end_flow_set;
@@ -4173,9 +4267,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_SCTP:
 			ret = dpaa2_configure_flow_sctp(flow, dev, attr,
-							&dpaa2_pattern[i],
-							actions, error,
-							&is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("SCTP flow config failed!");
 				goto end_flow_set;
@@ -4183,9 +4277,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
 			ret = dpaa2_configure_flow_gre(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("GRE flow config failed!");
 				goto end_flow_set;
@@ -4193,9 +4287,9 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			ret = dpaa2_configure_flow_vxlan(flow, dev, attr,
-							 &dpaa2_pattern[i],
-							 actions, error,
-							 &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("VXLAN flow config failed!");
 				goto end_flow_set;
@@ -4211,11 +4305,21 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_GTP:
+			ret = dpaa2_configure_flow_gtp(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("GTP flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_RAW:
 			ret = dpaa2_configure_flow_raw(flow, dev, attr,
-						       &dpaa2_pattern[i],
-						       actions, error,
-						       &is_keycfg_configured);
+					&dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
 			if (ret) {
 				DPAA2_PMD_ERR("RAW flow config failed!");
 				goto end_flow_set;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
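
For reference, a minimal sketch (not part of the patch) of how an application could exercise the new GTP TEID match through the generic rte_flow API; the queue index and TEID value are illustrative. Note that the patch only adds an extract rule for the TEID field, so masks touching other GTP header fields are rejected by the extract-support check.

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Illustrative: steer GTP traffic with TEID 0x1234 to Rx queue 1. */
static struct rte_flow *
create_gtp_teid_rule(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1, .group = 0 };
	struct rte_flow_item_gtp spec = { .teid = RTE_BE32(0x1234) };
	struct rte_flow_item_gtp mask = { .teid = RTE_BE32(0xffffffff) };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_GTP,
		  .spec = &spec, .mask = &mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}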

* [v5 31/42] net/dpaa2: check if Soft parser is loaded
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (29 preceding siblings ...)
  2024-10-23 11:59           ` [v5 30/42] net/dpaa2: add GTP flow support vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 32/42] net/dpaa2: soft parser flow verification vanshika.shukla
                             ` (11 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang, Vanshika Shukla

From: Jun Yang <jun.yang@nxp.com>

Access the soft parser (SP) instruction area to check whether a
soft parser image is loaded.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c |  4 ++
 drivers/net/dpaa2/dpaa2_ethdev.h |  2 +
 drivers/net/dpaa2/dpaa2_flow.c   | 88 ++++++++++++++++++++++++++++++++
 3 files changed, 94 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 187b648799..da0ea57ed2 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2861,6 +2861,10 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 			return ret;
 		}
 	}
+
+	ret = dpaa2_soft_parser_loaded();
+	if (ret > 0)
+		DPAA2_PMD_INFO("soft parser is loaded");
 	DPAA2_PMD_INFO("%s: netdev created, connected to %s",
 		eth_dev->data->name, dpaa2_dev->ep_name);
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index eaa653d266..db918725a7 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -479,6 +479,8 @@ int dpaa2_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
 
 int dpaa2_dev_recycle_config(struct rte_eth_dev *eth_dev);
 int dpaa2_dev_recycle_deconfig(struct rte_eth_dev *eth_dev);
+int dpaa2_soft_parser_loaded(void);
+
 int dpaa2_dev_recycle_qp_setup(struct rte_dpaa2_device *dpaa2_dev,
 	uint16_t qidx, uint64_t cntx,
 	eth_rx_burst_t tx_lpbk, eth_tx_burst_t rx_lpbk,
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index c7c3681005..58ea0f578f 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -9,6 +9,7 @@
 #include <string.h>
 #include <unistd.h>
 #include <stdarg.h>
+#include <sys/mman.h>
 
 #include <rte_ethdev.h>
 #include <rte_log.h>
@@ -24,6 +25,7 @@
 
 static char *dpaa2_flow_control_log;
 static uint16_t dpaa2_flow_miss_flow_id; /* Default miss flow id is 0. */
+static int dpaa2_sp_loaded = -1;
 
 enum dpaa2_flow_entry_size {
 	DPAA2_FLOW_ENTRY_MIN_SIZE = (DPNI_MAX_KEY_SIZE / 2),
@@ -397,6 +399,92 @@ dpaa2_flow_fs_entry_log(const char *log_info,
 	DPAA2_FLOW_DUMP("\r\n");
 }
 
+/** For LX2160A, LS2088A and LS1088A */
+#define WRIOP_CCSR_BASE 0x8b80000
+#define WRIOP_CCSR_CTLU_OFFSET 0
+#define WRIOP_CCSR_CTLU_PARSER_OFFSET 0
+#define WRIOP_CCSR_CTLU_PARSER_INGRESS_OFFSET 0
+
+#define WRIOP_INGRESS_PARSER_PHY \
+	(WRIOP_CCSR_BASE + WRIOP_CCSR_CTLU_OFFSET + \
+	WRIOP_CCSR_CTLU_PARSER_OFFSET + \
+	WRIOP_CCSR_CTLU_PARSER_INGRESS_OFFSET)
+
+struct dpaa2_parser_ccsr {
+	uint32_t psr_cfg;
+	uint32_t psr_idle;
+	uint32_t psr_pclm;
+	uint8_t psr_ver_min;
+	uint8_t psr_ver_maj;
+	uint8_t psr_id1_l;
+	uint8_t psr_id1_h;
+	uint32_t psr_rev2;
+	uint8_t rsv[0x2c];
+	uint8_t sp_ins[4032];
+};
+
+int
+dpaa2_soft_parser_loaded(void)
+{
+	int fd, i, ret = 0;
+	struct dpaa2_parser_ccsr *parser_ccsr = NULL;
+
+	dpaa2_flow_control_log = getenv("DPAA2_FLOW_CONTROL_LOG");
+
+	if (dpaa2_sp_loaded >= 0)
+		return dpaa2_sp_loaded;
+
+	fd = open("/dev/mem", O_RDWR | O_SYNC);
+	if (fd < 0) {
+		DPAA2_PMD_ERR("open \"/dev/mem\" ERROR(%d)", fd);
+		ret = fd;
+		goto exit;
+	}
+
+	parser_ccsr = mmap(NULL, sizeof(struct dpaa2_parser_ccsr),
+		PROT_READ | PROT_WRITE, MAP_SHARED, fd,
+		WRIOP_INGRESS_PARSER_PHY);
+	if (parser_ccsr == MAP_FAILED) {
+		DPAA2_PMD_ERR("Map 0x%" PRIx64 "(size=0x%x) failed",
+			(uint64_t)WRIOP_INGRESS_PARSER_PHY,
+			(uint32_t)sizeof(struct dpaa2_parser_ccsr));
+		ret = -ENOBUFS;
+		goto exit;
+	}
+
+	DPAA2_PMD_INFO("Parser ID:0x%02x%02x, Rev:major(%02x), minor(%02x)",
+		parser_ccsr->psr_id1_h, parser_ccsr->psr_id1_l,
+		parser_ccsr->psr_ver_maj, parser_ccsr->psr_ver_min);
+
+	if (dpaa2_flow_control_log) {
+		for (i = 0; i < 64; i++) {
+			DPAA2_FLOW_DUMP("%02x ",
+				parser_ccsr->sp_ins[i]);
+			if (!((i + 1) % 16))
+				DPAA2_FLOW_DUMP("\r\n");
+		}
+	}
+
+	for (i = 0; i < 16; i++) {
+		if (parser_ccsr->sp_ins[i]) {
+			dpaa2_sp_loaded = 1;
+			break;
+		}
+	}
+	if (dpaa2_sp_loaded < 0)
+		dpaa2_sp_loaded = 0;
+
+	ret = dpaa2_sp_loaded;
+
+exit:
+	if (parser_ccsr && parser_ccsr != MAP_FAILED)
+		munmap(parser_ccsr, sizeof(struct dpaa2_parser_ccsr));
+	if (fd >= 0)
+		close(fd);
+
+	return ret;
+}
+
 static int
 dpaa2_flow_ip_address_extract(enum net_prot prot,
 	uint32_t field)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
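
As a side note on the probe above: mapping a fixed CCSR physical window through /dev/mem follows the pattern sketched below. One caveat is that mmap() reports failure with MAP_FAILED rather than NULL, so error checks must compare against MAP_FAILED. The register base and map size here are illustrative:

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define CCSR_PHY_ADDR 0x8b80000UL /* illustrative base, per-SoC */
#define CCSR_MAP_SIZE 4096UL

/* Read one 32-bit word from a physical register window. */
static int
probe_ccsr_word(uint32_t *out)
{
	int fd = open("/dev/mem", O_RDWR | O_SYNC);
	volatile uint32_t *regs;

	if (fd < 0)
		return -1;

	regs = mmap(NULL, CCSR_MAP_SIZE, PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, CCSR_PHY_ADDR);
	if (regs == MAP_FAILED) { /* mmap() never returns NULL on error */
		close(fd);
		return -1;
	}

	*out = regs[0];
	munmap((void *)(uintptr_t)regs, CCSR_MAP_SIZE);
	close(fd);
	return 0;
}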

* [v5 32/42] net/dpaa2: soft parser flow verification
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (30 preceding siblings ...)
  2024-10-23 11:59           ` [v5 31/42] net/dpaa2: check if Soft parser is loaded vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 33/42] net/dpaa2: add flow support for IPsec AH and ESP vanshika.shukla
                             ` (10 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Add flows supported by the soft parser to the verification list.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 84 +++++++++++++++++++++-------------
 1 file changed, 51 insertions(+), 33 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 58ea0f578f..018ffec266 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -66,7 +66,7 @@ struct rte_dpaa2_flow_item {
 };
 
 static const
-enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
+enum rte_flow_item_type dpaa2_hp_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_END,
 	RTE_FLOW_ITEM_TYPE_ETH,
 	RTE_FLOW_ITEM_TYPE_VLAN,
@@ -77,7 +77,14 @@ enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_TCP,
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
-	RTE_FLOW_ITEM_TYPE_GTP
+	RTE_FLOW_ITEM_TYPE_GTP,
+	RTE_FLOW_ITEM_TYPE_RAW
+};
+
+static const
+enum rte_flow_item_type dpaa2_sp_supported_pattern_type[] = {
+	RTE_FLOW_ITEM_TYPE_VXLAN,
+	RTE_FLOW_ITEM_TYPE_ECPRI
 };
 
 static const
@@ -4556,16 +4563,17 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
 	int ret = 0;
 
 	if (unlikely(attr->group >= dpni_attr->num_rx_tcs)) {
-		DPAA2_PMD_ERR("Priority group is out of range");
+		DPAA2_PMD_ERR("Group/TC(%d) is out of range(%d)",
+			attr->group, dpni_attr->num_rx_tcs);
 		ret = -ENOTSUP;
 	}
 	if (unlikely(attr->priority >= dpni_attr->fs_entries)) {
-		DPAA2_PMD_ERR("Priority within the group is out of range");
+		DPAA2_PMD_ERR("Priority(%d) within group is out of range(%d)",
+			attr->priority, dpni_attr->fs_entries);
 		ret = -ENOTSUP;
 	}
 	if (unlikely(attr->egress)) {
-		DPAA2_PMD_ERR(
-			"Flow configuration is not supported on egress side");
+		DPAA2_PMD_ERR("Egress flow configuration is not supported");
 		ret = -ENOTSUP;
 	}
 	if (unlikely(!attr->ingress)) {
@@ -4580,27 +4588,41 @@ dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[])
 {
 	unsigned int i, j, is_found = 0;
 	int ret = 0;
+	const enum rte_flow_item_type *hp_supported;
+	const enum rte_flow_item_type *sp_supported;
+	uint64_t hp_supported_num, sp_supported_num;
+
+	hp_supported = dpaa2_hp_supported_pattern_type;
+	hp_supported_num = RTE_DIM(dpaa2_hp_supported_pattern_type);
+
+	sp_supported = dpaa2_sp_supported_pattern_type;
+	sp_supported_num = RTE_DIM(dpaa2_sp_supported_pattern_type);
 
 	for (j = 0; pattern[j].type != RTE_FLOW_ITEM_TYPE_END; j++) {
-		for (i = 0; i < RTE_DIM(dpaa2_supported_pattern_type); i++) {
-			if (dpaa2_supported_pattern_type[i]
-					== pattern[j].type) {
+		is_found = 0;
+		for (i = 0; i < hp_supported_num; i++) {
+			if (hp_supported[i] == pattern[j].type) {
 				is_found = 1;
 				break;
 			}
 		}
+		if (is_found)
+			continue;
+		if (dpaa2_sp_loaded > 0) {
+			for (i = 0; i < sp_supported_num; i++) {
+				if (sp_supported[i] == pattern[j].type) {
+					is_found = 1;
+					break;
+				}
+			}
+		}
 		if (!is_found) {
+			DPAA2_PMD_WARN("Flow type(%d) not supported",
+				pattern[j].type);
 			ret = -ENOTSUP;
 			break;
 		}
 	}
-	/* Lets verify other combinations of given pattern rules */
-	for (j = 0; pattern[j].type != RTE_FLOW_ITEM_TYPE_END; j++) {
-		if (!pattern[j].spec) {
-			ret = -EINVAL;
-			break;
-		}
-	}
 
 	return ret;
 }
@@ -4647,43 +4669,39 @@ dpaa2_flow_validate(struct rte_eth_dev *dev,
 	memset(&dpni_attr, 0, sizeof(struct dpni_attr));
 	ret = dpni_get_attributes(dpni, CMD_PRI_LOW, token, &dpni_attr);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Failure to get dpni@%p attribute, err code  %d",
-			dpni, ret);
+		DPAA2_PMD_ERR("Get dpni@%d attribute failed(%d)",
+			priv->hw_id, ret);
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ATTR,
-			   flow_attr, "invalid");
+			RTE_FLOW_ERROR_TYPE_ATTR,
+			flow_attr, "invalid");
 		return ret;
 	}
 
 	/* Verify input attributes */
 	ret = dpaa2_dev_verify_attr(&dpni_attr, flow_attr);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid attributes are given");
+		DPAA2_PMD_ERR("Invalid attributes are given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ATTR,
-			   flow_attr, "invalid");
+			RTE_FLOW_ERROR_TYPE_ATTR,
+			flow_attr, "invalid");
 		goto not_valid_params;
 	}
 	/* Verify input pattern list */
 	ret = dpaa2_dev_verify_patterns(pattern);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid pattern list is given");
+		DPAA2_PMD_ERR("Invalid pattern list is given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ITEM,
-			   pattern, "invalid");
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			pattern, "invalid");
 		goto not_valid_params;
 	}
 	/* Verify input action list */
 	ret = dpaa2_dev_verify_actions(actions);
 	if (ret < 0) {
-		DPAA2_PMD_ERR(
-			"Invalid action list is given");
+		DPAA2_PMD_ERR("Invalid action list is given");
 		rte_flow_error_set(error, EPERM,
-			   RTE_FLOW_ERROR_TYPE_ACTION,
-			   actions, "invalid");
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			actions, "invalid");
 		goto not_valid_params;
 	}
 not_valid_params:
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
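
For completeness, a minimal sketch of driving this verification path from an application via rte_flow_validate(); with this patch, a VXLAN pattern like the one below is accepted only when the soft parser is loaded. The pattern contents are illustrative:

#include <rte_flow.h>

/* Illustrative: ask the PMD whether a VXLAN match would be accepted. */
static int
can_offload_vxlan(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_VXLAN },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/* 0 on success, negative errno if the pattern is rejected */
	return rte_flow_validate(port_id, &attr, pattern, actions, &error);
}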

* [v5 33/42] net/dpaa2: add flow support for IPsec AH and ESP
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (31 preceding siblings ...)
  2024-10-23 11:59           ` [v5 32/42] net/dpaa2: soft parser flow verification vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 34/42] net/dpaa2: fix memory corruption in TM vanshika.shukla
                             ` (9 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support AH/ESP flows matching on the SPI field.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_flow.c | 528 ++++++++++++++++++++++++---------
 1 file changed, 385 insertions(+), 143 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 018ffec266..1605c0c584 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -78,6 +78,8 @@ enum rte_flow_item_type dpaa2_hp_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_SCTP,
 	RTE_FLOW_ITEM_TYPE_GRE,
 	RTE_FLOW_ITEM_TYPE_GTP,
+	RTE_FLOW_ITEM_TYPE_ESP,
+	RTE_FLOW_ITEM_TYPE_AH,
 	RTE_FLOW_ITEM_TYPE_RAW
 };
 
@@ -154,6 +156,17 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 	},
 };
 
+static const struct rte_flow_item_esp dpaa2_flow_item_esp_mask = {
+	.hdr = {
+		.spi = RTE_BE32(0xffffffff),
+		.seq = RTE_BE32(0xffffffff),
+	},
+};
+
+static const struct rte_flow_item_ah dpaa2_flow_item_ah_mask = {
+	.spi = RTE_BE32(0xffffffff),
+};
+
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
 	.protocol = RTE_BE16(0xffff),
 };
@@ -255,8 +268,16 @@ dpaa2_prot_field_string(uint32_t prot, uint32_t field,
 			strcat(string, ".teid");
 		else
 			strcat(string, ".unknown field");
+	} else if (prot == NET_PROT_IPSEC_ESP) {
+		rte_strscpy(string, "esp", DPAA2_PROT_FIELD_STRING_SIZE);
+		if (field == NH_FLD_IPSEC_ESP_SPI)
+			strcat(string, ".spi");
+		else if (field == NH_FLD_IPSEC_ESP_SEQUENCE_NUM)
+			strcat(string, ".seq");
+		else
+			strcat(string, ".unknown field");
 	} else {
-		strcpy(string, "unknown protocol");
+		sprintf(string, "unknown protocol(%d)", prot);
 	}
 }
 
@@ -1654,6 +1675,14 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask_support = (const char *)&dpaa2_flow_item_tcp_mask;
 		size = sizeof(struct rte_flow_item_tcp);
 		break;
+	case RTE_FLOW_ITEM_TYPE_ESP:
+		mask_support = (const char *)&dpaa2_flow_item_esp_mask;
+		size = sizeof(struct rte_flow_item_esp);
+		break;
+	case RTE_FLOW_ITEM_TYPE_AH:
+		mask_support = (const char *)&dpaa2_flow_item_ah_mask;
+		size = sizeof(struct rte_flow_item_ah);
+		break;
 	case RTE_FLOW_ITEM_TYPE_SCTP:
 		mask_support = (const char *)&dpaa2_flow_item_sctp_mask;
 		size = sizeof(struct rte_flow_item_sctp);
@@ -1684,7 +1713,7 @@ dpaa2_flow_extract_support(const uint8_t *mask_src,
 		mask[i] = (mask[i] | mask_src[i]);
 
 	if (memcmp(mask, mask_support, size))
-		return -1;
+		return -ENOTSUP;
 
 	return 0;
 }
@@ -2088,11 +2117,12 @@ dpaa2_configure_flow_tunnel_eth(struct dpaa2_dev_flow *flow,
 	if (!spec)
 		return 0;
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp((const char *)&mask->src,
@@ -2304,11 +2334,12 @@ dpaa2_configure_flow_eth(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ETH)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ETH);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ethernet failed");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp((const char *)&mask->src,
@@ -2409,11 +2440,12 @@ dpaa2_configure_flow_tunnel_vlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (!mask->tci)
@@ -2471,14 +2503,14 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 
 	if (!spec) {
 		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_VLAN_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
@@ -2486,27 +2518,28 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-				       RTE_FLOW_ITEM_TYPE_VLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+			RTE_FLOW_ITEM_TYPE_VLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (!mask->tci)
 		return 0;
 
 	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
-					      NH_FLD_VLAN_TCI, &spec->tci,
-					      &mask->tci, sizeof(rte_be16_t),
-					      priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+			NH_FLD_VLAN_TCI, &spec->tci,
+			&mask->tci, sizeof(rte_be16_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_VLAN,
-					      NH_FLD_VLAN_TCI, &spec->tci,
-					      &mask->tci, sizeof(rte_be16_t),
-					      priv, group, &local_cfg,
-					      DPAA2_FLOW_FS_TYPE);
+			NH_FLD_VLAN_TCI, &spec->tci,
+			&mask->tci, sizeof(rte_be16_t),
+			priv, group, &local_cfg,
+			DPAA2_FLOW_FS_TYPE);
 	if (ret)
 		return ret;
 
@@ -2515,12 +2548,13 @@ dpaa2_configure_flow_vlan(struct dpaa2_dev_flow *flow,
 }
 
 static int
-dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	int ret, local_cfg = 0;
 	uint32_t group;
@@ -2544,16 +2578,16 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV4_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV4_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV4_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV4_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
@@ -2562,13 +2596,13 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 	flow->tc_index = attr->priority;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
-					 DPAA2_FLOW_QOS_TYPE, group,
-					 &local_cfg);
+			DPAA2_FLOW_QOS_TYPE, group,
+			&local_cfg);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV4_FRAM,
-					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
 	if (ret)
 		return ret;
 
@@ -2577,10 +2611,11 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
-				       RTE_FLOW_ITEM_TYPE_IPV4)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
+			RTE_FLOW_ITEM_TYPE_IPV4);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask_ipv4->hdr.src_addr) {
@@ -2589,18 +2624,18 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(rte_be32_t);
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV4_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV4_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2611,17 +2646,17 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(rte_be32_t);
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV4_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV4,
-							 NH_FLD_IPV4_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV4_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2632,18 +2667,18 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(uint8_t);
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2653,12 +2688,13 @@ dpaa2_configure_flow_ipv4(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 }
 
 static int
-dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
-			  const struct rte_flow_attr *attr,
-			  const struct rte_dpaa2_flow_item *dpaa2_pattern,
-			  const struct rte_flow_action actions[] __rte_unused,
-			  struct rte_flow_error *error __rte_unused,
-			  int *device_configured)
+dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
 {
 	int ret, local_cfg = 0;
 	uint32_t group;
@@ -2686,27 +2722,27 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV6_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV6_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_IPV6_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_IPV6_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
-					 DPAA2_FLOW_QOS_TYPE, group,
-					 &local_cfg);
+			DPAA2_FLOW_QOS_TYPE, group,
+			&local_cfg);
 	if (ret)
 		return ret;
 
 	ret = dpaa2_flow_identify_by_faf(priv, flow, FAF_IPV6_FRAM,
-					 DPAA2_FLOW_FS_TYPE, group, &local_cfg);
+			DPAA2_FLOW_FS_TYPE, group, &local_cfg);
 	if (ret)
 		return ret;
 
@@ -2715,10 +2751,11 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
-				       RTE_FLOW_ITEM_TYPE_IPV6)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
+			RTE_FLOW_ITEM_TYPE_IPV6);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
-		return -EINVAL;
+		return ret;
 	}
 
 	if (memcmp((const char *)&mask_ipv6->hdr.src_addr, zero_cmp, NH_FLD_IPV6_ADDR_SIZE)) {
@@ -2727,18 +2764,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = NH_FLD_IPV6_ADDR_SIZE;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV6_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_SRC_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV6_SRC_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2749,18 +2786,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = NH_FLD_IPV6_ADDR_SIZE;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IPV6_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_ipaddr_extract_rule(flow, NET_PROT_IPV6,
-							 NH_FLD_IPV6_DST_IP,
-							 key, mask, size, priv,
-							 group, &local_cfg,
-							 DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IPV6_DST_IP,
+				key, mask, size, priv,
+				group, &local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2771,18 +2808,18 @@ dpaa2_configure_flow_ipv6(struct dpaa2_dev_flow *flow, struct rte_eth_dev *dev,
 		size = sizeof(uint8_t);
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_QOS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_QOS_TYPE);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IP,
-						      NH_FLD_IP_PROTO, key,
-						      mask, size, priv, group,
-						      &local_cfg,
-						      DPAA2_FLOW_FS_TYPE);
+				NH_FLD_IP_PROTO, key,
+				mask, size, priv, group,
+				&local_cfg,
+				DPAA2_FLOW_FS_TYPE);
 		if (ret)
 			return ret;
 	}
@@ -2839,11 +2876,12 @@ dpaa2_configure_flow_icmp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ICMP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ICMP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ICMP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.icmp_type) {
@@ -2916,16 +2954,16 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_UDP_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_UDP_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_UDP_FRAM,
-						 DPAA2_FLOW_FS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_UDP_FRAM,
+				DPAA2_FLOW_FS_TYPE, group,
+				&local_cfg);
 		return ret;
 	}
 
@@ -2946,11 +2984,12 @@ dpaa2_configure_flow_udp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_UDP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_UDP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of UDP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3023,9 +3062,9 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 		}
 
 		ret = dpaa2_flow_identify_by_faf(priv, flow,
-						 FAFE_VXLAN_IN_TCP_FRAM,
-						 DPAA2_FLOW_QOS_TYPE, group,
-						 &local_cfg);
+				FAFE_VXLAN_IN_TCP_FRAM,
+				DPAA2_FLOW_QOS_TYPE, group,
+				&local_cfg);
 		if (ret)
 			return ret;
 
@@ -3053,11 +3092,12 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_TCP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_TCP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of TCP not support.");
 
-		return -EINVAL;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3097,6 +3137,183 @@ dpaa2_configure_flow_tcp(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_esp(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_esp *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_esp_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-ESP distribution not support");
+		return -ENOTSUP;
+	}
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_ESP_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_ESP_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	if (!spec) {
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ESP);
+	if (ret) {
+		DPAA2_PMD_WARN("Extract field(s) of ESP not support.");
+
+		return ret;
+	}
+
+	if (mask->hdr.spi) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SPI, &spec->hdr.spi,
+			&mask->hdr.spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SPI, &spec->hdr.spi,
+			&mask->hdr.spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask->hdr.seq) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SEQUENCE_NUM, &spec->hdr.seq,
+			&mask->hdr.seq, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_ESP,
+			NH_FLD_IPSEC_ESP_SEQUENCE_NUM, &spec->hdr.seq,
+			&mask->hdr.seq, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
+static int
+dpaa2_configure_flow_ah(struct dpaa2_dev_flow *flow,
+	struct rte_eth_dev *dev,
+	const struct rte_flow_attr *attr,
+	const struct rte_dpaa2_flow_item *dpaa2_pattern,
+	const struct rte_flow_action actions[] __rte_unused,
+	struct rte_flow_error *error __rte_unused,
+	int *device_configured)
+{
+	int ret, local_cfg = 0;
+	uint32_t group;
+	const struct rte_flow_item_ah *spec, *mask;
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *pattern =
+		&dpaa2_pattern->generic_item;
+
+	group = attr->group;
+
+	/* Parse pattern list to get the matching parameters */
+	spec = pattern->spec;
+	mask = pattern->mask ?
+		pattern->mask : &dpaa2_flow_item_ah_mask;
+
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (dpaa2_pattern->in_tunnel) {
+		DPAA2_PMD_ERR("Tunnel-AH distribution not support");
+		return -ENOTSUP;
+	}
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_AH_FRAM, DPAA2_FLOW_QOS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	ret = dpaa2_flow_identify_by_faf(priv, flow,
+			FAF_IPSEC_AH_FRAM, DPAA2_FLOW_FS_TYPE,
+			group, &local_cfg);
+	if (ret)
+		return ret;
+
+	if (!spec) {
+		(*device_configured) |= local_cfg;
+		return 0;
+	}
+
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_AH);
+	if (ret) {
+		DPAA2_PMD_WARN("Extract field(s) of AH not support.");
+
+		return ret;
+	}
+
+	if (mask->spi) {
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_AH,
+			NH_FLD_IPSEC_AH_SPI, &spec->spi,
+			&mask->spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_QOS_TYPE);
+		if (ret)
+			return ret;
+
+		ret = dpaa2_flow_add_hdr_extract_rule(flow, NET_PROT_IPSEC_AH,
+			NH_FLD_IPSEC_AH_SPI, &spec->spi,
+			&mask->spi, sizeof(rte_be32_t),
+			priv, group, &local_cfg, DPAA2_FLOW_FS_TYPE);
+		if (ret)
+			return ret;
+	}
+
+	if (mask->seq_num) {
+		DPAA2_PMD_ERR("AH seq distribution not support");
+		return -ENOTSUP;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 static int
 dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 	struct rte_eth_dev *dev,
@@ -3145,11 +3362,12 @@ dpaa2_configure_flow_sctp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_SCTP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_SCTP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of SCTP not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->hdr.src_port) {
@@ -3237,11 +3455,12 @@ dpaa2_configure_flow_gre(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_GRE)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GRE);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of GRE not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (!mask->protocol)
@@ -3314,11 +3533,12 @@ dpaa2_configure_flow_vxlan(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_VXLAN)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_VXLAN);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of VXLAN not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->flags) {
@@ -3418,17 +3638,18 @@ dpaa2_configure_flow_ecpri(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_ECPRI)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_ECPRI);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of ECPRI not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (mask->hdr.common.type != 0xff) {
 		DPAA2_PMD_WARN("ECPRI header type not specified.");
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (spec->hdr.common.type == RTE_ECPRI_MSG_TYPE_IQ_DATA) {
@@ -3729,11 +3950,12 @@ dpaa2_configure_flow_gtp(struct dpaa2_dev_flow *flow,
 		return 0;
 	}
 
-	if (dpaa2_flow_extract_support((const uint8_t *)mask,
-		RTE_FLOW_ITEM_TYPE_GTP)) {
+	ret = dpaa2_flow_extract_support((const uint8_t *)mask,
+		RTE_FLOW_ITEM_TYPE_GTP);
+	if (ret) {
 		DPAA2_PMD_WARN("Extract field(s) of GTP not support.");
 
-		return -1;
+		return ret;
 	}
 
 	if (!mask->teid)
@@ -4370,6 +4592,26 @@ dpaa2_generic_flow_set(struct dpaa2_dev_flow *flow,
 				goto end_flow_set;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_ESP:
+			ret = dpaa2_configure_flow_esp(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("ESP flow config failed!");
+				goto end_flow_set;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_AH:
+			ret = dpaa2_configure_flow_ah(flow,
+					dev, attr, &dpaa2_pattern[i],
+					actions, error,
+					&is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("AH flow config failed!");
+				goto end_flow_set;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
 			ret = dpaa2_configure_flow_gre(flow, dev, attr,
 					&dpaa2_pattern[i],
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
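
A minimal sketch of using the new ESP match from an application; the SPI mask and queue index are illustrative, not part of the patch:

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Illustrative: distribute ESP traffic by exact SPI to Rx queue 2. */
static struct rte_flow *
create_esp_spi_rule(uint16_t port_id, rte_be32_t spi,
	struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_esp spec = { .hdr = { .spi = spi } };
	struct rte_flow_item_esp mask = {
		.hdr = { .spi = RTE_BE32(0xffffffff) },
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_ESP,
		  .spec = &spec, .mask = &mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 2 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}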

* [v5 34/42] net/dpaa2: fix memory corruption in TM
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (32 preceding siblings ...)
  2024-10-23 11:59           ` [v5 33/42] net/dpaa2: add flow support for IPsec AH and ESP vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 35/42] net/dpaa2: support software taildrop vanshika.shukla
                             ` (8 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena, Gagandeep Singh; +Cc: stable

From: Gagandeep Singh <g.singh@nxp.com>

The driver was reserving memory in an array for 8 queues only,
but many more queues can be configured.

This patch fixes the memory corruption by sizing the queue array
to the actual number of TX queues.

Fixes: 72100f0dee21 ("net/dpaa2: support level 2 in traffic management")
Cc: g.singh@nxp.com
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/net/dpaa2/dpaa2_tm.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index fb8c384ca4..ab3e355853 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -684,6 +684,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 	struct dpaa2_tm_node *leaf_node, *temp_leaf_node, *channel_node;
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
 	int ret, t;
+	bool conf_schedule = false;
 
 	/* Populate TCs */
 	LIST_FOREACH(channel_node, &priv->nodes, next) {
@@ -757,7 +758,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 	}
 
 	LIST_FOREACH(channel_node, &priv->nodes, next) {
-		int wfq_grp = 0, is_wfq_grp = 0, conf[DPNI_MAX_TC];
+		int wfq_grp = 0, is_wfq_grp = 0, conf[priv->nb_tx_queues];
 		struct dpni_tx_priorities_cfg prio_cfg;
 
 		memset(&prio_cfg, 0, sizeof(prio_cfg));
@@ -767,6 +768,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 		if (channel_node->level_id != CHANNEL_LEVEL)
 			continue;
 
+		conf_schedule = false;
 		LIST_FOREACH(leaf_node, &priv->nodes, next) {
 			struct dpaa2_queue *leaf_dpaa2_q;
 			uint8_t leaf_tc_id;
@@ -789,6 +791,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 			if (leaf_node->parent != channel_node)
 				continue;
 
+			conf_schedule = true;
 			leaf_dpaa2_q =  (struct dpaa2_queue *)dev->data->tx_queues[leaf_node->id];
 			leaf_tc_id = leaf_dpaa2_q->tc_index;
 			/* Process sibling leaf nodes */
@@ -829,8 +832,8 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 						goto out;
 					}
 					is_wfq_grp = 1;
-					conf[temp_leaf_node->id] = 1;
 				}
+				conf[temp_leaf_node->id] = 1;
 			}
 			if (is_wfq_grp) {
 				if (wfq_grp == 0) {
@@ -851,6 +854,9 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 			}
 			conf[leaf_node->id] = 1;
 		}
+		if (!conf_schedule)
+			continue;
+
 		if (wfq_grp > 1) {
 			prio_cfg.separate_groups = 1;
 			if (prio_cfg.prio_group_B < prio_cfg.prio_group_A) {
@@ -864,6 +870,16 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 
 		prio_cfg.prio_group_A = 1;
 		prio_cfg.channel_idx = channel_node->channel_id;
+		DPAA2_PMD_DEBUG("########################################");
+		DPAA2_PMD_DEBUG("Channel idx = %d", prio_cfg.channel_idx);
+		for (t = 0; t < DPNI_MAX_TC; t++)
+			DPAA2_PMD_DEBUG("tc = %d mode = %d, delta = %d", t,
+					prio_cfg.tc_sched[t].mode,
+					prio_cfg.tc_sched[t].delta_bandwidth);
+
+		DPAA2_PMD_DEBUG("prioritya = %d, priorityb = %d, separate grps"
+				" = %d", prio_cfg.prio_group_A,
+				prio_cfg.prio_group_B, prio_cfg.separate_groups);
 		ret = dpni_set_tx_priorities(dpni, 0, priv->token, &prio_cfg);
 		if (ret) {
 			ret = -rte_tm_error_set(error, EINVAL,
@@ -871,15 +887,6 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 					"Scheduling Failed\n");
 			goto out;
 		}
-		DPAA2_PMD_DEBUG("########################################");
-		DPAA2_PMD_DEBUG("Channel idx = %d", prio_cfg.channel_idx);
-		for (t = 0; t < DPNI_MAX_TC; t++) {
-			DPAA2_PMD_DEBUG("tc = %d mode = %d ", t, prio_cfg.tc_sched[t].mode);
-			DPAA2_PMD_DEBUG("delta = %d", prio_cfg.tc_sched[t].delta_bandwidth);
-		}
-		DPAA2_PMD_DEBUG("prioritya = %d", prio_cfg.prio_group_A);
-		DPAA2_PMD_DEBUG("priorityb = %d", prio_cfg.prio_group_B);
-		DPAA2_PMD_DEBUG("separate grps = %d", prio_cfg.separate_groups);
 	}
 	return 0;
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
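
The corrupted path is reached through the generic traffic-management API; below is a minimal sketch of the call that triggers the (now correctly sized) scheduling setup:

#include <rte_tm.h>

/* Illustrative: commit a TM hierarchy built via the rte_tm node APIs;
 * on DPAA2 the scheduling configuration fixed above runs inside
 * this call.
 */
static int
commit_tm_hierarchy(uint16_t port_id)
{
	struct rte_tm_error error;

	/* clear_on_fail = 1: tear the hierarchy down if commit fails */
	return rte_tm_hierarchy_commit(port_id, 1, &error);
}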

* [v5 35/42] net/dpaa2: support software taildrop
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (33 preceding siblings ...)
  2024-10-23 11:59           ` [v5 34/42] net/dpaa2: fix memory corruption in TM vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 36/42] net/dpaa2: check IOVA before sending MC command vanshika.shukla
                             ` (7 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

Add software-based taildrop support: when a queue stays congested
past the retry limit, pending packets are dropped in software.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  2 +-
 drivers/net/dpaa2/dpaa2_rxtx.c          | 24 +++++++++++++++++++++++-
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 74a1a8b2fa..b6cd1f00c4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -179,7 +179,7 @@ struct __rte_cache_aligned dpaa2_queue {
 	struct dpaa2_queue *tx_conf_queue;
 	int32_t eventfd;	/*!< Event Fd of this queue */
 	uint16_t nb_desc;
-	uint16_t resv;
+	uint16_t tm_sw_td;	/*!< TM software taildrop */
 	uint64_t offloads;
 	uint64_t lpbk_cntx;
 	uint8_t data_stashing_off;
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 71b2b4a427..fd07a75a40 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1297,8 +1297,11 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		while (qbman_result_SCN_state(dpaa2_q->cscn)) {
 			retry_count++;
 			/* Retry for some time before giving up */
-			if (retry_count > CONG_RETRY_COUNT)
+			if (retry_count > CONG_RETRY_COUNT) {
+				if (dpaa2_q->tm_sw_td)
+					goto sw_td;
 				goto skip_tx;
+			}
 		}
 
 		frames_to_send = (nb_pkts > dpaa2_eqcr_size) ?
@@ -1490,6 +1493,25 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
 	}
 
+	return num_tx;
+sw_td:
+	loop = 0;
+	while (loop < num_tx) {
+		if (unlikely(RTE_MBUF_HAS_EXTBUF(*bufs)))
+			rte_pktmbuf_free(*bufs);
+		bufs++;
+		loop++;
+	}
+
+	/* free the pending buffers */
+	while (nb_pkts) {
+		rte_pktmbuf_free(*bufs);
+		bufs++;
+		nb_pkts--;
+		num_tx++;
+	}
+	dpaa2_q->tx_pkts += num_tx;
+
 	return num_tx;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
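
Conceptually, the software taildrop turns the bounded congestion-retry loop into a local drop instead of returning the unsent burst to the caller. A simplified sketch of that control flow, where queue_is_congested(), hw_enqueue() and struct tx_queue are illustrative stand-ins for the driver's internals:

#include <rte_mbuf.h>

/* Simplified sketch of bounded congestion retry with local drop. */
static uint16_t
tx_burst_with_sw_td(struct tx_queue *q, struct rte_mbuf **bufs,
	uint16_t nb_pkts)
{
	unsigned int retry_count = 0;

	while (queue_is_congested(q)) {
		/* Retry for some time before giving up */
		if (++retry_count > CONG_RETRY_COUNT) {
			if (!q->tm_sw_td)
				return 0; /* caller may retry later */
			/* Software taildrop: consume and free the burst */
			rte_pktmbuf_free_bulk(bufs, nb_pkts);
			return nb_pkts;
		}
	}
	return hw_enqueue(q, bufs, nb_pkts);
}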

* [v5 36/42] net/dpaa2: check IOVA before sending MC command
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (34 preceding siblings ...)
  2024-10-23 11:59           ` [v5 35/42] net/dpaa2: support software taildrop vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 37/42] net/dpaa2: improve DPDMUX error behavior settings vanshika.shukla
                             ` (6 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Convert VA to IOVA and validate the IOVA before sending a
parameter to the MC. An invalid IOVA sent to the MC hangs the
system, and it cannot be recovered without a power reset.
The IOVA is not checked in the data path because:
1) The MC is not involved there and errors can be recovered.
2) The IOVA check would slightly impact performance.
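
For reference, a minimal sketch of this convert-and-check pattern using the public rte_mem_virt2iova() helper, which returns RTE_BAD_IOVA when no mapping exists; the function name and error codes below illustrate the idea rather than the exact driver macro:

#include <errno.h>
#include <rte_malloc.h>
#include <rte_memory.h>

/* Illustrative convert-and-check before a management-complex call:
 * reject the buffer up front instead of letting an unmapped IOVA
 * hang the MC.
 */
static int
prepare_mc_param(size_t len, void **va_out, rte_iova_t *iova_out)
{
	void *va = rte_zmalloc(NULL, len, RTE_CACHE_LINE_SIZE);
	rte_iova_t iova;

	if (va == NULL)
		return -ENOMEM;

	iova = rte_mem_virt2iova(va);
	if (iova == RTE_BAD_IOVA) { /* no IOMMU mapping for this VA */
		rte_free(va);
		return -ENOBUFS;
	}

	*va_out = va;
	*iova_out = iova;
	return 0;
}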

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c |  63 +++--
 drivers/net/dpaa2/dpaa2_ethdev.c       | 338 +++++++++++++------------
 drivers/net/dpaa2/dpaa2_ethdev.h       |   3 +
 drivers/net/dpaa2/dpaa2_flow.c         |  67 ++++-
 drivers/net/dpaa2/dpaa2_sparser.c      |  25 +-
 drivers/net/dpaa2/dpaa2_tm.c           |  43 ++--
 6 files changed, 320 insertions(+), 219 deletions(-)

diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 4d33b51fea..20b37a97bb 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -30,8 +30,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 
 int
 rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
-			      uint16_t offset,
-			      uint8_t size)
+	uint16_t offset, uint8_t size)
 {
 	struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
@@ -52,8 +51,8 @@ rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	p_params = rte_zmalloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_zmalloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -73,17 +72,23 @@ rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
 	}
 
 	memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
-	tc_cfg.key_cfg_iova = (size_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.dist_size = eth_dev->data->nb_rx_queues;
 	tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
 
 	ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, priv->token, tc_index,
-				  &tc_cfg);
+			&tc_cfg);
 	rte_free(p_params);
 	if (ret) {
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("Set RX TC dist failed(err=%d)", ret);
 		return ret;
 	}
 
@@ -115,8 +120,8 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 	if (tc_dist_queues > priv->dist_queues)
 		tc_dist_queues = priv->dist_queues;
 
-	p_params = rte_malloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_malloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -133,7 +138,15 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 		return ret;
 	}
 
-	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.dist_size = tc_dist_queues;
 	tc_cfg.enable = true;
 	tc_cfg.tc = tc_index;
@@ -148,17 +161,15 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW, priv->token, &tc_cfg);
 	rte_free(p_params);
 	if (ret) {
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("RX Hash dist for failed(err=%d)", ret);
 		return ret;
 	}
 
 	return 0;
 }
 
-int dpaa2_remove_flow_dist(
-	struct rte_eth_dev *eth_dev,
+int
+dpaa2_remove_flow_dist(struct rte_eth_dev *eth_dev,
 	uint8_t tc_index)
 {
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
@@ -168,8 +179,8 @@ int dpaa2_remove_flow_dist(
 	void *p_params;
 	int ret;
 
-	p_params = rte_malloc(
-		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
+	p_params = rte_malloc(NULL,
+		DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!p_params) {
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
@@ -177,7 +188,15 @@ int dpaa2_remove_flow_dist(
 
 	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
 	tc_cfg.dist_size = 0;
-	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(p_params,
+		DIST_PARAM_IOVA_SIZE);
+	if (tc_cfg.key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, p_params);
+		rte_free(p_params);
+		return -ENOBUFS;
+	}
+
 	tc_cfg.enable = true;
 	tc_cfg.tc = tc_index;
 
@@ -194,9 +213,7 @@ int dpaa2_remove_flow_dist(
 			&tc_cfg);
 	rte_free(p_params);
 	if (ret)
-		DPAA2_PMD_ERR(
-			     "Setting distribution for Rx failed with err: %d",
-			     ret);
+		DPAA2_PMD_ERR("RX hash dist failed(err=%d)", ret);
 	return ret;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index da0ea57ed2..7a3937346c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -123,9 +123,9 @@ dpaa2_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
-		return -1;
+		return -EINVAL;
 	}
 
 	if (on)
@@ -174,8 +174,8 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 static int
 dpaa2_vlan_tpid_set(struct rte_eth_dev *dev,
-		      enum rte_vlan_type vlan_type __rte_unused,
-		      uint16_t tpid)
+	enum rte_vlan_type vlan_type __rte_unused,
+	uint16_t tpid)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct fsl_mc_io *dpni = dev->process_private;
@@ -212,8 +212,7 @@ dpaa2_vlan_tpid_set(struct rte_eth_dev *dev,
 
 static int
 dpaa2_fw_version_get(struct rte_eth_dev *dev,
-		     char *fw_version,
-		     size_t fw_size)
+	char *fw_version, size_t fw_size)
 {
 	int ret;
 	struct fsl_mc_io *dpni = dev->process_private;
@@ -245,7 +244,8 @@ dpaa2_fw_version_get(struct rte_eth_dev *dev,
 }
 
 static int
-dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+dpaa2_dev_info_get(struct rte_eth_dev *dev,
+	struct rte_eth_dev_info *dev_info)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 
@@ -291,8 +291,8 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 static int
 dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
-			__rte_unused uint16_t queue_id,
-			struct rte_eth_burst_mode *mode)
+	__rte_unused uint16_t queue_id,
+	struct rte_eth_burst_mode *mode)
 {
 	struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
 	int ret = -EINVAL;
@@ -368,7 +368,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	uint8_t num_rxqueue_per_tc;
 	struct dpaa2_queue *mc_q, *mcq;
 	uint32_t tot_queues;
-	int i;
+	int i, ret;
 	struct dpaa2_queue *dpaa2_q;
 
 	PMD_INIT_FUNC_TRACE();
@@ -382,7 +382,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 			  RTE_CACHE_LINE_SIZE);
 	if (!mc_q) {
 		DPAA2_PMD_ERR("Memory allocation failed for rx/tx queues");
-		return -1;
+		return -ENOBUFS;
 	}
 
 	for (i = 0; i < priv->nb_rx_queues; i++) {
@@ -404,8 +404,10 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	if (dpaa2_enable_err_queue) {
 		priv->rx_err_vq = rte_zmalloc("dpni_rx_err",
 			sizeof(struct dpaa2_queue), 0);
-		if (!priv->rx_err_vq)
+		if (!priv->rx_err_vq) {
+			ret = -ENOBUFS;
 			goto fail;
+		}
 
 		dpaa2_q = (struct dpaa2_queue *)priv->rx_err_vq;
 		dpaa2_q->q_storage = rte_malloc("err_dq_storage",
@@ -424,13 +426,15 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 
 	for (i = 0; i < priv->nb_tx_queues; i++) {
 		mc_q->eth_data = dev->data;
-		mc_q->flow_id = 0xffff;
+		mc_q->flow_id = DPAA2_INVALID_FLOW_ID;
 		priv->tx_vq[i] = mc_q++;
 		dpaa2_q = (struct dpaa2_queue *)priv->tx_vq[i];
 		dpaa2_q->cscn = rte_malloc(NULL,
 					   sizeof(struct qbman_result), 16);
-		if (!dpaa2_q->cscn)
+		if (!dpaa2_q->cscn) {
+			ret = -ENOBUFS;
 			goto fail_tx;
+		}
 	}
 
 	if (priv->flags & DPAA2_TX_CONF_ENABLE) {
@@ -498,7 +502,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	}
 
 	rte_free(mc_q);
-	return -1;
+	return ret;
 }
 
 static void
@@ -718,14 +722,14 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
  */
 static int
 dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
-			 uint16_t rx_queue_id,
-			 uint16_t nb_rx_desc,
-			 unsigned int socket_id __rte_unused,
-			 const struct rte_eth_rxconf *rx_conf,
-			 struct rte_mempool *mb_pool)
+	uint16_t rx_queue_id,
+	uint16_t nb_rx_desc,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_rxconf *rx_conf,
+	struct rte_mempool *mb_pool)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct dpaa2_queue *dpaa2_q;
 	struct dpni_queue cfg;
 	uint8_t options = 0;
@@ -747,8 +751,8 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* Rx deferred start is not supported */
 	if (rx_conf->rx_deferred_start) {
-		DPAA2_PMD_ERR("%p:Rx deferred start not supported",
-				(void *)dev);
+		DPAA2_PMD_ERR("%s:Rx deferred start not supported",
+			dev->data->name);
 		return -EINVAL;
 	}
 
@@ -764,7 +768,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		if (ret)
 			return ret;
 	}
-	dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+	dpaa2_q = priv->rx_vq[rx_queue_id];
 	dpaa2_q->mb_pool = mb_pool; /**< mbuf pool to populate RX ring. */
 	dpaa2_q->bp_array = rte_dpaa2_bpid_info;
 	dpaa2_q->nb_desc = UINT16_MAX;
@@ -790,7 +794,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		cfg.cgid = i;
 		dpaa2_q->cgid = cfg.cgid;
 	} else {
-		dpaa2_q->cgid = 0xff;
+		dpaa2_q->cgid = DPAA2_INVALID_CGID;
 	}
 
 	/*if ls2088 or rev2 device, enable the stashing */
@@ -814,10 +818,10 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		}
 	}
 	ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_RX,
-			     dpaa2_q->tc_index, flow_id, options, &cfg);
+			dpaa2_q->tc_index, flow_id, options, &cfg);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in setting the rx flow: = %d", ret);
-		return -1;
+		return ret;
 	}
 
 	if (!(priv->flags & DPAA2_RX_TAILDROP_OFF)) {
@@ -830,7 +834,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		 * There is no HW restriction, but number of CGRs are limited,
 		 * hence this restriction is placed.
 		 */
-		if (dpaa2_q->cgid != 0xff) {
+		if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 			/*enabling per rx queue congestion control */
 			taildrop.threshold = nb_rx_desc;
 			taildrop.units = DPNI_CONGESTION_UNIT_FRAMES;
@@ -856,15 +860,15 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		}
 		if (ret) {
 			DPAA2_PMD_ERR("Error in setting taildrop. err=(%d)",
-				      ret);
-			return -1;
+				ret);
+			return ret;
 		}
 	} else { /* Disable tail Drop */
 		struct dpni_taildrop taildrop = {0};
 		DPAA2_PMD_INFO("Tail drop is disabled on queue");
 
 		taildrop.enable = 0;
-		if (dpaa2_q->cgid != 0xff) {
+		if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
 					DPNI_CP_CONGESTION_GROUP, DPNI_QUEUE_RX,
 					dpaa2_q->tc_index,
@@ -876,8 +880,8 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		}
 		if (ret) {
 			DPAA2_PMD_ERR("Error in setting taildrop. err=(%d)",
-				      ret);
-			return -1;
+				ret);
+			return ret;
 		}
 	}
 
@@ -887,16 +891,14 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 static int
 dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
-			 uint16_t tx_queue_id,
-			 uint16_t nb_tx_desc,
-			 unsigned int socket_id __rte_unused,
-			 const struct rte_eth_txconf *tx_conf)
+	uint16_t tx_queue_id,
+	uint16_t nb_tx_desc,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_txconf *tx_conf)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct dpaa2_queue *dpaa2_q = (struct dpaa2_queue *)
-		priv->tx_vq[tx_queue_id];
-	struct dpaa2_queue *dpaa2_tx_conf_q = (struct dpaa2_queue *)
-		priv->tx_conf_vq[tx_queue_id];
+	struct dpaa2_queue *dpaa2_q = priv->tx_vq[tx_queue_id];
+	struct dpaa2_queue *dpaa2_tx_conf_q = priv->tx_conf_vq[tx_queue_id];
 	struct fsl_mc_io *dpni = dev->process_private;
 	struct dpni_queue tx_conf_cfg;
 	struct dpni_queue tx_flow_cfg;
@@ -906,13 +908,14 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	struct dpni_queue_id qid;
 	uint32_t tc_id;
 	int ret;
+	uint64_t iova;
 
 	PMD_INIT_FUNC_TRACE();
 
 	/* Tx deferred start is not supported */
 	if (tx_conf->tx_deferred_start) {
-		DPAA2_PMD_ERR("%p:Tx deferred start not supported",
-				(void *)dev);
+		DPAA2_PMD_ERR("%s:Tx deferred start not supported",
+			dev->data->name);
 		return -EINVAL;
 	}
 
@@ -920,7 +923,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	dpaa2_q->offloads = tx_conf->offloads;
 
 	/* Return if queue already configured */
-	if (dpaa2_q->flow_id != 0xffff) {
+	if (dpaa2_q->flow_id != DPAA2_INVALID_FLOW_ID) {
 		dev->data->tx_queues[tx_queue_id] = dpaa2_q;
 		return 0;
 	}
@@ -962,7 +965,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		DPAA2_PMD_ERR("Error in setting the tx flow: "
 			"tc_id=%d, flow=%d err=%d",
 			tc_id, flow_id, ret);
-			return -1;
+		return ret;
 	}
 
 	dpaa2_q->flow_id = flow_id;
@@ -970,11 +973,11 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	dpaa2_q->tc_index = tc_id;
 
 	ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX, ((channel_id << 8) | dpaa2_q->tc_index),
-			     dpaa2_q->flow_id, &tx_flow_cfg, &qid);
+			DPNI_QUEUE_TX, ((channel_id << 8) | dpaa2_q->tc_index),
+			dpaa2_q->flow_id, &tx_flow_cfg, &qid);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-		return -1;
+		return ret;
 	}
 	dpaa2_q->fqid = qid.fqid;
 
@@ -990,8 +993,17 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		 */
 		cong_notif_cfg.threshold_exit = (nb_tx_desc * 9) / 10;
 		cong_notif_cfg.message_ctx = 0;
-		cong_notif_cfg.message_iova =
-				(size_t)DPAA2_VADDR_TO_IOVA(dpaa2_q->cscn);
+
+		iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(dpaa2_q->cscn,
+			sizeof(struct qbman_result));
+		if (iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("No IOMMU map for cscn(%p)(size=%x)",
+				dpaa2_q->cscn, (uint32_t)sizeof(struct qbman_result));
+
+			return -ENOBUFS;
+		}
+
+		cong_notif_cfg.message_iova = iova;
 		cong_notif_cfg.dest_cfg.dest_type = DPNI_DEST_NONE;
 		cong_notif_cfg.notification_mode =
 					 DPNI_CONG_OPT_WRITE_MEM_ON_ENTER |
@@ -999,16 +1011,13 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 					 DPNI_CONG_OPT_COHERENT_WRITE;
 		cong_notif_cfg.cg_point = DPNI_CP_QUEUE;
 
-		ret = dpni_set_congestion_notification(dpni, CMD_PRI_LOW,
-						       priv->token,
-						       DPNI_QUEUE_TX,
-						       ((channel_id << 8) | tc_id),
-						       &cong_notif_cfg);
+		ret = dpni_set_congestion_notification(dpni,
+				CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
+				((channel_id << 8) | tc_id), &cong_notif_cfg);
 		if (ret) {
-			DPAA2_PMD_ERR(
-			   "Error in setting tx congestion notification: "
-			   "err=%d", ret);
-			return -ret;
+			DPAA2_PMD_ERR("Failed to set TX congestion notification, err=%d",
+			   ret);
+			return ret;
 		}
 	}
 	dpaa2_q->cb_eqresp_free = dpaa2_dev_free_eqresp_buf;
@@ -1019,22 +1028,24 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		options = options | DPNI_QUEUE_OPT_USER_CTX;
 		tx_conf_cfg.user_context = (size_t)(dpaa2_q);
 		ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX_CONFIRM, ((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
-			     dpaa2_tx_conf_q->flow_id, options, &tx_conf_cfg);
+				DPNI_QUEUE_TX_CONFIRM,
+				((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
+				dpaa2_tx_conf_q->flow_id,
+				options, &tx_conf_cfg);
 		if (ret) {
-			DPAA2_PMD_ERR("Error in setting the tx conf flow: "
-			      "tc_index=%d, flow=%d err=%d",
-			      dpaa2_tx_conf_q->tc_index,
-			      dpaa2_tx_conf_q->flow_id, ret);
-			return -1;
+			DPAA2_PMD_ERR("Failed to set TC[%d].TX[%d] conf flow, err=%d",
+				dpaa2_tx_conf_q->tc_index,
+				dpaa2_tx_conf_q->flow_id, ret);
+			return ret;
 		}
 
 		ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-			     DPNI_QUEUE_TX_CONFIRM, ((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
-			     dpaa2_tx_conf_q->flow_id, &tx_conf_cfg, &qid);
+				DPNI_QUEUE_TX_CONFIRM,
+				((channel_id << 8) | dpaa2_tx_conf_q->tc_index),
+				dpaa2_tx_conf_q->flow_id, &tx_conf_cfg, &qid);
 		if (ret) {
 			DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-			return -1;
+			return ret;
 		}
 		dpaa2_tx_conf_q->fqid = qid.fqid;
 	}
@@ -1046,8 +1057,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct dpaa2_queue *dpaa2_q = dev->data->rx_queues[rx_queue_id];
 	struct dpaa2_dev_priv *priv = dpaa2_q->eth_data->dev_private;
-	struct fsl_mc_io *dpni =
-		(struct fsl_mc_io *)priv->eth_dev->process_private;
+	struct fsl_mc_io *dpni = priv->eth_dev->process_private;
 	uint8_t options = 0;
 	int ret;
 	struct dpni_queue cfg;
@@ -1057,7 +1067,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	total_nb_rx_desc -= dpaa2_q->nb_desc;
 
-	if (dpaa2_q->cgid != 0xff) {
+	if (dpaa2_q->cgid != DPAA2_INVALID_CGID) {
 		options = DPNI_QUEUE_OPT_CLEAR_CGID;
 		cfg.cgid = dpaa2_q->cgid;
 
@@ -1069,7 +1079,7 @@ dpaa2_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 			DPAA2_PMD_ERR("Unable to clear CGR from q=%u err=%d",
 					dpaa2_q->fqid, ret);
 		priv->cgid_in_use[dpaa2_q->cgid] = 0;
-		dpaa2_q->cgid = 0xff;
+		dpaa2_q->cgid = DPAA2_INVALID_CGID;
 	}
 }
 
@@ -1233,10 +1243,10 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
 	dpaa2_dev_set_link_up(dev);
 
 	for (i = 0; i < data->nb_rx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)data->rx_queues[i];
+		dpaa2_q = data->rx_queues[i];
 		ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-				     DPNI_QUEUE_RX, dpaa2_q->tc_index,
-				       dpaa2_q->flow_id, &cfg, &qid);
+				DPNI_QUEUE_RX, dpaa2_q->tc_index,
+				dpaa2_q->flow_id, &cfg, &qid);
 		if (ret) {
 			DPAA2_PMD_ERR("Error in getting flow information: "
 				      "err=%d", ret);
@@ -1253,7 +1263,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
 						ret);
 			return ret;
 		}
-		dpaa2_q = (struct dpaa2_queue *)priv->rx_err_vq;
+		dpaa2_q = priv->rx_err_vq;
 		dpaa2_q->fqid = qid.fqid;
 		dpaa2_q->eth_data = dev->data;
 
@@ -1318,7 +1328,7 @@ static int
 dpaa2_dev_stop(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int ret;
 	struct rte_eth_link link;
 	struct rte_device *rdev = dev->device;
@@ -1371,7 +1381,7 @@ static int
 dpaa2_dev_close(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int i, ret;
 	struct rte_eth_link link;
 
@@ -1382,7 +1392,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 
 	if (!dpni) {
 		DPAA2_PMD_WARN("Already closed or not started");
-		return -1;
+		return -EINVAL;
 	}
 
 	dpaa2_tm_deinit(dev);
@@ -1391,7 +1401,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	ret = dpni_reset(dpni, CMD_PRI_LOW, priv->token);
 	if (ret) {
 		DPAA2_PMD_ERR("Failure cleaning dpni device: err=%d", ret);
-		return -1;
+		return ret;
 	}
 
 	memset(&link, 0, sizeof(link));
@@ -1403,7 +1413,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	ret = dpni_close(dpni, CMD_PRI_LOW, priv->token);
 	if (ret) {
 		DPAA2_PMD_ERR("Failure closing dpni device with err code %d",
-			      ret);
+			ret);
 	}
 
 	/* Free the allocated memory for ethernet private data and dpni*/
@@ -1412,18 +1422,17 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
 	rte_free(dpni);
 
 	for (i = 0; i < MAX_TCS; i++)
-		rte_free((void *)(size_t)priv->extract.tc_extract_param[i]);
+		rte_free(priv->extract.tc_extract_param[i]);
 
 	if (priv->extract.qos_extract_param)
-		rte_free((void *)(size_t)priv->extract.qos_extract_param);
+		rte_free(priv->extract.qos_extract_param);
 
 	DPAA2_PMD_INFO("%s: netdev deleted", dev->data->name);
 	return 0;
 }
 
 static int
-dpaa2_dev_promiscuous_enable(
-		struct rte_eth_dev *dev)
+dpaa2_dev_promiscuous_enable(struct rte_eth_dev *dev)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
@@ -1483,7 +1492,7 @@ dpaa2_dev_allmulticast_enable(
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1504,7 +1513,7 @@ dpaa2_dev_allmulticast_disable(struct rte_eth_dev *dev)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1529,13 +1538,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
 				+ VLAN_TAG_SIZE;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1547,7 +1556,7 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 					frame_size - RTE_ETHER_CRC_LEN);
 	if (ret) {
 		DPAA2_PMD_ERR("Setting the max frame length failed");
-		return -1;
+		return ret;
 	}
 	dev->data->mtu = mtu;
 	DPAA2_PMD_INFO("MTU configured for the device: %d", mtu);
@@ -1556,36 +1565,35 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 static int
 dpaa2_dev_add_mac_addr(struct rte_eth_dev *dev,
-		       struct rte_ether_addr *addr,
-		       __rte_unused uint32_t index,
-		       __rte_unused uint32_t pool)
+	struct rte_ether_addr *addr,
+	__rte_unused uint32_t index,
+	__rte_unused uint32_t pool)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
-		return -1;
+		return -EINVAL;
 	}
 
 	ret = dpni_add_mac_addr(dpni, CMD_PRI_LOW, priv->token,
 				addr->addr_bytes, 0, 0, 0);
 	if (ret)
-		DPAA2_PMD_ERR(
-			"error: Adding the MAC ADDR failed: err = %d", ret);
-	return 0;
+		DPAA2_PMD_ERR("ERR(%d) Adding the MAC ADDR failed", ret);
+	return ret;
 }
 
 static void
 dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
-			  uint32_t index)
+	uint32_t index)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct rte_eth_dev_data *data = dev->data;
 	struct rte_ether_addr *macaddr;
 
@@ -1593,7 +1601,7 @@ dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
 
 	macaddr = &data->mac_addrs[index];
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return;
 	}
@@ -1607,15 +1615,15 @@ dpaa2_dev_remove_mac_addr(struct rte_eth_dev *dev,
 
 static int
 dpaa2_dev_set_mac_addr(struct rte_eth_dev *dev,
-		       struct rte_ether_addr *addr)
+	struct rte_ether_addr *addr)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1624,19 +1632,18 @@ dpaa2_dev_set_mac_addr(struct rte_eth_dev *dev,
 					priv->token, addr->addr_bytes);
 
 	if (ret)
-		DPAA2_PMD_ERR(
-			"error: Setting the MAC ADDR failed %d", ret);
+		DPAA2_PMD_ERR("ERR(%d) Setting the MAC ADDR failed", ret);
 
 	return ret;
 }
 
-static
-int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
-			 struct rte_eth_stats *stats)
+static int
+dpaa2_dev_stats_get(struct rte_eth_dev *dev,
+	struct rte_eth_stats *stats)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
-	int32_t  retcode;
+	struct fsl_mc_io *dpni = dev->process_private;
+	int32_t retcode;
 	uint8_t page0 = 0, page1 = 1, page2 = 2;
 	union dpni_statistics value;
 	int i;
@@ -1691,8 +1698,8 @@ int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
 	/* Fill in per queue stats */
 	for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) &&
 		(i < priv->nb_rx_queues || i < priv->nb_tx_queues); ++i) {
-		dpaa2_rxq = (struct dpaa2_queue *)priv->rx_vq[i];
-		dpaa2_txq = (struct dpaa2_queue *)priv->tx_vq[i];
+		dpaa2_rxq = priv->rx_vq[i];
+		dpaa2_txq = priv->tx_vq[i];
 		if (dpaa2_rxq)
 			stats->q_ipackets[i] = dpaa2_rxq->rx_pkts;
 		if (dpaa2_txq)
@@ -1711,19 +1718,20 @@ int dpaa2_dev_stats_get(struct rte_eth_dev *dev,
 };
 
 static int
-dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
-		     unsigned int n)
+dpaa2_dev_xstats_get(struct rte_eth_dev *dev,
+	struct rte_eth_xstat *xstats, unsigned int n)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
-	int32_t  retcode;
+	int32_t retcode;
 	union dpni_statistics value[5] = {};
 	unsigned int i = 0, num = RTE_DIM(dpaa2_xstats_strings);
+	uint8_t page_id, stats_id;
 
 	if (n < num)
 		return num;
 
-	if (xstats == NULL)
+	if (!xstats)
 		return 0;
 
 	/* Get Counters from page_0*/
@@ -1758,8 +1766,9 @@ dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 
 	for (i = 0; i < num; i++) {
 		xstats[i].id = i;
-		xstats[i].value = value[dpaa2_xstats_strings[i].page_id].
-			raw.counter[dpaa2_xstats_strings[i].stats_id];
+		page_id = dpaa2_xstats_strings[i].page_id;
+		stats_id = dpaa2_xstats_strings[i].stats_id;
+		xstats[i].value = value[page_id].raw.counter[stats_id];
 	}
 	return i;
 err:
@@ -1769,8 +1778,8 @@ dpaa2_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 
 static int
 dpaa2_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
-		       struct rte_eth_xstat_name *xstats_names,
-		       unsigned int limit)
+	struct rte_eth_xstat_name *xstats_names,
+	unsigned int limit)
 {
 	unsigned int i, stat_cnt = RTE_DIM(dpaa2_xstats_strings);
 
@@ -1788,16 +1797,16 @@ dpaa2_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 
 static int
 dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
-		       uint64_t *values, unsigned int n)
+	uint64_t *values, unsigned int n)
 {
 	unsigned int i, stat_cnt = RTE_DIM(dpaa2_xstats_strings);
 	uint64_t values_copy[stat_cnt];
+	uint8_t page_id, stats_id;
 
 	if (!ids) {
 		struct dpaa2_dev_priv *priv = dev->data->dev_private;
-		struct fsl_mc_io *dpni =
-			(struct fsl_mc_io *)dev->process_private;
-		int32_t  retcode;
+		struct fsl_mc_io *dpni = dev->process_private;
+		int32_t retcode;
 		union dpni_statistics value[5] = {};
 
 		if (n < stat_cnt)
@@ -1831,8 +1840,9 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 			return 0;
 
 		for (i = 0; i < stat_cnt; i++) {
-			values[i] = value[dpaa2_xstats_strings[i].page_id].
-				raw.counter[dpaa2_xstats_strings[i].stats_id];
+			page_id = dpaa2_xstats_strings[i].page_id;
+			stats_id = dpaa2_xstats_strings[i].stats_id;
+			values[i] = value[page_id].raw.counter[stats_id];
 		}
 		return stat_cnt;
 	}
@@ -1842,7 +1852,7 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 	for (i = 0; i < n; i++) {
 		if (ids[i] >= stat_cnt) {
 			DPAA2_PMD_ERR("xstats id value isn't valid");
-			return -1;
+			return -EINVAL;
 		}
 		values[i] = values_copy[ids[i]];
 	}
@@ -1850,8 +1860,7 @@ dpaa2_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 }
 
 static int
-dpaa2_xstats_get_names_by_id(
-	struct rte_eth_dev *dev,
+dpaa2_xstats_get_names_by_id(struct rte_eth_dev *dev,
 	const uint64_t *ids,
 	struct rte_eth_xstat_name *xstats_names,
 	unsigned int limit)
@@ -1878,14 +1887,14 @@ static int
 dpaa2_dev_stats_reset(struct rte_eth_dev *dev)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	int retcode;
 	int i;
 	struct dpaa2_queue *dpaa2_q;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return -EINVAL;
 	}
@@ -1896,13 +1905,13 @@ dpaa2_dev_stats_reset(struct rte_eth_dev *dev)
 
 	/* Reset the per queue stats in dpaa2_queue structure */
 	for (i = 0; i < priv->nb_rx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[i];
+		dpaa2_q = priv->rx_vq[i];
 		if (dpaa2_q)
 			dpaa2_q->rx_pkts = 0;
 	}
 
 	for (i = 0; i < priv->nb_tx_queues; i++) {
-		dpaa2_q = (struct dpaa2_queue *)priv->tx_vq[i];
+		dpaa2_q = priv->tx_vq[i];
 		if (dpaa2_q)
 			dpaa2_q->tx_pkts = 0;
 	}
@@ -1921,12 +1930,12 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	struct rte_eth_link link;
 	struct dpni_link_state state = {0};
 	uint8_t count;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return 0;
 	}
@@ -1936,7 +1945,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 					  &state);
 		if (ret < 0) {
 			DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
-			return -1;
+			return ret;
 		}
 		if (state.up == RTE_ETH_LINK_DOWN &&
 		    wait_to_complete)
@@ -1955,7 +1964,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	ret = rte_eth_linkstatus_set(dev, &link);
-	if (ret == -1)
+	if (ret < 0)
 		DPAA2_PMD_DEBUG("No change in status");
 	else
 		DPAA2_PMD_INFO("Port %d Link is %s", dev->data->port_id,
@@ -1978,9 +1987,9 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	struct dpni_link_state state = {0};
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return ret;
 	}
@@ -2040,9 +2049,9 @@ dpaa2_dev_set_link_down(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("Device has not yet been configured");
 		return ret;
 	}
@@ -2094,9 +2103,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL || fc_conf == NULL) {
+	if (!dpni || !fc_conf) {
 		DPAA2_PMD_ERR("device not configured");
 		return ret;
 	}
@@ -2149,9 +2158,9 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	PMD_INIT_FUNC_TRACE();
 
 	priv = dev->data->dev_private;
-	dpni = (struct fsl_mc_io *)dev->process_private;
+	dpni = dev->process_private;
 
-	if (dpni == NULL) {
+	if (!dpni) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return ret;
 	}
@@ -2394,10 +2403,10 @@ dpaa2_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 {
 	struct dpaa2_queue *rxq;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
-	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
+	struct fsl_mc_io *dpni = dev->process_private;
 	uint16_t max_frame_length;
 
-	rxq = (struct dpaa2_queue *)dev->data->rx_queues[queue_id];
+	rxq = dev->data->rx_queues[queue_id];
 
 	qinfo->mp = rxq->mb_pool;
 	qinfo->scattered_rx = dev->data->scattered_rx;
@@ -2513,10 +2522,10 @@ static struct eth_dev_ops dpaa2_ethdev_ops = {
  * Returns the table of MAC entries (multiple entries)
  */
 static int
-populate_mac_addr(struct fsl_mc_io *dpni_dev, struct dpaa2_dev_priv *priv,
-		  struct rte_ether_addr *mac_entry)
+populate_mac_addr(struct fsl_mc_io *dpni_dev,
+	struct dpaa2_dev_priv *priv, struct rte_ether_addr *mac_entry)
 {
-	int ret;
+	int ret = 0;
 	struct rte_ether_addr phy_mac, prime_mac;
 
 	memset(&phy_mac, 0, sizeof(struct rte_ether_addr));
@@ -2574,7 +2583,7 @@ populate_mac_addr(struct fsl_mc_io *dpni_dev, struct dpaa2_dev_priv *priv,
 	return 0;
 
 cleanup:
-	return -1;
+	return ret;
 }
 
 static int
@@ -2633,7 +2642,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		return -1;
 	}
 	dpni_dev->regs = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
-	eth_dev->process_private = (void *)dpni_dev;
+	eth_dev->process_private = dpni_dev;
 
 	/* For secondary processes, the primary has done all the work */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
@@ -2662,7 +2671,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 			     "Failure in opening dpni@%d with err code %d",
 			     hw_id, ret);
 		rte_free(dpni_dev);
-		return -1;
+		return ret;
 	}
 
 	if (eth_dev->data->dev_conf.lpbk_mode)
@@ -2813,7 +2822,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	/* Init fields w.r.t. classification */
 	memset(&priv->extract.qos_key_extract, 0,
 		sizeof(struct dpaa2_key_extract));
-	priv->extract.qos_extract_param = rte_malloc(NULL, 256, 64);
+	priv->extract.qos_extract_param = rte_malloc(NULL,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE,
+		RTE_CACHE_LINE_SIZE);
 	if (!priv->extract.qos_extract_param) {
 		DPAA2_PMD_ERR("Memory alloc failed");
 		goto init_err;
@@ -2822,7 +2833,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	for (i = 0; i < MAX_TCS; i++) {
 		memset(&priv->extract.tc_key_extract[i], 0,
 			sizeof(struct dpaa2_key_extract));
-		priv->extract.tc_extract_param[i] = rte_malloc(NULL, 256, 64);
+		priv->extract.tc_extract_param[i] = rte_malloc(NULL,
+			DPAA2_EXTRACT_PARAM_MAX_SIZE,
+			RTE_CACHE_LINE_SIZE);
 		if (!priv->extract.tc_extract_param[i]) {
 			DPAA2_PMD_ERR("Memory alloc failed");
 			goto init_err;
@@ -2982,12 +2995,11 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 
 	if ((DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE) >
 		RTE_PKTMBUF_HEADROOM) {
-		DPAA2_PMD_ERR(
-		"RTE_PKTMBUF_HEADROOM(%d) shall be > DPAA2 Annotation req(%d)",
-		RTE_PKTMBUF_HEADROOM,
-		DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE);
+		DPAA2_PMD_ERR("RTE_PKTMBUF_HEADROOM(%d) < DPAA2 Annotation(%d)",
+			RTE_PKTMBUF_HEADROOM,
+			DPAA2_MBUF_HW_ANNOTATION + DPAA2_FD_PTA_SIZE);
 
-		return -1;
+		return -EINVAL;
 	}
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index db918725a7..a2b9fc5678 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -31,6 +31,9 @@
 #define MAX_DPNI		8
 #define DPAA2_MAX_CHANNELS	16
 
+#define DPAA2_EXTRACT_PARAM_MAX_SIZE 256
+#define DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE 256
+
 #define DPAA2_RX_DEFAULT_NBDESC 512
 
 #define DPAA2_ETH_MAX_LEN (RTE_ETHER_MTU + \
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 1605c0c584..fb635815aa 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -4318,7 +4318,14 @@ dpaa2_configure_fs_rss_table(struct dpaa2_dev_priv *priv,
 
 	tc_extract = &priv->extract.tc_key_extract[tc_id];
 	key_cfg_buf = priv->extract.tc_extract_param[tc_id];
-	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_buf,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_buf);
+
+		return -ENOBUFS;
+	}
 
 	key_max_size = tc_extract->key_profile.key_max_size;
 	entry_size = dpaa2_flow_entry_size(key_max_size);
@@ -4402,7 +4409,14 @@ dpaa2_configure_qos_table(struct dpaa2_dev_priv *priv,
 
 	qos_extract = &priv->extract.qos_key_extract;
 	key_cfg_buf = priv->extract.qos_extract_param;
-	key_cfg_iova = DPAA2_VADDR_TO_IOVA(key_cfg_buf);
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_buf,
+		DPAA2_EXTRACT_PARAM_MAX_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_buf);
+
+		return -ENOBUFS;
+	}
 
 	key_max_size = qos_extract->key_profile.key_max_size;
 	entry_size = dpaa2_flow_entry_size(key_max_size);
@@ -4959,6 +4973,7 @@ dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	struct dpaa2_dev_flow *flow = NULL;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	int ret;
+	uint64_t iova;
 
 	dpaa2_flow_control_log =
 		getenv("DPAA2_FLOW_CONTROL_LOG");
@@ -4982,34 +4997,66 @@ dpaa2_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	}
 
 	/* Allocate DMA'ble memory to write the qos rules */
-	flow->qos_key_addr = rte_zmalloc(NULL, 256, 64);
+	flow->qos_key_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->qos_key_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->qos_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->qos_key_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->qos_key_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for qos key(%p)",
+			__func__, flow->qos_key_addr);
+		goto mem_failure;
+	}
+	flow->qos_rule.key_iova = iova;
 
-	flow->qos_mask_addr = rte_zmalloc(NULL, 256, 64);
+	flow->qos_mask_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->qos_mask_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->qos_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->qos_mask_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->qos_mask_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for qos mask(%p)",
+			__func__, flow->qos_mask_addr);
+		goto mem_failure;
+	}
+	flow->qos_rule.mask_iova = iova;
 
 	/* Allocate DMA'ble memory to write the FS rules */
-	flow->fs_key_addr = rte_zmalloc(NULL, 256, 64);
+	flow->fs_key_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->fs_key_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->fs_rule.key_iova = DPAA2_VADDR_TO_IOVA(flow->fs_key_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->fs_key_addr,
+			DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for fs key(%p)",
+			__func__, flow->fs_key_addr);
+		goto mem_failure;
+	}
+	flow->fs_rule.key_iova = iova;
 
-	flow->fs_mask_addr = rte_zmalloc(NULL, 256, 64);
+	flow->fs_mask_addr = rte_zmalloc(NULL,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE, RTE_CACHE_LINE_SIZE);
 	if (!flow->fs_mask_addr) {
 		DPAA2_PMD_ERR("Memory allocation failed");
 		goto mem_failure;
 	}
-	flow->fs_rule.mask_iova = DPAA2_VADDR_TO_IOVA(flow->fs_mask_addr);
+	iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(flow->fs_mask_addr,
+		DPAA2_EXTRACT_ALLOC_KEY_MAX_SIZE);
+	if (iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for fs mask(%p)",
+			__func__, flow->fs_mask_addr);
+		goto mem_failure;
+	}
+	flow->fs_rule.mask_iova = iova;
 
 	priv->curr = flow;
 
diff --git a/drivers/net/dpaa2/dpaa2_sparser.c b/drivers/net/dpaa2/dpaa2_sparser.c
index 59f7a172c6..265c9b5c57 100644
--- a/drivers/net/dpaa2/dpaa2_sparser.c
+++ b/drivers/net/dpaa2/dpaa2_sparser.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2023 NXP
  */
 
 #include <rte_mbuf.h>
@@ -170,7 +170,14 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 	}
 
 	memcpy(addr, sp_param.byte_code, sp_param.size);
-	cfg.ss_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(addr));
+	cfg.ss_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(addr, sp_param.size);
+	if (cfg.ss_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("No IOMMU map for soft sequence(%p), size=%d",
+			addr, sp_param.size);
+		rte_free(addr);
+
+		return -ENOBUFS;
+	}
 
 	ret = dpni_load_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
@@ -179,7 +186,7 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 		return ret;
 	}
 
-	priv->ss_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(addr));
+	priv->ss_iova = cfg.ss_iova;
 	priv->ss_offset += sp_param.size;
 	DPAA2_PMD_INFO("Soft parser loaded for dpni@%d", priv->hw_id);
 
@@ -219,7 +226,15 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 		}
 
 		memcpy(param_addr, sp_param.param_array, cfg.param_size);
-		cfg.param_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(param_addr));
+		cfg.param_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(param_addr,
+			cfg.param_size);
+		if (cfg.param_iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("%s: No IOMMU map for %p, size=%d",
+				__func__, param_addr, cfg.param_size);
+			rte_free(param_addr);
+
+			return -ENOBUFS;
+		}
 		priv->ss_param_iova = cfg.param_iova;
 	} else {
 		cfg.param_iova = 0;
@@ -227,7 +242,7 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 
 	ret = dpni_enable_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("dpni_enable_sw_sequence failed for dpni%d",
+		DPAA2_PMD_ERR("Enabling soft parser failed for dpni@%d",
 			priv->hw_id);
 		rte_free(param_addr);
 		return ret;
diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index ab3e355853..f91392b092 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2020-2021 NXP
+ * Copyright 2020-2023 NXP
  */
 
 #include <rte_ethdev.h>
@@ -572,41 +572,42 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q;
+	uint64_t iova;
 
 	memset(&tx_flow_cfg, 0, sizeof(struct dpni_queue));
-	dpaa2_q =  (struct dpaa2_queue *)dev->data->tx_queues[node->id];
+	dpaa2_q = (struct dpaa2_queue *)dev->data->tx_queues[node->id];
 	tc_id = node->parent->tc_id;
 	node->parent->tc_id++;
 	flow_id = 0;
 
-	if (dpaa2_q == NULL) {
-		DPAA2_PMD_ERR("Queue is not configured for node = %d", node->id);
-		return -1;
+	if (!dpaa2_q) {
+		DPAA2_PMD_ERR("Queue is not configured for node = %d",
+			node->id);
+		return -EINVAL;
 	}
 
 	DPAA2_PMD_DEBUG("tc_id = %d, channel = %d", tc_id,
 			node->parent->channel_id);
 	ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
-			     ((node->parent->channel_id << 8) | tc_id),
-			     flow_id, options, &tx_flow_cfg);
+			((node->parent->channel_id << 8) | tc_id),
+			flow_id, options, &tx_flow_cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("Error in setting the tx flow: "
-		       "channel id  = %d tc_id= %d, param = 0x%x "
-		       "flow=%d err=%d", node->parent->channel_id, tc_id,
-		       ((node->parent->channel_id << 8) | tc_id), flow_id,
-		       ret);
-		return -1;
+		DPAA2_PMD_ERR("Failed to set TC[%d].ch[%d].TX flow[%d] (err=%d)",
+			tc_id, node->parent->channel_id, flow_id,
+			ret);
+		return ret;
 	}
 
 	dpaa2_q->flow_id = flow_id;
 	dpaa2_q->tc_index = tc_id;
 
 	ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token,
-		DPNI_QUEUE_TX, ((node->parent->channel_id << 8) | dpaa2_q->tc_index),
-		dpaa2_q->flow_id, &tx_flow_cfg, &qid);
+			DPNI_QUEUE_TX,
+			((node->parent->channel_id << 8) | dpaa2_q->tc_index),
+			dpaa2_q->flow_id, &tx_flow_cfg, &qid);
 	if (ret) {
 		DPAA2_PMD_ERR("Error in getting LFQID err=%d", ret);
-		return -1;
+		return ret;
 	}
 	dpaa2_q->fqid = qid.fqid;
 
@@ -621,8 +622,13 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 		 */
 		cong_notif_cfg.threshold_exit = (dpaa2_q->nb_desc * 9) / 10;
 		cong_notif_cfg.message_ctx = 0;
-		cong_notif_cfg.message_iova =
-			(size_t)DPAA2_VADDR_TO_IOVA(dpaa2_q->cscn);
+		iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(dpaa2_q->cscn,
+				sizeof(struct qbman_result));
+		if (iova == RTE_BAD_IOVA) {
+			DPAA2_PMD_ERR("No IOMMU map for cscn(%p)", dpaa2_q->cscn);
+			return -ENOBUFS;
+		}
+		cong_notif_cfg.message_iova = iova;
 		cong_notif_cfg.dest_cfg.dest_type = DPNI_DEST_NONE;
 		cong_notif_cfg.notification_mode =
 					DPNI_CONG_OPT_WRITE_MEM_ON_ENTER |
@@ -641,6 +647,7 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 			return -ret;
 		}
 	}
+	dpaa2_q->tm_sw_td = true;
 
 	return 0;
 }
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
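
For reference, the check-before-use pattern this patch applies across the
ethdev, flow, soft-parser and TM paths, as a minimal standalone sketch. It
uses the public rte_mem_virt2iova() helper; the series itself uses the
driver-internal DPAA2_VADDR_TO_IOVA_AND_CHECK(), which additionally
validates the mapping length:

#include <errno.h>
#include <rte_common.h>
#include <rte_malloc.h>
#include <rte_memory.h>

/* Allocate a DMA'ble buffer and resolve its IOVA, failing early rather
 * than handing an unmapped address to the MC firmware.
 */
static int
map_buf_for_mc(size_t len, void **va, rte_iova_t *iova)
{
	*va = rte_zmalloc(NULL, len, RTE_CACHE_LINE_SIZE);
	if (*va == NULL)
		return -ENOBUFS;

	*iova = rte_mem_virt2iova(*va);
	if (*iova == RTE_BAD_IOVA) {
		/* No IOMMU mapping: the MC must never see this address. */
		rte_free(*va);
		*va = NULL;
		return -ENOBUFS;
	}

	return 0;
}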

* [v5 37/42] net/dpaa2: improve DPDMUX error behavior settings
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (35 preceding siblings ...)
  2024-10-23 11:59           ` [v5 36/42] net/dpaa2: check IOVA before sending MC command vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 38/42] net/dpaa2: store drop priority in mbuf vanshika.shukla
                             ` (5 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena

From: Sachin Saxena <sachin.saxena@nxp.com>

This change is compatible with MC firmware version 10.36 or later.

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index f4b8d481af..13de7d5783 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021 NXP
+ * Copyright 2018-2021,2023 NXP
  */
 
 #include <sys/queue.h>
@@ -448,13 +448,12 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 		struct dpdmux_error_cfg mux_err_cfg;
 
 		memset(&mux_err_cfg, 0, sizeof(mux_err_cfg));
+		/* Note: the discard flag (DPDMUX_ERROR_DISC) takes effect only
+		 * when ERROR_ACTION is DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE.
+		 */
+		mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
 		mux_err_cfg.error_action = DPDMUX_ERROR_ACTION_CONTINUE;
 
-		if (attr.method != DPDMUX_METHOD_C_VLAN_MAC)
-			mux_err_cfg.errors = DPDMUX_ERROR_DISC;
-		else
-			mux_err_cfg.errors = DPDMUX_ALL_ERRORS;
-
 		ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
 				CMD_PRI_LOW,
 				dpdmux_dev->token, DPAA2_DPDMUX_DPMAC_IDX,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
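
For context, a sketch of the call this hunk ends up making, using the
dpdmux_if_set_errors_behavior() API and DPDMUX_* flags from the
driver-internal fsl_dpdmux.h; the mc_io handle, token and interface id are
placeholders for the values the probe path already holds:

#include <string.h>
#include <fsl_dpdmux.h>	/* internal MC API, as used by the driver */

/* Report all error classes but keep forwarding the frames (CONTINUE)
 * instead of dropping them; DPDMUX_ERROR_DISC would matter only with
 * the send-to-error-queue action, per the comment above.
 */
static int
mux_errors_continue(struct fsl_mc_io *mc_io, uint16_t token, uint16_t if_id)
{
	struct dpdmux_error_cfg cfg;

	memset(&cfg, 0, sizeof(cfg));
	cfg.errors = DPDMUX_ALL_ERRORS;
	cfg.error_action = DPDMUX_ERROR_ACTION_CONTINUE;

	return dpdmux_if_set_errors_behavior(mc_io, CMD_PRI_LOW, token,
					     if_id, &cfg);
}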

* [v5 38/42] net/dpaa2: store drop priority in mbuf
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (36 preceding siblings ...)
  2024-10-23 11:59           ` [v5 37/42] net/dpaa2: improve DPDMUX error behavior settings vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 39/42] net/dpaa2: add API to get endpoint name vanshika.shukla
                             ` (4 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Apeksha Gupta

From: Apeksha Gupta <apeksha.gupta@nxp.com>

Store the drop priority from the frame descriptor (FD) in the mbuf.

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 1 +
 drivers/net/dpaa2/dpaa2_rxtx.c          | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index b6cd1f00c4..cd22974752 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -329,6 +329,7 @@ enum qbman_fd_format {
 #define DPAA2_GET_FD_BPID(fd)	(((fd)->simple.bpid_offset & 0x00003FFF))
 #define DPAA2_GET_FD_IVP(fd)   (((fd)->simple.bpid_offset & 0x00004000) >> 14)
 #define DPAA2_GET_FD_OFFSET(fd)	(((fd)->simple.bpid_offset & 0x0FFF0000) >> 16)
+#define DPAA2_GET_FD_DROPP(fd)  (((fd)->simple.ctrl & 0x07000000) >> 24)
 #define DPAA2_GET_FD_FRC(fd)   ((fd)->simple.frc)
 #define DPAA2_GET_FD_FLC(fd) \
 	(((uint64_t)((fd)->simple.flc_hi) << 32) + (fd)->simple.flc_lo)
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index fd07a75a40..01e699d282 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -388,6 +388,7 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 	mbuf->pkt_len = mbuf->data_len;
 	mbuf->port = port_id;
 	mbuf->next = NULL;
+	mbuf->hash.sched.color = DPAA2_GET_FD_DROPP(fd);
 	rte_mbuf_refcnt_set(mbuf, 1);
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
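
A short usage sketch (port and queue ids are placeholders): after this
change, a DPAA2 application can read the frame descriptor's drop priority
straight from the mbuf's scheduler color field on the RX path:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Tally received packets per drop priority; DPAA2_GET_FD_DROPP()
 * extracts 3 bits, so valid values are 0-7.
 */
static void
count_by_drop_prio(uint16_t port_id, uint16_t queue_id, uint64_t cnt[8])
{
	struct rte_mbuf *pkts[32];
	uint16_t i, nb;

	nb = rte_eth_rx_burst(port_id, queue_id, pkts, RTE_DIM(pkts));
	for (i = 0; i < nb; i++) {
		cnt[pkts[i]->hash.sched.color & 0x7]++;
		rte_pktmbuf_free(pkts[i]);
	}
}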

* [v5 39/42] net/dpaa2: add API to get endpoint name
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (37 preceding siblings ...)
  2024-10-23 11:59           ` [v5 38/42] net/dpaa2: store drop priority in mbuf vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 40/42] net/dpaa2: support VLAN traffic splitting vanshika.shukla
                             ` (3 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Export an API in rte_pmd_dpaa2.h to query the endpoint name of a DPAA2 port.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c  | 24 ++++++++++++++++++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h  |  4 ++++
 drivers/net/dpaa2/rte_pmd_dpaa2.h |  3 +++
 drivers/net/dpaa2/version.map     |  1 +
 4 files changed, 32 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 7a3937346c..137e116963 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2903,6 +2903,30 @@ rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
+const char *
+rte_pmd_dpaa2_ep_name(uint32_t eth_id)
+{
+	struct rte_eth_dev *dev;
+	struct dpaa2_dev_priv *priv;
+
+	if (eth_id >= RTE_MAX_ETHPORTS)
+		return NULL;
+
+	if (!rte_pmd_dpaa2_dev_is_dpaa2(eth_id))
+		return NULL;
+
+	dev = &rte_eth_devices[eth_id];
+	if (!dev->data)
+		return NULL;
+
+	if (!dev->data->dev_private)
+		return NULL;
+
+	priv = dev->data->dev_private;
+
+	return priv->ep_name;
+}
+
 #if defined(RTE_LIBRTE_IEEE1588)
 int
 rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index a2b9fc5678..fd6bad7f74 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -385,6 +385,10 @@ struct dpaa2_dev_priv {
 	uint8_t max_cgs;
 	uint8_t cgid_in_use[MAX_RX_QUEUES];
 
+	enum rte_dpaa2_dev_type ep_dev_type;   /**< Endpoint Device Type */
+	uint16_t ep_object_id;                 /**< Endpoint DPAA2 Object ID */
+	char ep_name[RTE_DEV_NAME_MAX_LEN];
+
 	struct extract_s extract;
 
 	uint16_t ss_offset;
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index fc52a9218e..f93af1c65f 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -130,6 +130,9 @@ rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
 __rte_experimental
 int
 rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id);
+__rte_experimental
+const char *
+rte_pmd_dpaa2_ep_name(uint32_t eth_id);
 
 #if defined(RTE_LIBRTE_IEEE1588)
 __rte_experimental
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 233c6e6b2c..35815f7777 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -18,6 +18,7 @@ EXPERIMENTAL {
 	rte_pmd_dpaa2_get_tlu_hash;
 	# added in 24.11
 	rte_pmd_dpaa2_dev_is_dpaa2;
+	rte_pmd_dpaa2_ep_name;
 	rte_pmd_dpaa2_set_one_step_ts;
 	rte_pmd_dpaa2_get_one_step_ts;
 	rte_pmd_dpaa2_mux_dump_counter;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
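
A minimal usage sketch for the new API; it returns NULL for ports that are
not DPAA2 devices or are not fully probed:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_pmd_dpaa2.h>

/* Print the connected endpoint object (e.g. a dpmac or another dpni)
 * for every DPAA2 port.
 */
static void
dump_dpaa2_endpoints(void)
{
	uint32_t eth_id;
	const char *ep;

	for (eth_id = 0; eth_id < RTE_MAX_ETHPORTS; eth_id++) {
		ep = rte_pmd_dpaa2_ep_name(eth_id);
		if (ep != NULL)
			printf("port %u endpoint: %s\n", eth_id, ep);
	}
}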

* [v5 40/42] net/dpaa2: support VLAN traffic splitting
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (38 preceding siblings ...)
  2024-10-23 11:59           ` [v5 39/42] net/dpaa2: add API to get endpoint name vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 41/42] net/dpaa2: add support for C-VLAN and MAC vanshika.shukla
                             ` (2 subsequent siblings)
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for adding rules in DPDMUX
to split VLAN traffic based on VLAN IDs.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 13de7d5783..c8f1d46bb2 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -118,6 +118,26 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	}
 	break;
 
+	case RTE_FLOW_ITEM_TYPE_VLAN:
+	{
+		const struct rte_flow_item_vlan *spec;
+
+		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
+		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_VLAN;
+		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_VLAN_TCI;
+		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FROM_FIELD;
+		kg_cfg.extracts[0].extract.from_hdr.offset = 1;
+		kg_cfg.extracts[0].extract.from_hdr.size = 1;
+		kg_cfg.num_extracts = 1;
+
+		spec = (const struct rte_flow_item_vlan *)pattern[0]->spec;
+		memcpy((void *)key_iova, (const void *)(&spec->hdr.vlan_tci),
+			sizeof(uint16_t));
+		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
+		key_size = sizeof(uint16_t);
+	}
+	break;
+
 	case RTE_FLOW_ITEM_TYPE_UDP:
 	{
 		const struct rte_flow_item_udp *spec;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread
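
A usage sketch against the prototype this patch builds on (patch 42/42
later reworks the arguments into plain arrays); the dpdmux id, VLAN ID 42
and destination interface 1 are placeholders:

#include <errno.h>
#include <rte_byteorder.h>
#include <rte_flow.h>
#include <rte_pmd_dpaa2.h>

/* Steer frames carrying VLAN ID 42 to DPDMUX interface 1; the VF action
 * id selects the dpdmux interface in this driver.
 */
static int
mux_split_on_vlan(uint32_t dpdmux_id)
{
	struct rte_flow_item_vlan spec = {
		.hdr.vlan_tci = RTE_BE16(42),
	};
	struct rte_flow_item_vlan mask = {
		.hdr.vlan_tci = RTE_BE16(0x0fff),
	};
	struct rte_flow_item item = {
		.type = RTE_FLOW_ITEM_TYPE_VLAN,
		.spec = &spec,
		.mask = &mask,
	};
	struct rte_flow_action_vf vf = { .id = 1 };
	struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_VF,
		.conf = &vf,
	};
	struct rte_flow_item *pattern[] = { &item, NULL };
	struct rte_flow_action *actions[] = { &action, NULL };

	return rte_pmd_dpaa2_mux_flow_create(dpdmux_id, pattern, actions) ?
		0 : -EINVAL;
}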

* [v5 41/42] net/dpaa2: add support for C-VLAN and MAC
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (39 preceding siblings ...)
  2024-10-23 11:59           ` [v5 40/42] net/dpaa2: support VLAN traffic splitting vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-10-23 11:59           ` [v5 42/42] net/dpaa2: dpdmux single flow/multiple rules support vanshika.shukla
  2024-11-07 11:24           ` [v5 00/42] DPAA2 specific patches Hemant Agrawal
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Vanshika Shukla

From: Vanshika Shukla <vanshika.shukla@nxp.com>

This patch adds support for the DPDMUX_METHOD_C_VLAN_MAC method,
which classifies DPDMUX traffic based on C-VLAN ID and MAC address.

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c     |  2 +-
 drivers/net/dpaa2/mc/fsl_dpdmux.h | 16 ++++++++++++++++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index c8f1d46bb2..6e10739dd3 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2021,2023 NXP
+ * Copyright 2018-2024 NXP
  */
 
 #include <sys/queue.h>
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index 97b09e59f9..70b81f3b3b 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -593,6 +593,22 @@ int dpdmux_dump_table(struct fsl_mc_io *mc_io,
  */
 #define DPDMUX__ERROR_L4CE			0x00000001
 
+#define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
+				 DPDMUX__ERROR_L4CV | \
+				 DPDMUX__ERROR_L3CE | \
+				 DPDMUX__ERROR_L3CV | \
+				 DPDMUX_ERROR_BLE | \
+				 DPDMUX_ERROR_PHE | \
+				 DPDMUX_ERROR_ISP | \
+				 DPDMUX_ERROR_PTE | \
+				 DPDMUX_ERROR_FPE | \
+				 DPDMUX_ERROR_FLE | \
+				 DPDMUX_ERROR_PIEE | \
+				 DPDMUX_ERROR_TIDE | \
+				 DPDMUX_ERROR_MNLE | \
+				 DPDMUX_ERROR_EOFHE | \
+				 DPDMUX_ERROR_KSE)
+
 #define DPDMUX_ALL_ERRORS	(DPDMUX__ERROR_L4CE | \
 				 DPDMUX__ERROR_L4CV | \
 				 DPDMUX__ERROR_L3CE | \
-- 
2.25.1


^ permalink raw reply	[flat|nested] 229+ messages in thread

* [v5 42/42] net/dpaa2: dpdmux single flow/multiple rules support
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (40 preceding siblings ...)
  2024-10-23 11:59           ` [v5 41/42] net/dpaa2: add support for C-VLAN and MAC vanshika.shukla
@ 2024-10-23 11:59           ` vanshika.shukla
  2024-11-07 11:24           ` [v5 00/42] DPAA2 specific patches Hemant Agrawal
  42 siblings, 0 replies; 229+ messages in thread
From: vanshika.shukla @ 2024-10-23 11:59 UTC (permalink / raw)
  To: dev, Hemant Agrawal, Sachin Saxena; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Support multiple extractions, and use hardware descriptions
instead of hard-coded values.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h     |   1 +
 drivers/net/dpaa2/dpaa2_flow.c       |  22 --
 drivers/net/dpaa2/dpaa2_mux.c        | 393 ++++++++++++++++-----------
 drivers/net/dpaa2/dpaa2_parse_dump.h |   2 +
 drivers/net/dpaa2/rte_pmd_dpaa2.h    |   8 +-
 5 files changed, 246 insertions(+), 180 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index fd6bad7f74..fd3119247a 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -198,6 +198,7 @@ enum dpaa2_rx_faf_offset {
 	FAF_IPV4_FRAM = 34 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IPV6_FRAM = 42 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IP_FRAM = 48 + DPAA2_FAFE_PSR_SIZE * 8,
+	FAF_IP_FRAG_FRAM = 50 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_ICMP_FRAM = 57 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_IGMP_FRAM = 58 + DPAA2_FAFE_PSR_SIZE * 8,
 	FAF_GRE_FRAM = 65 + DPAA2_FAFE_PSR_SIZE * 8,
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index fb635815aa..1ec2b83b7d 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -98,13 +98,6 @@ enum rte_flow_action_type dpaa2_supported_action_type[] = {
 	RTE_FLOW_ACTION_TYPE_RSS
 };
 
-static const
-enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
-	RTE_FLOW_ACTION_TYPE_QUEUE,
-	RTE_FLOW_ACTION_TYPE_PORT_ID,
-	RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
-};
-
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
 	.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
@@ -4079,21 +4072,6 @@ dpaa2_configure_flow_raw(struct dpaa2_dev_flow *flow,
 	return 0;
 }
 
-static inline int
-dpaa2_fs_action_supported(enum rte_flow_action_type action)
-{
-	int i;
-	int action_num = sizeof(dpaa2_supported_fs_action_type) /
-		sizeof(enum rte_flow_action_type);
-
-	for (i = 0; i < action_num; i++) {
-		if (action == dpaa2_supported_fs_action_type[i])
-			return true;
-	}
-
-	return false;
-}
-
 static inline int
 dpaa2_flow_verify_attr(struct dpaa2_dev_priv *priv,
 	const struct rte_flow_attr *attr)
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 6e10739dd3..a6d35f2fcb 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -32,8 +32,9 @@ struct dpaa2_dpdmux_dev {
 	uint8_t num_ifs;   /* Number of interfaces in DPDMUX */
 };
 
-struct rte_flow {
-	struct dpdmux_rule_cfg rule;
+#define DPAA2_MUX_FLOW_MAX_RULE_NUM 8
+struct dpaa2_mux_flow {
+	struct dpdmux_rule_cfg rule[DPAA2_MUX_FLOW_MAX_RULE_NUM];
 };
 
 TAILQ_HEAD(dpdmux_dev_list, dpaa2_dpdmux_dev);
@@ -53,204 +54,287 @@ static struct dpaa2_dpdmux_dev *get_dpdmux_from_id(uint32_t dpdmux_id)
 	return dpdmux_dev;
 }
 
-struct rte_flow *
+int
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
-			      struct rte_flow_item *pattern[],
-			      struct rte_flow_action *actions[])
+	struct rte_flow_item pattern[],
+	struct rte_flow_action actions[])
 {
 	struct dpaa2_dpdmux_dev *dpdmux_dev;
+	static struct dpkg_profile_cfg s_kg_cfg;
 	struct dpkg_profile_cfg kg_cfg;
 	const struct rte_flow_action_vf *vf_conf;
 	struct dpdmux_cls_action dpdmux_action;
-	struct rte_flow *flow = NULL;
-	void *key_iova, *mask_iova, *key_cfg_iova = NULL;
+	uint8_t *key_va = NULL, *mask_va = NULL;
+	void *key_cfg_va = NULL;
+	uint64_t key_iova, mask_iova, key_cfg_iova;
 	uint8_t key_size = 0;
-	int ret;
-	static int i;
+	int ret = 0, loop = 0;
+	static int s_i;
+	struct dpkg_extract *extract;
+	struct dpdmux_rule_cfg rule;
 
-	if (!pattern || !actions || !pattern[0] || !actions[0])
-		return NULL;
+	memset(&kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
 
 	/* Find the DPDMUX from dpdmux_id in our list */
 	dpdmux_dev = get_dpdmux_from_id(dpdmux_id);
 	if (!dpdmux_dev) {
 		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
-		return NULL;
+		ret = -ENODEV;
+		goto creation_error;
 	}
 
-	key_cfg_iova = rte_zmalloc(NULL, DIST_PARAM_IOVA_SIZE,
-				   RTE_CACHE_LINE_SIZE);
-	if (!key_cfg_iova) {
-		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
-		return NULL;
+	key_cfg_va = rte_zmalloc(NULL, DIST_PARAM_IOVA_SIZE,
+				RTE_CACHE_LINE_SIZE);
+	if (!key_cfg_va) {
+		DPAA2_PMD_ERR("Unable to allocate key configuration buffer");
+		ret = -ENOMEM;
+		goto creation_error;
+	}
+
+	key_cfg_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_cfg_va,
+		DIST_PARAM_IOVA_SIZE);
+	if (key_cfg_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU map for key cfg(%p)",
+			__func__, key_cfg_va);
+		ret = -ENOBUFS;
+		goto creation_error;
 	}
-	flow = rte_zmalloc(NULL, sizeof(struct rte_flow) +
-			   (2 * DIST_PARAM_IOVA_SIZE), RTE_CACHE_LINE_SIZE);
-	if (!flow) {
-		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration");
+
+	key_va = rte_zmalloc(NULL, (2 * DIST_PARAM_IOVA_SIZE),
+		RTE_CACHE_LINE_SIZE);
+	if (!key_va) {
+		DPAA2_PMD_ERR("Unable to allocate flow dist parameter");
+		ret = -ENOMEM;
 		goto creation_error;
 	}
-	key_iova = (void *)((size_t)flow + sizeof(struct rte_flow));
-	mask_iova = (void *)((size_t)key_iova + DIST_PARAM_IOVA_SIZE);
+
+	key_iova = DPAA2_VADDR_TO_IOVA_AND_CHECK(key_va,
+		(2 * DIST_PARAM_IOVA_SIZE));
+	if (key_iova == RTE_BAD_IOVA) {
+		DPAA2_PMD_ERR("%s: No IOMMU mapping for address(%p)",
+			__func__, key_va);
+		ret = -ENOBUFS;
+		goto creation_error;
+	}
+
+	mask_va = key_va + DIST_PARAM_IOVA_SIZE;
+	mask_iova = key_iova + DIST_PARAM_IOVA_SIZE;
 
 	/* Currently taking only IP protocol as an extract type.
 	 * This can be extended to other fields using pattern->type.
 	 */
 	memset(&kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
 
-	switch (pattern[0]->type) {
-	case RTE_FLOW_ITEM_TYPE_IPV4:
-	{
-		const struct rte_flow_item_ipv4 *spec;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_IP;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_IP_PROTO;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_ipv4 *)pattern[0]->spec;
-		memcpy(key_iova, (const void *)(&spec->hdr.next_proto_id),
-			sizeof(uint8_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint8_t));
-		key_size = sizeof(uint8_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_VLAN:
-	{
-		const struct rte_flow_item_vlan *spec;
-
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_VLAN;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_VLAN_TCI;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FROM_FIELD;
-		kg_cfg.extracts[0].extract.from_hdr.offset = 1;
-		kg_cfg.extracts[0].extract.from_hdr.size = 1;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_vlan *)pattern[0]->spec;
-		memcpy((void *)key_iova, (const void *)(&spec->hdr.vlan_tci),
-			sizeof(uint16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_UDP:
-	{
-		const struct rte_flow_item_udp *spec;
-		uint16_t udp_dst_port;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_UDP;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_udp *)pattern[0]->spec;
-		udp_dst_port = rte_constant_bswap16(spec->hdr.dst_port);
-		memcpy((void *)key_iova, (const void *)&udp_dst_port,
-							sizeof(rte_be16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_ETH:
-	{
-		const struct rte_flow_item_eth *spec;
-		uint16_t eth_type;
-
-		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_ETH;
-		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_ETH_TYPE;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
-		kg_cfg.num_extracts = 1;
-
-		spec = (const struct rte_flow_item_eth *)pattern[0]->spec;
-		eth_type = rte_constant_bswap16(spec->hdr.ether_type);
-		memcpy((void *)key_iova, (const void *)&eth_type,
-							sizeof(rte_be16_t));
-		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
-		key_size = sizeof(uint16_t);
-	}
-	break;
-
-	case RTE_FLOW_ITEM_TYPE_RAW:
-	{
-		const struct rte_flow_item_raw *spec;
-
-		spec = (const struct rte_flow_item_raw *)pattern[0]->spec;
-		kg_cfg.extracts[0].extract.from_data.offset = spec->offset;
-		kg_cfg.extracts[0].extract.from_data.size = spec->length;
-		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_DATA;
-		kg_cfg.num_extracts = 1;
-		memcpy((void *)key_iova, (const void *)spec->pattern,
-							spec->length);
-		memcpy(mask_iova, pattern[0]->mask, spec->length);
-
-		key_size = spec->length;
-	}
-	break;
+	while (pattern[loop].type != RTE_FLOW_ITEM_TYPE_END) {
+		if (kg_cfg.num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+			DPAA2_PMD_ERR("Too many extracts(%d)",
+				kg_cfg.num_extracts);
+			ret = -ENOTSUP;
+			goto creation_error;
+		}
+		switch (pattern[loop].type) {
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+		{
+			const struct rte_flow_item_ipv4 *spec;
+			const struct rte_flow_item_ipv4 *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_IP;
+			extract->extract.from_hdr.field = NH_FLD_IP_PROTO;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->hdr.next_proto_id, sizeof(uint8_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->hdr.next_proto_id,
+					sizeof(uint8_t));
+			} else {
+				mask_va[key_size] = 0xff;
+			}
+			key_size += sizeof(uint8_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+		{
+			const struct rte_flow_item_vlan *spec;
+			const struct rte_flow_item_vlan *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_VLAN;
+			extract->extract.from_hdr.field = NH_FLD_VLAN_TCI;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->tci, sizeof(uint16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->tci, sizeof(uint16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(uint16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_UDP:
+		{
+			const struct rte_flow_item_udp *spec;
+			const struct rte_flow_item_udp *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_UDP;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+			extract->extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->hdr.dst_port, sizeof(rte_be16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->hdr.dst_port,
+					sizeof(rte_be16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(rte_be16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_ETH:
+		{
+			const struct rte_flow_item_eth *spec;
+			const struct rte_flow_item_eth *mask;
+
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_HDR;
+			extract->extract.from_hdr.prot = NET_PROT_ETH;
+			extract->extract.from_hdr.type = DPKG_FULL_FIELD;
+			extract->extract.from_hdr.field = NH_FLD_ETH_TYPE;
+			kg_cfg.num_extracts++;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			rte_memcpy(&key_va[key_size],
+				&spec->type, sizeof(rte_be16_t));
+			if (mask) {
+				rte_memcpy(&mask_va[key_size],
+					&mask->type, sizeof(rte_be16_t));
+			} else {
+				memset(&mask_va[key_size], 0xff,
+					sizeof(rte_be16_t));
+			}
+			key_size += sizeof(rte_be16_t);
+		}
+		break;
+
+		case RTE_FLOW_ITEM_TYPE_RAW:
+		{
+			const struct rte_flow_item_raw *spec;
+			const struct rte_flow_item_raw *mask;
+
+			spec = pattern[loop].spec;
+			mask = pattern[loop].mask;
+			extract = &kg_cfg.extracts[kg_cfg.num_extracts];
+			extract->type = DPKG_EXTRACT_FROM_DATA;
+			extract->extract.from_data.offset = spec->offset;
+			extract->extract.from_data.size = spec->length;
+			kg_cfg.num_extracts++;
+
+			rte_memcpy(&key_va[key_size],
+				spec->pattern, spec->length);
+			if (mask && mask->pattern) {
+				rte_memcpy(&mask_va[key_size],
+					mask->pattern, spec->length);
+			} else {
+				memset(&mask_va[key_size], 0xff, spec->length);
+			}
+
+			key_size += spec->length;
+		}
+		break;
 
-	default:
-		DPAA2_PMD_ERR("Not supported pattern type: %d",
-				pattern[0]->type);
-		goto creation_error;
+		default:
+			DPAA2_PMD_ERR("Not supported pattern[%d] type: %d",
+				loop, pattern[loop].type);
+			ret = -ENOTSUP;
+			goto creation_error;
+		}
+		loop++;
 	}
 
-	ret = dpkg_prepare_key_cfg(&kg_cfg, key_cfg_iova);
+	ret = dpkg_prepare_key_cfg(&kg_cfg, key_cfg_va);
 	if (ret) {
 		DPAA2_PMD_ERR("dpkg_prepare_key_cfg failed: err(%d)", ret);
 		goto creation_error;
 	}
 
-	/* Multiple rules with same DPKG extracts (kg_cfg.extracts) like same
-	 * offset and length values in raw is supported right now. Different
-	 * values of kg_cfg may not work.
-	 */
-	if (i == 0) {
-		ret = dpdmux_set_custom_key(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-					    dpdmux_dev->token,
-				(uint64_t)(DPAA2_VADDR_TO_IOVA(key_cfg_iova)));
+	if (!s_i) {
+		ret = dpdmux_set_custom_key(&dpdmux_dev->dpdmux,
+				CMD_PRI_LOW, dpdmux_dev->token, key_cfg_iova);
 		if (ret) {
 			DPAA2_PMD_ERR("dpdmux_set_custom_key failed: err(%d)",
-					ret);
+				ret);
+			goto creation_error;
+		}
+		rte_memcpy(&s_kg_cfg, &kg_cfg, sizeof(struct dpkg_profile_cfg));
+	} else {
+		if (memcmp(&s_kg_cfg, &kg_cfg,
+			sizeof(struct dpkg_profile_cfg))) {
+			DPAA2_PMD_ERR("%s: Single flow support only.",
+				__func__);
+			ret = -ENOTSUP;
 			goto creation_error;
 		}
 	}
-	/* As now our key extract parameters are set, let us configure
-	 * the rule.
-	 */
-	flow->rule.key_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(key_iova));
-	flow->rule.mask_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(mask_iova));
-	flow->rule.key_size = key_size;
-	flow->rule.entry_index = i++;
 
-	vf_conf = (const struct rte_flow_action_vf *)(actions[0]->conf);
+	vf_conf = actions[0].conf;
 	if (vf_conf->id == 0 || vf_conf->id > dpdmux_dev->num_ifs) {
-		DPAA2_PMD_ERR("Invalid destination id");
+		DPAA2_PMD_ERR("Invalid destination id(%d)", vf_conf->id);
 		goto creation_error;
 	}
 	dpdmux_action.dest_if = vf_conf->id;
 
-	ret = dpdmux_add_custom_cls_entry(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
-					  dpdmux_dev->token, &flow->rule,
-					  &dpdmux_action);
+	rule.key_iova = key_iova;
+	rule.mask_iova = mask_iova;
+	rule.key_size = key_size;
+	rule.entry_index = s_i;
+	s_i++;
+
+	/* Now that the key extract parameters are set, configure the rule. */
+	ret = dpdmux_add_custom_cls_entry(&dpdmux_dev->dpdmux,
+			CMD_PRI_LOW, dpdmux_dev->token,
+			&rule, &dpdmux_action);
 	if (ret) {
-		DPAA2_PMD_ERR("dpdmux_add_custom_cls_entry failed: err(%d)",
-			      ret);
+		DPAA2_PMD_ERR("Add classification entry failed:err(%d)", ret);
 		goto creation_error;
 	}
 
-	return flow;
-
 creation_error:
-	rte_free((void *)key_cfg_iova);
-	rte_free((void *)flow);
-	return NULL;
+	/* rte_free() accepts NULL, so no guards are needed. */
+	rte_free(key_cfg_va);
+	rte_free(key_va);
+
+	return ret;
 }
 
 int
@@ -407,10 +491,11 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 	PMD_INIT_FUNC_TRACE();
 
 	/* Allocate DPAA2 dpdmux handle */
-	dpdmux_dev = rte_malloc(NULL, sizeof(struct dpaa2_dpdmux_dev), 0);
+	dpdmux_dev = rte_zmalloc(NULL,
+		sizeof(struct dpaa2_dpdmux_dev), RTE_CACHE_LINE_SIZE);
 	if (!dpdmux_dev) {
 		DPAA2_PMD_ERR("Memory allocation failed for DPDMUX Device");
-		return -1;
+		return -ENOMEM;
 	}
 
 	/* Open the dpdmux object */
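
Each case in the pattern loop above repeats one idiom: copy the spec
field into the key at the current offset, then either copy the caller's
mask or default it to all-ones. A hypothetical helper capturing that
idiom (a sketch, not part of this patch) could look like:

#include <stdint.h>
#include <string.h>
#include <rte_memcpy.h>

/* Copy one spec field into the key and apply the caller's mask,
 * defaulting to all-ones when no mask is given, as the per-item
 * cases above do inline.
 */
static void
copy_key_and_mask(uint8_t *key_va, uint8_t *mask_va, uint16_t offset,
	const void *spec_fld, const void *mask_fld, size_t len)
{
	rte_memcpy(&key_va[offset], spec_fld, len);
	if (mask_fld)
		rte_memcpy(&mask_va[offset], mask_fld, len);
	else
		memset(&mask_va[offset], 0xff, len);
}
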
diff --git a/drivers/net/dpaa2/dpaa2_parse_dump.h b/drivers/net/dpaa2/dpaa2_parse_dump.h
index f1cdc003de..78fd3b768c 100644
--- a/drivers/net/dpaa2/dpaa2_parse_dump.h
+++ b/drivers/net/dpaa2/dpaa2_parse_dump.h
@@ -105,6 +105,8 @@ dpaa2_print_faf(struct dpaa2_fapr_array *fapr)
 			faf_bits[i].name = "IPv4 1 Present";
 		else if (i == FAF_IPV6_FRAM)
 			faf_bits[i].name = "IPv6 1 Present";
+		else if (i == FAF_IP_FRAG_FRAM)
+			faf_bits[i].name = "IP fragment Present";
 		else if (i == FAF_UDP_FRAM)
 			faf_bits[i].name = "UDP Present";
 		else if (i == FAF_TCP_FRAM)
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index f93af1c65f..237c3cd6e7 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -26,12 +26,12 @@
  *    Associated actions.
  *
  * @return
- *    A valid handle in case of success, NULL otherwise.
+ *    0 in case of success, a negative errno value otherwise.
  */
-struct rte_flow *
+int
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
-			      struct rte_flow_item *pattern[],
-			      struct rte_flow_action *actions[]);
+	struct rte_flow_item pattern[],
+	struct rte_flow_action actions[]);
 int
 rte_pmd_dpaa2_mux_flow_destroy(uint32_t dpdmux_id,
 	uint16_t entry_index);
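
With the handle-returning variant gone, callers pass END-terminated item
and action arrays directly and check an integer return. A minimal usage
sketch; the dpdmux id, UDP port, and destination interface are
placeholders, and the array terminators follow generic rte_flow
conventions rather than anything stated in this patch:

#include <rte_byteorder.h>
#include <rte_flow.h>
#include <rte_pmd_dpaa2.h>

static int
setup_mux_rule(void)
{
	struct rte_flow_item_udp udp_spec = {
		.hdr.dst_port = RTE_BE16(4789),
	};
	/* Leaving .mask NULL lets the driver default it to all-ones. */
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* Steer matching traffic to dpdmux interface 1. */
	struct rte_flow_action_vf vf = { .id = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_VF, .conf = &vf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Returns 0 on success, a negative errno value otherwise. */
	return rte_pmd_dpaa2_mux_flow_create(0, pattern, actions);
}
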
-- 
2.25.1



* RE: [EXT] Re: [v4 23/42] net/dpaa2: flow API refactor
  2024-10-23  0:52           ` Stephen Hemminger
@ 2024-10-23 12:04             ` Vanshika Shukla
  0 siblings, 0 replies; 229+ messages in thread
From: Vanshika Shukla @ 2024-10-23 12:04 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, Hemant Agrawal, Sachin Saxena, Jun Yang

This seems ok.

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Wednesday, October 23, 2024 6:23 AM
> To: Vanshika Shukla <vanshika.shukla@nxp.com>
> Cc: dev@dpdk.org; Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin
> Saxena <sachin.saxena@nxp.com>; Jun Yang <jun.yang@nxp.com>
> Subject: [EXT] Re: [v4 23/42] net/dpaa2: flow API refactor
> 
> On Wed, 23 Oct 2024 00:42:36 +0530
> vanshika.shukla@nxp.com wrote:
> 
> > From: Jun Yang <jun.yang@nxp.com>
> >
> > 1) Gather redundant code with same logic from various protocol
> >    handlers to create common functions.
> > 2) struct dpaa2_key_profile is used to describe each extract's
> >    offset of rule and size. It's easy to insert new extract previous
> >    IP address extract.
> > 3) IP address profile is used to describe ipv4/v6 addresses extracts
> >    located at end of rule.
> > 4) L4 ports profile is used to describe the ports positions and offsets
> >    of rule.
> > 5) Once the extracts of QoS/FS table are update, go through all
> >    the existing flows of this table to update the rule data.
> >
> > Signed-off-by: Jun Yang <jun.yang@nxp.com>
> 
> Before, it looked possible to dump flow info to a file; now it only goes to stdout.
> Is that ok?
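
For comparison, the generic flow API keeps the destination explicit:
rte_flow_dev_dump() takes a FILE * argument. A sketch of that call, with
the port id, flow handle, and output path as placeholders:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <rte_flow.h>

static int
dump_flows_to_file(uint16_t port_id)
{
	struct rte_flow_error err;
	FILE *f = fopen("/tmp/flows.txt", "w");
	int ret;

	if (f == NULL)
		return -errno;
	/* A NULL flow handle requests a dump of all flows on the port. */
	ret = rte_flow_dev_dump(port_id, NULL, f, &err);
	fclose(f);
	return ret;
}
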


* RE: [v5 00/42] DPAA2 specific patches
  2024-10-23 11:59         ` [v5 00/42] DPAA2 specific patches vanshika.shukla
                             ` (41 preceding siblings ...)
  2024-10-23 11:59           ` [v5 42/42] net/dpaa2: dpdmux single flow/multiple rules support vanshika.shukla
@ 2024-11-07 11:24           ` Hemant Agrawal
  42 siblings, 0 replies; 229+ messages in thread
From: Hemant Agrawal @ 2024-11-07 11:24 UTC (permalink / raw)
  To: Vanshika Shukla, dev; +Cc: Sachin Saxena

> -----Original Message-----
> From: vanshika.shukla@nxp.com <vanshika.shukla@nxp.com>
> Sent: Wednesday, October 23, 2024 5:29 PM
> To: dev@dpdk.org
> Subject: [v5 00/42] DPAA2 specific patches
> 
> From: Vanshika Shukla <vanshika.shukla@nxp.com>
> 
> This series includes:
> -> Fixes and enhancements for NXP DPAA2 drivers.
> -> Upgrade with MC version 10.37
> -> Enhancements in DPDMUX code
> -> Fixes for coverity issues reported
> 
> V2 changes:
> Fixed the broken compilation for clang in:
>         "net/dpaa2: dpdmux single flow/multiple rules support" patch.
> Fixed checkpatch warnings in the below patches:
>         "net/dpaa2: protocol inside tunnel distribution"
>         "net/dpaa2: add VXLAN distribution support"
>         "bus/fslmc: dynamic IOVA mode configuration"
>         "bus/fslmc: enhance MC VFIO multiprocess support"
> 
> V3 changes:
> Rebased to the latest commit.
> 
> V4 changes:
> Fixed the checkpatch warnings in:
>         "bus/fslmc: get MC VFIO group FD directly"
>         "bus/fslmc: dynamic IOVA mode configuration"
>         "net/dpaa2: add GTP flow support"
>         "net/dpaa2: add flow support for IPsec AH and ESP
>         "bus/fslmc: enhance MC VFIO multiprocess support"
> Resolved comments by the reviewer.
> 
> V5 changes:
> Resolved comments by the reviewer in:
> 	"bus/fslmc: dynamic IOVA mode configuration"
> 
> Apeksha Gupta (2):
>   net/dpaa2: add proper MTU debugging print
>   net/dpaa2: store drop priority in mbuf
> 
> Brick Yang (1):
>   net/dpaa2: update DPNI link status method
> 
> Gagandeep Singh (3):
>   bus/fslmc: upgrade with MC version 10.37
>   net/dpaa2: fix memory corruption in TM
>   net/dpaa2: support software taildrop
> 
> Hemant Agrawal (2):
>   net/dpaa2: add support to dump dpdmux counters
>   bus/fslmc: change dpcon close as internal symbol
> 
> Jun Yang (23):
>   net/dpaa2: enhance Tx scatter-gather mempool
>   net/dpaa2: add new PMD API to check dpaa platform version
>   bus/fslmc: improve BMAN buffer acquire
>   bus/fslmc: get MC VFIO group FD directly
>   bus/fslmc: enhance MC VFIO multiprocess support
>   bus/fslmc: dynamic IOVA mode configuration
>   bus/fslmc: remove VFIO IRQ mapping
>   bus/fslmc: create dpaa2 device with it's object
>   bus/fslmc: introduce VFIO DMA mapping API for fslmc
>   net/dpaa2: flow API refactor
>   net/dpaa2: dump Rx parser result
>   net/dpaa2: enhancement of raw flow extract
>   net/dpaa2: frame attribute flags parser
>   net/dpaa2: add VXLAN distribution support
>   net/dpaa2: protocol inside tunnel distribution
>   net/dpaa2: eCPRI support by parser result
>   net/dpaa2: add GTP flow support
>   net/dpaa2: check if Soft parser is loaded
>   net/dpaa2: soft parser flow verification
>   net/dpaa2: add flow support for IPsec AH and ESP
>   net/dpaa2: check IOVA before sending MC command
>   net/dpaa2: add API to get endpoint name
>   net/dpaa2: dpdmux single flow/multiple rules support
> 
> Rohit Raj (6):
>   bus/fslmc: add close API to close DPAA2 device
>   net/dpaa2: support link state for eth interfaces
>   bus/fslmc: free VFIO group FD in case of add group failure
>   bus/fslmc: fix coverity issue
>   bus/fslmc: change qbman eq desc from d to desc
>   net/dpaa2: change miss flow ID macro name
> 
> Sachin Saxena (1):
>   net/dpaa2: improve DPDMUX error behavior settings
> 
> Vanshika Shukla (4):
>   net/dpaa2: support PTP packet one-step timestamp
>   net/dpaa2: dpdmux: add support for CVLAN
>   net/dpaa2: support VLAN traffic splitting
>   net/dpaa2: add support for C-VLAN and MAC
> 
>  doc/guides/platform/dpaa2.rst                 |    4 +-
>  drivers/bus/fslmc/bus_fslmc_driver.h          |   72 +-
>  drivers/bus/fslmc/fslmc_bus.c                 |   62 +-
>  drivers/bus/fslmc/fslmc_vfio.c                | 1621 +++-
>  drivers/bus/fslmc/fslmc_vfio.h                |   35 +-
>  drivers/bus/fslmc/mc/dpio.c                   |   94 +-
>  drivers/bus/fslmc/mc/fsl_dpcon.h              |    6 +-
>  drivers/bus/fslmc/mc/fsl_dpio.h               |   21 +-
>  drivers/bus/fslmc/mc/fsl_dpio_cmd.h           |   13 +-
>  drivers/bus/fslmc/mc/fsl_dpmng.h              |    4 +-
>  drivers/bus/fslmc/mc/fsl_dprc_cmd.h           |    8 +-
>  drivers/bus/fslmc/meson.build                 |    3 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |   38 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |   38 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |   50 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_dpio.h      |    3 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_dprc.c      |    8 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |  114 +-
>  .../bus/fslmc/qbman/include/fsl_qbman_debug.h |   12 +-
>  drivers/bus/fslmc/qbman/qbman_debug.c         |   49 +-
>  drivers/bus/fslmc/qbman/qbman_portal.c        |   30 +-
>  drivers/bus/fslmc/version.map                 |   16 +-
>  drivers/crypto/dpaa2_sec/mc/dpseci.c          |   91 +-
>  drivers/crypto/dpaa2_sec/mc/fsl_dpseci.h      |   47 +-
>  drivers/crypto/dpaa2_sec/mc/fsl_dpseci_cmd.h  |   19 +-
>  drivers/dma/dpaa2/dpaa2_qdma.c                |    1 +
>  drivers/event/dpaa2/dpaa2_hw_dpcon.c          |   38 +-
>  drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |    2 +-
>  drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |   63 +-
>  drivers/net/dpaa2/dpaa2_ethdev.c              |  597 +-
>  drivers/net/dpaa2/dpaa2_ethdev.h              |  225 +-
>  drivers/net/dpaa2/dpaa2_flow.c                | 7066 ++++++++++-------
>  drivers/net/dpaa2/dpaa2_mux.c                 |  541 +-
>  drivers/net/dpaa2/dpaa2_parse_dump.h          |  250 +
>  drivers/net/dpaa2/dpaa2_ptp.c                 |    8 +-
>  drivers/net/dpaa2/dpaa2_rxtx.c                |   32 +-
>  drivers/net/dpaa2/dpaa2_sparser.c             |   25 +-
>  drivers/net/dpaa2/dpaa2_tm.c                  |   72 +-
>  drivers/net/dpaa2/mc/dpdmux.c                 |  205 +-
>  drivers/net/dpaa2/mc/dpkg.c                   |   12 +-
>  drivers/net/dpaa2/mc/dpni.c                   |  383 +-
>  drivers/net/dpaa2/mc/fsl_dpdmux.h             |   99 +-
>  drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h         |   83 +-
>  drivers/net/dpaa2/mc/fsl_dpkg.h               |    7 +-
>  drivers/net/dpaa2/mc/fsl_dpni.h               |  176 +-
>  drivers/net/dpaa2/mc/fsl_dpni_cmd.h           |  125 +-
>  drivers/net/dpaa2/rte_pmd_dpaa2.h             |   51 +-
>  drivers/net/dpaa2/version.map                 |    6 +
>  48 files changed, 8271 insertions(+), 4254 deletions(-)  create mode 100644
> drivers/net/dpaa2/dpaa2_parse_dump.h
> 
> --
> 2.25.1

Series-Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>


* Re: [v5 14/42] bus/fslmc: enhance MC VFIO multiprocess support
  2024-10-23 11:59           ` [v5 14/42] bus/fslmc: enhance MC VFIO multiprocess support vanshika.shukla
@ 2024-11-09 17:07             ` Thomas Monjalon
  0 siblings, 0 replies; 229+ messages in thread
From: Thomas Monjalon @ 2024-11-09 17:07 UTC (permalink / raw)
  To: Hemant Agrawal, Sachin Saxena, Anatoly Burakov, Jun Yang
  Cc: dev, vanshika.shukla

23/10/2024 13:59, vanshika.shukla@nxp.com:
> --- a/drivers/bus/fslmc/version.map
> +++ b/drivers/bus/fslmc/version.map
> @@ -118,6 +118,7 @@ INTERNAL {
>         rte_fslmc_get_device_count;
>         rte_fslmc_object_register;
>         rte_global_active_dqs_list;
> +       rte_fslmc_vfio_mem_dmaunmap;
>  
>         local: *;
>  };

rte_fslmc_vfio_mem_dmaunmap is not flagged as internal (__rte_internal)
in the code, but it is listed in the INTERNAL section of the version map.
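
The usual fix is to tag the declaration itself so the annotation and the
map stay in sync. A minimal sketch; the prototype arguments are assumed
from the symbol name, not copied from the patch:

#include <stdint.h>
#include <rte_compat.h>

/* Assumed prototype: the point is only that a symbol exported in the
 * INTERNAL section of version.map is expected to carry the matching
 * __rte_internal tag where it is declared.
 */
__rte_internal
int rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size);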




* Re: [v5 23/42] net/dpaa2: flow API refactor
  2024-10-23 11:59           ` [v5 23/42] net/dpaa2: flow API refactor vanshika.shukla
@ 2024-11-09 19:01             ` Thomas Monjalon
  0 siblings, 0 replies; 229+ messages in thread
From: Thomas Monjalon @ 2024-11-09 19:01 UTC (permalink / raw)
  To: Hemant Agrawal, Sachin Saxena, Jun Yang; +Cc: dev, vanshika.shukla

23/10/2024 13:59, vanshika.shukla@nxp.com:
> +static inline int
> +dpaa2_fs_action_supported(enum rte_flow_action_type action)
> +{
> +       int i;
> +       int action_num = sizeof(dpaa2_supported_fs_action_type) /
> +               sizeof(enum rte_flow_action_type);
>  
> -               curr = LIST_NEXT(curr, next);
> +       for (i = 0; i < action_num; i++) {
> +               if (action == dpaa2_supported_fs_action_type[i])
> +                       return true;
>         }
>  
> -       return 0;
> +       return false;
>  }

One more compilation error:

unused function 'dpaa2_fs_action_supported'
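
If the helper is meant to stay for configurations that do not reference
it, one conventional remedy is the __rte_unused tag. This is a guess at
a possible fix, not the one taken in the thread; RTE_DIM (rte_common.h)
and stdbool.h are assumed to be available in the file:

/* Keep the helper but stop clang's -Wunused-function (an error under
 * -Werror) from firing when no caller is compiled in.
 */
static inline int __rte_unused
dpaa2_fs_action_supported(enum rte_flow_action_type action)
{
	size_t i;

	for (i = 0; i < RTE_DIM(dpaa2_supported_fs_action_type); i++) {
		if (action == dpaa2_supported_fs_action_type[i])
			return true;
	}
	return false;
}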


