* [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements
@ 2020-05-27 13:22 Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 01/37] bus/fslmc: fix getting the FD error Hemant Agrawal
` (38 more replies)
0 siblings, 39 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:22 UTC (permalink / raw)
To: dev, ferruh.yigit
This patch set mainly addresses the following enhancements:
1. Support for I/O processing from non-EAL threads
2. Reduced thread-local storage usage
3. Support for the HW FM library in DPAA, so that
additional queue and flow configuration can be done
4. Shared MAC and virtual storage profile support
5. DPAA2 flow support
Gagandeep Singh (3):
net/dpaa2: enable timestamp for Rx offload case as well
bus/fslmc: combine thread specific variables
net/dpaa: enable Tx queue taildrop
Hemant Agrawal (3):
bus/fslmc: support handle portal alloc failure
net/dpaa: add support for fmlib in dpdk
bus/dpaa: add Virtual Storage Profile port init
Jun Yang (17):
net/dpaa: add VSP support in FMLIB
net/dpaa: add support for Virtual Storage Profile
net/dpaa: add fmc parser support for VSP
net/dpaa2: dynamic flow control support
net/dpaa2: key extracts of flow API
net/dpaa2: sanity check for flow extracts
net/dpaa2: free flow rule memory
net/dpaa2: flow QoS or FS table entry indexing
net/dpaa2: define the size of table entry
net/dpaa2: log of flow extracts and rules
net/dpaa2: discrimination between IPv4 and IPv6
net/dpaa2: distribution size set on multiple TCs
net/dpaa2: index of queue action for flow
net/dpaa2: flow data sanity check
net/dpaa2: flow API QoS setup follows FS setup
net/dpaa2: flow API FS miss action configuration
net/dpaa2: configure per class distribution size
Nipun Gupta (7):
bus/fslmc: fix getting the FD error
net/dpaa: fix fd offset data type
bus/fslmc: rework portal allocation to a per thread basis
bus/fslmc: support portal migration
bus/fslmc: rename the cinh read functions used for ls1088
net/dpaa: update process specific device info
net/dpaa2: support raw flow classification
Radu Bulie (1):
bus/dpaa: add shared MAC support
Rohit Raj (3):
drivers: optimize thread local storage for dpaa
bus/dpaa: enable link state interrupt
bus/dpaa: enable set link status
Sachin Saxena (3):
net/dpaa: add 2.5G support
net/dpaa: add support for fmcless mode
net/dpaa: add RSS update func with FMCless
doc/guides/nics/features/dpaa.ini | 2 +-
drivers/bus/dpaa/base/fman/fman.c | 94 +-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 3 +-
drivers/bus/dpaa/base/qbman/process.c | 99 +-
drivers/bus/dpaa/base/qbman/qman.c | 43 +
drivers/bus/dpaa/dpaa_bus.c | 52 +-
drivers/bus/dpaa/include/fman.h | 8 +
drivers/bus/dpaa/include/fsl_qman.h | 18 +
drivers/bus/dpaa/include/process.h | 31 +
drivers/bus/dpaa/rte_bus_dpaa_version.map | 7 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 48 +-
drivers/bus/fslmc/Makefile | 1 +
drivers/bus/fslmc/fslmc_bus.c | 2 -
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 284 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.h | 10 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 10 +-
.../bus/fslmc/qbman/include/fsl_qbman_debug.h | 1 +
.../fslmc/qbman/include/fsl_qbman_portal.h | 8 +-
drivers/bus/fslmc/qbman/qbman_portal.c | 580 +-
drivers/bus/fslmc/qbman/qbman_portal.h | 19 +-
drivers/bus/fslmc/qbman/qbman_sys.h | 135 +-
drivers/bus/fslmc/rte_bus_fslmc_version.map | 1 -
drivers/bus/fslmc/rte_fslmc.h | 18 -
drivers/common/dpaax/compat.h | 5 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 11 +-
drivers/event/dpaa/dpaa_eventdev.c | 4 +-
drivers/mempool/dpaa/dpaa_mempool.c | 6 +-
drivers/net/dpaa/Makefile | 7 +-
drivers/net/dpaa/dpaa_ethdev.c | 757 ++-
drivers/net/dpaa/dpaa_ethdev.h | 19 +-
drivers/net/dpaa/dpaa_flow.c | 1079 ++++
drivers/net/dpaa/dpaa_flow.h | 19 +
drivers/net/dpaa/dpaa_fmc.c | 488 ++
drivers/net/dpaa/dpaa_rxtx.c | 77 +-
drivers/net/dpaa/dpaa_rxtx.h | 3 +
drivers/net/dpaa/fmlib/dpaa_integration.h | 48 +
drivers/net/dpaa/fmlib/fm_ext.h | 968 +++
drivers/net/dpaa/fmlib/fm_lib.c | 557 ++
drivers/net/dpaa/fmlib/fm_pcd_ext.h | 5164 +++++++++++++++++
drivers/net/dpaa/fmlib/fm_port_ext.h | 3512 +++++++++++
drivers/net/dpaa/fmlib/fm_vsp.c | 143 +
drivers/net/dpaa/fmlib/fm_vsp_ext.h | 140 +
drivers/net/dpaa/fmlib/ncsw_ext.h | 153 +
drivers/net/dpaa/fmlib/net_ext.h | 383 ++
drivers/net/dpaa/meson.build | 8 +-
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 50 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 95 +-
drivers/net/dpaa2/dpaa2_ethdev.h | 49 +-
drivers/net/dpaa2/dpaa2_flow.c | 4767 ++++++++++-----
49 files changed, 18160 insertions(+), 1826 deletions(-)
create mode 100644 drivers/net/dpaa/dpaa_flow.c
create mode 100644 drivers/net/dpaa/dpaa_flow.h
create mode 100644 drivers/net/dpaa/dpaa_fmc.c
create mode 100644 drivers/net/dpaa/fmlib/dpaa_integration.h
create mode 100644 drivers/net/dpaa/fmlib/fm_ext.h
create mode 100644 drivers/net/dpaa/fmlib/fm_lib.c
create mode 100644 drivers/net/dpaa/fmlib/fm_pcd_ext.h
create mode 100644 drivers/net/dpaa/fmlib/fm_port_ext.h
create mode 100644 drivers/net/dpaa/fmlib/fm_vsp.c
create mode 100644 drivers/net/dpaa/fmlib/fm_vsp_ext.h
create mode 100644 drivers/net/dpaa/fmlib/ncsw_ext.h
create mode 100644 drivers/net/dpaa/fmlib/net_ext.h
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 01/37] bus/fslmc: fix getting the FD error
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
@ 2020-05-27 13:22 ` Hemant Agrawal
2020-05-27 18:07 ` Akhil Goyal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 02/37] net/dpaa: fix fd offset data type Hemant Agrawal
` (37 subsequent siblings)
38 siblings, 1 reply; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:22 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: stable, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Fix the use of an incorrect register field for reading the FD error.
Fixes: 03e36408b9fb ("bus/fslmc: add macros required by QDMA for FLE and FD")
Cc: stable@dpdk.org
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 4682a5299..f1c70251a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -286,7 +286,7 @@ enum qbman_fd_format {
#define DPAA2_GET_FD_FRC(fd) ((fd)->simple.frc)
#define DPAA2_GET_FD_FLC(fd) \
(((uint64_t)((fd)->simple.flc_hi) << 32) + (fd)->simple.flc_lo)
-#define DPAA2_GET_FD_ERR(fd) ((fd)->simple.bpid_offset & 0x000000FF)
+#define DPAA2_GET_FD_ERR(fd) ((fd)->simple.ctrl & 0x000000FF)
#define DPAA2_GET_FLE_OFFSET(fle) (((fle)->fin_bpid_offset & 0x0FFF0000) >> 16)
#define DPAA2_SET_FLE_SG_EXT(fle) ((fle)->fin_bpid_offset |= (uint64_t)1 << 29)
#define DPAA2_IS_SET_FLE_SG_EXT(fle) \
--
2.17.1
* [dpdk-dev] [PATCH 02/37] net/dpaa: fix fd offset data type
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 01/37] bus/fslmc: fix getting the FD error Hemant Agrawal
@ 2020-05-27 13:22 ` Hemant Agrawal
2020-05-27 18:08 ` Akhil Goyal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 03/37] net/dpaa2: enable timestamp for Rx offload case as well Hemant Agrawal
` (36 subsequent siblings)
38 siblings, 1 reply; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:22 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: stable, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
On DPAA the FD offset field is 9 bits wide, but the scatter-gather
(SG) path was storing it in a uint8_t, truncating larger offsets.
This patch widens the variable to uint16_t.
Fixes: 8cffdcbe85aa ("net/dpaa: support scattered Rx")
Cc: stable@dpdk.org
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
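The truncation this patch fixes can be reproduced in isolation. The helpers below are illustrative only (they are not the driver's API): one models the old uint8_t storage, one the fixed uint16_t storage, for a valid 9-bit offset above 255.

```c
#include <stdint.h>

/* A DPAA frame descriptor offset is 9 bits wide (max 511). Storing it
 * in a uint8_t silently drops the high bit for offsets above 255 —
 * the bug the patch fixes by widening the variable to uint16_t.
 * These helpers are a standalone sketch, not driver code. */
static inline uint8_t old_fd_offset(uint16_t offset)
{
	return (uint8_t)offset;	/* old, buggy narrowing */
}

static inline uint16_t new_fd_offset(uint16_t offset)
{
	return offset;		/* fixed: full 9-bit value preserved */
}
```

For example, a 384-byte offset (0x180) becomes 128 (0x80) through the old type, so the SG buffer address computed from it would be wrong.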
---
drivers/net/dpaa/dpaa_rxtx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 5dba1db8b..3aeecb7d2 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -305,7 +305,7 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
struct qm_sg_entry *sgt, *sg_temp;
void *vaddr, *sg_vaddr;
int i = 0;
- uint8_t fd_offset = fd->offset;
+ uint16_t fd_offset = fd->offset;
vaddr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd));
if (!vaddr) {
--
2.17.1
* [dpdk-dev] [PATCH 03/37] net/dpaa2: enable timestamp for Rx offload case as well
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 01/37] bus/fslmc: fix getting the FD error Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 02/37] net/dpaa: fix fd offset data type Hemant Agrawal
@ 2020-05-27 13:22 ` Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 04/37] bus/fslmc: combine thread specific variables Hemant Agrawal
` (35 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:22 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
This patch enables packet timestamping conditionally, when the Rx
timestamp offload is requested (and unconditionally when IEEE 1588
support is compiled in).
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 2f031ec5c..d3eb10459 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -526,8 +526,10 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
return ret;
}
+#if !defined(RTE_LIBRTE_IEEE1588)
if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
- dpaa2_enable_ts = true;
+#endif
+ dpaa2_enable_ts = true;
if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
tx_l3_csum_offload = true;
--
2.17.1
* [dpdk-dev] [PATCH 04/37] bus/fslmc: combine thread specific variables
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (2 preceding siblings ...)
2020-05-27 13:22 ` [dpdk-dev] [PATCH 03/37] net/dpaa2: enable timestamp for Rx offload case as well Hemant Agrawal
@ 2020-05-27 13:22 ` Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 05/37] bus/fslmc: rework portal allocation to a per thread basis Hemant Agrawal
` (34 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:22 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
Combine the thread-specific variables into a single per-thread
structure to reduce thread-local storage usage.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
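The consolidation pattern the diff applies can be sketched standalone: instead of keeping the held-DQRR bookkeeping in its own per-lcore variable, it becomes a member of the DPIO device that is already reachable through the one thread-local portal pointer. All names below (`held_bufs`, `dpio_dev`, `mark_held`) are illustrative, not the driver's actual types.

```c
#include <stdint.h>

#define DEQUEUE_DEPTH 32	/* mirrors DPAA2_PORTAL_DEQUEUE_DEPTH */

/* Per-portal storage for DQRR entries held back by the driver. */
struct held_bufs {
	void    *mbuf[DEQUEUE_DEPTH];
	uint64_t dqrr_held;	/* bitmask of held DQRR ring entries */
	uint8_t  dqrr_size;
};

/* The DQRR state lives inside the device instead of its own TLS var. */
struct dpio_dev {
	int              index;
	struct held_bufs bufs;
};

/* One thread-local pointer instead of several thread-local objects. */
static _Thread_local struct dpio_dev *tl_dpio;

/* Record that DQRR entry 'idx' is held; returns the updated bitmask. */
static uint64_t mark_held(struct dpio_dev *dev, int idx)
{
	dev->bufs.dqrr_held |= 1ULL << idx;
	return dev->bufs.dqrr_held;
}
```

The accessor macros in the diff (DPAA2_PER_LCORE_DQRR_HELD and friends) are then just this indirection spelled through RTE_PER_LCORE.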
---
drivers/bus/fslmc/fslmc_bus.c | 2 --
drivers/bus/fslmc/portal/dpaa2_hw_dpio.h | 7 +++++++
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 8 ++++++++
drivers/bus/fslmc/rte_bus_fslmc_version.map | 1 -
drivers/bus/fslmc/rte_fslmc.h | 18 ------------------
5 files changed, 15 insertions(+), 21 deletions(-)
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index afbd82e8d..373659411 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -37,8 +37,6 @@ rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type)
return rte_fslmc_bus.device_count[device_type];
}
-RTE_DEFINE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
-
static void
cleanup_fslmc_device_list(void)
{
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 7c5966241..f6436f2e5 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -28,6 +28,13 @@ RTE_DECLARE_PER_LCORE(struct dpaa2_io_portal_t, _dpaa2_io);
#define DPAA2_PER_LCORE_ETHRX_DPIO RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
#define DPAA2_PER_LCORE_ETHRX_PORTAL DPAA2_PER_LCORE_ETHRX_DPIO->sw_portal
+#define DPAA2_PER_LCORE_DQRR_SIZE \
+ RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.dqrr_size
+#define DPAA2_PER_LCORE_DQRR_HELD \
+ RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.dqrr_held
+#define DPAA2_PER_LCORE_DQRR_MBUF(i) \
+ RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.mbuf[i]
+
/* Variable to store DPAA2 DQRR size */
extern uint8_t dpaa2_dqrr_size;
/* Variable to store DPAA2 EQCR size */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index f1c70251a..be48462dd 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -87,6 +87,13 @@ struct eqresp_metadata {
struct rte_mempool *mp;
};
+#define DPAA2_PORTAL_DEQUEUE_DEPTH 32
+struct dpaa2_portal_dqrr {
+ struct rte_mbuf *mbuf[DPAA2_PORTAL_DEQUEUE_DEPTH];
+ uint64_t dqrr_held;
+ uint8_t dqrr_size;
+};
+
struct dpaa2_dpio_dev {
TAILQ_ENTRY(dpaa2_dpio_dev) next;
/**< Pointer to Next device instance */
@@ -112,6 +119,7 @@ struct dpaa2_dpio_dev {
struct rte_intr_handle intr_handle; /* Interrupt related info */
int32_t epoll_fd; /**< File descriptor created for interrupt polling */
int32_t hw_id; /**< An unique ID of this DPIO device instance */
+ struct dpaa2_portal_dqrr dpaa2_held_bufs;
};
struct dpaa2_dpbp_dev {
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index 69e7dc6ad..2a79f4518 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -57,7 +57,6 @@ INTERNAL {
mc_get_version;
mc_send_command;
per_lcore__dpaa2_io;
- per_lcore_dpaa2_held_bufs;
qbman_check_command_complete;
qbman_check_new_result;
qbman_eq_desc_clear;
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 5078b48ee..80873fffc 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -137,24 +137,6 @@ struct rte_fslmc_bus {
/**< Count of all devices scanned */
};
-#define DPAA2_PORTAL_DEQUEUE_DEPTH 32
-
-/* Create storage for dqrr entries per lcore */
-struct dpaa2_portal_dqrr {
- struct rte_mbuf *mbuf[DPAA2_PORTAL_DEQUEUE_DEPTH];
- uint64_t dqrr_held;
- uint8_t dqrr_size;
-};
-
-RTE_DECLARE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
-
-#define DPAA2_PER_LCORE_DQRR_SIZE \
- RTE_PER_LCORE(dpaa2_held_bufs).dqrr_size
-#define DPAA2_PER_LCORE_DQRR_HELD \
- RTE_PER_LCORE(dpaa2_held_bufs).dqrr_held
-#define DPAA2_PER_LCORE_DQRR_MBUF(i) \
- RTE_PER_LCORE(dpaa2_held_bufs).mbuf[i]
-
/**
* Register a DPAA2 driver.
*
--
2.17.1
* [dpdk-dev] [PATCH 05/37] bus/fslmc: rework portal allocation to a per thread basis
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (3 preceding siblings ...)
2020-05-27 13:22 ` [dpdk-dev] [PATCH 04/37] bus/fslmc: combine thread specific variables Hemant Agrawal
@ 2020-05-27 13:22 ` Hemant Agrawal
2020-07-01 7:23 ` Ferruh Yigit
2020-05-27 13:22 ` [dpdk-dev] [PATCH 06/37] bus/fslmc: support handle portal alloc failure Hemant Agrawal
` (33 subsequent siblings)
38 siblings, 1 reply; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:22 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
This patch reworks portal allocation, previously done on a per-lcore
basis, to a per-thread basis.
Users can now also create their own threads and use DPAA2 portals
for packet I/O.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
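The core mechanism the diff introduces is a pthread key whose destructor releases the portal when the owning thread exits (dpaa2_portal_key / dpaa2_portal_finish in the patch). A minimal sketch of that pattern, with illustrative names (`portal_key`, `portal_release`, `run_demo`) rather than the driver's API:

```c
#include <pthread.h>
#include <stdlib.h>

static pthread_key_t portal_key;
static int released;	/* observable side effect, for demonstration */

/* Destructor: invoked automatically with the thread's stored value
 * when a thread that called pthread_setspecific() exits. */
static void portal_release(void *portal)
{
	free(portal);
	released = 1;
}

static void *worker(void *arg)
{
	(void)arg;
	void *portal = malloc(64);	/* stand-in for a DPIO portal */
	pthread_setspecific(portal_key, portal);
	return NULL;	/* thread exit triggers portal_release(portal) */
}

/* Returns 1 if the destructor ran when the worker thread exited. */
static int run_demo(void)
{
	pthread_t t;

	pthread_key_create(&portal_key, portal_release);
	pthread_create(&t, NULL, worker, NULL);
	pthread_join(t, NULL);
	return released;
}
```

This is what lets application-created (non-EAL) threads acquire a portal lazily and have it returned to the pool without an explicit teardown call.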
---
drivers/bus/fslmc/Makefile | 1 +
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 219 +++++++++++++----------
drivers/bus/fslmc/portal/dpaa2_hw_dpio.h | 3 -
3 files changed, 123 insertions(+), 100 deletions(-)
diff --git a/drivers/bus/fslmc/Makefile b/drivers/bus/fslmc/Makefile
index c70e359c8..b98d758ee 100644
--- a/drivers/bus/fslmc/Makefile
+++ b/drivers/bus/fslmc/Makefile
@@ -17,6 +17,7 @@ CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
CFLAGS += -I$(RTE_SDK)/drivers/common/dpaax
CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common
+LDLIBS += -lpthread
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
LDLIBS += -lrte_ethdev
LDLIBS += -lrte_common_dpaax
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 21c535f2f..b7a49e8f6 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -62,6 +62,9 @@ uint8_t dpaa2_dqrr_size;
/* Variable to store DPAA2 EQCR size */
uint8_t dpaa2_eqcr_size;
+/* Variable to hold the portal_key, once created.*/
+static pthread_key_t dpaa2_portal_key;
+
/*Stashing Macros default for LS208x*/
static int dpaa2_core_cluster_base = 0x04;
static int dpaa2_cluster_sz = 2;
@@ -87,6 +90,32 @@ static int dpaa2_cluster_sz = 2;
* Cluster 4 (ID = x07) : CPU14, CPU15;
*/
+static int
+dpaa2_get_core_id(void)
+{
+ rte_cpuset_t cpuset;
+ int i, ret, cpu_id = -1;
+
+ ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+ &cpuset);
+ if (ret) {
+ DPAA2_BUS_ERR("pthread_getaffinity_np() failed");
+ return ret;
+ }
+
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ if (CPU_ISSET(i, &cpuset)) {
+ if (cpu_id == -1)
+ cpu_id = i;
+ else
+ /* Multiple cpus are affined */
+ return -1;
+ }
+ }
+
+ return cpu_id;
+}
+
static int
dpaa2_core_cluster_sdest(int cpu_id)
{
@@ -97,7 +126,7 @@ dpaa2_core_cluster_sdest(int cpu_id)
#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
static void
-dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
+dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int cpu_id)
{
#define STRING_LEN 28
#define COMMAND_LEN 50
@@ -130,7 +159,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
return;
}
- cpu_mask = cpu_mask << dpaa2_cpu[lcoreid];
+ cpu_mask = cpu_mask << dpaa2_cpu[cpu_id];
snprintf(command, COMMAND_LEN, "echo %X > /proc/irq/%s/smp_affinity",
cpu_mask, token);
ret = system(command);
@@ -144,7 +173,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
fclose(file);
}
-static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
+static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
{
struct epoll_event epoll_ev;
int eventfd, dpio_epoll_fd, ret;
@@ -181,36 +210,42 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
}
dpio_dev->epoll_fd = dpio_epoll_fd;
- dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id, lcoreid);
+ dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id, cpu_id);
return 0;
}
+
+static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
+{
+ int ret;
+
+ ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+ if (ret)
+ DPAA2_BUS_ERR("DPIO interrupt disable failed");
+
+ close(dpio_dev->epoll_fd);
+}
#endif
static int
-dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
+dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
{
int sdest, ret;
int cpu_id;
/* Set the Stashing Destination */
- if (lcoreid < 0) {
- lcoreid = rte_get_master_lcore();
- if (lcoreid < 0) {
- DPAA2_BUS_ERR("Getting CPU Index failed");
- return -1;
- }
+ cpu_id = dpaa2_get_core_id();
+ if (cpu_id < 0) {
+ DPAA2_BUS_ERR("Thread not affined to a single core");
+ return -1;
}
- cpu_id = dpaa2_cpu[lcoreid];
-
/* Set the STASH Destination depending on Current CPU ID.
* Valid values of SDEST are 4,5,6,7. Where,
*/
-
sdest = dpaa2_core_cluster_sdest(cpu_id);
- DPAA2_BUS_DEBUG("Portal= %d CPU= %u lcore id =%u SDEST= %d",
- dpio_dev->index, cpu_id, lcoreid, sdest);
+ DPAA2_BUS_DEBUG("Portal= %d CPU= %u SDEST= %d",
+ dpio_dev->index, cpu_id, sdest);
ret = dpio_set_stashing_destination(dpio_dev->dpio, CMD_PRI_LOW,
dpio_dev->token, sdest);
@@ -220,7 +255,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
}
#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
- if (dpaa2_dpio_intr_init(dpio_dev, lcoreid)) {
+ if (dpaa2_dpio_intr_init(dpio_dev, cpu_id)) {
DPAA2_BUS_ERR("Interrupt registration failed for dpio");
return -1;
}
@@ -229,7 +264,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
return 0;
}
-static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
+static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
{
struct dpaa2_dpio_dev *dpio_dev = NULL;
int ret;
@@ -245,108 +280,83 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
dpio_dev, dpio_dev->index, syscall(SYS_gettid));
- ret = dpaa2_configure_stashing(dpio_dev, lcoreid);
- if (ret)
+ ret = dpaa2_configure_stashing(dpio_dev);
+ if (ret) {
DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+ return NULL;
+ }
+
+ ret = pthread_setspecific(dpaa2_portal_key, (void *)dpio_dev);
+ if (ret) {
+ DPAA2_BUS_ERR("pthread_setspecific failed with ret: %d", ret);
+ dpaa2_put_qbman_swp(dpio_dev);
+ return NULL;
+ }
return dpio_dev;
}
+static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
+{
+#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
+ dpaa2_dpio_intr_deinit(dpio_dev);
+#endif
+ if (dpio_dev)
+ rte_atomic16_clear(&dpio_dev->ref_count);
+}
+
int
dpaa2_affine_qbman_swp(void)
{
- unsigned int lcore_id = rte_lcore_id();
+ struct dpaa2_dpio_dev *dpio_dev;
uint64_t tid = syscall(SYS_gettid);
- if (lcore_id == LCORE_ID_ANY)
- lcore_id = rte_get_master_lcore();
- /* if the core id is not supported */
- else if (lcore_id >= RTE_MAX_LCORE)
- return -1;
-
- if (dpaa2_io_portal[lcore_id].dpio_dev) {
- DPAA2_BUS_DP_INFO("DPAA Portal=%p (%d) is being shared"
- " between thread %" PRIu64 " and current "
- "%" PRIu64 "\n",
- dpaa2_io_portal[lcore_id].dpio_dev,
- dpaa2_io_portal[lcore_id].dpio_dev->index,
- dpaa2_io_portal[lcore_id].net_tid,
- tid);
- RTE_PER_LCORE(_dpaa2_io).dpio_dev
- = dpaa2_io_portal[lcore_id].dpio_dev;
- rte_atomic16_inc(&dpaa2_io_portal
- [lcore_id].dpio_dev->ref_count);
- dpaa2_io_portal[lcore_id].net_tid = tid;
-
- DPAA2_BUS_DP_DEBUG("Old Portal=%p (%d) affined thread - "
- "%" PRIu64 "\n",
- dpaa2_io_portal[lcore_id].dpio_dev,
- dpaa2_io_portal[lcore_id].dpio_dev->index,
- tid);
- return 0;
- }
-
/* Populate the dpaa2_io_portal structure */
- dpaa2_io_portal[lcore_id].dpio_dev = dpaa2_get_qbman_swp(lcore_id);
-
- if (dpaa2_io_portal[lcore_id].dpio_dev) {
- RTE_PER_LCORE(_dpaa2_io).dpio_dev
- = dpaa2_io_portal[lcore_id].dpio_dev;
- dpaa2_io_portal[lcore_id].net_tid = tid;
+ if (!RTE_PER_LCORE(_dpaa2_io).dpio_dev) {
+ dpio_dev = dpaa2_get_qbman_swp();
+ if (!dpio_dev) {
+ DPAA2_BUS_ERR("No software portal resource left");
+ return -1;
+ }
+ RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
- return 0;
- } else {
- return -1;
+ DPAA2_BUS_INFO(
+ "DPAA Portal=%p (%d) is affined to thread %" PRIu64,
+ dpio_dev, dpio_dev->index, tid);
}
+ return 0;
}
int
dpaa2_affine_qbman_ethrx_swp(void)
{
- unsigned int lcore_id = rte_lcore_id();
+ struct dpaa2_dpio_dev *dpio_dev;
uint64_t tid = syscall(SYS_gettid);
- if (lcore_id == LCORE_ID_ANY)
- lcore_id = rte_get_master_lcore();
- /* if the core id is not supported */
- else if (lcore_id >= RTE_MAX_LCORE)
- return -1;
+ /* Populate the dpaa2_io_portal structure */
+ if (!RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev) {
+ dpio_dev = dpaa2_get_qbman_swp();
+ if (!dpio_dev) {
+ DPAA2_BUS_ERR("No software portal resource left");
+ return -1;
+ }
+ RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
- if (dpaa2_io_portal[lcore_id].ethrx_dpio_dev) {
- DPAA2_BUS_DP_INFO(
- "DPAA Portal=%p (%d) is being shared between thread"
- " %" PRIu64 " and current %" PRIu64 "\n",
- dpaa2_io_portal[lcore_id].ethrx_dpio_dev,
- dpaa2_io_portal[lcore_id].ethrx_dpio_dev->index,
- dpaa2_io_portal[lcore_id].sec_tid,
- tid);
- RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
- = dpaa2_io_portal[lcore_id].ethrx_dpio_dev;
- rte_atomic16_inc(&dpaa2_io_portal
- [lcore_id].ethrx_dpio_dev->ref_count);
- dpaa2_io_portal[lcore_id].sec_tid = tid;
-
- DPAA2_BUS_DP_DEBUG(
- "Old Portal=%p (%d) affined thread"
- " - %" PRIu64 "\n",
- dpaa2_io_portal[lcore_id].ethrx_dpio_dev,
- dpaa2_io_portal[lcore_id].ethrx_dpio_dev->index,
- tid);
- return 0;
+ DPAA2_BUS_INFO(
+ "DPAA Portal=%p (%d) is affined for eth rx to thread %"
+ PRIu64, dpio_dev, dpio_dev->index, tid);
}
+ return 0;
+}
- /* Populate the dpaa2_io_portal structure */
- dpaa2_io_portal[lcore_id].ethrx_dpio_dev =
- dpaa2_get_qbman_swp(lcore_id);
-
- if (dpaa2_io_portal[lcore_id].ethrx_dpio_dev) {
- RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
- = dpaa2_io_portal[lcore_id].ethrx_dpio_dev;
- dpaa2_io_portal[lcore_id].sec_tid = tid;
- return 0;
- } else {
- return -1;
- }
+static void dpaa2_portal_finish(void *arg)
+{
+ RTE_SET_USED(arg);
+
+ dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).dpio_dev);
+ dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev);
+
+ pthread_setspecific(dpaa2_portal_key, NULL);
}
/*
@@ -398,6 +408,7 @@ dpaa2_create_dpio_device(int vdev_fd,
struct vfio_region_info reg_info = { .argsz = sizeof(reg_info)};
struct qbman_swp_desc p_des;
struct dpio_attr attr;
+ int ret;
static int check_lcore_cpuset;
if (obj_info->num_regions < NUM_DPIO_REGIONS) {
@@ -547,12 +558,26 @@ dpaa2_create_dpio_device(int vdev_fd,
TAILQ_INSERT_TAIL(&dpio_dev_list, dpio_dev, next);
+ if (!dpaa2_portal_key) {
+ /* create the key, supplying a function that'll be invoked
+ * when a portal affined thread will be deleted.
+ */
+ ret = pthread_key_create(&dpaa2_portal_key,
+ dpaa2_portal_finish);
+ if (ret) {
+ DPAA2_BUS_DEBUG("Unable to create pthread key (%d)",
+ ret);
+ goto err;
+ }
+ }
+
return 0;
err:
if (dpio_dev->dpio) {
dpio_disable(dpio_dev->dpio, CMD_PRI_LOW, dpio_dev->token);
dpio_close(dpio_dev->dpio, CMD_PRI_LOW, dpio_dev->token);
+ rte_free(dpio_dev->eqresp);
rte_free(dpio_dev->dpio);
}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index f6436f2e5..b8eb8ee0a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -14,9 +14,6 @@
struct dpaa2_io_portal_t {
struct dpaa2_dpio_dev *dpio_dev;
struct dpaa2_dpio_dev *ethrx_dpio_dev;
- uint64_t net_tid;
- uint64_t sec_tid;
- void *eventdev;
};
/*! Global per thread DPIO portal */
--
2.17.1
* [dpdk-dev] [PATCH 06/37] bus/fslmc: support handle portal alloc failure
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (4 preceding siblings ...)
2020-05-27 13:22 ` [dpdk-dev] [PATCH 05/37] bus/fslmc: rework portal allocation to a per thread basis Hemant Agrawal
@ 2020-05-27 13:22 ` Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 07/37] bus/fslmc: support portal migration Hemant Agrawal
` (32 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:22 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Nipun Gupta, Hemant Agrawal
Add error handling for portal allocation failure.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 28 ++++++++++++++----------
1 file changed, 16 insertions(+), 12 deletions(-)
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index b7a49e8f6..5a12ff35d 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -264,6 +264,16 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
return 0;
}
+static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
+{
+ if (dpio_dev) {
+#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
+ dpaa2_dpio_intr_deinit(dpio_dev);
+#endif
+ rte_atomic16_clear(&dpio_dev->ref_count);
+ }
+}
+
static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
{
struct dpaa2_dpio_dev *dpio_dev = NULL;
@@ -274,8 +284,10 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
if (dpio_dev && rte_atomic16_test_and_set(&dpio_dev->ref_count))
break;
}
- if (!dpio_dev)
+ if (!dpio_dev) {
+ DPAA2_BUS_ERR("No software portal resource left");
return NULL;
+ }
DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
dpio_dev, dpio_dev->index, syscall(SYS_gettid));
@@ -283,6 +295,7 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
ret = dpaa2_configure_stashing(dpio_dev);
if (ret) {
DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+ rte_atomic16_clear(&dpio_dev->ref_count);
return NULL;
}
@@ -296,15 +309,6 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
return dpio_dev;
}
-static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
-{
-#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
- dpaa2_dpio_intr_deinit(dpio_dev);
-#endif
- if (dpio_dev)
- rte_atomic16_clear(&dpio_dev->ref_count);
-}
-
int
dpaa2_affine_qbman_swp(void)
{
@@ -315,7 +319,7 @@ dpaa2_affine_qbman_swp(void)
if (!RTE_PER_LCORE(_dpaa2_io).dpio_dev) {
dpio_dev = dpaa2_get_qbman_swp();
if (!dpio_dev) {
- DPAA2_BUS_ERR("No software portal resource left");
+ DPAA2_BUS_ERR("Error in software portal allocation");
return -1;
}
RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
@@ -337,7 +341,7 @@ dpaa2_affine_qbman_ethrx_swp(void)
if (!RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev) {
dpio_dev = dpaa2_get_qbman_swp();
if (!dpio_dev) {
- DPAA2_BUS_ERR("No software portal resource left");
+ DPAA2_BUS_ERR("Error in software portal allocation");
return -1;
}
RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
--
2.17.1
* [dpdk-dev] [PATCH 07/37] bus/fslmc: support portal migration
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (5 preceding siblings ...)
2020-05-27 13:22 ` [dpdk-dev] [PATCH 06/37] bus/fslmc: support handle portal alloc failure Hemant Agrawal
@ 2020-05-27 13:22 ` Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 08/37] bus/fslmc: rename the cinh read functions used for ls1088 Hemant Agrawal
` (31 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:22 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
This patch adds support for portal migration by disabling stashing
for portals used by non-affined threads, or by threads affined to
multiple cores.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
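The decision point is the single-core affinity check (dpaa2_get_core_id in the series): stashing needs a fixed stash destination, so it is only kept when the calling thread is pinned to exactly one CPU. A standalone sketch of that check, under the assumption of a Linux/glibc environment (`pinned_cpu_id` is an illustrative name):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Return the CPU id if the calling thread is affined to exactly one
 * core, or -1 otherwise — the case where the driver falls back to
 * disabling stashing so the portal can migrate between cores. */
int pinned_cpu_id(void)
{
	cpu_set_t set;
	int i, cpu = -1;

	if (pthread_getaffinity_np(pthread_self(), sizeof(set), &set))
		return -1;

	for (i = 0; i < CPU_SETSIZE; i++) {
		if (!CPU_ISSET(i, &set))
			continue;
		if (cpu != -1)
			return -1;	/* affined to multiple cores */
		cpu = i;
	}
	return cpu;
}
```

A thread pinned with pthread_setaffinity_np() to one core gets its stash destination derived from the returned id; anything else takes the migration-safe path.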
---
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 83 +----
.../bus/fslmc/qbman/include/fsl_qbman_debug.h | 1 +
.../fslmc/qbman/include/fsl_qbman_portal.h | 8 +-
drivers/bus/fslmc/qbman/qbman_portal.c | 340 +++++++++++++++++-
drivers/bus/fslmc/qbman/qbman_portal.h | 19 +-
drivers/bus/fslmc/qbman/qbman_sys.h | 135 ++++++-
6 files changed, 503 insertions(+), 83 deletions(-)
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 5a12ff35d..97be76116 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -53,10 +53,6 @@ static uint32_t io_space_count;
/* Variable to store DPAA2 platform type */
uint32_t dpaa2_svr_family;
-/* Physical core id for lcores running on dpaa2. */
-/* DPAA2 only support 1 lcore to 1 phy cpu mapping */
-static unsigned int dpaa2_cpu[RTE_MAX_LCORE];
-
/* Variable to store DPAA2 DQRR size */
uint8_t dpaa2_dqrr_size;
/* Variable to store DPAA2 EQCR size */
@@ -159,7 +155,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int cpu_id)
return;
}
- cpu_mask = cpu_mask << dpaa2_cpu[cpu_id];
+ cpu_mask = cpu_mask << cpu_id;
snprintf(command, COMMAND_LEN, "echo %X > /proc/irq/%s/smp_affinity",
cpu_mask, token);
ret = system(command);
@@ -228,17 +224,9 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
#endif
static int
-dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
+dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
{
int sdest, ret;
- int cpu_id;
-
- /* Set the Stashing Destination */
- cpu_id = dpaa2_get_core_id();
- if (cpu_id < 0) {
- DPAA2_BUS_ERR("Thread not affined to a single core");
- return -1;
- }
/* Set the STASH Destination depending on Current CPU ID.
* Valid values of SDEST are 4,5,6,7. Where,
@@ -277,6 +265,7 @@ static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
{
struct dpaa2_dpio_dev *dpio_dev = NULL;
+ int cpu_id;
int ret;
/* Get DPIO dev handle from list using index */
@@ -292,11 +281,19 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
dpio_dev, dpio_dev->index, syscall(SYS_gettid));
- ret = dpaa2_configure_stashing(dpio_dev);
- if (ret) {
- DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
- rte_atomic16_clear(&dpio_dev->ref_count);
- return NULL;
+ /* Set the Stashing Destination */
+ cpu_id = dpaa2_get_core_id();
+ if (cpu_id < 0) {
+ DPAA2_BUS_WARN("Thread not affined to a single core");
+ if (dpaa2_svr_family != SVR_LX2160A)
+ qbman_swp_update(dpio_dev->sw_portal, 1);
+ } else {
+ ret = dpaa2_configure_stashing(dpio_dev, cpu_id);
+ if (ret) {
+ DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+ rte_atomic16_clear(&dpio_dev->ref_count);
+ return NULL;
+ }
}
ret = pthread_setspecific(dpaa2_portal_key, (void *)dpio_dev);
@@ -363,46 +360,6 @@ static void dpaa2_portal_finish(void *arg)
pthread_setspecific(dpaa2_portal_key, NULL);
}
-/*
- * This checks for not supported lcore mappings as well as get the physical
- * cpuid for the lcore.
- * one lcore can only map to 1 cpu i.e. 1@10-14 not supported.
- * one cpu can be mapped to more than one lcores.
- */
-static int
-dpaa2_check_lcore_cpuset(void)
-{
- unsigned int lcore_id, i;
- int ret = 0;
-
- for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
- dpaa2_cpu[lcore_id] = 0xffffffff;
-
- for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- rte_cpuset_t cpuset = rte_lcore_cpuset(lcore_id);
-
- for (i = 0; i < CPU_SETSIZE; i++) {
- if (!CPU_ISSET(i, &cpuset))
- continue;
- if (i >= RTE_MAX_LCORE) {
- DPAA2_BUS_ERR("ERR:lcore map to core %u (>= %u) not supported",
- i, RTE_MAX_LCORE);
- ret = -1;
- continue;
- }
- RTE_LOG(DEBUG, EAL, "lcore id = %u cpu=%u\n",
- lcore_id, i);
- if (dpaa2_cpu[lcore_id] != 0xffffffff) {
- DPAA2_BUS_ERR("ERR:lcore map to multi-cpu not supported");
- ret = -1;
- continue;
- }
- dpaa2_cpu[lcore_id] = i;
- }
- }
- return ret;
-}
-
static int
dpaa2_create_dpio_device(int vdev_fd,
struct vfio_device_info *obj_info,
@@ -413,7 +370,6 @@ dpaa2_create_dpio_device(int vdev_fd,
struct qbman_swp_desc p_des;
struct dpio_attr attr;
int ret;
- static int check_lcore_cpuset;
if (obj_info->num_regions < NUM_DPIO_REGIONS) {
DPAA2_BUS_ERR("Not sufficient number of DPIO regions");
@@ -433,13 +389,6 @@ dpaa2_create_dpio_device(int vdev_fd,
/* Using single portal for all devices */
dpio_dev->mc_portal = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
- if (!check_lcore_cpuset) {
- check_lcore_cpuset = 1;
-
- if (dpaa2_check_lcore_cpuset() < 0)
- goto err;
- }
-
dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
RTE_CACHE_LINE_SIZE);
if (!dpio_dev->dpio) {
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 11267d439..54096e877 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2015 Freescale Semiconductor, Inc.
+ * Copyright 2020 NXP
*/
#ifndef _FSL_QBMAN_DEBUG_H
#define _FSL_QBMAN_DEBUG_H
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
index f820077d2..eb68c9cab 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (C) 2014 Freescale Semiconductor, Inc.
- * Copyright 2015-2019 NXP
+ * Copyright 2015-2020 NXP
*
*/
#ifndef _FSL_QBMAN_PORTAL_H
@@ -44,6 +44,12 @@ extern uint32_t dpaa2_svr_family;
*/
struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d);
+/**
+ * qbman_swp_update() - Update portal cacheability attributes.
+ * @p: the given qbman swp portal
+ */
+int qbman_swp_update(struct qbman_swp *p, int stash_off);
+
/**
* qbman_swp_finish() - Create and destroy a functional object representing
* the given QBMan portal descriptor.
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index d7ff74c7a..57f50b0d8 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2020 NXP
*
*/
@@ -82,6 +82,10 @@ qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd);
static int
+qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ const struct qbman_fd *fd);
+static int
qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd);
@@ -99,6 +103,12 @@ qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
uint32_t *flags,
int num_frames);
static int
+qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ const struct qbman_fd *fd,
+ uint32_t *flags,
+ int num_frames);
+static int
qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
@@ -118,6 +128,12 @@ qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
uint32_t *flags,
int num_frames);
static int
+qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ struct qbman_fd **fd,
+ uint32_t *flags,
+ int num_frames);
+static int
qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
struct qbman_fd **fd,
@@ -135,6 +151,11 @@ qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
const struct qbman_fd *fd,
int num_frames);
static int
+qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ const struct qbman_fd *fd,
+ int num_frames);
+static int
qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
@@ -143,9 +164,12 @@ qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
static int
qbman_swp_pull_direct(struct qbman_swp *s, struct qbman_pull_desc *d);
static int
+qbman_swp_pull_cinh_direct(struct qbman_swp *s, struct qbman_pull_desc *d);
+static int
qbman_swp_pull_mem_back(struct qbman_swp *s, struct qbman_pull_desc *d);
const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s);
+const struct qbman_result *qbman_swp_dqrr_next_cinh_direct(struct qbman_swp *s);
const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s);
static int
@@ -153,6 +177,10 @@ qbman_swp_release_direct(struct qbman_swp *s,
const struct qbman_release_desc *d,
const uint64_t *buffers, unsigned int num_buffers);
static int
+qbman_swp_release_cinh_direct(struct qbman_swp *s,
+ const struct qbman_release_desc *d,
+ const uint64_t *buffers, unsigned int num_buffers);
+static int
qbman_swp_release_mem_back(struct qbman_swp *s,
const struct qbman_release_desc *d,
const uint64_t *buffers, unsigned int num_buffers);
@@ -327,6 +355,28 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
return p;
}
+int qbman_swp_update(struct qbman_swp *p, int stash_off)
+{
+ const struct qbman_swp_desc *d = &p->desc;
+ struct qbman_swp_sys *s = &p->sys;
+ int ret;
+
+ /* Nothing needs to be done for QBMAN rev >= 5000 with fast access */
+ if ((qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+ && (d->cena_access_mode == qman_cena_fastest_access))
+ return 0;
+
+ ret = qbman_swp_sys_update(s, d, p->dqrr.dqrr_size, stash_off);
+ if (ret) {
+ pr_err("qbman_swp_sys_update() failed %d\n", ret);
+ return ret;
+ }
+
+ p->stash_off = stash_off;
+
+ return 0;
+}
+
void qbman_swp_finish(struct qbman_swp *p)
{
#ifdef QBMAN_CHECKING
@@ -462,6 +512,27 @@ void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint8_t cmd_verb)
#endif
}
+void qbman_swp_mc_submit_cinh(struct qbman_swp *p, void *cmd, uint8_t cmd_verb)
+{
+ uint8_t *v = cmd;
+#ifdef QBMAN_CHECKING
+ QBMAN_BUG_ON(p->mc.check != swp_mc_can_submit);
+#endif
+ /* TBD: "|=" is going to hurt performance. Need to move as many fields
+ * out of word zero, and for those that remain, the "OR" needs to occur
+ * at the caller side. This debug check helps to catch cases where the
+ * caller wants to OR but has forgotten to do so.
+ */
+ QBMAN_BUG_ON((*v & cmd_verb) != *v);
+ dma_wmb();
+ *v = cmd_verb | p->mc.valid_bit;
+ qbman_cinh_write_complete(&p->sys, QBMAN_CENA_SWP_CR, cmd);
+ clean(cmd);
+#ifdef QBMAN_CHECKING
+ p->mc.check = swp_mc_can_poll;
+#endif
+}
+
void *qbman_swp_mc_result(struct qbman_swp *p)
{
uint32_t *ret, verb;
@@ -500,6 +571,27 @@ void *qbman_swp_mc_result(struct qbman_swp *p)
return ret;
}
+void *qbman_swp_mc_result_cinh(struct qbman_swp *p)
+{
+ uint32_t *ret, verb;
+#ifdef QBMAN_CHECKING
+ QBMAN_BUG_ON(p->mc.check != swp_mc_can_poll);
+#endif
+ ret = qbman_cinh_read_shadow(&p->sys,
+ QBMAN_CENA_SWP_RR(p->mc.valid_bit));
+ /* Remove the valid-bit -
+ * command completed iff the rest is non-zero
+ */
+ verb = ret[0] & ~QB_VALID_BIT;
+ if (!verb)
+ return NULL;
+ p->mc.valid_bit ^= QB_VALID_BIT;
+#ifdef QBMAN_CHECKING
+ p->mc.check = swp_mc_can_start;
+#endif
+ return ret;
+}
+
/***********/
/* Enqueue */
/***********/
@@ -640,6 +732,16 @@ static inline void qbman_write_eqcr_am_rt_register(struct qbman_swp *p,
QMAN_RT_MODE);
}
+static void memcpy_byte_by_byte(void *to, const void *from, size_t n)
+{
+ const uint8_t *src = from;
+ volatile uint8_t *dest = to;
+ size_t i;
+
+ for (i = 0; i < n; i++)
+ dest[i] = src[i];
+}
+
static int qbman_swp_enqueue_array_mode_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
@@ -754,7 +856,7 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
return -EBUSY;
}
- p = qbman_cena_write_start_wo_shadow(&s->sys,
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
memcpy(&p[1], &cl[1], 28);
memcpy(&p[8], fd, sizeof(*fd));
@@ -762,8 +864,6 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
/* Set the verb byte, have to substitute in the valid-bit */
p[0] = cl[0] | s->eqcr.pi_vb;
- qbman_cena_write_complete_wo_shadow(&s->sys,
- QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
s->eqcr.pi++;
s->eqcr.pi &= full_mask;
s->eqcr.available--;
@@ -815,7 +915,10 @@ static int qbman_swp_enqueue_ring_mode(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd)
{
- return qbman_swp_enqueue_ring_mode_ptr(s, d, fd);
+ if (!s->stash_off)
+ return qbman_swp_enqueue_ring_mode_ptr(s, d, fd);
+ else
+ return qbman_swp_enqueue_ring_mode_cinh_direct(s, d, fd);
}
int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
@@ -1025,7 +1128,12 @@ int qbman_swp_enqueue_multiple(struct qbman_swp *s,
uint32_t *flags,
int num_frames)
{
- return qbman_swp_enqueue_multiple_ptr(s, d, fd, flags, num_frames);
+ if (!s->stash_off)
+ return qbman_swp_enqueue_multiple_ptr(s, d, fd, flags,
+ num_frames);
+ else
+ return qbman_swp_enqueue_multiple_cinh_direct(s, d, fd, flags,
+ num_frames);
}
static int qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
@@ -1233,7 +1341,12 @@ int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
uint32_t *flags,
int num_frames)
{
- return qbman_swp_enqueue_multiple_fd_ptr(s, d, fd, flags, num_frames);
+ if (!s->stash_off)
+ return qbman_swp_enqueue_multiple_fd_ptr(s, d, fd, flags,
+ num_frames);
+ else
+ return qbman_swp_enqueue_multiple_fd_cinh_direct(s, d, fd,
+ flags, num_frames);
}
static int qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
@@ -1426,7 +1539,13 @@ int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s,
const struct qbman_fd *fd,
int num_frames)
{
- return qbman_swp_enqueue_multiple_desc_ptr(s, d, fd, num_frames);
+ if (!s->stash_off)
+ return qbman_swp_enqueue_multiple_desc_ptr(s, d, fd,
+ num_frames);
+ else
+ return qbman_swp_enqueue_multiple_desc_cinh_direct(s, d, fd,
+ num_frames);
+
}
/*************************/
@@ -1574,6 +1693,30 @@ static int qbman_swp_pull_direct(struct qbman_swp *s,
return 0;
}
+static int qbman_swp_pull_cinh_direct(struct qbman_swp *s,
+ struct qbman_pull_desc *d)
+{
+ uint32_t *p;
+ uint32_t *cl = qb_cl(d);
+
+ if (!atomic_dec_and_test(&s->vdq.busy)) {
+ atomic_inc(&s->vdq.busy);
+ return -EBUSY;
+ }
+
+ d->pull.tok = s->sys.idx + 1;
+ s->vdq.storage = (void *)(size_t)d->pull.rsp_addr_virt;
+ p = qbman_cinh_write_start_wo_shadow(&s->sys, QBMAN_CENA_SWP_VDQCR);
+ memcpy_byte_by_byte(&p[1], &cl[1], 12);
+
+ /* Set the verb byte, have to substitute in the valid-bit */
+ lwsync();
+ p[0] = cl[0] | s->vdq.valid_bit;
+ s->vdq.valid_bit ^= QB_VALID_BIT;
+
+ return 0;
+}
+
static int qbman_swp_pull_mem_back(struct qbman_swp *s,
struct qbman_pull_desc *d)
{
@@ -1601,7 +1744,10 @@ static int qbman_swp_pull_mem_back(struct qbman_swp *s,
int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d)
{
- return qbman_swp_pull_ptr(s, d);
+ if (!s->stash_off)
+ return qbman_swp_pull_ptr(s, d);
+ else
+ return qbman_swp_pull_cinh_direct(s, d);
}
/****************/
@@ -1638,7 +1784,10 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s)
*/
const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *s)
{
- return qbman_swp_dqrr_next_ptr(s);
+ if (!s->stash_off)
+ return qbman_swp_dqrr_next_ptr(s);
+ else
+ return qbman_swp_dqrr_next_cinh_direct(s);
}
const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s)
@@ -1718,6 +1867,81 @@ const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s)
return p;
}
+const struct qbman_result *qbman_swp_dqrr_next_cinh_direct(struct qbman_swp *s)
+{
+ uint32_t verb;
+ uint32_t response_verb;
+ uint32_t flags;
+ const struct qbman_result *p;
+
+ /* Before using valid-bit to detect if something is there, we have to
+ * handle the case of the DQRR reset bug...
+ */
+ if (s->dqrr.reset_bug) {
+ /* We pick up new entries by cache-inhibited producer index,
+ * which means that a non-coherent mapping would require us to
+ * invalidate and read *only* once that PI has indicated that
+ * there's an entry here. The first trip around the DQRR ring
+ * will be much less efficient than all subsequent trips around
+ * it...
+ */
+ uint8_t pi = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_DQPI) &
+ QMAN_DQRR_PI_MASK;
+
+ /* there are new entries if pi != next_idx */
+ if (pi == s->dqrr.next_idx)
+ return NULL;
+
+ /* if next_idx is/was the last ring index, and 'pi' is
+ * different, we can disable the workaround as all the ring
+ * entries have now been DMA'd to so valid-bit checking is
+ * repaired. Note: this logic needs to be based on next_idx
+ * (which increments one at a time), rather than on pi (which
+ * can burst and wrap-around between our snapshots of it).
+ */
+ QBMAN_BUG_ON((s->dqrr.dqrr_size - 1) < 0);
+ if (s->dqrr.next_idx == (s->dqrr.dqrr_size - 1u)) {
+ pr_debug("DEBUG: next_idx=%d, pi=%d, clear reset bug\n",
+ s->dqrr.next_idx, pi);
+ s->dqrr.reset_bug = 0;
+ }
+ }
+ p = qbman_cinh_read_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
+
+ verb = p->dq.verb;
+
+ /* If the valid-bit isn't of the expected polarity, nothing there. Note,
+ * in the DQRR reset bug workaround, we shouldn't need to skip these
+ * check, because we've already determined that a new entry is available
+ * and we've invalidated the cacheline before reading it, so the
+ * valid-bit behaviour is repaired and should tell us what we already
+ * knew from reading PI.
+ */
+ if ((verb & QB_VALID_BIT) != s->dqrr.valid_bit)
+ return NULL;
+
+ /* There's something there. Move "next_idx" attention to the next ring
+ * entry (and prefetch it) before returning what we found.
+ */
+ s->dqrr.next_idx++;
+ if (s->dqrr.next_idx == s->dqrr.dqrr_size) {
+ s->dqrr.next_idx = 0;
+ s->dqrr.valid_bit ^= QB_VALID_BIT;
+ }
+ /* If this is the final response to a volatile dequeue command
+ * indicate that the vdq is no longer busy
+ */
+ flags = p->dq.stat;
+ response_verb = verb & QBMAN_RESPONSE_VERB_MASK;
+ if ((response_verb == QBMAN_RESULT_DQ) &&
+ (flags & QBMAN_DQ_STAT_VOLATILE) &&
+ (flags & QBMAN_DQ_STAT_EXPIRED))
+ atomic_inc(&s->vdq.busy);
+
+ return p;
+}
+
const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s)
{
uint32_t verb;
@@ -2096,6 +2320,37 @@ static int qbman_swp_release_direct(struct qbman_swp *s,
return 0;
}
+static int qbman_swp_release_cinh_direct(struct qbman_swp *s,
+ const struct qbman_release_desc *d,
+ const uint64_t *buffers,
+ unsigned int num_buffers)
+{
+ uint32_t *p;
+ const uint32_t *cl = qb_cl(d);
+ uint32_t rar = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_RAR);
+
+ pr_debug("RAR=%08x\n", rar);
+ if (!RAR_SUCCESS(rar))
+ return -EBUSY;
+
+ QBMAN_BUG_ON(!num_buffers || (num_buffers > 7));
+
+ /* Start the release command */
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_RCR(RAR_IDX(rar)));
+
+ /* Copy the caller's buffer pointers to the command */
+ memcpy_byte_by_byte(&p[2], buffers, num_buffers * sizeof(uint64_t));
+
+ /* Set the verb byte, have to substitute in the valid-bit and the
+ * number of buffers.
+ */
+ lwsync();
+ p[0] = cl[0] | RAR_VB(rar) | num_buffers;
+
+ return 0;
+}
+
static int qbman_swp_release_mem_back(struct qbman_swp *s,
const struct qbman_release_desc *d,
const uint64_t *buffers,
@@ -2134,7 +2389,11 @@ int qbman_swp_release(struct qbman_swp *s,
const uint64_t *buffers,
unsigned int num_buffers)
{
- return qbman_swp_release_ptr(s, d, buffers, num_buffers);
+ if (!s->stash_off)
+ return qbman_swp_release_ptr(s, d, buffers, num_buffers);
+ else
+ return qbman_swp_release_cinh_direct(s, d, buffers,
+ num_buffers);
}
/*******************/
@@ -2157,8 +2416,8 @@ struct qbman_acquire_rslt {
uint64_t buf[7];
};
-int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
- unsigned int num_buffers)
+static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
+ uint64_t *buffers, unsigned int num_buffers)
{
struct qbman_acquire_desc *p;
struct qbman_acquire_rslt *r;
@@ -2202,6 +2461,61 @@ int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
return (int)r->num;
}
+static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
+ uint64_t *buffers, unsigned int num_buffers)
+{
+ struct qbman_acquire_desc *p;
+ struct qbman_acquire_rslt *r;
+
+ if (!num_buffers || (num_buffers > 7))
+ return -EINVAL;
+
+ /* Start the management command */
+ p = qbman_swp_mc_start(s);
+
+ if (!p)
+ return -EBUSY;
+
+ /* Encode the caller-provided attributes */
+ p->bpid = bpid;
+ p->num = num_buffers;
+
+ /* Complete the management command */
+ r = qbman_swp_mc_complete_cinh(s, p, QBMAN_MC_ACQUIRE);
+ if (!r) {
+ pr_err("qbman: acquire from BPID %d failed, no response\n",
+ bpid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_MC_ACQUIRE);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Acquire buffers from BPID 0x%x failed, code=0x%02x\n",
+ bpid, r->rslt);
+ return -EIO;
+ }
+
+ QBMAN_BUG_ON(r->num > num_buffers);
+
+ /* Copy the acquired buffers to the caller's array */
+ u64_from_le32_copy(buffers, &r->buf[0], r->num);
+
+ return (int)r->num;
+}
+
+int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
+ unsigned int num_buffers)
+{
+ if (!s->stash_off)
+ return qbman_swp_acquire_direct(s, bpid, buffers, num_buffers);
+ else
+ return qbman_swp_acquire_cinh_direct(s, bpid, buffers,
+ num_buffers);
+}
+
/*****************/
/* FQ management */
/*****************/
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.h b/drivers/bus/fslmc/qbman/qbman_portal.h
index 3aaacae52..1cf791830 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/qbman_portal.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2020 NXP
*
*/
@@ -102,6 +102,7 @@ struct qbman_swp {
uint32_t ci;
int available;
} eqcr;
+ uint8_t stash_off;
};
/* -------------------------- */
@@ -118,7 +119,9 @@ struct qbman_swp {
*/
void *qbman_swp_mc_start(struct qbman_swp *p);
void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint8_t cmd_verb);
+void qbman_swp_mc_submit_cinh(struct qbman_swp *p, void *cmd, uint8_t cmd_verb);
void *qbman_swp_mc_result(struct qbman_swp *p);
+void *qbman_swp_mc_result_cinh(struct qbman_swp *p);
/* Wraps up submit + poll-for-result */
static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd,
@@ -135,6 +138,20 @@ static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd,
return cmd;
}
+static inline void *qbman_swp_mc_complete_cinh(struct qbman_swp *swp, void *cmd,
+ uint8_t cmd_verb)
+{
+ int loopvar = 1000;
+
+ qbman_swp_mc_submit_cinh(swp, cmd, cmd_verb);
+ do {
+ cmd = qbman_swp_mc_result_cinh(swp);
+ } while (!cmd && loopvar--);
+ QBMAN_BUG_ON(!loopvar);
+
+ return cmd;
+}
+
/* ---------------------- */
/* Descriptors/cachelines */
/* ---------------------- */
diff --git a/drivers/bus/fslmc/qbman/qbman_sys.h b/drivers/bus/fslmc/qbman/qbman_sys.h
index 55449edf3..61f817c47 100644
--- a/drivers/bus/fslmc/qbman/qbman_sys.h
+++ b/drivers/bus/fslmc/qbman/qbman_sys.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2019 NXP
+ * Copyright 2019-2020 NXP
*/
/* qbman_sys_decl.h and qbman_sys.h are the two platform-specific files in the
* driver. They are only included via qbman_private.h, which is itself a
@@ -190,6 +190,34 @@ static inline void qbman_cinh_write(struct qbman_swp_sys *s, uint32_t offset,
#endif
}
+static inline void *qbman_cinh_write_start_wo_shadow(struct qbman_swp_sys *s,
+ uint32_t offset)
+{
+#ifdef QBMAN_CINH_TRACE
+ pr_info("qbman_cinh_write_start(%p:%d:0x%03x)\n",
+ s->addr_cinh, s->idx, offset);
+#endif
+ QBMAN_BUG_ON(offset & 63);
+ return (s->addr_cinh + offset);
+}
+
+static inline void qbman_cinh_write_complete(struct qbman_swp_sys *s,
+ uint32_t offset, void *cmd)
+{
+ const uint32_t *shadow = cmd;
+ int loop;
+#ifdef QBMAN_CINH_TRACE
+ pr_info("qbman_cinh_write_complete(%p:%d:0x%03x) %p\n",
+ s->addr_cinh, s->idx, offset, shadow);
+ hexdump(cmd, 64);
+#endif
+ for (loop = 15; loop >= 1; loop--)
+ __raw_writel(shadow[loop], s->addr_cinh +
+ offset + loop * 4);
+ lwsync();
+ __raw_writel(shadow[0], s->addr_cinh + offset);
+}
+
static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset)
{
uint32_t reg = __raw_readl(s->addr_cinh + offset);
@@ -200,6 +228,35 @@ static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset)
return reg;
}
+static inline void *qbman_cinh_read_shadow(struct qbman_swp_sys *s,
+ uint32_t offset)
+{
+ uint32_t *shadow = (uint32_t *)(s->cena + offset);
+ unsigned int loop;
+#ifdef QBMAN_CINH_TRACE
+ pr_info(" %s (%p:%d:0x%03x) %p\n", __func__,
+ s->addr_cinh, s->idx, offset, shadow);
+#endif
+
+ for (loop = 0; loop < 16; loop++)
+ shadow[loop] = __raw_readl(s->addr_cinh + offset
+ + loop * 4);
+#ifdef QBMAN_CINH_TRACE
+ hexdump(shadow, 64);
+#endif
+ return shadow;
+}
+
+static inline void *qbman_cinh_read_wo_shadow(struct qbman_swp_sys *s,
+ uint32_t offset)
+{
+#ifdef QBMAN_CINH_TRACE
+ pr_info("qbman_cinh_read(%p:%d:0x%03x)\n",
+ s->addr_cinh, s->idx, offset);
+#endif
+ return s->addr_cinh + offset;
+}
+
static inline void *qbman_cena_write_start(struct qbman_swp_sys *s,
uint32_t offset)
{
@@ -476,6 +533,82 @@ static inline int qbman_swp_sys_init(struct qbman_swp_sys *s,
return 0;
}
+static inline int qbman_swp_sys_update(struct qbman_swp_sys *s,
+ const struct qbman_swp_desc *d,
+ uint8_t dqrr_size,
+ int stash_off)
+{
+ uint32_t reg;
+ int i;
+ int cena_region_size = 4*1024;
+ uint8_t est = 1;
+#ifdef RTE_ARCH_64
+ uint8_t wn = CENA_WRITE_ENABLE;
+#else
+ uint8_t wn = CINH_WRITE_ENABLE;
+#endif
+
+ if (stash_off)
+ wn = CINH_WRITE_ENABLE;
+
+ QBMAN_BUG_ON(d->idx < 0);
+#ifdef QBMAN_CHECKING
+ /* We should never be asked to initialise for a portal that isn't in
+ * the power-on state. (Ie. don't forget to reset portals when they are
+ * decommissioned!)
+ */
+ reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
+ QBMAN_BUG_ON(reg);
+#endif
+ if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+ && (d->cena_access_mode == qman_cena_fastest_access))
+ memset(s->addr_cena, 0, cena_region_size);
+ else {
+ /* Invalidate the portal memory.
+ * This ensures no stale cache lines
+ */
+ for (i = 0; i < cena_region_size; i += 64)
+ dccivac(s->addr_cena + i);
+ }
+
+ if (dpaa2_svr_family == SVR_LS1080A)
+ est = 0;
+
+ if (s->eqcr_mode == qman_eqcr_vb_array) {
+ reg = qbman_set_swp_cfg(dqrr_size, wn,
+ 0, 3, 2, 3, 1, 1, 1, 1, 1, 1);
+ } else {
+ if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000 &&
+ (d->cena_access_mode == qman_cena_fastest_access))
+ reg = qbman_set_swp_cfg(dqrr_size, wn,
+ 1, 3, 2, 0, 1, 1, 1, 1, 1, 1);
+ else
+ reg = qbman_set_swp_cfg(dqrr_size, wn,
+ est, 3, 2, 2, 1, 1, 1, 1, 1, 1);
+ }
+
+ if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+ && (d->cena_access_mode == qman_cena_fastest_access))
+ reg |= 1 << SWP_CFG_CPBS_SHIFT | /* memory-backed mode */
+ 1 << SWP_CFG_VPM_SHIFT | /* VDQCR read triggered mode */
+ 1 << SWP_CFG_CPM_SHIFT; /* CR read triggered mode */
+
+ qbman_cinh_write(s, QBMAN_CINH_SWP_CFG, reg);
+ reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
+ if (!reg) {
+ pr_err("The portal %d is not enabled!\n", s->idx);
+ return -1;
+ }
+
+ if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+ && (d->cena_access_mode == qman_cena_fastest_access)) {
+ qbman_cinh_write(s, QBMAN_CINH_SWP_EQCR_PI, QMAN_RT_MODE);
+ qbman_cinh_write(s, QBMAN_CINH_SWP_RCR_PI, QMAN_RT_MODE);
+ }
+
+ return 0;
+}
+
static inline void qbman_swp_sys_finish(struct qbman_swp_sys *s)
{
free(s->cena);
--
2.17.1
* [dpdk-dev] [PATCH 08/37] bus/fslmc: rename the cinh read functions used for ls1088
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (6 preceding siblings ...)
2020-05-27 13:22 ` [dpdk-dev] [PATCH 07/37] bus/fslmc: support portal migration Hemant Agrawal
@ 2020-05-27 13:22 ` Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 09/37] net/dpaa: enable Tx queue taildrop Hemant Agrawal
` (30 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:22 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
This patch renames the qbman I/O functions, as they only read from the
cinh registers but write to the cena registers. The rename makes way for
adding functions which work purely in cinh mode.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/bus/fslmc/qbman/qbman_portal.c | 250 +++++++++++++++++++++++--
1 file changed, 233 insertions(+), 17 deletions(-)
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 57f50b0d8..0a2af7be4 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -78,7 +78,7 @@ qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd);
static int
-qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_ring_mode_cinh_read_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd);
static int
@@ -97,7 +97,7 @@ qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
uint32_t *flags,
int num_frames);
static int
-qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_cinh_read_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
uint32_t *flags,
@@ -122,7 +122,7 @@ qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
uint32_t *flags,
int num_frames);
static int
-qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_fd_cinh_read_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
struct qbman_fd **fd,
uint32_t *flags,
@@ -146,7 +146,7 @@ qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
const struct qbman_fd *fd,
int num_frames);
static int
-qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_desc_cinh_read_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
int num_frames);
@@ -309,15 +309,15 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
&& (d->cena_access_mode == qman_cena_fastest_access)) {
p->eqcr.pi_ring_size = 32;
qbman_swp_enqueue_array_mode_ptr =
- qbman_swp_enqueue_array_mode_mem_back;
+ qbman_swp_enqueue_array_mode_mem_back;
qbman_swp_enqueue_ring_mode_ptr =
- qbman_swp_enqueue_ring_mode_mem_back;
+ qbman_swp_enqueue_ring_mode_mem_back;
qbman_swp_enqueue_multiple_ptr =
- qbman_swp_enqueue_multiple_mem_back;
+ qbman_swp_enqueue_multiple_mem_back;
qbman_swp_enqueue_multiple_fd_ptr =
- qbman_swp_enqueue_multiple_fd_mem_back;
+ qbman_swp_enqueue_multiple_fd_mem_back;
qbman_swp_enqueue_multiple_desc_ptr =
- qbman_swp_enqueue_multiple_desc_mem_back;
+ qbman_swp_enqueue_multiple_desc_mem_back;
qbman_swp_pull_ptr = qbman_swp_pull_mem_back;
qbman_swp_dqrr_next_ptr = qbman_swp_dqrr_next_mem_back;
qbman_swp_release_ptr = qbman_swp_release_mem_back;
@@ -325,13 +325,13 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
if (dpaa2_svr_family == SVR_LS1080A) {
qbman_swp_enqueue_ring_mode_ptr =
- qbman_swp_enqueue_ring_mode_cinh_direct;
+ qbman_swp_enqueue_ring_mode_cinh_read_direct;
qbman_swp_enqueue_multiple_ptr =
- qbman_swp_enqueue_multiple_cinh_direct;
+ qbman_swp_enqueue_multiple_cinh_read_direct;
qbman_swp_enqueue_multiple_fd_ptr =
- qbman_swp_enqueue_multiple_fd_cinh_direct;
+ qbman_swp_enqueue_multiple_fd_cinh_read_direct;
qbman_swp_enqueue_multiple_desc_ptr =
- qbman_swp_enqueue_multiple_desc_cinh_direct;
+ qbman_swp_enqueue_multiple_desc_cinh_read_direct;
}
for (mask_size = p->eqcr.pi_ring_size; mask_size > 0; mask_size >>= 1)
@@ -835,7 +835,7 @@ static int qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s,
return 0;
}
-static int qbman_swp_enqueue_ring_mode_cinh_direct(
+static int qbman_swp_enqueue_ring_mode_cinh_read_direct(
struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd)
@@ -873,6 +873,44 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
return 0;
}
+static int qbman_swp_enqueue_ring_mode_cinh_direct(
+ struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ const struct qbman_fd *fd)
+{
+ uint32_t *p;
+ const uint32_t *cl = qb_cl(d);
+ uint32_t eqcr_ci, full_mask, half_mask;
+
+ half_mask = (s->eqcr.pi_ci_mask>>1);
+ full_mask = s->eqcr.pi_ci_mask;
+ if (!s->eqcr.available) {
+ eqcr_ci = s->eqcr.ci;
+ s->eqcr.ci = qbman_cinh_read(&s->sys,
+ QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+ s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+ eqcr_ci, s->eqcr.ci);
+ if (!s->eqcr.available)
+ return -EBUSY;
+ }
+
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
+ memcpy_byte_by_byte(&p[1], &cl[1], 28);
+ memcpy_byte_by_byte(&p[8], fd, sizeof(*fd));
+ lwsync();
+
+ /* Set the verb byte, have to substitute in the valid-bit */
+ p[0] = cl[0] | s->eqcr.pi_vb;
+ s->eqcr.pi++;
+ s->eqcr.pi &= full_mask;
+ s->eqcr.available--;
+ if (!(s->eqcr.pi & half_mask))
+ s->eqcr.pi_vb ^= QB_VALID_BIT;
+
+ return 0;
+}
+
static int qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd)
@@ -999,7 +1037,7 @@ static int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
return num_enqueued;
}
-static int qbman_swp_enqueue_multiple_cinh_direct(
+static int qbman_swp_enqueue_multiple_cinh_read_direct(
struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
@@ -1069,6 +1107,67 @@ static int qbman_swp_enqueue_multiple_cinh_direct(
return num_enqueued;
}
+static int qbman_swp_enqueue_multiple_cinh_direct(
+ struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ const struct qbman_fd *fd,
+ uint32_t *flags,
+ int num_frames)
+{
+ uint32_t *p = NULL;
+ const uint32_t *cl = qb_cl(d);
+ uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+ int i, num_enqueued = 0;
+
+ half_mask = (s->eqcr.pi_ci_mask>>1);
+ full_mask = s->eqcr.pi_ci_mask;
+ if (!s->eqcr.available) {
+ eqcr_ci = s->eqcr.ci;
+ s->eqcr.ci = qbman_cinh_read(&s->sys,
+ QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+ s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+ eqcr_ci, s->eqcr.ci);
+ if (!s->eqcr.available)
+ return 0;
+ }
+
+ eqcr_pi = s->eqcr.pi;
+ num_enqueued = (s->eqcr.available < num_frames) ?
+ s->eqcr.available : num_frames;
+ s->eqcr.available -= num_enqueued;
+ /* Fill in the EQCR ring */
+ for (i = 0; i < num_enqueued; i++) {
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+ memcpy_byte_by_byte(&p[1], &cl[1], 28);
+ memcpy_byte_by_byte(&p[8], &fd[i], sizeof(*fd));
+ eqcr_pi++;
+ }
+
+ lwsync();
+
+ /* Set the verb byte, have to substitute in the valid-bit */
+ eqcr_pi = s->eqcr.pi;
+ for (i = 0; i < num_enqueued; i++) {
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+ p[0] = cl[0] | s->eqcr.pi_vb;
+ if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
+ struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+
+ d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+ ((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
+ }
+ eqcr_pi++;
+ if (!(eqcr_pi & half_mask))
+ s->eqcr.pi_vb ^= QB_VALID_BIT;
+ }
+
+ s->eqcr.pi = eqcr_pi & full_mask;
+
+ return num_enqueued;
+}
+
static int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
@@ -1205,7 +1304,7 @@ static int qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
return num_enqueued;
}
-static int qbman_swp_enqueue_multiple_fd_cinh_direct(
+static int qbman_swp_enqueue_multiple_fd_cinh_read_direct(
struct qbman_swp *s,
const struct qbman_eq_desc *d,
struct qbman_fd **fd,
@@ -1275,6 +1374,67 @@ static int qbman_swp_enqueue_multiple_fd_cinh_direct(
return num_enqueued;
}
+static int qbman_swp_enqueue_multiple_fd_cinh_direct(
+ struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ struct qbman_fd **fd,
+ uint32_t *flags,
+ int num_frames)
+{
+ uint32_t *p = NULL;
+ const uint32_t *cl = qb_cl(d);
+ uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+ int i, num_enqueued = 0;
+
+ half_mask = (s->eqcr.pi_ci_mask>>1);
+ full_mask = s->eqcr.pi_ci_mask;
+ if (!s->eqcr.available) {
+ eqcr_ci = s->eqcr.ci;
+ s->eqcr.ci = qbman_cinh_read(&s->sys,
+ QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+ s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+ eqcr_ci, s->eqcr.ci);
+ if (!s->eqcr.available)
+ return 0;
+ }
+
+ eqcr_pi = s->eqcr.pi;
+ num_enqueued = (s->eqcr.available < num_frames) ?
+ s->eqcr.available : num_frames;
+ s->eqcr.available -= num_enqueued;
+ /* Fill in the EQCR ring */
+ for (i = 0; i < num_enqueued; i++) {
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+ memcpy_byte_by_byte(&p[1], &cl[1], 28);
+ memcpy_byte_by_byte(&p[8], fd[i], sizeof(struct qbman_fd));
+ eqcr_pi++;
+ }
+
+ lwsync();
+
+ /* Set the verb byte, have to substitute in the valid-bit */
+ eqcr_pi = s->eqcr.pi;
+ for (i = 0; i < num_enqueued; i++) {
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+ p[0] = cl[0] | s->eqcr.pi_vb;
+ if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
+ struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+
+ d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+ ((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
+ }
+ eqcr_pi++;
+ if (!(eqcr_pi & half_mask))
+ s->eqcr.pi_vb ^= QB_VALID_BIT;
+ }
+
+ s->eqcr.pi = eqcr_pi & full_mask;
+
+ return num_enqueued;
+}
+
static int qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
struct qbman_fd **fd,
@@ -1413,7 +1573,7 @@ static int qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
return num_enqueued;
}
-static int qbman_swp_enqueue_multiple_desc_cinh_direct(
+static int qbman_swp_enqueue_multiple_desc_cinh_read_direct(
struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
@@ -1478,6 +1638,62 @@ static int qbman_swp_enqueue_multiple_desc_cinh_direct(
return num_enqueued;
}
+static int qbman_swp_enqueue_multiple_desc_cinh_direct(
+ struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ const struct qbman_fd *fd,
+ int num_frames)
+{
+ uint32_t *p;
+ const uint32_t *cl;
+ uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+ int i, num_enqueued = 0;
+
+ half_mask = (s->eqcr.pi_ci_mask>>1);
+ full_mask = s->eqcr.pi_ci_mask;
+ if (!s->eqcr.available) {
+ eqcr_ci = s->eqcr.ci;
+ s->eqcr.ci = qbman_cinh_read(&s->sys,
+ QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+ s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+ eqcr_ci, s->eqcr.ci);
+ if (!s->eqcr.available)
+ return 0;
+ }
+
+ eqcr_pi = s->eqcr.pi;
+ num_enqueued = (s->eqcr.available < num_frames) ?
+ s->eqcr.available : num_frames;
+ s->eqcr.available -= num_enqueued;
+ /* Fill in the EQCR ring */
+ for (i = 0; i < num_enqueued; i++) {
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+ cl = qb_cl(&d[i]);
+ memcpy_byte_by_byte(&p[1], &cl[1], 28);
+ memcpy_byte_by_byte(&p[8], &fd[i], sizeof(*fd));
+ eqcr_pi++;
+ }
+
+ lwsync();
+
+ /* Set the verb byte, have to substitute in the valid-bit */
+ eqcr_pi = s->eqcr.pi;
+ for (i = 0; i < num_enqueued; i++) {
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+ cl = qb_cl(&d[i]);
+ p[0] = cl[0] | s->eqcr.pi_vb;
+ eqcr_pi++;
+ if (!(eqcr_pi & half_mask))
+ s->eqcr.pi_vb ^= QB_VALID_BIT;
+ }
+
+ s->eqcr.pi = eqcr_pi & full_mask;
+
+ return num_enqueued;
+}
+
static int qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 09/37] net/dpaa: enable Tx queue taildrop
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (7 preceding siblings ...)
2020-05-27 13:22 ` [dpdk-dev] [PATCH 08/37] bus/fslmc: rename the cinh read functions used for ls1088 Hemant Agrawal
@ 2020-05-27 13:22 ` Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 10/37] net/dpaa: add 2.5G support Hemant Agrawal
` (29 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:22 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
Enable congestion handling/tail drop for TX queues.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 43 +++++++++
drivers/bus/dpaa/include/fsl_qman.h | 17 ++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 2 +
drivers/net/dpaa/dpaa_ethdev.c | 111 ++++++++++++++++++++--
drivers/net/dpaa/dpaa_ethdev.h | 1 +
drivers/net/dpaa/dpaa_rxtx.c | 71 ++++++++++++++
drivers/net/dpaa/dpaa_rxtx.h | 3 +
7 files changed, 242 insertions(+), 6 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index b596e79c2..447c09177 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -40,6 +40,8 @@
spin_unlock(&__fq478->fqlock); \
} while (0)
+static qman_cb_free_mbuf qman_free_mbuf_cb;
+
static inline void fq_set(struct qman_fq *fq, u32 mask)
{
dpaa_set_bits(mask, &fq->flags);
@@ -790,6 +792,47 @@ static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq,
FQUNLOCK(fq);
}
+void
+qman_ern_register_cb(qman_cb_free_mbuf cb)
+{
+ qman_free_mbuf_cb = cb;
+}
+
+
+void
+qman_ern_poll_free(void)
+{
+ struct qman_portal *p = get_affine_portal();
+ u8 verb, num = 0;
+ const struct qm_mr_entry *msg;
+ const struct qm_fd *fd;
+ struct qm_mr_entry swapped_msg;
+
+ qm_mr_pvb_update(&p->p);
+ msg = qm_mr_current(&p->p);
+
+ while (msg != NULL) {
+ swapped_msg = *msg;
+ hw_fd_to_cpu(&swapped_msg.ern.fd);
+ verb = msg->ern.verb & QM_MR_VERB_TYPE_MASK;
+ fd = &swapped_msg.ern.fd;
+
+ if (unlikely(verb & 0x20)) {
+ printf("HW ERN notification, Nothing to do\n");
+ } else {
+ if ((fd->bpid & 0xff) != 0xff)
+ qman_free_mbuf_cb(fd);
+ }
+
+ num++;
+ qm_mr_next(&p->p);
+ qm_mr_pvb_update(&p->p);
+ msg = qm_mr_current(&p->p);
+ }
+
+ qm_mr_cci_consume(&p->p, num);
+}
+
static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
{
const struct qm_mr_entry *msg;
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 78b698f39..0d9cfc339 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1158,6 +1158,10 @@ typedef void (*qman_cb_mr)(struct qman_portal *qm, struct qman_fq *fq,
/* This callback type is used when handling DCP ERNs */
typedef void (*qman_cb_dc_ern)(struct qman_portal *qm,
const struct qm_mr_entry *msg);
+
+/* This callback type is used to free the mbufs carried in ERN messages */
+typedef uint16_t (*qman_cb_free_mbuf)(const struct qm_fd *fd);
+
/*
* s/w-visible states. Ie. tentatively scheduled + truly scheduled + active +
* held-active + held-suspended are just "sched". Things like "retired" will not
@@ -1808,6 +1812,19 @@ __rte_internal
int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
int frames_to_send);
+/**
+ * qman_ern_poll_free - Poll the MR and call the registered callback to
+ * free mbufs when SW ERNs are received.
+ */
+__rte_internal
+void qman_ern_poll_free(void);
+
+/**
+ * qman_ern_register_cb - Register a callback function to free buffers.
+ */
+__rte_internal
+void qman_ern_register_cb(qman_cb_free_mbuf cb);
+
/**
* qman_enqueue_multi_fq - Enqueue multiple frames to their respective frame
* queues.
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 46d42f7d6..8069b05af 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -61,6 +61,8 @@ INTERNAL {
qman_enqueue;
qman_enqueue_multi;
qman_enqueue_multi_fq;
+ qman_ern_poll_free;
+ qman_ern_register_cb;
qman_fq_fqid;
qman_fq_portal_irqsource_add;
qman_fq_portal_irqsource_remove;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 13d1c6a1f..9448615ab 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2020 NXP
*
*/
/* System headers */
@@ -88,9 +88,12 @@ static int dpaa_push_mode_max_queue = DPAA_DEFAULT_PUSH_MODE_QUEUE;
static int dpaa_push_queue_idx; /* Queue index which are in push mode*/
-/* Per FQ Taildrop in frame count */
+/* Per RX FQ Taildrop in frame count */
static unsigned int td_threshold = CGR_RX_PERFQ_THRESH;
+/* Per TX FQ Taildrop in frame count, disabled by default */
+static unsigned int td_tx_threshold;
+
struct rte_dpaa_xstats_name_off {
char name[RTE_ETH_XSTATS_NAME_SIZE];
uint32_t offset;
@@ -277,7 +280,11 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* Change tx callback to the real one */
- dev->tx_pkt_burst = dpaa_eth_queue_tx;
+ if (dpaa_intf->cgr_tx)
+ dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+ else
+ dev->tx_pkt_burst = dpaa_eth_queue_tx;
+
fman_if_enable_rx(dpaa_intf->fif);
return 0;
@@ -869,6 +876,7 @@ int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_PMD_INFO("Tx queue setup for queue index: %d fq_id (0x%x)",
queue_idx, dpaa_intf->tx_queues[queue_idx].fqid);
dev->data->tx_queues[queue_idx] = &dpaa_intf->tx_queues[queue_idx];
+
return 0;
}
@@ -1238,9 +1246,19 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
/* Initialise a Tx FQ */
static int dpaa_tx_queue_init(struct qman_fq *fq,
- struct fman_if *fman_intf)
+ struct fman_if *fman_intf,
+ struct qman_cgr *cgr_tx)
{
struct qm_mcc_initfq opts = {0};
+ struct qm_mcc_initcgr cgr_opts = {
+ .we_mask = QM_CGR_WE_CS_THRES |
+ QM_CGR_WE_CSTD_EN |
+ QM_CGR_WE_MODE,
+ .cgr = {
+ .cstd_en = QM_CGR_EN,
+ .mode = QMAN_CGR_MODE_FRAME
+ }
+ };
int ret;
ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID |
@@ -1259,6 +1277,27 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
DPAA_PMD_DEBUG("init tx fq %p, fqid 0x%x", fq, fq->fqid);
+
+ if (cgr_tx) {
+ /* Enable tail drop with cgr on this queue */
+ qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres,
+ td_tx_threshold, 0);
+ cgr_tx->cb = NULL;
+ ret = qman_create_cgr(cgr_tx, QMAN_CGR_FLAG_USE_INIT,
+ &cgr_opts);
+ if (ret) {
+ DPAA_PMD_WARN(
+ "tx taildrop init fail on tx fqid 0x%x(ret=%d)",
+ fq->fqid, ret);
+ goto without_cgr;
+ }
+ opts.we_mask |= QM_INITFQ_WE_CGID;
+ opts.fqd.cgid = cgr_tx->cgrid;
+ opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+ DPAA_PMD_DEBUG("Tx FQ tail drop enabled, threshold = %d\n",
+ td_tx_threshold);
+ }
+without_cgr:
ret = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &opts);
if (ret)
DPAA_PMD_ERR("init tx fqid 0x%x failed %d", fq->fqid, ret);
@@ -1311,6 +1350,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
struct fman_if *fman_intf;
struct fman_if_bpool *bp, *tmp_bp;
uint32_t cgrid[DPAA_MAX_NUM_PCD_QUEUES];
+ uint32_t cgrid_tx[MAX_DPAA_CORES];
char eth_buf[RTE_ETHER_ADDR_FMT_SIZE];
PMD_INIT_FUNC_TRACE();
@@ -1321,7 +1361,10 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->dev_ops = &dpaa_devops;
/* Plugging of UCODE burst API not supported in Secondary */
eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
- eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
+ if (dpaa_intf->cgr_tx)
+ eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+ else
+ eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
qman_set_fq_lookup_table(
dpaa_intf->rx_queues->qman_fq_lookup_table);
@@ -1368,6 +1411,21 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
+ memset(cgrid, 0, sizeof(cgrid));
+ memset(cgrid_tx, 0, sizeof(cgrid_tx));
+
+ /* if DPAA_TX_TAILDROP_THRESHOLD is set, use that value; if 0, it means
+ * Tx tail drop is disabled.
+ */
+ if (getenv("DPAA_TX_TAILDROP_THRESHOLD")) {
+ td_tx_threshold = atoi(getenv("DPAA_TX_TAILDROP_THRESHOLD"));
+ DPAA_PMD_DEBUG("Tail drop threshold env configured: %u",
+ td_tx_threshold);
+ /* if a very large value is being configured */
+ if (td_tx_threshold > UINT16_MAX)
+ td_tx_threshold = CGR_RX_PERFQ_THRESH;
+ }
+
/* If congestion control is enabled globally*/
if (td_threshold) {
dpaa_intf->cgr_rx = rte_zmalloc(NULL,
@@ -1416,9 +1474,36 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
goto free_rx;
}
+ /* If congestion control is enabled globally*/
+ if (td_tx_threshold) {
+ dpaa_intf->cgr_tx = rte_zmalloc(NULL,
+ sizeof(struct qman_cgr) * MAX_DPAA_CORES,
+ MAX_CACHELINE);
+ if (!dpaa_intf->cgr_tx) {
+ DPAA_PMD_ERR("Failed to alloc mem for cgr_tx\n");
+ ret = -ENOMEM;
+ goto free_rx;
+ }
+
+ ret = qman_alloc_cgrid_range(&cgrid_tx[0], MAX_DPAA_CORES,
+ 1, 0);
+ if (ret != MAX_DPAA_CORES) {
+ DPAA_PMD_WARN("insufficient CGRIDs available");
+ ret = -EINVAL;
+ goto free_rx;
+ }
+ } else {
+ dpaa_intf->cgr_tx = NULL;
+ }
+
+
for (loop = 0; loop < MAX_DPAA_CORES; loop++) {
+ if (dpaa_intf->cgr_tx)
+ dpaa_intf->cgr_tx[loop].cgrid = cgrid_tx[loop];
+
ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
- fman_intf);
+ fman_intf,
+ dpaa_intf->cgr_tx ? &dpaa_intf->cgr_tx[loop] : NULL);
if (ret)
goto free_tx;
dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
@@ -1489,6 +1574,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
free_rx:
rte_free(dpaa_intf->cgr_rx);
+ rte_free(dpaa_intf->cgr_tx);
rte_free(dpaa_intf->rx_queues);
dpaa_intf->rx_queues = NULL;
dpaa_intf->nb_rx_queues = 0;
@@ -1529,6 +1615,17 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
rte_free(dpaa_intf->cgr_rx);
dpaa_intf->cgr_rx = NULL;
+ /* Release TX congestion Groups */
+ if (dpaa_intf->cgr_tx) {
+ for (loop = 0; loop < MAX_DPAA_CORES; loop++)
+ qman_delete_cgr(&dpaa_intf->cgr_tx[loop]);
+
+ qman_release_cgrid_range(dpaa_intf->cgr_tx[loop].cgrid,
+ MAX_DPAA_CORES);
+ rte_free(dpaa_intf->cgr_tx);
+ dpaa_intf->cgr_tx = NULL;
+ }
+
rte_free(dpaa_intf->rx_queues);
dpaa_intf->rx_queues = NULL;
@@ -1633,6 +1730,8 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
eth_dev->device = &dpaa_dev->device;
dpaa_dev->eth_dev = eth_dev;
+ qman_ern_register_cb(dpaa_free_mbuf);
+
/* Invoke PMD device initialization function */
diag = dpaa_dev_init(eth_dev);
if (diag == 0) {
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 6a6477ac8..d4261f885 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -111,6 +111,7 @@ struct dpaa_if {
struct qman_fq *rx_queues;
struct qman_cgr *cgr_rx;
struct qman_fq *tx_queues;
+ struct qman_cgr *cgr_tx;
struct qman_fq debug_queues[2];
uint16_t nb_rx_queues;
uint16_t nb_tx_queues;
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 3aeecb7d2..819cad7c6 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -398,6 +398,69 @@ dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
return mbuf;
}
+uint16_t
+dpaa_free_mbuf(const struct qm_fd *fd)
+{
+ struct rte_mbuf *mbuf;
+ struct dpaa_bp_info *bp_info;
+ uint8_t format;
+ void *ptr;
+
+ bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+ format = (fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
+ if (unlikely(format == qm_fd_sg)) {
+ struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
+ struct qm_sg_entry *sgt, *sg_temp;
+ void *vaddr, *sg_vaddr;
+ int i = 0;
+ uint16_t fd_offset = fd->offset;
+
+ vaddr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd));
+ if (!vaddr) {
+ DPAA_PMD_ERR("unable to convert physical address");
+ return -1;
+ }
+ sgt = vaddr + fd_offset;
+ sg_temp = &sgt[i++];
+ hw_sg_to_cpu(sg_temp);
+ temp = (struct rte_mbuf *)
+ ((char *)vaddr - bp_info->meta_data_size);
+ sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
+ qm_sg_entry_get64(sg_temp));
+
+ first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+ bp_info->meta_data_size);
+ first_seg->nb_segs = 1;
+ prev_seg = first_seg;
+ while (i < DPAA_SGT_MAX_ENTRIES) {
+ sg_temp = &sgt[i++];
+ hw_sg_to_cpu(sg_temp);
+ sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
+ qm_sg_entry_get64(sg_temp));
+ cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+ bp_info->meta_data_size);
+ first_seg->nb_segs += 1;
+ prev_seg->next = cur_seg;
+ if (sg_temp->final) {
+ cur_seg->next = NULL;
+ break;
+ }
+ prev_seg = cur_seg;
+ }
+
+ rte_pktmbuf_free_seg(temp);
+ rte_pktmbuf_free_seg(first_seg);
+ return 0;
+ }
+
+ ptr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd));
+ mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
+
+ rte_pktmbuf_free(mbuf);
+
+ return 0;
+}
+
/* Specific for LS1043 */
void
dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
@@ -1011,6 +1074,14 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
return sent;
}
+uint16_t
+dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+ qman_ern_poll_free();
+
+ return dpaa_eth_queue_tx(q, bufs, nb_bufs);
+}
+
uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
struct rte_mbuf **bufs __rte_unused,
uint16_t nb_bufs __rte_unused)
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 4f896fba1..fe8eb6dc7 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -254,6 +254,8 @@ struct annotations_t {
uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+uint16_t dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs,
+ uint16_t nb_bufs);
uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
@@ -266,6 +268,7 @@ int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
struct qm_fd *fd,
uint32_t bpid);
+uint16_t dpaa_free_mbuf(const struct qm_fd *fd);
void dpaa_rx_cb(struct qman_fq **fq,
struct qm_dqrr_entry **dqrr, void **bufs, int num_bufs);
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 10/37] net/dpaa: add 2.5G support
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (8 preceding siblings ...)
2020-05-27 13:22 ` [dpdk-dev] [PATCH 09/37] net/dpaa: enable Tx queue taildrop Hemant Agrawal
@ 2020-05-27 13:22 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 11/37] net/dpaa: update process specific device info Hemant Agrawal
` (28 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:22 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Sachin Saxena, Gagandeep Singh
From: Sachin Saxena <sachin.saxena@nxp.com>
Handle 2.5 Gbps Ethernet ports as well.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/dpaa.ini | 2 +-
drivers/bus/dpaa/base/fman/fman.c | 6 ++++--
drivers/bus/dpaa/base/fman/netcfg_layer.c | 3 ++-
drivers/bus/dpaa/include/fman.h | 1 +
drivers/net/dpaa/dpaa_ethdev.c | 9 ++++++++-
5 files changed, 16 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 24cfd8566..b00f46a97 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,7 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
-Speed capabilities = P
+Speed capabilities = Y
Link status = Y
Jumbo frame = Y
MTU update = Y
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 6d77a7e39..ae26041ca 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -263,7 +263,7 @@ fman_if_init(const struct device_node *dpa_node)
fman_dealloc_bufs_mask_hi = 0;
fman_dealloc_bufs_mask_lo = 0;
}
- /* Is the MAC node 1G, 10G? */
+ /* Is the MAC node 1G, 2.5G, 10G? */
__if->__if.is_memac = 0;
if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
@@ -279,7 +279,9 @@ fman_if_init(const struct device_node *dpa_node)
/* Right now forcing memac to 1g in case of error*/
__if->__if.mac_type = fman_mac_1g;
} else {
- if (strstr(char_prop, "sgmii"))
+ if (strstr(char_prop, "sgmii-2500"))
+ __if->__if.mac_type = fman_mac_2_5g;
+ else if (strstr(char_prop, "sgmii"))
__if->__if.mac_type = fman_mac_1g;
else if (strstr(char_prop, "rgmii")) {
__if->__if.mac_type = fman_mac_1g;
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index 36eca88cd..b7009f229 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -44,7 +44,8 @@ dump_netcfg(struct netcfg_info *cfg_ptr)
printf("\n+ Fman %d, MAC %d (%s);\n",
__if->fman_idx, __if->mac_idx,
- (__if->mac_type == fman_mac_1g) ? "1G" : "10G");
+ (__if->mac_type == fman_mac_1g) ? "1G" :
+ (__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G");
printf("\tmac_addr: %02x:%02x:%02x:%02x:%02x:%02x\n",
(&__if->mac_addr)->addr_bytes[0],
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index c02d32d22..b6293b61c 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -72,6 +72,7 @@ enum fman_mac_type {
fman_offline = 0,
fman_mac_1g,
fman_mac_10g,
+ fman_mac_2_5g,
};
struct mac_addr {
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 9448615ab..2b14bd712 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -358,8 +358,13 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
if (dpaa_intf->fif->mac_type == fman_mac_1g) {
dev_info->speed_capa = ETH_LINK_SPEED_1G;
+ } else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) {
+ dev_info->speed_capa = ETH_LINK_SPEED_1G
+ | ETH_LINK_SPEED_2_5G;
} else if (dpaa_intf->fif->mac_type == fman_mac_10g) {
- dev_info->speed_capa = (ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G);
+ dev_info->speed_capa = ETH_LINK_SPEED_1G
+ | ETH_LINK_SPEED_2_5G
+ | ETH_LINK_SPEED_10G;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, dpaa_intf->fif->mac_type);
@@ -390,6 +395,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
if (dpaa_intf->fif->mac_type == fman_mac_1g)
link->link_speed = ETH_SPEED_NUM_1G;
+ else if (dpaa_intf->fif->mac_type == fman_mac_2_5g)
+ link->link_speed = ETH_SPEED_NUM_2_5G;
else if (dpaa_intf->fif->mac_type == fman_mac_10g)
link->link_speed = ETH_SPEED_NUM_10G;
else
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 11/37] net/dpaa: update process specific device info
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (9 preceding siblings ...)
2020-05-27 13:22 ` [dpdk-dev] [PATCH 10/37] net/dpaa: add 2.5G support Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 12/37] drivers: optimize thread local storage for dpaa Hemant Agrawal
` (27 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
For DPAA devices, the memory maps stored in the FMAN interface
information are per process. Store them in the device's process-specific
area. This is required to support multi-process applications.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 207 ++++++++++++++++-----------------
drivers/net/dpaa/dpaa_ethdev.h | 1 -
2 files changed, 102 insertions(+), 106 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 2b14bd712..4ef140640 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -151,7 +151,6 @@ dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts)
static int
dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
+ VLAN_TAG_SIZE;
uint32_t buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
@@ -187,7 +186,7 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- fman_if_set_maxfrm(dpaa_intf->fif, frame_size);
+ fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
}
@@ -195,7 +194,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
static int
dpaa_eth_dev_configure(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
uint64_t rx_offloads = eth_conf->rxmode.offloads;
uint64_t tx_offloads = eth_conf->txmode.offloads;
@@ -234,14 +232,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
max_len = DPAA_MAX_RX_PKT_LEN;
}
- fman_if_set_maxfrm(dpaa_intf->fif, max_len);
+ fman_if_set_maxfrm(dev->process_private, max_len);
dev->data->mtu = max_len
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
}
if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
- fman_if_set_sg(dpaa_intf->fif, 1);
+ fman_if_set_sg(dev->process_private, 1);
dev->data->scattered_rx = 1;
}
@@ -285,18 +283,18 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
else
dev->tx_pkt_burst = dpaa_eth_queue_tx;
- fman_if_enable_rx(dpaa_intf->fif);
+ fman_if_enable_rx(dev->process_private);
return 0;
}
static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct fman_if *fif = dev->process_private;
PMD_INIT_FUNC_TRACE();
- fman_if_disable_rx(dpaa_intf->fif);
+ fman_if_disable_rx(fif);
dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
}
@@ -344,6 +342,7 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct fman_if *fif = dev->process_private;
DPAA_PMD_DEBUG(": %s", dpaa_intf->name);
@@ -356,18 +355,18 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_vmdq_pools = ETH_16_POOLS;
dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
- if (dpaa_intf->fif->mac_type == fman_mac_1g) {
+ if (fif->mac_type == fman_mac_1g) {
dev_info->speed_capa = ETH_LINK_SPEED_1G;
- } else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) {
+ } else if (fif->mac_type == fman_mac_2_5g) {
dev_info->speed_capa = ETH_LINK_SPEED_1G
| ETH_LINK_SPEED_2_5G;
- } else if (dpaa_intf->fif->mac_type == fman_mac_10g) {
+ } else if (fif->mac_type == fman_mac_10g) {
dev_info->speed_capa = ETH_LINK_SPEED_1G
| ETH_LINK_SPEED_2_5G
| ETH_LINK_SPEED_10G;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
- dpaa_intf->name, dpaa_intf->fif->mac_type);
+ dpaa_intf->name, fif->mac_type);
return -EINVAL;
}
@@ -390,18 +389,19 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
struct rte_eth_link *link = &dev->data->dev_link;
+ struct fman_if *fif = dev->process_private;
PMD_INIT_FUNC_TRACE();
- if (dpaa_intf->fif->mac_type == fman_mac_1g)
+ if (fif->mac_type == fman_mac_1g)
link->link_speed = ETH_SPEED_NUM_1G;
- else if (dpaa_intf->fif->mac_type == fman_mac_2_5g)
+ else if (fif->mac_type == fman_mac_2_5g)
link->link_speed = ETH_SPEED_NUM_2_5G;
- else if (dpaa_intf->fif->mac_type == fman_mac_10g)
+ else if (fif->mac_type == fman_mac_10g)
link->link_speed = ETH_SPEED_NUM_10G;
else
DPAA_PMD_ERR("invalid link_speed: %s, %d",
- dpaa_intf->name, dpaa_intf->fif->mac_type);
+ dpaa_intf->name, fif->mac_type);
link->link_status = dpaa_intf->valid;
link->link_duplex = ETH_LINK_FULL_DUPLEX;
@@ -412,21 +412,17 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
static int dpaa_eth_stats_get(struct rte_eth_dev *dev,
struct rte_eth_stats *stats)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_stats_get(dpaa_intf->fif, stats);
+ fman_if_stats_get(dev->process_private, stats);
return 0;
}
static int dpaa_eth_stats_reset(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_stats_reset(dpaa_intf->fif);
+ fman_if_stats_reset(dev->process_private);
return 0;
}
@@ -435,7 +431,6 @@ static int
dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
unsigned int n)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
uint64_t values[sizeof(struct dpaa_if_stats) / 8];
@@ -445,7 +440,7 @@ dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
if (xstats == NULL)
return 0;
- fman_if_stats_get_all(dpaa_intf->fif, values,
+ fman_if_stats_get_all(dev->process_private, values,
sizeof(struct dpaa_if_stats) / 8);
for (i = 0; i < num; i++) {
@@ -482,15 +477,13 @@ dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
uint64_t values_copy[sizeof(struct dpaa_if_stats) / 8];
if (!ids) {
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
if (n < stat_cnt)
return stat_cnt;
if (!values)
return 0;
- fman_if_stats_get_all(dpaa_intf->fif, values_copy,
+ fman_if_stats_get_all(dev->process_private, values_copy,
sizeof(struct dpaa_if_stats) / 8);
for (i = 0; i < stat_cnt; i++)
@@ -539,44 +532,36 @@ dpaa_xstats_get_names_by_id(
static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_promiscuous_enable(dpaa_intf->fif);
+ fman_if_promiscuous_enable(dev->process_private);
return 0;
}
static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_promiscuous_disable(dpaa_intf->fif);
+ fman_if_promiscuous_disable(dev->process_private);
return 0;
}
static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_set_mcast_filter_table(dpaa_intf->fif);
+ fman_if_set_mcast_filter_table(dev->process_private);
return 0;
}
static int dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_reset_mcast_filter_table(dpaa_intf->fif);
+ fman_if_reset_mcast_filter_table(dev->process_private);
return 0;
}
@@ -589,6 +574,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct rte_mempool *mp)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct fman_if *fif = dev->process_private;
struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
struct qm_mcc_initfq opts = {0};
u32 flags = 0;
@@ -645,22 +631,22 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
icp.iciof = DEFAULT_ICIOF;
icp.iceof = DEFAULT_RX_ICEOF;
icp.icsz = DEFAULT_ICSZ;
- fman_if_set_ic_params(dpaa_intf->fif, &icp);
+ fman_if_set_ic_params(fif, &icp);
fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE;
- fman_if_set_fdoff(dpaa_intf->fif, fd_offset);
+ fman_if_set_fdoff(fif, fd_offset);
/* Buffer pool size should be equal to Dataroom Size*/
bp_size = rte_pktmbuf_data_room_size(mp);
- fman_if_set_bp(dpaa_intf->fif, mp->size,
+ fman_if_set_bp(fif, mp->size,
dpaa_intf->bp_info->bpid, bp_size);
dpaa_intf->valid = 1;
DPAA_PMD_DEBUG("if:%s fd_offset = %d offset = %d",
dpaa_intf->name, fd_offset,
- fman_if_get_fdoff(dpaa_intf->fif));
+ fman_if_get_fdoff(fif));
}
DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
- fman_if_get_sg_enable(dpaa_intf->fif),
+ fman_if_get_sg_enable(fif),
dev->data->dev_conf.rxmode.max_rx_pkt_len);
/* checking if push mode only, no error check for now */
if (!rxq->is_static &&
@@ -952,11 +938,12 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
return 0;
} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
fc_conf->mode == RTE_FC_FULL) {
- fman_if_set_fc_threshold(dpaa_intf->fif, fc_conf->high_water,
+ fman_if_set_fc_threshold(dev->process_private,
+ fc_conf->high_water,
fc_conf->low_water,
- dpaa_intf->bp_info->bpid);
+ dpaa_intf->bp_info->bpid);
if (fc_conf->pause_time)
- fman_if_set_fc_quanta(dpaa_intf->fif,
+ fman_if_set_fc_quanta(dev->process_private,
fc_conf->pause_time);
}
@@ -992,10 +979,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
fc_conf->autoneg = net_fc->autoneg;
return 0;
}
- ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+ ret = fman_if_get_fc_threshold(dev->process_private);
if (ret) {
fc_conf->mode = RTE_FC_TX_PAUSE;
- fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+ fc_conf->pause_time =
+ fman_if_get_fc_quanta(dev->process_private);
} else {
fc_conf->mode = RTE_FC_NONE;
}
@@ -1010,11 +998,11 @@ dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
__rte_unused uint32_t pool)
{
int ret;
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
PMD_INIT_FUNC_TRACE();
- ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, index);
+ ret = fman_if_add_mac_addr(dev->process_private,
+ addr->addr_bytes, index);
if (ret)
DPAA_PMD_ERR("Adding the MAC ADDR failed: err = %d", ret);
@@ -1025,11 +1013,9 @@ static void
dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
uint32_t index)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_clear_mac_addr(dpaa_intf->fif, index);
+ fman_if_clear_mac_addr(dev->process_private, index);
}
static int
@@ -1037,11 +1023,10 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
struct rte_ether_addr *addr)
{
int ret;
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
PMD_INIT_FUNC_TRACE();
- ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, 0);
+ ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0);
if (ret)
DPAA_PMD_ERR("Setting the MAC ADDR failed %d", ret);
@@ -1144,7 +1129,6 @@ int
rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on)
{
struct rte_eth_dev *dev;
- struct dpaa_if *dpaa_intf;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV);
@@ -1153,17 +1137,16 @@ rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on)
if (!is_dpaa_supported(dev))
return -ENOTSUP;
- dpaa_intf = dev->data->dev_private;
-
if (on)
- fman_if_loopback_enable(dpaa_intf->fif);
+ fman_if_loopback_enable(dev->process_private);
else
- fman_if_loopback_disable(dpaa_intf->fif);
+ fman_if_loopback_disable(dev->process_private);
return 0;
}
-static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
+static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
+ struct fman_if *fman_intf)
{
struct rte_eth_fc_conf *fc_conf;
int ret;
@@ -1179,10 +1162,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
}
}
fc_conf = dpaa_intf->fc_conf;
- ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+ ret = fman_if_get_fc_threshold(fman_intf);
if (ret) {
fc_conf->mode = RTE_FC_TX_PAUSE;
- fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+ fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
} else {
fc_conf->mode = RTE_FC_NONE;
}
@@ -1344,6 +1327,39 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
}
#endif
+/* Initialise a network interface */
+static int
+dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev)
+{
+ struct rte_dpaa_device *dpaa_device;
+ struct fm_eth_port_cfg *cfg;
+ struct dpaa_if *dpaa_intf;
+ struct fman_if *fman_intf;
+ int dev_id;
+
+ PMD_INIT_FUNC_TRACE();
+
+ dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
+ dev_id = dpaa_device->id.dev_id;
+ cfg = dpaa_get_eth_port_cfg(dev_id);
+ fman_intf = cfg->fman_if;
+ eth_dev->process_private = fman_intf;
+
+ /* Plugging of UCODE burst API not supported in Secondary */
+ dpaa_intf = eth_dev->data->dev_private;
+ eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
+ if (dpaa_intf->cgr_tx)
+ eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+ else
+ eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+ qman_set_fq_lookup_table(
+ dpaa_intf->rx_queues->qman_fq_lookup_table);
+#endif
+
+ return 0;
+}
+
/* Initialise a network interface */
static int
dpaa_dev_init(struct rte_eth_dev *eth_dev)
@@ -1362,23 +1378,6 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE();
- dpaa_intf = eth_dev->data->dev_private;
- /* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- eth_dev->dev_ops = &dpaa_devops;
- /* Plugging of UCODE burst API not supported in Secondary */
- eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
- if (dpaa_intf->cgr_tx)
- eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
- else
- eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
- qman_set_fq_lookup_table(
- dpaa_intf->rx_queues->qman_fq_lookup_table);
-#endif
- return 0;
- }
-
dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
dev_id = dpaa_device->id.dev_id;
dpaa_intf = eth_dev->data->dev_private;
@@ -1388,7 +1387,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
dpaa_intf->name = dpaa_device->name;
/* save fman_if & cfg in the interface struture */
- dpaa_intf->fif = fman_intf;
+ eth_dev->process_private = fman_intf;
dpaa_intf->ifid = dev_id;
dpaa_intf->cfg = cfg;
@@ -1457,7 +1456,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
if (default_q)
fqid = cfg->rx_def;
else
- fqid = DPAA_PCD_FQID_START + dpaa_intf->fif->mac_idx *
+ fqid = DPAA_PCD_FQID_START + fman_intf->mac_idx *
DPAA_PCD_FQID_MULTIPLIER + loop;
if (dpaa_intf->cgr_rx)
@@ -1529,7 +1528,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
- dpaa_fc_set_default(dpaa_intf);
+ dpaa_fc_set_default(dpaa_intf, fman_intf);
/* reset bpool list, initialize bpool dynamically */
list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
@@ -1676,6 +1675,13 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
return -ENOMEM;
eth_dev->device = &dpaa_dev->device;
eth_dev->dev_ops = &dpaa_devops;
+
+ ret = dpaa_dev_init_secondary(eth_dev);
+ if (ret != 0) {
+ RTE_LOG(ERR, PMD, "secondary dev init failed\n");
+ return ret;
+ }
+
rte_eth_dev_probing_finish(eth_dev);
return 0;
}
@@ -1711,29 +1717,20 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
}
}
- /* In case of secondary process, the device is already configured
- * and no further action is required, except portal initialization
- * and verifying secondary attachment to port name.
- */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- eth_dev = rte_eth_dev_attach_secondary(dpaa_dev->name);
- if (!eth_dev)
- return -ENOMEM;
- } else {
- eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
- if (eth_dev == NULL)
- return -ENOMEM;
+ eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
+ if (!eth_dev)
+ return -ENOMEM;
- eth_dev->data->dev_private = rte_zmalloc(
- "ethdev private structure",
- sizeof(struct dpaa_if),
- RTE_CACHE_LINE_SIZE);
- if (!eth_dev->data->dev_private) {
- DPAA_PMD_ERR("Cannot allocate memzone for port data");
- rte_eth_dev_release_port(eth_dev);
- return -ENOMEM;
- }
+ eth_dev->data->dev_private =
+ rte_zmalloc("ethdev private structure",
+ sizeof(struct dpaa_if),
+ RTE_CACHE_LINE_SIZE);
+ if (!eth_dev->data->dev_private) {
+ DPAA_PMD_ERR("Cannot allocate memzone for port data");
+ rte_eth_dev_release_port(eth_dev);
+ return -ENOMEM;
}
+
eth_dev->device = &dpaa_dev->device;
dpaa_dev->eth_dev = eth_dev;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index d4261f885..4c40ff86a 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -116,7 +116,6 @@ struct dpaa_if {
uint16_t nb_rx_queues;
uint16_t nb_tx_queues;
uint32_t ifid;
- struct fman_if *fif;
struct dpaa_bp_info *bp_info;
struct rte_eth_fc_conf *fc_conf;
};
--
2.17.1
* [dpdk-dev] [PATCH 12/37] drivers: optimize thread local storage for dpaa
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (10 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 11/37] net/dpaa: update process specific device info Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 18:13 ` Akhil Goyal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 13/37] bus/dpaa: enable link state interrupt Hemant Agrawal
` (26 subsequent siblings)
38 siblings, 1 reply; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
Minimize the number of per-thread variables by moving all
thread-specific state into the dpaa_portal structure, thereby
reducing thread-local storage (TLS) usage.
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
drivers/bus/dpaa/dpaa_bus.c | 24 ++++++-------
drivers/bus/dpaa/rte_bus_dpaa_version.map | 1 -
drivers/bus/dpaa/rte_dpaa_bus.h | 42 ++++++++++++++---------
drivers/crypto/dpaa_sec/dpaa_sec.c | 11 +++---
drivers/event/dpaa/dpaa_eventdev.c | 4 +--
drivers/mempool/dpaa/dpaa_mempool.c | 6 ++--
drivers/net/dpaa/dpaa_ethdev.c | 2 +-
drivers/net/dpaa/dpaa_rxtx.c | 4 +--
8 files changed, 48 insertions(+), 46 deletions(-)
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index d53fe6083..68d47be37 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -54,8 +54,7 @@ unsigned int dpaa_svr_family;
#define FSL_DPAA_BUS_NAME dpaa_bus
-RTE_DEFINE_PER_LCORE(bool, dpaa_io);
-RTE_DEFINE_PER_LCORE(struct dpaa_portal_dqrr, held_bufs);
+RTE_DEFINE_PER_LCORE(struct dpaa_portal *, dpaa_io);
struct fm_eth_port_cfg *
dpaa_get_eth_port_cfg(int dev_id)
@@ -255,7 +254,6 @@ int rte_dpaa_portal_init(void *arg)
{
unsigned int cpu, lcore = rte_lcore_id();
int ret;
- struct dpaa_portal *dpaa_io_portal;
BUS_INIT_FUNC_TRACE();
@@ -290,20 +288,21 @@ int rte_dpaa_portal_init(void *arg)
DPAA_BUS_LOG(DEBUG, "QMAN thread initialized - CPU=%d lcore=%d",
cpu, lcore);
- dpaa_io_portal = rte_malloc(NULL, sizeof(struct dpaa_portal),
+ DPAA_PER_LCORE_PORTAL = rte_malloc(NULL, sizeof(struct dpaa_portal),
RTE_CACHE_LINE_SIZE);
- if (!dpaa_io_portal) {
+ if (!DPAA_PER_LCORE_PORTAL) {
DPAA_BUS_LOG(ERR, "Unable to allocate memory");
bman_thread_finish();
qman_thread_finish();
return -ENOMEM;
}
- dpaa_io_portal->qman_idx = qman_get_portal_index();
- dpaa_io_portal->bman_idx = bman_get_portal_index();
- dpaa_io_portal->tid = syscall(SYS_gettid);
+ DPAA_PER_LCORE_PORTAL->qman_idx = qman_get_portal_index();
+ DPAA_PER_LCORE_PORTAL->bman_idx = bman_get_portal_index();
+ DPAA_PER_LCORE_PORTAL->tid = syscall(SYS_gettid);
- ret = pthread_setspecific(dpaa_portal_key, (void *)dpaa_io_portal);
+ ret = pthread_setspecific(dpaa_portal_key,
+ (void *)DPAA_PER_LCORE_PORTAL);
if (ret) {
DPAA_BUS_LOG(ERR, "pthread_setspecific failed on core %u"
" (lcore=%u) with ret: %d", cpu, lcore, ret);
@@ -312,8 +311,6 @@ int rte_dpaa_portal_init(void *arg)
return ret;
}
- RTE_PER_LCORE(dpaa_io) = true;
-
DPAA_BUS_LOG(DEBUG, "QMAN thread initialized");
return 0;
@@ -326,7 +323,7 @@ rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq)
u32 sdqcr;
int ret;
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init(arg);
if (ret < 0) {
DPAA_BUS_LOG(ERR, "portal initialization failure");
@@ -369,8 +366,7 @@ dpaa_portal_finish(void *arg)
rte_free(dpaa_io_portal);
dpaa_io_portal = NULL;
-
- RTE_PER_LCORE(dpaa_io) = false;
+ DPAA_PER_LCORE_PORTAL = NULL;
}
static int
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 8069b05af..2defa7992 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -48,7 +48,6 @@ INTERNAL {
netcfg_acquire;
netcfg_release;
per_lcore_dpaa_io;
- per_lcore_held_bufs;
qman_alloc_cgrid_range;
qman_alloc_pool_range;
qman_clear_irq;
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 2a186d83f..25aff2d30 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -35,8 +35,6 @@
extern unsigned int dpaa_svr_family;
-extern RTE_DEFINE_PER_LCORE(bool, dpaa_io);
-
struct rte_dpaa_device;
struct rte_dpaa_driver;
@@ -90,12 +88,38 @@ struct rte_dpaa_driver {
rte_dpaa_remove_t remove;
};
+/* Create storage for dqrr entries per lcore */
+#define DPAA_PORTAL_DEQUEUE_DEPTH 16
+struct dpaa_portal_dqrr {
+ void *mbuf[DPAA_PORTAL_DEQUEUE_DEPTH];
+ uint64_t dqrr_held;
+ uint8_t dqrr_size;
+};
+
struct dpaa_portal {
uint32_t bman_idx; /**< BMAN Portal ID*/
uint32_t qman_idx; /**< QMAN Portal ID*/
+ struct dpaa_portal_dqrr dpaa_held_bufs;
+ struct rte_crypto_op **dpaa_sec_ops;
+ int dpaa_sec_op_nb;
uint64_t tid;/**< Parent Thread id for this portal */
};
+RTE_DECLARE_PER_LCORE(struct dpaa_portal *, dpaa_io);
+
+#define DPAA_PER_LCORE_PORTAL \
+ RTE_PER_LCORE(dpaa_io)
+#define DPAA_PER_LCORE_DQRR_SIZE \
+ RTE_PER_LCORE(dpaa_io)->dpaa_held_bufs.dqrr_size
+#define DPAA_PER_LCORE_DQRR_HELD \
+ RTE_PER_LCORE(dpaa_io)->dpaa_held_bufs.dqrr_held
+#define DPAA_PER_LCORE_DQRR_MBUF(i) \
+ RTE_PER_LCORE(dpaa_io)->dpaa_held_bufs.mbuf[i]
+#define DPAA_PER_LCORE_RTE_CRYPTO_OP \
+ RTE_PER_LCORE(dpaa_io)->dpaa_sec_ops
+#define DPAA_PER_LCORE_DPAA_SEC_OP_NB \
+ RTE_PER_LCORE(dpaa_io)->dpaa_sec_op_nb
+
/* Various structures representing contiguous memory maps */
struct dpaa_memseg {
TAILQ_ENTRY(dpaa_memseg) next;
@@ -200,20 +224,6 @@ RTE_INIT(dpaainitfn_ ##nm) \
} \
RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
-/* Create storage for dqrr entries per lcore */
-#define DPAA_PORTAL_DEQUEUE_DEPTH 16
-struct dpaa_portal_dqrr {
- void *mbuf[DPAA_PORTAL_DEQUEUE_DEPTH];
- uint64_t dqrr_held;
- uint8_t dqrr_size;
-};
-
-RTE_DECLARE_PER_LCORE(struct dpaa_portal_dqrr, held_bufs);
-
-#define DPAA_PER_LCORE_DQRR_SIZE RTE_PER_LCORE(held_bufs).dqrr_size
-#define DPAA_PER_LCORE_DQRR_HELD RTE_PER_LCORE(held_bufs).dqrr_held
-#define DPAA_PER_LCORE_DQRR_MBUF(i) RTE_PER_LCORE(held_bufs).mbuf[i]
-
__rte_internal
struct fm_eth_port_cfg *dpaa_get_eth_port_cfg(int dev_id);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 66ee0ba0c..c32eaf5c8 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -47,9 +47,6 @@ int dpaa_logtype_sec;
static uint8_t cryptodev_driver_id;
-static __thread struct rte_crypto_op **dpaa_sec_ops;
-static __thread int dpaa_sec_op_nb;
-
static int
dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess);
@@ -145,7 +142,7 @@ dqrr_out_fq_cb_rx(struct qman_portal *qm __always_unused,
struct dpaa_sec_job *job;
struct dpaa_sec_op_ctx *ctx;
- if (dpaa_sec_op_nb >= DPAA_SEC_BURST)
+ if (DPAA_PER_LCORE_DPAA_SEC_OP_NB >= DPAA_SEC_BURST)
return qman_cb_dqrr_defer;
if (!(dqrr->stat & QM_DQRR_STAT_FD_VALID))
@@ -176,7 +173,7 @@ dqrr_out_fq_cb_rx(struct qman_portal *qm __always_unused,
}
mbuf->data_len = len;
}
- dpaa_sec_ops[dpaa_sec_op_nb++] = ctx->op;
+ DPAA_PER_LCORE_RTE_CRYPTO_OP[DPAA_PER_LCORE_DPAA_SEC_OP_NB++] = ctx->op;
dpaa_sec_op_ending(ctx);
return qman_cb_dqrr_consume;
@@ -2303,7 +2300,7 @@ dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess)
DPAA_SEC_ERR("Unable to prepare sec cdb");
return ret;
}
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_SEC_ERR("Failure in affining portal");
@@ -3465,7 +3462,7 @@ cryptodev_dpaa_sec_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
}
}
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
retval = rte_dpaa_portal_init((void *)1);
if (retval) {
DPAA_SEC_ERR("Unable to initialize portal");
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index 5a018d487..3efcf0357 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -179,7 +179,7 @@ dpaa_event_dequeue_burst(void *port, struct rte_event ev[],
struct dpaa_port *portal = (struct dpaa_port *)port;
struct rte_mbuf *mbuf;
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
/* Affine current thread context to a qman portal */
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
@@ -251,7 +251,7 @@ dpaa_event_dequeue_burst_intr(void *port, struct rte_event ev[],
struct dpaa_port *portal = (struct dpaa_port *)port;
struct rte_mbuf *mbuf;
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
/* Affine current thread context to a qman portal */
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 451e2d5d5..15e5cc692 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -52,7 +52,7 @@ dpaa_mbuf_create_pool(struct rte_mempool *mp)
MEMPOOL_INIT_FUNC_TRACE();
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_MEMPOOL_ERR(
@@ -168,7 +168,7 @@ dpaa_mbuf_free_bulk(struct rte_mempool *pool,
DPAA_MEMPOOL_DPDEBUG("Request to free %d buffers in bpid = %d",
n, bp_info->bpid);
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
@@ -223,7 +223,7 @@ dpaa_mbuf_alloc_bulk(struct rte_mempool *pool,
return -1;
}
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4ef140640..074079185 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1709,7 +1709,7 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
is_global_init = 1;
}
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)1);
if (ret) {
DPAA_PMD_ERR("Unable to initialize portal");
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 819cad7c6..5303c9b76 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -670,7 +670,7 @@ uint16_t dpaa_eth_queue_rx(void *q,
if (likely(fq->is_static))
return dpaa_eth_queue_portal_rx(fq, bufs, nb_bufs);
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_PMD_ERR("Failure in affining portal");
@@ -970,7 +970,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
int ret, realloc_mbuf = 0;
uint32_t seqn, index, flags[DPAA_TX_BURST_SIZE] = {0};
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_PMD_ERR("Failure in affining portal");
--
2.17.1
* [dpdk-dev] [PATCH 13/37] bus/dpaa: enable link state interrupt
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (11 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 12/37] drivers: optimize thread local storage for dpaa Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 14/37] bus/dpaa: enable set link status Hemant Agrawal
` (25 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
APIs to enable/disable the link state interrupt and to get the link
state are implemented using IOCTL calls to the kernel driver.
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
drivers/bus/dpaa/base/fman/fman.c | 4 +-
drivers/bus/dpaa/base/qbman/process.c | 72 ++++++++++++++++-
drivers/bus/dpaa/dpaa_bus.c | 28 ++++++-
drivers/bus/dpaa/include/fman.h | 2 +
drivers/bus/dpaa/include/process.h | 20 +++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 3 +
drivers/bus/dpaa/rte_dpaa_bus.h | 6 +-
drivers/common/dpaax/compat.h | 5 +-
drivers/net/dpaa/dpaa_ethdev.c | 97 ++++++++++++++++++++++-
9 files changed, 231 insertions(+), 6 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index ae26041ca..33be9e5d7 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2010-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2020 NXP
*
*/
@@ -185,6 +185,8 @@ fman_if_init(const struct device_node *dpa_node)
}
memset(__if, 0, sizeof(*__if));
INIT_LIST_HEAD(&__if->__if.bpool_list);
+ strlcpy(__if->node_name, dpa_node->name, IF_NAME_MAX_LEN - 1);
+ __if->node_name[IF_NAME_MAX_LEN - 1] = '\0';
strlcpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
__if->node_path[PATH_MAX - 1] = '\0';
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
index 2c23c98df..68b7af243 100644
--- a/drivers/bus/dpaa/base/qbman/process.c
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2011-2016 Freescale Semiconductor Inc.
- * Copyright 2017 NXP
+ * Copyright 2017,2020 NXP
*
*/
#include <assert.h>
@@ -296,3 +296,73 @@ int bman_free_raw_portal(struct dpaa_raw_portal *portal)
return process_portal_free(&input);
}
+
+#define DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT \
+ _IOW(DPAA_IOCTL_MAGIC, 0x0E, struct usdpaa_ioctl_link_status)
+
+#define DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT \
+ _IOW(DPAA_IOCTL_MAGIC, 0x0F, char*)
+
+int dpaa_intr_enable(char *if_name, int efd)
+{
+ struct usdpaa_ioctl_link_status args;
+
+ int ret = check_fd();
+
+ if (ret)
+ return ret;
+
+ args.efd = (uint32_t)efd;
+ strcpy(args.if_name, if_name);
+
+ ret = ioctl(fd, DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT, &args);
+ if (ret)
+ return errno;
+
+ return 0;
+}
+
+int dpaa_intr_disable(char *if_name)
+{
+ int ret = check_fd();
+
+ if (ret)
+ return ret;
+
+ ret = ioctl(fd, DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT, &if_name);
+ if (ret) {
+ if (errno == EINVAL)
+ printf("Failed to disable interrupt: Not Supported\n");
+ else
+ printf("Failed to disable interrupt\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+#define DPAA_IOCTL_GET_LINK_STATUS \
+ _IOWR(DPAA_IOCTL_MAGIC, 0x10, struct usdpaa_ioctl_link_status_args)
+
+int dpaa_get_link_status(char *if_name)
+{
+ int ret = check_fd();
+ struct usdpaa_ioctl_link_status_args args;
+
+ if (ret)
+ return ret;
+
+ strcpy(args.if_name, if_name);
+ args.link_status = 0;
+
+ ret = ioctl(fd, DPAA_IOCTL_GET_LINK_STATUS, &args);
+ if (ret) {
+ if (errno == EINVAL)
+ printf("Failed to get link status: Not Supported\n");
+ else
+ printf("Failed to get link status\n");
+ return ret;
+ }
+
+ return args.link_status;
+}
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 68d47be37..c66962d92 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2020 NXP
*
*/
/* System headers */
@@ -13,6 +13,7 @@
#include <pthread.h>
#include <sys/types.h>
#include <sys/syscall.h>
+#include <sys/eventfd.h>
#include <rte_byteorder.h>
#include <rte_common.h>
@@ -544,6 +545,23 @@ rte_dpaa_bus_dev_build(void)
return 0;
}
+static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle)
+{
+ int fd;
+
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd < 0) {
+ DPAA_BUS_ERR("Cannot set up eventfd, error %i (%s)",
+ errno, strerror(errno));
+ return errno;
+ }
+
+ intr_handle->fd = fd;
+ intr_handle->type = RTE_INTR_HANDLE_EXT;
+
+ return 0;
+}
+
static int
rte_dpaa_bus_probe(void)
{
@@ -591,6 +609,14 @@ rte_dpaa_bus_probe(void)
fclose(svr_file);
}
+ TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+ if (dev->device_type == FSL_DPAA_ETH) {
+ ret = rte_dpaa_setup_intr(&dev->intr_handle);
+ if (ret)
+ DPAA_BUS_ERR("Error setting up interrupt.\n");
+ }
+ }
+
/* And initialize the PA->VA translation table */
dpaax_iova_table_populate();
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index b6293b61c..7a0a7d405 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -2,6 +2,7 @@
*
* Copyright 2010-2012 Freescale Semiconductor, Inc.
* All rights reserved.
+ * Copyright 2019-2020 NXP
*
*/
@@ -361,6 +362,7 @@ struct fman_if_ic_params {
*/
struct __fman_if {
struct fman_if __if;
+ char node_name[IF_NAME_MAX_LEN];
char node_path[PATH_MAX];
uint64_t regs_size;
void *ccsr_map;
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index d9ec94ee2..7305762c2 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -2,6 +2,7 @@
*
* Copyright 2010-2011 Freescale Semiconductor, Inc.
* All rights reserved.
+ * Copyright 2020 NXP
*
*/
@@ -74,4 +75,23 @@ struct dpaa_ioctl_irq_map {
int process_portal_irq_map(int fd, struct dpaa_ioctl_irq_map *irq);
int process_portal_irq_unmap(int fd);
+struct usdpaa_ioctl_link_status {
+ char if_name[IF_NAME_MAX_LEN];
+ uint32_t efd;
+};
+
+__rte_internal
+int dpaa_intr_enable(char *if_name, int efd);
+
+__rte_internal
+int dpaa_intr_disable(char *if_name);
+
+struct usdpaa_ioctl_link_status_args {
+ /* network device node name */
+ char if_name[IF_NAME_MAX_LEN];
+ int link_status;
+};
+__rte_internal
+int dpaa_get_link_status(char *if_name);
+
#endif /* __PROCESS_H */
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 2defa7992..96662d7be 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -15,6 +15,9 @@ INTERNAL {
dpaa_get_eth_port_cfg;
dpaa_get_qm_channel_caam;
dpaa_get_qm_channel_pool;
+ dpaa_get_link_status;
+ dpaa_intr_disable;
+ dpaa_intr_enable;
dpaa_svr_family;
fman_dealloc_bufs_mask_hi;
fman_dealloc_bufs_mask_lo;
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 25aff2d30..fdaa63a09 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2020 NXP
*
*/
#ifndef __RTE_DPAA_BUS_H__
@@ -30,6 +30,9 @@
#define SVR_LS1046A_FAMILY 0x87070000
#define SVR_MASK 0xffff0000
+/** Device driver supports link state interrupt */
+#define RTE_DPAA_DRV_INTR_LSC 0x0008
+
#define RTE_DEV_TO_DPAA_CONST(ptr) \
container_of(ptr, const struct rte_dpaa_device, device)
@@ -86,6 +89,7 @@ struct rte_dpaa_driver {
enum rte_dpaa_type drv_type;
rte_dpaa_probe_t probe;
rte_dpaa_remove_t remove;
+ uint32_t drv_flags; /**< Flags for controlling device.*/
};
/* Create storage for dqrr entries per lcore */
diff --git a/drivers/common/dpaax/compat.h b/drivers/common/dpaax/compat.h
index 90db68ce7..6793cb256 100644
--- a/drivers/common/dpaax/compat.h
+++ b/drivers/common/dpaax/compat.h
@@ -2,7 +2,7 @@
*
* Copyright 2011 Freescale Semiconductor, Inc.
* All rights reserved.
- * Copyright 2019 NXP
+ * Copyright 2019-2020 NXP
*
*/
@@ -390,4 +390,7 @@ static inline unsigned long get_zeroed_page(gfp_t __foo __rte_unused)
#define atomic_dec_return(v) rte_atomic32_sub_return(v, 1)
#define atomic_sub_and_test(i, v) (rte_atomic32_sub_return(v, i) == 0)
+/* Interface name len*/
+#define IF_NAME_MAX_LEN 16
+
#endif /* __COMPAT_H */
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 074079185..5c5e62871 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -45,6 +45,7 @@
#include <fsl_qman.h>
#include <fsl_bman.h>
#include <fsl_fman.h>
+#include <process.h>
int dpaa_logtype_pmd;
@@ -133,6 +134,11 @@ static struct rte_dpaa_driver rte_dpaa_pmd;
static int
dpaa_eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info);
+static int dpaa_eth_link_update(struct rte_eth_dev *dev,
+ int wait_to_complete __rte_unused);
+
+static void dpaa_interrupt_handler(void *param);
+
static inline void
dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts)
{
@@ -197,9 +203,19 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
uint64_t rx_offloads = eth_conf->rxmode.offloads;
uint64_t tx_offloads = eth_conf->txmode.offloads;
+ struct rte_device *rdev = dev->device;
+ struct rte_dpaa_device *dpaa_dev;
+ struct fman_if *fif = dev->process_private;
+ struct __fman_if *__fif;
+ struct rte_intr_handle *intr_handle;
+ int ret;
PMD_INIT_FUNC_TRACE();
+ dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+ intr_handle = &dpaa_dev->intr_handle;
+ __fif = container_of(fif, struct __fman_if, __if);
+
/* Rx offloads which are enabled by default */
if (dev_rx_offloads_nodis & ~rx_offloads) {
DPAA_PMD_INFO(
@@ -243,6 +259,28 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
dev->data->scattered_rx = 1;
}
+ /* if the interrupts were configured on this devices*/
+ if (intr_handle && intr_handle->fd) {
+ if (dev->data->dev_conf.intr_conf.lsc != 0)
+ rte_intr_callback_register(intr_handle,
+ dpaa_interrupt_handler,
+ (void *)dev);
+
+ ret = dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+ if (ret) {
+ if (dev->data->dev_conf.intr_conf.lsc != 0) {
+ rte_intr_callback_unregister(intr_handle,
+ dpaa_interrupt_handler,
+ (void *)dev);
+ if (ret == EINVAL)
+ printf("Failed to enable interrupt: Not Supported\n");
+ else
+ printf("Failed to enable interrupt\n");
+ }
+ dev->data->dev_conf.intr_conf.lsc = 0;
+ dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC;
+ }
+ }
return 0;
}
@@ -271,6 +309,25 @@ dpaa_supported_ptypes_get(struct rte_eth_dev *dev)
return NULL;
}
+static void dpaa_interrupt_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct rte_device *rdev = dev->device;
+ struct rte_dpaa_device *dpaa_dev;
+ struct rte_intr_handle *intr_handle;
+ uint64_t buf;
+ int bytes_read;
+
+ dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+ intr_handle = &dpaa_dev->intr_handle;
+
+ bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+ if (bytes_read < 0)
+ DPAA_PMD_ERR("Error reading eventfd\n");
+ dpaa_eth_link_update(dev, 0);
+ _rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+}
+
static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -300,9 +357,27 @@ static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+ struct __fman_if *__fif;
+ struct rte_device *rdev = dev->device;
+ struct rte_dpaa_device *dpaa_dev;
+ struct rte_intr_handle *intr_handle;
+
PMD_INIT_FUNC_TRACE();
+ dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+ intr_handle = &dpaa_dev->intr_handle;
+ __fif = container_of(fif, struct __fman_if, __if);
+
dpaa_eth_dev_stop(dev);
+
+ if (intr_handle && intr_handle->fd &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
+ dpaa_intr_disable(__fif->node_name);
+ rte_intr_callback_unregister(intr_handle,
+ dpaa_interrupt_handler,
+ (void *)dev);
+ }
}
static int
@@ -390,6 +465,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
struct dpaa_if *dpaa_intf = dev->data->dev_private;
struct rte_eth_link *link = &dev->data->dev_link;
struct fman_if *fif = dev->process_private;
+ struct __fman_if *__fif = container_of(fif, struct __fman_if, __if);
+ int ret;
PMD_INIT_FUNC_TRACE();
@@ -403,9 +480,23 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
- link->link_status = dpaa_intf->valid;
+ ret = dpaa_get_link_status(__fif->node_name);
+ if (ret < 0) {
+ if (ret == -EINVAL) {
+ DPAA_PMD_DEBUG("Using default link status-No Support");
+ ret = 1;
+ } else {
+ DPAA_PMD_ERR("rte_dpaa_get_link_status %d", ret);
+ return ret;
+ }
+ }
+
+ link->link_status = ret;
link->link_duplex = ETH_LINK_FULL_DUPLEX;
link->link_autoneg = ETH_LINK_AUTONEG;
+
+ DPAA_PMD_INFO("Port %d Link is %s\n", dev->data->port_id,
+ link->link_status ? "Up" : "Down");
return 0;
}
@@ -1736,6 +1827,9 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
qman_ern_register_cb(dpaa_free_mbuf);
+ if (dpaa_drv->drv_flags & RTE_DPAA_DRV_INTR_LSC)
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
+
/* Invoke PMD device initialization function */
diag = dpaa_dev_init(eth_dev);
if (diag == 0) {
@@ -1763,6 +1857,7 @@ rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
}
static struct rte_dpaa_driver rte_dpaa_pmd = {
+ .drv_flags = RTE_DPAA_DRV_INTR_LSC,
.drv_type = FSL_DPAA_ETH,
.probe = rte_dpaa_probe,
.remove = rte_dpaa_remove,
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 14/37] bus/dpaa: enable set link status
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (12 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 13/37] bus/dpaa: enable link state interrupt Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 15/37] net/dpaa: add support for fmlib in dpdk Hemant Agrawal
` (24 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
Enable the set link status API so that the application
can start/stop the PHY device.
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
drivers/bus/dpaa/base/qbman/process.c | 27 +++++++++++++++++
drivers/bus/dpaa/include/process.h | 11 +++++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 1 +
drivers/net/dpaa/dpaa_ethdev.c | 35 ++++++++++++++++-------
4 files changed, 63 insertions(+), 11 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
index 68b7af243..6f7e37957 100644
--- a/drivers/bus/dpaa/base/qbman/process.c
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -366,3 +366,30 @@ int dpaa_get_link_status(char *if_name)
return args.link_status;
}
+
+#define DPAA_IOCTL_UPDATE_LINK_STATUS \
+ _IOW(DPAA_IOCTL_MAGIC, 0x11, struct usdpaa_ioctl_update_link_status_args)
+
+int dpaa_update_link_status(char *if_name, int link_status)
+{
+ struct usdpaa_ioctl_update_link_status_args args;
+ int ret;
+
+ ret = check_fd();
+ if (ret)
+ return ret;
+
+ strcpy(args.if_name, if_name);
+ args.link_status = link_status;
+
+ ret = ioctl(fd, DPAA_IOCTL_UPDATE_LINK_STATUS, &args);
+ if (ret) {
+ if (errno == EINVAL)
+ printf("Failed to set link status: Not Supported\n");
+ else
+ printf("Failed to set link status\n");
+ return ret;
+ }
+
+ return 0;
+}
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index 7305762c2..f52ea1635 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -91,7 +91,18 @@ struct usdpaa_ioctl_link_status_args {
char if_name[IF_NAME_MAX_LEN];
int link_status;
};
+
+struct usdpaa_ioctl_update_link_status_args {
+ /* network device node name */
+ char if_name[IF_NAME_MAX_LEN];
+ /* link status(ETH_LINK_UP/DOWN) */
+ int link_status;
+};
+
__rte_internal
int dpaa_get_link_status(char *if_name);
+__rte_internal
+int dpaa_update_link_status(char *if_name, int link_status);
+
#endif /* __PROCESS_H */
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 96662d7be..5dec8d9e5 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -19,6 +19,7 @@ INTERNAL {
dpaa_intr_disable;
dpaa_intr_enable;
dpaa_svr_family;
+ dpaa_update_link_status;
fman_dealloc_bufs_mask_hi;
fman_dealloc_bufs_mask_lo;
fman_if_add_mac_addr;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 5c5e62871..7c4762002 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -480,18 +480,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
- ret = dpaa_get_link_status(__fif->node_name);
- if (ret < 0) {
- if (ret == -EINVAL) {
- DPAA_PMD_DEBUG("Using default link status-No Support");
- ret = 1;
- } else {
- DPAA_PMD_ERR("rte_dpaa_get_link_status %d", ret);
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+ ret = dpaa_get_link_status(__fif->node_name);
+ if (ret < 0)
return ret;
- }
+ link->link_status = ret;
+ } else {
+ link->link_status = dpaa_intf->valid;
}
- link->link_status = ret;
link->link_duplex = ETH_LINK_FULL_DUPLEX;
link->link_autoneg = ETH_LINK_AUTONEG;
@@ -987,17 +984,33 @@ dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
static int dpaa_link_down(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+ struct __fman_if *__fif;
+
PMD_INIT_FUNC_TRACE();
- dpaa_eth_dev_stop(dev);
+ __fif = container_of(fif, struct __fman_if, __if);
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
+ else
+ dpaa_eth_dev_stop(dev);
return 0;
}
static int dpaa_link_up(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+ struct __fman_if *__fif;
+
PMD_INIT_FUNC_TRACE();
- dpaa_eth_dev_start(dev);
+ __fif = container_of(fif, struct __fman_if, __if);
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
+ else
+ dpaa_eth_dev_start(dev);
return 0;
}
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 15/37] net/dpaa: add support for fmlib in dpdk
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (13 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 14/37] bus/dpaa: enable set link status Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-06-30 17:00 ` Ferruh Yigit
2020-05-27 13:23 ` [dpdk-dev] [PATCH 16/37] net/dpaa: add VSP support in FMLIB Hemant Agrawal
` (23 subsequent siblings)
38 siblings, 1 reply; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Sachin Saxena, Hemant Agrawal
This library is required to configure the FMAN hardware
for various flow configurations.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/Makefile | 4 +-
drivers/net/dpaa/fmlib/dpaa_integration.h | 48 +
drivers/net/dpaa/fmlib/fm_ext.h | 968 ++++
drivers/net/dpaa/fmlib/fm_lib.c | 557 +++
drivers/net/dpaa/fmlib/fm_pcd_ext.h | 5164 +++++++++++++++++++++
drivers/net/dpaa/fmlib/fm_port_ext.h | 3512 ++++++++++++++
drivers/net/dpaa/fmlib/ncsw_ext.h | 153 +
drivers/net/dpaa/fmlib/net_ext.h | 383 ++
drivers/net/dpaa/meson.build | 3 +-
9 files changed, 10790 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/dpaa/fmlib/dpaa_integration.h
create mode 100644 drivers/net/dpaa/fmlib/fm_ext.h
create mode 100644 drivers/net/dpaa/fmlib/fm_lib.c
create mode 100644 drivers/net/dpaa/fmlib/fm_pcd_ext.h
create mode 100644 drivers/net/dpaa/fmlib/fm_port_ext.h
create mode 100644 drivers/net/dpaa/fmlib/ncsw_ext.h
create mode 100644 drivers/net/dpaa/fmlib/net_ext.h
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index d7bbc0e15..0d2f32ba1 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -1,5 +1,5 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2017 NXP
+# Copyright 2017-2019 NXP
#
include $(RTE_SDK)/mk/rte.vars.mk
@@ -15,6 +15,7 @@ CFLAGS += -O3 $(WERROR_FLAGS)
CFLAGS += -Wno-pointer-arith
CFLAGS += -I$(RTE_SDK_DPAA)/
CFLAGS += -I$(RTE_SDK_DPAA)/include
+CFLAGS += -I$(RTE_SDK_DPAA)/fmlib
CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/base/qbman
@@ -26,6 +27,7 @@ CFLAGS += -I$(RTE_SDK)/lib/librte_eal/include
EXPORT_MAP := rte_pmd_dpaa_version.map
# Interfaces with DPDK
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += fmlib/fm_lib.c
SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_rxtx.c
diff --git a/drivers/net/dpaa/fmlib/dpaa_integration.h b/drivers/net/dpaa/fmlib/dpaa_integration.h
new file mode 100644
index 000000000..04ce1c83a
--- /dev/null
+++ b/drivers/net/dpaa/fmlib/dpaa_integration.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright 2009-2012 Freescale Semiconductor Inc.
+ * Copyright 2017-2020 NXP
+ */
+
+#ifndef __DPAA_INTEGRATION_H
+#define __DPAA_INTEGRATION_H
+
+#include "ncsw_ext.h"
+
+#define DPAA_VERSION 11
+
+#define BM_MAX_NUM_OF_POOLS 64 /**< Number of buffer pools */
+
+#define INTG_MAX_NUM_OF_FM 2
+
+/* Ports defines */
+#define FM_MAX_NUM_OF_1G_MACS 6
+#define FM_MAX_NUM_OF_10G_MACS 2
+#define FM_MAX_NUM_OF_MACS (FM_MAX_NUM_OF_1G_MACS + FM_MAX_NUM_OF_10G_MACS)
+#define FM_MAX_NUM_OF_OH_PORTS 6
+
+#define FM_MAX_NUM_OF_1G_RX_PORTS FM_MAX_NUM_OF_1G_MACS
+#define FM_MAX_NUM_OF_10G_RX_PORTS FM_MAX_NUM_OF_10G_MACS
+#define FM_MAX_NUM_OF_RX_PORTS (FM_MAX_NUM_OF_10G_RX_PORTS + FM_MAX_NUM_OF_1G_RX_PORTS)
+
+#define FM_MAX_NUM_OF_1G_TX_PORTS FM_MAX_NUM_OF_1G_MACS
+#define FM_MAX_NUM_OF_10G_TX_PORTS FM_MAX_NUM_OF_10G_MACS
+#define FM_MAX_NUM_OF_TX_PORTS (FM_MAX_NUM_OF_10G_TX_PORTS + FM_MAX_NUM_OF_1G_TX_PORTS)
+
+#define FM_PORT_MAX_NUM_OF_EXT_POOLS 4
+ /**< Number of external BM pools per Rx port */
+#define FM_PORT_NUM_OF_CONGESTION_GRPS 256
+ /**< Total number of congestion groups in QM */
+#define FM_MAX_NUM_OF_SUB_PORTALS 16
+#define FM_PORT_MAX_NUM_OF_OBSERVED_EXT_POOLS 0
+
+/* PCD defines */
+#define FM_PCD_PLCR_NUM_ENTRIES 256
+ /**< Total number of policer profiles */
+#define FM_PCD_KG_NUM_OF_SCHEMES 32
+ /**< Total number of KG schemes */
+#define FM_PCD_MAX_NUM_OF_CLS_PLANS 256
+ /**< Number of classification plan entries. */
+
+#define FM_MAX_NUM_OF_PFC_PRIORITIES 8
+
+#endif /* __DPAA_INTEGRATION_H */
diff --git a/drivers/net/dpaa/fmlib/fm_ext.h b/drivers/net/dpaa/fmlib/fm_ext.h
new file mode 100644
index 000000000..0f56dc54f
--- /dev/null
+++ b/drivers/net/dpaa/fmlib/fm_ext.h
@@ -0,0 +1,968 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright 2008-2012 Freescale Semiconductor Inc.
+ * Copyright 2017-2020 NXP
+ */
+
+#ifndef __FM_EXT_H
+#define __FM_EXT_H
+
+#include "ncsw_ext.h"
+#include "dpaa_integration.h"
+
+#define FM_IOC_TYPE_BASE (NCSW_IOC_TYPE_BASE + 1)
+#define FMT_IOC_TYPE_BASE (NCSW_IOC_TYPE_BASE + 3)
+
+#define MODULE_FM 0x00010000
+#define __ERR_MODULE__ MODULE_FM
+
+/* #define FM_LIB_DBG */
+
+#if defined(FM_LIB_DBG)
+ #define _fml_dbg(format, arg...) \
+ printf("fmlib [%s:%u] - " format, \
+ __func__, __LINE__, ##arg)
+#else
+ #define _fml_dbg(arg...)
+#endif
+
+/*#define FM_IOCTL_DBG*/
+
+#if defined(FM_IOCTL_DBG)
+ #define _fm_ioctl_dbg(format, arg...) \
+ printk("fm ioctl [%s:%u](cpu:%u) - " format, \
+ __func__, __LINE__, smp_processor_id(), ##arg)
+#else
+# define _fm_ioctl_dbg(arg...)
+#endif
+
+/**
+ @Group lnx_ioctl_ncsw_grp NetCommSw Linux User-Space (IOCTL) API
+ @{
+*/
+
+#define NCSW_IOC_TYPE_BASE 0xe0
+ /**< defines the IOCTL type for all the NCSW Linux module commands */
+
+/**
+ @Group lnx_usr_FM_grp Frame Manager API
+
+ @Description FM API functions, definitions and enums.
+
+ @{
+*/
+
+/**
+ @Group lnx_usr_FM_lib_grp FM library
+
+ @Description FM API functions, definitions and enums
+
+ The FM module is the main driver module and is mandatory for
+ FM driver users. This module must be initialized prior to any
+ other driver module.
+ The FM is a "singleton" module. It is responsible for the common
+ HW modules: FPM, DMA, common QMI and common BMI initialization and
+ run-time control routines. This module must always be initialized
+ when working with any of the FM modules.
+ NOTE - We assume that the FM library will be initialized only by core No. 0!
+
+ @{
+*/
+
+/**
+ @Description Enum for defining port types
+*/
+typedef enum e_FmPortType {
+ e_FM_PORT_TYPE_OH_OFFLINE_PARSING = 0, /**< Offline parsing port */
+ e_FM_PORT_TYPE_RX, /**< 1G Rx port */
+ e_FM_PORT_TYPE_RX_10G, /**< 10G Rx port */
+ e_FM_PORT_TYPE_TX, /**< 1G Tx port */
+ e_FM_PORT_TYPE_TX_10G, /**< 10G Tx port */
+ e_FM_PORT_TYPE_RX_2_5G, /**< 2.5G Rx port */
+ e_FM_PORT_TYPE_TX_2_5G, /**< 2.5G Tx port */
+ e_FM_PORT_TYPE_DUMMY
+} e_FmPortType;
+
+/**
+ @Description Parse results memory layout
+*/
+typedef struct t_FmPrsResult {
+ volatile uint8_t lpid; /**< Logical port id */
+ volatile uint8_t shimr; /**< Shim header result */
+ volatile uint16_t l2r; /**< Layer 2 result */
+ volatile uint16_t l3r; /**< Layer 3 result */
+ volatile uint8_t l4r; /**< Layer 4 result */
+ volatile uint8_t cplan; /**< Classification plan id */
+ volatile uint16_t nxthdr; /**< Next Header */
+ volatile uint16_t cksum; /**< Running-sum */
+ volatile uint16_t flags_frag_off;
+ /**< Flags & fragment-offset field of the last IP-header */
+ volatile uint8_t route_type;
+ /**< Routing type field of a IPv6 routing extension header */
+ volatile uint8_t rhp_ip_valid;
+ /**< Routing Extension Header Present; last bit is IP valid */
+ volatile uint8_t shim_off[2]; /**< Shim offset */
+ volatile uint8_t ip_pid_off; /**< IP PID (last IP-proto) offset */
+ volatile uint8_t eth_off; /**< ETH offset */
+ volatile uint8_t llc_snap_off; /**< LLC_SNAP offset */
+ volatile uint8_t vlan_off[2]; /**< VLAN offset */
+ volatile uint8_t etype_off; /**< ETYPE offset */
+ volatile uint8_t pppoe_off; /**< PPP offset */
+ volatile uint8_t mpls_off[2]; /**< MPLS offset */
+ volatile uint8_t ip_off[2]; /**< IP offset */
+ volatile uint8_t gre_off; /**< GRE offset */
+ volatile uint8_t l4_off; /**< Layer 4 offset */
+ volatile uint8_t nxthdr_off; /**< Parser end point */
+} __rte_packed t_FmPrsResult;
+
+/**
+ @Collection FM Parser results
+*/
+#define FM_PR_L2_VLAN_STACK 0x00000100 /**< Parse Result: VLAN stack */
+#define FM_PR_L2_ETHERNET 0x00008000 /**< Parse Result: Ethernet*/
+#define FM_PR_L2_VLAN 0x00004000 /**< Parse Result: VLAN */
+#define FM_PR_L2_LLC_SNAP 0x00002000 /**< Parse Result: LLC_SNAP */
+#define FM_PR_L2_MPLS 0x00001000 /**< Parse Result: MPLS */
+#define FM_PR_L2_PPPoE 0x00000800 /**< Parse Result: PPPoE */
+/* @} */
+
+/**
+ @Collection FM Frame descriptor macros
+*/
+#define FM_FD_CMD_FCO 0x80000000 /**< Frame queue Context Override */
+#define FM_FD_CMD_RPD 0x40000000 /**< Read Prepended Data */
+#define FM_FD_CMD_UPD 0x20000000 /**< Update Prepended Data */
+#define FM_FD_CMD_DTC 0x10000000 /**< Do L4 Checksum */
+#define FM_FD_CMD_DCL4C 0x10000000 /**< Didn't calculate L4 Checksum */
+#define FM_FD_CMD_CFQ 0x00ffffff /**< Confirmation Frame Queue */
+
+#define FM_FD_ERR_UNSUPPORTED_FORMAT 0x04000000
+ /**< Not for Rx-Port! Unsupported Format */
+#define FM_FD_ERR_LENGTH 0x02000000 /**< Not for Rx-Port! Length Error */
+#define FM_FD_ERR_DMA 0x01000000 /**< DMA Data error */
+
+#define FM_FD_IPR 0x00000001 /**< IPR frame (not error) */
+
+#define FM_FD_ERR_IPR_NCSP (0x00100000 | FM_FD_IPR)
+ /**< IPR non-consistent-sp */
+#define FM_FD_ERR_IPR (0x00200000 | FM_FD_IPR) /**< IPR error */
+#define FM_FD_ERR_IPR_TO (0x00300000 | FM_FD_IPR) /**< IPR timeout */
+
+#ifdef FM_CAPWAP_SUPPORT
+#define FM_FD_ERR_CRE 0x00200000
+#define FM_FD_ERR_CHE 0x00100000
+#endif /* FM_CAPWAP_SUPPORT */
+
+#define FM_FD_ERR_PHYSICAL 0x00080000
+ /**< Rx FIFO overflow, FCS error, code error, running disparity
+ error (SGMII and TBI modes), FIFO parity error. PHY
+ Sequence error, PHY error control character detected. */
+#define FM_FD_ERR_SIZE 0x00040000
+ /**< Frame too long OR Frame size exceeds max_length_frame */
+#define FM_FD_ERR_CLS_DISCARD 0x00020000 /**< classification discard */
+#define FM_FD_ERR_EXTRACTION 0x00008000 /**< Extract Out of Frame */
+#define FM_FD_ERR_NO_SCHEME 0x00004000 /**< No Scheme Selected */
+#define FM_FD_ERR_KEYSIZE_OVERFLOW 0x00002000 /**< Keysize Overflow */
+#define FM_FD_ERR_COLOR_RED 0x00000800 /**< Frame color is red */
+#define FM_FD_ERR_COLOR_YELLOW 0x00000400 /**< Frame color is yellow */
+#define FM_FD_ERR_ILL_PLCR 0x00000200 /**< Illegal Policer Profile selected */
+#define FM_FD_ERR_PLCR_FRAME_LEN 0x00000100 /**< Policer frame length error */
+#define FM_FD_ERR_PRS_TIMEOUT 0x00000080 /**< Parser Time out Exceed */
+#define FM_FD_ERR_PRS_ILL_INSTRUCT 0x00000040 /**< Invalid Soft Parser instruction */
+#define FM_FD_ERR_PRS_HDR_ERR 0x00000020
+ /**< Header error was identified during parsing */
+#define FM_FD_ERR_BLOCK_LIMIT_EXCEEDED 0x00000008
+ /**< Frame parsed beyond first 256 bytes */
+
+#define FM_FD_TX_STATUS_ERR_MASK (FM_FD_ERR_UNSUPPORTED_FORMAT | \
+ FM_FD_ERR_LENGTH | \
+ FM_FD_ERR_DMA) /**< TX Error FD bits */
+
+#define FM_FD_RX_STATUS_ERR_MASK (FM_FD_ERR_UNSUPPORTED_FORMAT | \
+ FM_FD_ERR_LENGTH | \
+ FM_FD_ERR_DMA | \
+ FM_FD_ERR_IPR | \
+ FM_FD_ERR_IPR_TO | \
+ FM_FD_ERR_IPR_NCSP | \
+ FM_FD_ERR_PHYSICAL | \
+ FM_FD_ERR_SIZE | \
+ FM_FD_ERR_CLS_DISCARD | \
+ FM_FD_ERR_COLOR_RED | \
+ FM_FD_ERR_COLOR_YELLOW | \
+ FM_FD_ERR_ILL_PLCR | \
+ FM_FD_ERR_PLCR_FRAME_LEN | \
+ FM_FD_ERR_EXTRACTION | \
+ FM_FD_ERR_NO_SCHEME | \
+ FM_FD_ERR_KEYSIZE_OVERFLOW | \
+ FM_FD_ERR_PRS_TIMEOUT | \
+ FM_FD_ERR_PRS_ILL_INSTRUCT | \
+ FM_FD_ERR_PRS_HDR_ERR | \
+ FM_FD_ERR_BLOCK_LIMIT_EXCEEDED)
+ /**< RX Error FD bits */
+
+#define FM_FD_RX_STATUS_ERR_NON_FM 0x00400000 /**< non Frame-Manager error */
+/* @} */
+
+/**
+ @Description FM Exceptions
+*/
+typedef enum e_FmExceptions {
+ e_FM_EX_DMA_BUS_ERROR = 0, /**< DMA bus error. */
+ e_FM_EX_DMA_READ_ECC,
+ /**< Read Buffer ECC error (Valid for FM rev < 6)*/
+ e_FM_EX_DMA_SYSTEM_WRITE_ECC,
+ /**< Write Buffer ECC error on system side (Valid for FM rev < 6)*/
+ e_FM_EX_DMA_FM_WRITE_ECC,
+ /**< Write Buffer ECC error on FM side (Valid for FM rev < 6)*/
+ e_FM_EX_DMA_SINGLE_PORT_ECC,
+ /**< Single Port ECC error on FM side (Valid for FM rev > 6)*/
+ e_FM_EX_FPM_STALL_ON_TASKS, /**< Stall of tasks on FPM */
+ e_FM_EX_FPM_SINGLE_ECC, /**< Single ECC on FPM. */
+ e_FM_EX_FPM_DOUBLE_ECC, /**< Double ECC error on FPM ram access */
+ e_FM_EX_QMI_SINGLE_ECC, /**< Single ECC on QMI. */
+ e_FM_EX_QMI_DOUBLE_ECC, /**< Double bit ECC occurred on QMI */
+ e_FM_EX_QMI_DEQ_FROM_UNKNOWN_PORTID,/**< Dequeue from unknown port id */
+ e_FM_EX_BMI_LIST_RAM_ECC, /**< Linked List RAM ECC error */
+ e_FM_EX_BMI_STORAGE_PROFILE_ECC,/**< Storage Profile ECC Error */
+ e_FM_EX_BMI_STATISTICS_RAM_ECC, /**< Statistics Count RAM ECC Error Enable */
+ e_FM_EX_BMI_DISPATCH_RAM_ECC, /**< Dispatch RAM ECC Error Enable */
+ e_FM_EX_IRAM_ECC, /**< Double bit ECC occurred on IRAM*/
+ e_FM_EX_MURAM_ECC /**< Double bit ECC occurred on MURAM*/
+} e_FmExceptions;
+
+/**
+ @Description Enum for defining port DMA swap mode
+*/
+typedef enum e_FmDmaSwapOption {
+ e_FM_DMA_NO_SWP, /**< No swap, transfer data as is.*/
+ e_FM_DMA_SWP_PPC_LE, /**< The transferred data should be swapped
+ in PowerPC Little Endian mode. */
+ e_FM_DMA_SWP_BE /**< The transferred data should be swapped
+ in Big Endian mode */
+} e_FmDmaSwapOption;
+
+/**
+ @Description Enum for defining port DMA cache attributes
+*/
+typedef enum e_FmDmaCacheOption {
+ e_FM_DMA_NO_STASH = 0, /**< Cacheable, no Allocate (No Stashing) */
+ e_FM_DMA_STASH = 1 /**< Cacheable and Allocate (Stashing on) */
+} e_FmDmaCacheOption;
+/**
+ @Group lnx_usr_FM_init_grp FM Initialization Unit
+
+ @Description FM Initialization Unit
+
+ Initialization Flow
+ Initialization of the FM Module will be carried out by the application
+ according to the following sequence:
+ - Calling the configuration routine with basic parameters.
+ - Calling the advance initialization routines to change driver's defaults.
+ - Calling the initialization routine.
+
+ @{
+*/
+
+t_Handle FM_Open(uint8_t id);
+void FM_Close(t_Handle h_Fm);
+
+/**
+ @Description A structure for defining buffer prefix area content.
+*/
+typedef struct t_FmBufferPrefixContent {
+ uint16_t privDataSize; /**< Number of bytes to be left at the beginning
+ of the external buffer; Note that the private-area will
+ start from the base of the buffer address. */
+ bool passPrsResult; /**< TRUE to pass the parse result to/from the FM;
+ User may use FM_PORT_GetBufferPrsResult() in order to
+ get the parser-result from a buffer. */
+ bool passTimeStamp; /**< TRUE to pass the timeStamp to/from the FM
+ User may use FM_PORT_GetBufferTimeStamp() in order to
+ get the time-stamp from a buffer. */
+ bool passHashResult; /**< TRUE to pass the KG hash result to/from the FM
+ User may use FM_PORT_GetBufferHashResult() in order to
+ get the hash-result from a buffer. */
+ bool passAllOtherPCDInfo;/**< Add all other Internal-Context information:
+ AD, hash-result, key, etc. */
+ uint16_t dataAlign;
+ /**< 0 to use driver's default alignment [64],
+ other value for selecting a data alignment (must be a power of 2);
+ if write optimization is used, must be >= 16. */
+ uint8_t manipExtraSpace;
+ /**< Maximum extra size needed (insertion-size minus removal-size);
+ Note that this field impacts the size of the buffer-prefix
+ (i.e. it pushes the data offset);
+ This field is irrelevant if DPAA_VERSION==10 */
+} t_FmBufferPrefixContent;
+
+/**
+ @Description A structure of information about each of the external
+ buffer pools used by a port or storage-profile.
+*/
+typedef struct t_FmExtPoolParams {
+ uint8_t id; /**< External buffer pool id */
+ uint16_t size; /**< External buffer pool buffer size */
+} t_FmExtPoolParams;
+
+/**
+ @Description A structure for informing the driver about the external
+ buffer pools allocated in the BM and used by a port or a
+ storage-profile.
+*/
+typedef struct t_FmExtPools {
+ uint8_t numOfPoolsUsed; /**< Number of pools use by this port */
+ t_FmExtPoolParams extBufPool[FM_PORT_MAX_NUM_OF_EXT_POOLS];
+ /**< Parameters for each port */
+} t_FmExtPools;
+
+/**
+ @Description A structure for defining backup BM Pools.
+*/
+typedef struct t_FmBackupBmPools {
+ uint8_t numOfBackupPools; /**< Number of BM backup pools -
+ must be smaller than the total number of
+ pools defined for the specified port.*/
+ uint8_t poolIds[FM_PORT_MAX_NUM_OF_EXT_POOLS];
+ /**< numOfBackupPools pool id's, specifying which
+ pools should be used only as backup. Pool
+ id's specified here must be a subset of the
+ pools used by the specified port.*/
+} t_FmBackupBmPools;
+
+/**
+ @Description A structure for defining BM pool depletion criteria
+*/
+typedef struct t_FmBufPoolDepletion {
+ bool poolsGrpModeEnable;
+ /**< select mode in which pause frames will be sent after
+ a number of pools (all together!) are depleted */
+ uint8_t numOfPools;
+ /**< the number of depleted pools that will invoke
+ pause frames transmission. */
+ bool poolsToConsider[BM_MAX_NUM_OF_POOLS];
+ /**< For each pool, TRUE if it should be considered for
+ depletion (Note - this pool must be used by this port!). */
+ bool singlePoolModeEnable;
+ /**< select mode in which pause frames will be sent after
+ a single-pool is depleted; */
+ bool poolsToConsiderForSingleMode[BM_MAX_NUM_OF_POOLS];
+ /**< For each pool, TRUE if it should be considered for
+ depletion (Note - this pool must be used by this port!) */
+#if (DPAA_VERSION >= 11)
+ bool pfcPrioritiesEn[FM_MAX_NUM_OF_PFC_PRIORITIES];
+ /**< This field is used by the MAC as the Priority Enable Vector
+ in the PFC frame which is transmitted */
+#endif /* (DPAA_VERSION >= 11) */
+} t_FmBufPoolDepletion;
+
+/** @} */ /* end of lnx_usr_FM_init_grp group */
+
+/**
+ @Group lnx_usr_FM_runtime_control_grp FM Runtime Control Unit
+
+ @Description FM Runtime control unit API functions, definitions and enums.
+ The FM driver provides a set of control routines.
+ These routines may only be called after the module was fully
+ initialized (both configuration and initialization routines were
+ called). They are typically used to get information from hardware
+ (status, counters/statistics, revision etc.), to modify a current
+ state or to force/enable a required action. Run-time control may
+ be called whenever necessary and as many times as needed.
+ @{
+*/
+
+/**
+ @Collection General FM defines.
+*/
+#define FM_MAX_NUM_OF_VALID_PORTS (FM_MAX_NUM_OF_OH_PORTS + \
+ FM_MAX_NUM_OF_1G_RX_PORTS + \
+ FM_MAX_NUM_OF_10G_RX_PORTS + \
+ FM_MAX_NUM_OF_1G_TX_PORTS + \
+ FM_MAX_NUM_OF_10G_TX_PORTS)
+ /**< Number of available FM ports */
+/* @} */
+
+/**
+ @Description A structure for Port bandwidth requirement. Port is identified
+ by type and relative id.
+*/
+typedef struct t_FmPortBandwidth {
+ e_FmPortType type; /**< FM port type */
+ uint8_t relativePortId; /**< Type relative port id */
+ uint8_t bandwidth; /**< bandwidth (in terms of percent) */
+} t_FmPortBandwidth;
+
+/**
+ @Description A Structure containing an array of Port bandwidth requirements.
+ The user should state the ports requiring bandwidth in terms of
+ percentage - i.e. all ports' bandwidths in the array must add
+ up to 100.
+*/
+typedef struct t_FmPortsBandwidthParams {
+ uint8_t numOfPorts;
+ /**< The number of relevant ports, which is the
+ number of valid entries in the array below */
+ t_FmPortBandwidth portsBandwidths[FM_MAX_NUM_OF_VALID_PORTS];
+ /**< For each port, its bandwidth (all ports' bandwidths must add up to 100). */
+} t_FmPortsBandwidthParams;
+
+/**
+ @Description Enum for defining FM counters
+*/
+typedef enum e_FmCounters {
+ e_FM_COUNTERS_ENQ_TOTAL_FRAME = 0,/**< QMI total enqueued frames counter */
+ e_FM_COUNTERS_DEQ_TOTAL_FRAME, /**< QMI total dequeued frames counter */
+ e_FM_COUNTERS_DEQ_0, /**< QMI 0 frames from QMan counter */
+ e_FM_COUNTERS_DEQ_1, /**< QMI 1 frames from QMan counter */
+ e_FM_COUNTERS_DEQ_2, /**< QMI 2 frames from QMan counter */
+ e_FM_COUNTERS_DEQ_3, /**< QMI 3 frames from QMan counter */
+ e_FM_COUNTERS_DEQ_FROM_DEFAULT, /**< QMI dq from default queue counter */
+ e_FM_COUNTERS_DEQ_FROM_CONTEXT, /**< QMI dq from FQ context counter */
+ e_FM_COUNTERS_DEQ_FROM_FD, /**< QMI dq from FD command field counter */
+ e_FM_COUNTERS_DEQ_CONFIRM /**< QMI dq confirm counter */
+} e_FmCounters;
+
+/**
+ @Description A structure for returning FM revision information
+*/
+typedef struct t_FmRevisionInfo {
+ uint8_t majorRev; /**< Major revision */
+ uint8_t minorRev; /**< Minor revision */
+} t_FmRevisionInfo;
+
+/**
+ @Description A structure for returning FM ctrl code revision information
+*/
+typedef struct t_FmCtrlCodeRevisionInfo {
+ uint16_t packageRev; /**< Package revision */
+ uint8_t majorRev; /**< Major revision */
+ uint8_t minorRev; /**< Minor revision */
+} t_FmCtrlCodeRevisionInfo;
+
+/**
+ @Description A Structure for obtaining FM controller monitor values
+*/
+typedef struct t_FmCtrlMon {
+ uint8_t percentCnt[2]; /**< Percentage value */
+} t_FmCtrlMon;
+
+/**
+ @Function FM_SetPortsBandwidth
+
+ @Description Sets relative weights between ports when accessing common resources.
+
+ @Param[in] h_Fm A handle to an FM Module.
+ @Param[in] p_PortsBandwidth A structure of ports bandwidths in percentage, i.e.
+ total must equal 100.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_Init().
+*/
+uint32_t FM_SetPortsBandwidth(t_Handle h_Fm,
+ t_FmPortsBandwidthParams *p_PortsBandwidth);
+
+/**
+ @Function FM_GetRevision
+
+ @Description Returns the FM revision
+
+ @Param[in] h_Fm A handle to an FM Module.
+ @Param[out] p_FmRevisionInfo A structure of revision information parameters.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_Init().
+*/
+uint32_t FM_GetRevision(t_Handle h_Fm,
+ t_FmRevisionInfo *p_FmRevisionInfo);
+
+/**
+ @Function FM_GetFmanCtrlCodeRevision
+
+ @Description Returns the Fman controller code revision
+ (Not implemented in fm-lib just yet!)
+
+ @Param[in] h_Fm A handle to an FM Module.
+ @Param[out] p_RevisionInfo A structure of revision information parameters.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_Init().
+*/
+uint32_t FM_GetFmanCtrlCodeRevision(t_Handle h_Fm,
+ t_FmCtrlCodeRevisionInfo *p_RevisionInfo);
+
+/**
+ @Function FM_GetCounter
+
+ @Description Reads one of the FM counters.
+
+ @Param[in] h_Fm A handle to an FM Module.
+ @Param[in] counter The requested counter.
+
+ @Return Counter's current value.
+
+ @Cautions Allowed only following FM_Init().
+ Note that it is user's responsibility to call this routine only
+ for enabled counters, and there will be no indication if a
+ disabled counter is accessed.
+*/
+uint32_t FM_GetCounter(t_Handle h_Fm, e_FmCounters counter);
+
+/**
+ @Function FM_ModifyCounter
+
+ @Description Sets a value to an enabled counter. Use "0" to reset the counter.
+
+ @Param[in] h_Fm A handle to an FM Module.
+ @Param[in] counter The requested counter.
+ @Param[in] val The requested value to be written into the counter.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_Init().
+*/
+uint32_t FM_ModifyCounter(t_Handle h_Fm,
+ e_FmCounters counter, uint32_t val);
+
+/**
+ @Function FM_CtrlMonStart
+
+ @Description Start monitoring utilization of all available FM controllers.
+
+ In order to obtain FM controllers utilization the following sequence
+ should be used:
+ -# FM_CtrlMonStart()
+ -# FM_CtrlMonStop()
+ -# FM_CtrlMonGetCounters() - issued for each FM controller
+
+ @Param[in] h_Fm A handle to an FM Module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_Init().
+ This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID).
+*/
+uint32_t FM_CtrlMonStart(t_Handle h_Fm);
+
+/**
+ @Function FM_CtrlMonStop
+
+ @Description Stop monitoring utilization of all available FM controllers.
+
+ In order to obtain FM controllers utilization the following sequence
+ should be used:
+ -# FM_CtrlMonStart()
+ -# FM_CtrlMonStop()
+ -# FM_CtrlMonGetCounters() - issued for each FM controller
+
+ @Param[in] h_Fm A handle to an FM Module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_Init().
+ This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID).
+*/
+uint32_t FM_CtrlMonStop(t_Handle h_Fm);
+
+/**
+ @Function FM_CtrlMonGetCounters
+
+ @Description Obtain FM controller utilization parameters.
+
+ In order to obtain FM controllers utilization the following sequence
+ should be used:
+ -# FM_CtrlMonStart()
+ -# FM_CtrlMonStop()
+ -# FM_CtrlMonGetCounters() - issued for each FM controller
+
+ @Param[in] h_Fm A handle to an FM Module.
+ @Param[in] fmCtrlIndex FM Controller index for which utilization
+ results are requested.
+ @Param[in] p_Mon Pointer to utilization results structure.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_Init().
+ This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID).
+*/
+uint32_t FM_CtrlMonGetCounters(t_Handle h_Fm,
+ uint8_t fmCtrlIndex, t_FmCtrlMon *p_Mon);
+
+/*
+ @Function FM_ForceIntr
+
+ @Description Causes an interrupt event on the requested source.
+
+ @Param[in] h_Fm A handle to an FM Module.
+ @Param[in] exception An exception to be forced.
+
+ @Return E_OK on success; Error code if the exception is not enabled,
+ or is not able to create interrupt.
+
+ @Cautions Allowed only following FM_Init().
+*/
+uint32_t FM_ForceIntr(t_Handle h_Fm, e_FmExceptions exception);
+
+/** @} */ /* end of lnx_usr_FM_runtime_control_grp group */
+/** @} */ /* end of lnx_usr_FM_lib_grp group */
+/** @} */ /* end of lnx_usr_FM_grp group */
+
+/**
+@Description FM Char device ioctls
+*/
+
+/**
+ @Group lnx_ioctl_FM_grp Frame Manager Linux IOCTL API
+
+ @Description FM Linux ioctls definitions and enums
+
+ @{
+*/
+
+/**
+ @Collection FM IOCTL device ('/dev') definitions
+*/
+#define DEV_FM_NAME "fm" /**< Name of the FM chardev */
+
+#define DEV_FM_MINOR_BASE 0
+#define DEV_FM_PCD_MINOR_BASE (DEV_FM_MINOR_BASE + 1)
+ /*/dev/fmx-pcd */
+#define DEV_FM_OH_PORTS_MINOR_BASE (DEV_FM_PCD_MINOR_BASE + 1)
+ /*/dev/fmx-port-ohy */
+#define DEV_FM_RX_PORTS_MINOR_BASE (DEV_FM_OH_PORTS_MINOR_BASE + FM_MAX_NUM_OF_OH_PORTS) /*/dev/fmx-port-rxy */
+#define DEV_FM_TX_PORTS_MINOR_BASE (DEV_FM_RX_PORTS_MINOR_BASE + FM_MAX_NUM_OF_RX_PORTS) /*/dev/fmx-port-txy */
+#define DEV_FM_MAX_MINORS (DEV_FM_TX_PORTS_MINOR_BASE + FM_MAX_NUM_OF_TX_PORTS)
+
+#define FM_IOC_NUM(n) (n)
+#define FM_PCD_IOC_NUM(n) (n + 20)
+#define FM_PORT_IOC_NUM(n) (n + 70)
+/* @} */
+
+#define IOC_FM_MAX_NUM_OF_PORTS 64
+
+/**
+ @Description Enum for defining port types
+ (must match enum e_FmPortType defined in fm_ext.h)
+*/
+typedef enum ioc_fm_port_type {
+ e_IOC_FM_PORT_TYPE_OH_OFFLINE_PARSING = 0, /**< Offline parsing port */
+ e_IOC_FM_PORT_TYPE_RX, /**< 1G Rx port */
+ e_IOC_FM_PORT_TYPE_RX_10G, /**< 10G Rx port */
+ e_IOC_FM_PORT_TYPE_TX, /**< 1G Tx port */
+ e_IOC_FM_PORT_TYPE_TX_10G, /**< 10G Tx port */
+ e_IOC_FM_PORT_TYPE_DUMMY
+} ioc_fm_port_type;
+
+/**
+ @Group lnx_ioctl_FM_lib_grp FM library
+
+ @Description FM API functions, definitions and enums
+ The FM module is the main driver module and is a mandatory module
+ for FM driver users. Before any further module initialization,
+ this module must be initialized.
+ The FM is a "singleton" module. It is responsible for the common
+ HW modules: FPM, DMA, common QMI, and common BMI initialization
+ and run-time control routines. This module must always be
+ initialized when working with any of the FM modules.
+ NOTE - It is assumed that the FM library will be initialized
+ only by core No. 0!
+
+ @{
+*/
+
+/**
+ @Description FM Exceptions
+*/
+typedef enum ioc_fm_exceptions {
+ e_IOC_FM_EX_DMA_BUS_ERROR, /**< DMA bus error. */
+ e_IOC_EX_DMA_READ_ECC,
+ /**< Read Buffer ECC error (Valid for FM rev < 6)*/
+ e_IOC_EX_DMA_SYSTEM_WRITE_ECC,
+ /**< Write Buffer ECC error on system side (Valid for FM rev < 6)*/
+ e_IOC_EX_DMA_FM_WRITE_ECC,
+ /**< Write Buffer ECC error on FM side (Valid for FM rev < 6)*/
+ e_IOC_EX_DMA_SINGLE_PORT_ECC,
+ /**< Single Port ECC error on FM side (Valid for FM rev > 6)*/
+ e_IOC_EX_FPM_STALL_ON_TASKS, /**< Stall of tasks on FPM */
+ e_IOC_EX_FPM_SINGLE_ECC, /**< Single ECC on FPM. */
+ e_IOC_EX_FPM_DOUBLE_ECC, /**< Double ECC error on FPM ram access */
+ e_IOC_EX_QMI_SINGLE_ECC, /**< Single ECC on QMI. */
+ e_IOC_EX_QMI_DOUBLE_ECC, /**< Double bit ECC occurred on QMI */
+ e_IOC_EX_QMI_DEQ_FROM_UNKNOWN_PORTID,/**< Dequeue from unknown port id */
+ e_IOC_EX_BMI_LIST_RAM_ECC, /**< Linked List RAM ECC error */
+ e_IOC_EX_BMI_STORAGE_PROFILE_ECC,/**< Storage Profile ECC Error */
+ e_IOC_EX_BMI_STATISTICS_RAM_ECC,/**< Statistics Count RAM ECC Error Enable */
+ e_IOC_EX_BMI_DISPATCH_RAM_ECC, /**< Dispatch RAM ECC Error Enable */
+ e_IOC_EX_IRAM_ECC, /**< Double bit ECC occurred on IRAM*/
+ e_IOC_EX_MURAM_ECC /**< Double bit ECC occurred on MURAM*/
+} ioc_fm_exceptions;
+
+/**
+ @Group lnx_ioctl_FM_runtime_control_grp FM Runtime Control Unit
+
+ @Description FM Runtime control unit API functions, definitions and enums.
+ The FM driver provides a set of control routines for each module.
+ These routines may only be called after the module was fully
+ initialized (both configuration and initialization routines were
+ called). They are typically used to get information from hardware
+ (status, counters/statistics, revision etc.), to modify a current
+ state or to force/enable a required action. Run-time control may
+ be called whenever necessary and as many times as needed.
+ @{
+*/
+
+/**
+ @Collection General FM defines.
+ */
+#define IOC_FM_MAX_NUM_OF_VALID_PORTS (FM_MAX_NUM_OF_OH_PORTS + \
+ FM_MAX_NUM_OF_1G_RX_PORTS + \
+ FM_MAX_NUM_OF_10G_RX_PORTS + \
+ FM_MAX_NUM_OF_1G_TX_PORTS + \
+ FM_MAX_NUM_OF_10G_TX_PORTS)
+/* @} */
+
+/**
+ @Description Structure for Port bandwidth requirement. Port is identified
+ by type and relative id.
+ (must be identical to t_FmPortBandwidth defined in fm_ext.h)
+*/
+typedef struct ioc_fm_port_bandwidth_t {
+ ioc_fm_port_type type; /**< FM port type */
+ uint8_t relative_port_id; /**< Type relative port id */
+ uint8_t bandwidth; /**< bandwidth (in terms of percent) */
+} ioc_fm_port_bandwidth_t;
+
+/**
+ @Description A Structure containing an array of Port bandwidth requirements.
+ The user should state the ports requiring bandwidth in terms of
+ percentage - i.e. all ports' bandwidths in the array must add
+ up to 100.
+ (must be identical to t_FmPortsBandwidthParams defined in fm_ext.h)
+*/
+typedef struct ioc_fm_port_bandwidth_params {
+ uint8_t num_of_ports;
+ /**< num of ports listed in the array below */
+ ioc_fm_port_bandwidth_t ports_bandwidths[IOC_FM_MAX_NUM_OF_VALID_PORTS];
+ /**< for each port, its bandwidth (all ports'
+ bandwidths must add up to 100). */
+} ioc_fm_port_bandwidth_params;
+
+/**
+ @Description enum for defining FM counters
+*/
+typedef enum ioc_fm_counters {
+ e_IOC_FM_COUNTERS_ENQ_TOTAL_FRAME,/**< QMI total enqueued frames counter */
+ e_IOC_FM_COUNTERS_DEQ_TOTAL_FRAME,/**< QMI total dequeued frames counter */
+ e_IOC_FM_COUNTERS_DEQ_0, /**< QMI 0 frames from QMan counter */
+ e_IOC_FM_COUNTERS_DEQ_1, /**< QMI 1 frames from QMan counter */
+ e_IOC_FM_COUNTERS_DEQ_2, /**< QMI 2 frames from QMan counter */
+ e_IOC_FM_COUNTERS_DEQ_3, /**< QMI 3 frames from QMan counter */
+ e_IOC_FM_COUNTERS_DEQ_FROM_DEFAULT,
+ /**< QMI dequeue from default queue counter */
+ e_IOC_FM_COUNTERS_DEQ_FROM_CONTEXT,
+ /**< QMI dequeue from FQ context counter */
+ e_IOC_FM_COUNTERS_DEQ_FROM_FD,
+ /**< QMI dequeue from FD command field counter */
+ e_IOC_FM_COUNTERS_DEQ_CONFIRM, /**< QMI dequeue confirm counter */
+} ioc_fm_counters;
+
+typedef struct ioc_fm_obj_t {
+ void *obj;
+} ioc_fm_obj_t;
+
+/**
+ @Description A structure for returning revision information
+ (must match struct t_FmRevisionInfo declared in fm_ext.h)
+*/
+typedef struct ioc_fm_revision_info_t {
+ uint8_t major; /**< Major revision */
+ uint8_t minor; /**< Minor revision */
+} ioc_fm_revision_info_t;
+
+/**
+ @Description A structure for FM counters
+*/
+typedef struct ioc_fm_counters_params_t {
+ ioc_fm_counters cnt;/**< The requested counter */
+ uint32_t val;/**< The requested value to get/set from/into the counter */
+} ioc_fm_counters_params_t;
+
+typedef union ioc_fm_api_version_t {
+ struct {
+ uint8_t major;
+ uint8_t minor;
+ uint8_t respin;
+ uint8_t reserved;
+ } version;
+ uint32_t ver;
+} ioc_fm_api_version_t;
+
+typedef struct fm_ctrl_mon_t {
+ uint8_t percent_cnt[2];
+} fm_ctrl_mon_t;
+
+typedef struct ioc_fm_ctrl_mon_counters_params_t {
+ uint8_t fm_ctrl_index;
+ fm_ctrl_mon_t *p_mon;
+} ioc_fm_ctrl_mon_counters_params_t;
+
+/**
+ @Function FM_IOC_SET_PORTS_BANDWIDTH
+
+ @Description Sets relative weights between ports when accessing common resources.
+
+ @Param[in] ioc_fm_port_bandwidth_params Port bandwidth percentages,
+ their sum must equal 100.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_Init().
+*/
+#define FM_IOC_SET_PORTS_BANDWIDTH \
+ _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(2), ioc_fm_port_bandwidth_params)
+
+/**
+ @Function FM_IOC_GET_REVISION
+
+ @Description Returns the FM revision
+
+ @Param[out] ioc_fm_revision_info_t A structure of revision information parameters.
+
+ @Return None.
+
+ @Cautions Allowed only following FM_Init().
+*/
+#define FM_IOC_GET_REVISION \
+ _IOR(FM_IOC_TYPE_BASE, FM_IOC_NUM(3), ioc_fm_revision_info_t)
+
+/**
+ @Function FM_IOC_GET_COUNTER
+
+ @Description Reads one of the FM counters.
+
+ @Param[in,out] ioc_fm_counters_params_t The requested counter parameters.
+
+ @Return Counter's current value.
+
+ @Cautions Allowed only following FM_Init().
+ Note that it is the user's responsibility to call this routine
+ only for enabled counters; there is no indication if a
+ disabled counter is accessed.
+*/
+#define FM_IOC_GET_COUNTER \
+ _IOWR(FM_IOC_TYPE_BASE, FM_IOC_NUM(4), ioc_fm_counters_params_t)
+
+/**
+ @Function FM_IOC_SET_COUNTER
+
+ @Description Sets a value to an enabled counter. Use "0" to reset the counter.
+
+ @Param[in] ioc_fm_counters_params_t The requested counter parameters.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_Init().
+*/
+#define FM_IOC_SET_COUNTER \
+ _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(5), ioc_fm_counters_params_t)
+
+/**
+ @Function FM_IOC_FORCE_INTR
+
+ @Description Causes an interrupt event on the requested source.
+
+ @Param[in] ioc_fm_exceptions An exception to be forced.
+
+ @Return E_OK on success; Error code if the exception is not enabled,
+ or is not able to create interrupt.
+
+ @Cautions Allowed only following FM_Init().
+*/
+#define FM_IOC_FORCE_INTR \
+ _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(6), ioc_fm_exceptions)
+
+/**
+ @Function FM_IOC_GET_API_VERSION
+
+ @Description Reads the FMD IOCTL API version.
+
+ @Param[in,out] ioc_fm_api_version_t The requested API version parameters.
+
+ @Return Version's value.
+*/
+#define FM_IOC_GET_API_VERSION \
+ _IOR(FM_IOC_TYPE_BASE, FM_IOC_NUM(7), ioc_fm_api_version_t)
+
+/**
+ @Function FM_IOC_CTRL_MON_START
+
+ @Description Start monitoring utilization of all available FM controllers.
+
+ In order to obtain FM controllers utilization the following sequence
+ should be used:
+ -# FM_CtrlMonStart()
+ -# FM_CtrlMonStop()
+ -# FM_CtrlMonGetCounters() - issued for each FM controller
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_Init().
+*/
+#define FM_IOC_CTRL_MON_START \
+ _IO(FM_IOC_TYPE_BASE, FM_IOC_NUM(15))
+
+/**
+ @Function FM_IOC_CTRL_MON_STOP
+
+ @Description Stop monitoring utilization of all available FM controllers.
+
+ In order to obtain FM controllers utilization the following sequence
+ should be used:
+ -# FM_CtrlMonStart()
+ -# FM_CtrlMonStop()
+ -# FM_CtrlMonGetCounters() - issued for each FM controller
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_Init().
+*/
+#define FM_IOC_CTRL_MON_STOP \
+ _IO(FM_IOC_TYPE_BASE, FM_IOC_NUM(16))
+
+/**
+ @Function FM_IOC_CTRL_MON_GET_COUNTERS
+
+ @Description Obtain FM controller utilization parameters.
+
+ In order to obtain FM controllers utilization the following sequence
+ should be used:
+ -# FM_CtrlMonStart()
+ -# FM_CtrlMonStop()
+ -# FM_CtrlMonGetCounters() - issued for each FM controller
+
+ @Param[in] ioc_fm_ctrl_mon_counters_params_t
+ A structure holding the required parameters.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_Init().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_IOC_CTRL_MON_GET_COUNTERS_COMPAT \
+ _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(17), ioc_compat_fm_ctrl_mon_counters_params_t)
+#endif
+#define FM_IOC_CTRL_MON_GET_COUNTERS \
+ _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(17), ioc_fm_ctrl_mon_counters_params_t)
+
+/** @} */ /* end of lnx_ioctl_FM_runtime_control_grp group */
+/** @} */ /* end of lnx_ioctl_FM_lib_grp group */
+/** @} */ /* end of lnx_ioctl_FM_grp */
+
+#define FMD_API_VERSION_MAJOR 21
+#define FMD_API_VERSION_MINOR 1
+#define FMD_API_VERSION_RESPIN 0
+
+#endif /* __FM_EXT_H */
diff --git a/drivers/net/dpaa/fmlib/fm_lib.c b/drivers/net/dpaa/fmlib/fm_lib.c
new file mode 100644
index 000000000..46d4bb766
--- /dev/null
+++ b/drivers/net/dpaa/fmlib/fm_lib.c
@@ -0,0 +1,557 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright 2008-2012 Freescale Semiconductor Inc.
+ * Copyright 2017-2020 NXP
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <fcntl.h>
+#include <errno.h>
+#include <unistd.h>
+#include <termios.h>
+#include <sys/ioctl.h>
+#include <stdbool.h>
+#include <rte_common.h>
+
+#include "fm_ext.h"
+#include "fm_pcd_ext.h"
+#include "fm_port_ext.h"
+#include <dpaa_ethdev.h>
+
+#define DEV_TO_ID(p) \
+ do { \
+ t_Device *p_Dev = (t_Device *)p; \
+ p = UINT_TO_PTR(p_Dev->id); \
+ } while (0)
+
+/* Major and minor are in sync with FMD, respin is for fmlib identification */
+#define FM_LIB_VERSION_MAJOR 21
+#define FM_LIB_VERSION_MINOR 1
+#define FM_LIB_VERSION_RESPIN 0
+
+#if (FMD_API_VERSION_MAJOR != FM_LIB_VERSION_MAJOR) || \
+ (FMD_API_VERSION_MINOR != FM_LIB_VERSION_MINOR)
+#warning FMD and FMLIB version mismatch
+#endif
+
+uint32_t FM_GetApiVersion(t_Handle h_Fm, ioc_fm_api_version_t *p_version);
+
+t_Handle FM_Open(uint8_t id)
+{
+ t_Device *p_Dev;
+ int fd;
+ char devName[20];
+ static bool called;
+ ioc_fm_api_version_t ver;
+
+ _fml_dbg("Calling...\n");
+
+ p_Dev = (t_Device *)malloc(sizeof(t_Device));
+ if (!p_Dev)
+ return NULL;
+
+ memset(devName, 0, 20);
+ sprintf(devName, "%s%s%d", "/dev/", DEV_FM_NAME, id);
+ fd = open(devName, O_RDWR);
+ if (fd < 0) {
+ free(p_Dev);
+ return NULL;
+ }
+
+ p_Dev->id = id;
+ p_Dev->fd = fd;
+ if (!called) {
+ called = true;
+ FM_GetApiVersion((t_Handle)p_Dev, &ver);
+
+ if (FMD_API_VERSION_MAJOR != ver.version.major ||
+ FMD_API_VERSION_MINOR != ver.version.minor ||
+ FMD_API_VERSION_RESPIN != ver.version.respin) {
+ DPAA_PMD_WARN("Compiled against FMD API ver %u.%u.%u",
+ FMD_API_VERSION_MAJOR,
+ FMD_API_VERSION_MINOR, FMD_API_VERSION_RESPIN);
+ DPAA_PMD_WARN("Running with FMD API ver %u.%u.%u",
+ ver.version.major, ver.version.minor,
+ ver.version.respin);
+ }
+ }
+ _fml_dbg("Finishing.\n");
+
+ return (t_Handle)p_Dev;
+}
+
+void FM_Close(t_Handle h_Fm)
+{
+ t_Device *p_Dev = (t_Device *)h_Fm;
+
+ _fml_dbg("Calling...\n");
+
+ close(p_Dev->fd);
+ free(p_Dev);
+
+ _fml_dbg("Finishing.\n");
+}
+
+uint32_t FM_GetApiVersion(t_Handle h_Fm, ioc_fm_api_version_t *p_version)
+{
+ t_Device *p_Dev = (t_Device *)h_Fm;
+ int ret;
+
+ _fml_dbg("Calling...\n");
+
+ ret = ioctl(p_Dev->fd, FM_IOC_GET_API_VERSION, p_version);
+ if (ret) {
+ DPAA_PMD_ERR("cannot get API version, error %i (%s)\n",
+ errno, strerror(errno));
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+ }
+ _fml_dbg("Finishing.\n");
+
+ return E_OK;
+}
+
+t_Handle FM_PCD_Open(t_FmPcdParams *p_FmPcdParams)
+{
+ t_Device *p_Dev;
+ int fd;
+ char devName[20];
+
+ _fml_dbg("Calling...\n");
+
+ p_Dev = (t_Device *)malloc(sizeof(t_Device));
+ if (!p_Dev)
+ return NULL;
+
+ memset(devName, 0, 20);
+ sprintf(devName, "%s%s%u-pcd", "/dev/", DEV_FM_NAME,
+ (uint32_t)((t_Device *)p_FmPcdParams->h_Fm)->id);
+ fd = open(devName, O_RDWR);
+ if (fd < 0) {
+ free(p_Dev);
+ return NULL;
+ }
+
+ p_Dev->id = ((t_Device *)p_FmPcdParams->h_Fm)->id;
+ p_Dev->fd = fd;
+ p_Dev->owners = 0;
+
+ _fml_dbg("Finishing.\n");
+
+ return (t_Handle)p_Dev;
+}
+
+void FM_PCD_Close(t_Handle h_FmPcd)
+{
+ t_Device *p_Dev = (t_Device *)h_FmPcd;
+
+ _fml_dbg("Calling...\n");
+
+ close(p_Dev->fd);
+
+ if (p_Dev->owners) {
+ printf(
+ "\nTrying to delete a previously created pcd handler(owners:%u)!!\n",
+ p_Dev->owners);
+ return;
+ }
+
+ free(p_Dev);
+
+ _fml_dbg("Finishing.\n");
+}
+
+uint32_t FM_PCD_Enable(t_Handle h_FmPcd)
+{
+ t_Device *p_Dev = (t_Device *)h_FmPcd;
+
+ _fml_dbg("Calling...\n");
+
+ if (ioctl(p_Dev->fd, FM_PCD_IOC_ENABLE))
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+
+ _fml_dbg("Finishing.\n");
+
+ return E_OK;
+}
+
+uint32_t FM_PCD_Disable(t_Handle h_FmPcd)
+{
+ t_Device *p_Dev = (t_Device *)h_FmPcd;
+
+ _fml_dbg("Calling...\n");
+
+ if (ioctl(p_Dev->fd, FM_PCD_IOC_DISABLE))
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+
+ _fml_dbg("Finishing.\n");
+
+ return E_OK;
+}
+
+t_Handle FM_PCD_NetEnvCharacteristicsSet(t_Handle h_FmPcd,
+ ioc_fm_pcd_net_env_params_t *params)
+{
+ t_Device *p_PcdDev = (t_Device *)h_FmPcd;
+ t_Device *p_Dev = NULL;
+
+ _fml_dbg("Calling...\n");
+
+ params->id = NULL;
+
+ if (ioctl(p_PcdDev->fd, FM_PCD_IOC_NET_ENV_CHARACTERISTICS_SET, params))
+ return NULL;
+
+ p_Dev = (t_Device *)malloc(sizeof(t_Device));
+ if (!p_Dev)
+ return NULL;
+
+ memset(p_Dev, 0, sizeof(t_Device));
+ p_Dev->h_UserPriv = (t_Handle)p_PcdDev;
+ p_PcdDev->owners++;
+ p_Dev->id = PTR_TO_UINT(params->id);
+
+ _fml_dbg("Finishing.\n");
+
+ return (t_Handle)p_Dev;
+}
+
+uint32_t FM_PCD_NetEnvCharacteristicsDelete(t_Handle h_NetEnv)
+{
+ t_Device *p_Dev = (t_Device *)h_NetEnv;
+ t_Device *p_PcdDev = NULL;
+ ioc_fm_obj_t id;
+
+ _fml_dbg("Calling...\n");
+
+ p_PcdDev = (t_Device *)p_Dev->h_UserPriv;
+ id.obj = UINT_TO_PTR(p_Dev->id);
+
+ if (ioctl(p_PcdDev->fd, FM_PCD_IOC_NET_ENV_CHARACTERISTICS_DELETE, &id))
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+
+ p_PcdDev->owners--;
+ free(p_Dev);
+
+ _fml_dbg("Finishing.\n");
+
+ return E_OK;
+}
+
+t_Handle FM_PCD_KgSchemeSet(t_Handle h_FmPcd,
+ ioc_fm_pcd_kg_scheme_params_t *params)
+{
+ t_Device *p_PcdDev = (t_Device *)h_FmPcd;
+ t_Device *p_Dev = NULL;
+ int ret;
+
+ _fml_dbg("Calling...\n");
+
+ params->id = NULL;
+
+ if (params->param.modify) {
+ if (params->param.scm_id.scheme_id)
+ DEV_TO_ID(params->param.scm_id.scheme_id);
+ else
+ return NULL;
+ }
+
+ /* correct h_NetEnv param from scheme */
+ if (params->param.net_env_params.net_env_id)
+ DEV_TO_ID(params->param.net_env_params.net_env_id);
+
+ /* correct next engine params handlers: cc*/
+ if (params->param.next_engine == e_IOC_FM_PCD_CC &&
+ params->param.kg_next_engine_params.cc.tree_id)
+ DEV_TO_ID(params->param.kg_next_engine_params.cc.tree_id);
+
+ ret = ioctl(p_PcdDev->fd, FM_PCD_IOC_KG_SCHEME_SET, params);
+ if (ret) {
+ DPAA_PMD_ERR(" cannot set kg scheme, error %i (%s)\n",
+ errno, strerror(errno));
+ return NULL;
+ }
+
+ p_Dev = (t_Device *)malloc(sizeof(t_Device));
+ if (!p_Dev)
+ return NULL;
+
+ memset(p_Dev, 0, sizeof(t_Device));
+ p_Dev->h_UserPriv = (t_Handle)p_PcdDev;
+ /* increase owners only if a new scheme is created */
+ if (params->param.modify == false)
+ p_PcdDev->owners++;
+ p_Dev->id = PTR_TO_UINT(params->id);
+
+ _fml_dbg("Finishing.\n");
+
+ return (t_Handle)p_Dev;
+}
+
+uint32_t FM_PCD_KgSchemeDelete(t_Handle h_Scheme)
+{
+ t_Device *p_Dev = (t_Device *)h_Scheme;
+ t_Device *p_PcdDev = NULL;
+ ioc_fm_obj_t id;
+
+ _fml_dbg("Calling...\n");
+
+ p_PcdDev = (t_Device *)p_Dev->h_UserPriv;
+ id.obj = UINT_TO_PTR(p_Dev->id);
+
+ if (ioctl(p_PcdDev->fd, FM_PCD_IOC_KG_SCHEME_DELETE, &id)) {
+ DPAA_PMD_WARN("cannot delete kg scheme, error %i (%s)\n",
+ errno, strerror(errno));
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+ }
+
+ p_PcdDev->owners--;
+ free(p_Dev);
+
+ _fml_dbg("Finishing.\n");
+
+ return E_OK;
+}
+
+#ifdef FM_CAPWAP_SUPPORT
+#error CAPWAP feature not supported
+#endif
+
+typedef struct {
+ e_FmPortType portType; /**< Port type */
+ uint8_t portId; /**< Port Id - relative to type */
+} t_FmPort;
+
+t_Handle FM_PORT_Open(t_FmPortParams *p_FmPortParams)
+{
+ t_Device *p_Dev;
+ int fd;
+ char devName[30];
+ t_FmPort *p_FmPort;
+
+ _fml_dbg("Calling...\n");
+
+ p_Dev = (t_Device *)malloc(sizeof(t_Device));
+ if (!p_Dev)
+ return NULL;
+
+ memset(p_Dev, 0, sizeof(t_Device));
+
+ p_FmPort = (t_FmPort *)malloc(sizeof(t_FmPort));
+ if (!p_FmPort) {
+ free(p_Dev);
+ return NULL;
+ }
+ memset(p_FmPort, 0, sizeof(t_FmPort));
+ memset(devName, 0, sizeof(devName));
+ switch (p_FmPortParams->portType) {
+ case e_FM_PORT_TYPE_OH_OFFLINE_PARSING:
+ sprintf(devName, "%s%s%u-port-oh%d", "/dev/", DEV_FM_NAME,
+ (uint32_t)((t_Device *)p_FmPortParams->h_Fm)->id,
+ p_FmPortParams->portId);
+ break;
+ case e_FM_PORT_TYPE_RX:
+ sprintf(devName, "%s%s%u-port-rx%d", "/dev/", DEV_FM_NAME,
+ (uint32_t)((t_Device *)p_FmPortParams->h_Fm)->id,
+ p_FmPortParams->portId);
+ break;
+ case e_FM_PORT_TYPE_RX_10G:
+ sprintf(devName, "%s%s%u-port-rx%d", "/dev/", DEV_FM_NAME,
+ (uint32_t)((t_Device *)p_FmPortParams->h_Fm)->id,
+ FM_MAX_NUM_OF_1G_RX_PORTS + p_FmPortParams->portId);
+ break;
+ case e_FM_PORT_TYPE_TX:
+ sprintf(devName, "%s%s%u-port-tx%d", "/dev/", DEV_FM_NAME,
+ (uint32_t)((t_Device *)p_FmPortParams->h_Fm)->id,
+ p_FmPortParams->portId);
+ break;
+ case e_FM_PORT_TYPE_TX_10G:
+ sprintf(devName, "%s%s%u-port-tx%d", "/dev/", DEV_FM_NAME,
+ (uint32_t)((t_Device *)p_FmPortParams->h_Fm)->id,
+ FM_MAX_NUM_OF_1G_TX_PORTS + p_FmPortParams->portId);
+ break;
+ default:
+ free(p_FmPort);
+ free(p_Dev);
+ return NULL;
+ }
+
+ fd = open(devName, O_RDWR);
+ if (fd < 0) {
+ free(p_FmPort);
+ free(p_Dev);
+ return NULL;
+ }
+
+ p_FmPort->portType = p_FmPortParams->portType;
+ p_FmPort->portId = p_FmPortParams->portId;
+ p_Dev->id = p_FmPortParams->portId;
+ p_Dev->fd = fd;
+ p_Dev->h_UserPriv = (t_Handle)p_FmPort;
+
+ _fml_dbg("Finishing.\n");
+
+ return (t_Handle)p_Dev;
+}
+
+void FM_PORT_Close(t_Handle h_FmPort)
+{
+ t_Device *p_Dev = (t_Device *)h_FmPort;
+
+ _fml_dbg("Calling...\n");
+
+ close(p_Dev->fd);
+ if (p_Dev->h_UserPriv)
+ free(p_Dev->h_UserPriv);
+ free(p_Dev);
+
+ _fml_dbg("Finishing.\n");
+}
+
+uint32_t FM_PORT_Disable(t_Handle h_FmPort)
+{
+ t_Device *p_Dev = (t_Device *)h_FmPort;
+
+ _fml_dbg("Calling...\n");
+
+ if (ioctl(p_Dev->fd, FM_PORT_IOC_DISABLE))
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+
+ _fml_dbg("Finishing.\n");
+
+ return E_OK;
+}
+
+uint32_t FM_PORT_Enable(t_Handle h_FmPort)
+{
+ t_Device *p_Dev = (t_Device *)h_FmPort;
+
+ _fml_dbg("Calling...\n");
+
+ if (ioctl(p_Dev->fd, FM_PORT_IOC_ENABLE))
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+
+ _fml_dbg("Finishing.\n");
+
+ return E_OK;
+}
+
+uint32_t FM_PORT_SetPCD(t_Handle h_FmPort,
+ ioc_fm_port_pcd_params_t *params)
+{
+ t_Device *p_Dev = (t_Device *)h_FmPort;
+
+ _fml_dbg("Calling...\n");
+
+ /* correct h_NetEnv param from t_FmPortPcdParams */
+ DEV_TO_ID(params->net_env_id);
+
+ /* correct pcd structures according to what support was set */
+ if (params->pcd_support == e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_KG_AND_CC ||
+ params->pcd_support == e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_KG_AND_CC_AND_PLCR ||
+ params->pcd_support == e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_CC) {
+ if (params->p_cc_params && params->p_cc_params->cc_tree_id)
+ DEV_TO_ID(params->p_cc_params->cc_tree_id);
+ else
+			DPAA_PMD_WARN("Coarse Classification not set!");
+ }
+
+ if (params->pcd_support == e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_KG ||
+ params->pcd_support == e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_KG_AND_CC ||
+ params->pcd_support == e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_KG_AND_CC_AND_PLCR ||
+ params->pcd_support == e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_KG_AND_PLCR){
+ if (params->p_kg_params) {
+ uint32_t i;
+
+ for (i = 0; i < params->p_kg_params->num_of_schemes; i++)
+ if (params->p_kg_params->scheme_ids[i])
+ DEV_TO_ID(params->p_kg_params->scheme_ids[i]);
+ else
+ DPAA_PMD_WARN("Scheme:%u not set!!", i);
+
+			if (params->p_kg_params->direct_scheme)
+ DEV_TO_ID(params->p_kg_params->direct_scheme_id);
+ } else {
+ DPAA_PMD_WARN("KeyGen not set !");
+ }
+ }
+
+ if (params->pcd_support == e_IOC_FM_PORT_PCD_SUPPORT_PLCR_ONLY ||
+ params->pcd_support == e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_PLCR ||
+ params->pcd_support == e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_KG_AND_CC_AND_PLCR ||
+ params->pcd_support == e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_KG_AND_PLCR) {
+ if (params->p_plcr_params) {
+ if (params->p_plcr_params->plcr_profile_id)
+ DEV_TO_ID(params->p_plcr_params->plcr_profile_id);
+ else
+ DPAA_PMD_WARN("Policer not set !");
+ }
+ }
+
+ if (params->p_ip_reassembly_manip)
+ DEV_TO_ID(params->p_ip_reassembly_manip);
+
+#if (DPAA_VERSION >= 11)
+ if (params->p_capwap_reassembly_manip)
+ DEV_TO_ID(params->p_capwap_reassembly_manip);
+#endif
+
+ if (ioctl(p_Dev->fd, FM_PORT_IOC_SET_PCD, params))
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+
+ _fml_dbg("Finishing.\n");
+
+ return E_OK;
+}
+
+uint32_t FM_PORT_DeletePCD(t_Handle h_FmPort)
+{
+ t_Device *p_Dev = (t_Device *)h_FmPort;
+
+ _fml_dbg("Calling...\n");
+
+ if (ioctl(p_Dev->fd, FM_PORT_IOC_DELETE_PCD))
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+
+ _fml_dbg("Finishing.\n");
+
+ return E_OK;
+}
+
+t_Handle CreateDevice(t_Handle h_UserPriv, t_Handle h_DevId)
+{
+ t_Device *p_UserPrivDev = (t_Device *)h_UserPriv;
+ t_Device *p_Dev = NULL;
+
+ _fml_dbg("Calling...\n");
+
+ p_Dev = (t_Device *)malloc(sizeof(t_Device));
+ if (!p_Dev)
+ return NULL;
+
+ memset(p_Dev, 0, sizeof(t_Device));
+ p_Dev->h_UserPriv = h_UserPriv;
+ p_UserPrivDev->owners++;
+ p_Dev->id = PTR_TO_UINT(h_DevId);
+
+ _fml_dbg("Finishing.\n");
+
+ return (t_Handle)p_Dev;
+}
+
+t_Handle GetDeviceId(t_Handle h_Dev)
+{
+ t_Device *p_Dev = (t_Device *)h_Dev;
+
+ return (t_Handle)p_Dev->id;
+}
+
+#if defined FMAN_V3H
+void Platform_is_FMAN_V3H(void)
+{
+}
+#elif defined FMAN_V3L
+void Platform_is_FMAN_V3L(void)
+{
+}
+#endif
diff --git a/drivers/net/dpaa/fmlib/fm_pcd_ext.h b/drivers/net/dpaa/fmlib/fm_pcd_ext.h
new file mode 100644
index 000000000..40f7094fe
--- /dev/null
+++ b/drivers/net/dpaa/fmlib/fm_pcd_ext.h
@@ -0,0 +1,5164 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright 2008-2012 Freescale Semiconductor Inc.
+ * Copyright 2017-2020 NXP
+ */
+
+#ifndef __FM_PCD_EXT_H
+#define __FM_PCD_EXT_H
+
+#include "ncsw_ext.h"
+#include "net_ext.h"
+#include "fm_ext.h"
+
+/**
+ @Description FM PCD ...
+ @Group lnx_ioctl_FM_grp Frame Manager Linux IOCTL API
+ @Description Frame Manager Linux ioctls definitions and enums
+ @{
+*/
+
+/**
+ @Group lnx_ioctl_FM_PCD_grp FM PCD
+ @Description Frame Manager PCD API functions, definitions and enums
+
+ The FM PCD module is responsible for the initialization of all
+ global classifying FM modules. This includes the parser general and
+ common registers, the key generator global and common registers,
+ and the policer global and common registers.
+ In addition, the FM PCD SW module will initialize all required
+ key generator schemes, coarse classification flows, and policer
+ profiles. When an FM module is configured to work with one of these
+ entities, it will register to it using the FM PORT API. The PCD
+ module will manage the PCD resources - i.e. resource management of
+ KeyGen schemes, etc.
+
+ @{
+*/
+
+/**
+ @Collection General PCD defines
+*/
+#define IOC_FM_PCD_MAX_NUM_OF_PRIVATE_HDRS 2
+/**< Number of units/headers saved for user */
+
+#define IOC_FM_PCD_PRS_NUM_OF_HDRS 16
+/**< Number of headers supported by HW parser */
+#define IOC_FM_PCD_MAX_NUM_OF_DISTINCTION_UNITS (32 - IOC_FM_PCD_MAX_NUM_OF_PRIVATE_HDRS)
+/**< Number of distinction units is limited by register size (32 bits) minus reserved bits for private headers. */
+#define IOC_FM_PCD_MAX_NUM_OF_INTERCHANGEABLE_HDRS 4
+/**< Maximum number of interchangeable headers in a distinction unit */
+#define IOC_FM_PCD_KG_NUM_OF_GENERIC_REGS 8
+/**< Total number of generic KeyGen registers */
+#define IOC_FM_PCD_KG_MAX_NUM_OF_EXTRACTS_PER_KEY 35
+/**< Max number allowed in any configuration; for HW implementation reasons,
+ * in most cases fewer than this will be allowed; the driver returns an
+ * initialization error if the resource is unavailable.
+ */
+#define IOC_FM_PCD_KG_NUM_OF_EXTRACT_MASKS 4
+ /**< Total number of masks allowed on KeyGen extractions. */
+#define IOC_FM_PCD_KG_NUM_OF_DEFAULT_GROUPS 16
+ /**< Number of default value logical groups */
+#define IOC_FM_PCD_PRS_NUM_OF_LABELS 32
+ /**< Maximum number of SW parser labels */
+#define IOC_FM_PCD_SW_PRS_SIZE 0x00000800
+/**< Total size of SW parser area */
+
+#define IOC_FM_PCD_MAX_MANIP_INSRT_TEMPLATE_SIZE 128
+/**< Maximum size of insertion template for insert manipulation */
+
+#define IOC_FM_PCD_FRM_REPLIC_MAX_NUM_OF_ENTRIES 64
+ /**< Maximum possible entries for frame replicator group */
+/* @} */
+
+#ifdef FM_CAPWAP_SUPPORT
+#error "FM_CAPWAP_SUPPORT not implemented!"
+#endif
+
+/**
+ @Group lnx_ioctl_FM_PCD_init_grp FM PCD Initialization Unit
+
+ @Description Frame Manager PCD Initialization Unit API
+
+ @{
+*/
+
+/**
+ @Description PCD counters
+ (must match enum ioc_fm_pcd_counters defined in fm_pcd_ext.h)
+*/
+typedef enum ioc_fm_pcd_counters {
+ e_IOC_FM_PCD_KG_COUNTERS_TOTAL, /**< KeyGen counter */
+ e_IOC_FM_PCD_PLCR_COUNTERS_RED,
+ /**< Policer counter - counts the total number of RED packets that exit the Policer. */
+ e_IOC_FM_PCD_PLCR_COUNTERS_YELLOW,
+ /**< Policer counter - counts the total number of YELLOW packets that exit the Policer. */
+ e_IOC_FM_PCD_PLCR_COUNTERS_RECOLORED_TO_RED,
+ /**< Policer counter - counts the number of packets that changed color to RED by the Policer;
+ This is a subset of e_IOC_FM_PCD_PLCR_COUNTERS_RED packet count, indicating active color changes. */
+ e_IOC_FM_PCD_PLCR_COUNTERS_RECOLORED_TO_YELLOW,
+ /**< Policer counter - counts the number of packets that changed color to YELLOW by the Policer;
+ This is a subset of e_IOC_FM_PCD_PLCR_COUNTERS_YELLOW packet count, indicating active color changes. */
+ e_IOC_FM_PCD_PLCR_COUNTERS_TOTAL,
+ /**< Policer counter - counts the total number of packets passed in the Policer. */
+ e_IOC_FM_PCD_PLCR_COUNTERS_LENGTH_MISMATCH,
+ /**< Policer counter - counts the number of packets with length mismatch. */
+ e_IOC_FM_PCD_PRS_COUNTERS_PARSE_DISPATCH,
+ /**< Parser counter - counts the number of times the parser block is dispatched. */
+ e_IOC_FM_PCD_PRS_COUNTERS_L2_PARSE_RESULT_RETURNED,
+ /**< Parser counter - counts the number of times L2 parse result is returned (including errors). */
+ e_IOC_FM_PCD_PRS_COUNTERS_L3_PARSE_RESULT_RETURNED,
+ /**< Parser counter - counts the number of times L3 parse result is returned (including errors). */
+ e_IOC_FM_PCD_PRS_COUNTERS_L4_PARSE_RESULT_RETURNED,
+ /**< Parser counter - counts the number of times L4 parse result is returned (including errors). */
+ e_IOC_FM_PCD_PRS_COUNTERS_SHIM_PARSE_RESULT_RETURNED,
+ /**< Parser counter - counts the number of times SHIM parse result is returned (including errors). */
+ e_IOC_FM_PCD_PRS_COUNTERS_L2_PARSE_RESULT_RETURNED_WITH_ERR,
+ /**< Parser counter - counts the number of times L2 parse result is returned with errors. */
+ e_IOC_FM_PCD_PRS_COUNTERS_L3_PARSE_RESULT_RETURNED_WITH_ERR,
+ /**< Parser counter - counts the number of times L3 parse result is returned with errors. */
+ e_IOC_FM_PCD_PRS_COUNTERS_L4_PARSE_RESULT_RETURNED_WITH_ERR,
+ /**< Parser counter - counts the number of times L4 parse result is returned with errors. */
+ e_IOC_FM_PCD_PRS_COUNTERS_SHIM_PARSE_RESULT_RETURNED_WITH_ERR,
+ /**< Parser counter - counts the number of times SHIM parse result is returned with errors. */
+ e_IOC_FM_PCD_PRS_COUNTERS_SOFT_PRS_CYCLES,
+ /**< Parser counter - counts the number of cycles spent executing soft parser instruction (including stall cycles). */
+ e_IOC_FM_PCD_PRS_COUNTERS_SOFT_PRS_STALL_CYCLES,
+ /**< Parser counter - counts the number of cycles stalled waiting for parser internal memory reads while executing soft parser instruction. */
+ e_IOC_FM_PCD_PRS_COUNTERS_HARD_PRS_CYCLE_INCL_STALL_CYCLES,
+ /**< Parser counter - counts the number of cycles spent executing hard parser (including stall cycles). */
+ e_IOC_FM_PCD_PRS_COUNTERS_MURAM_READ_CYCLES,
+ /**< MURAM counter - counts the number of cycles while performing FMan Memory read. */
+ e_IOC_FM_PCD_PRS_COUNTERS_MURAM_READ_STALL_CYCLES,
+ /**< MURAM counter - counts the number of cycles stalled while performing FMan Memory read. */
+ e_IOC_FM_PCD_PRS_COUNTERS_MURAM_WRITE_CYCLES,
+ /**< MURAM counter - counts the number of cycles while performing FMan Memory write. */
+ e_IOC_FM_PCD_PRS_COUNTERS_MURAM_WRITE_STALL_CYCLES,
+ /**< MURAM counter - counts the number of cycles stalled while performing FMan Memory write. */
+ e_IOC_FM_PCD_PRS_COUNTERS_FPM_COMMAND_STALL_CYCLES
+ /**< FPM counter - counts the number of cycles stalled while performing a FPM Command. */
+} ioc_fm_pcd_counters;
+
+/**
+ @Description PCD interrupts
+ (must match enum e_FmPcdExceptions defined in fm_pcd_ext.h)
+*/
+typedef enum ioc_fm_pcd_exceptions {
+ e_IOC_FM_PCD_KG_EXCEPTION_DOUBLE_ECC,
+ /**< KeyGen double-bit ECC error is detected on internal memory read access. */
+ e_IOC_FM_PCD_KG_EXCEPTION_KEYSIZE_OVERFLOW,
+ /**< KeyGen scheme configuration error indicating a key size larger than 56 bytes. */
+ e_IOC_FM_PCD_PLCR_EXCEPTION_DOUBLE_ECC,
+ /**< Policer double-bit ECC error has been detected on PRAM read access. */
+ e_IOC_FM_PCD_PLCR_EXCEPTION_INIT_ENTRY_ERROR,
+ /**< Policer access to a non-initialized profile has been detected. */
+ e_IOC_FM_PCD_PLCR_EXCEPTION_PRAM_SELF_INIT_COMPLETE,
+ /**< Policer RAM self-initialization complete */
+ e_IOC_FM_PCD_PLCR_EXCEPTION_ATOMIC_ACTION_COMPLETE,
+ /**< Policer atomic action complete */
+ e_IOC_FM_PCD_PRS_EXCEPTION_DOUBLE_ECC,
+ /**< Parser double-bit ECC error */
+ e_IOC_FM_PCD_PRS_EXCEPTION_SINGLE_ECC
+ /**< Parser single-bit ECC error */
+} ioc_fm_pcd_exceptions;
+
+/** @} */ /* end of lnx_ioctl_FM_PCD_init_grp group */
+
+/**
+ @Group lnx_ioctl_FM_PCD_Runtime_grp FM PCD Runtime Unit
+
+ @Description Frame Manager PCD Runtime Unit
+
+The runtime control allows creation of PCD infrastructure modules
+such as Network Environment Characteristics, Classification Plan
+Groups and Coarse Classification Trees.
+It also allows on-the-fly initialization, modification and removal
+of PCD modules such as KeyGen schemes, coarse classification nodes
+and Policer profiles.
+
+In order to explain the programming model of the PCD driver interface,
+a few terms must first be defined; they are used throughout this section.
+- Distinction Header - One of the 16 protocols supported by the FM parser,
+ or one of the SHIM headers (1 or 2). May be a header with a special
+ option (see below).
+- Interchangeable Headers Group - a group of headers treated as
+  equivalent: matching any one of them matches the group. For example,
+  if in a specific context the user chooses to treat IPv4 and IPv6 in
+  the same way, they may create an Interchangeable Headers Group
+  consisting of these two headers.
+- A Distinction Unit - a Distinction Header or an Interchangeable Headers
+ Group.
+- Header with special option - applies to Ethernet, MPLS, VLAN, IPv4 and
+ IPv6, includes multicast, broadcast and other protocol specific options.
+ In terms of hardware it relates to the options available in the classification
+ plan.
+- Network Environment Characteristics - a set of Distinction Units that define
+ the total recognizable header selection for a certain environment. This is
+ NOT the list of all headers that will ever appear in a flow, but rather
+ everything that needs distinction in a flow, where distinction is made by KeyGen
+ schemes and coarse classification action descriptors.
+
+The PCD runtime modules initialization is done in stages. The first stage after
+initializing the PCD module itself is to establish a Network Flows Environment
+Definition. The application may choose to establish one or more such environments.
+Later, when needed, the application will have to state, for some of its modules,
+to which single environment it belongs.
+
+ @{
+*/
+
+/**
+ @Description structure for FM counters
+*/
+typedef struct ioc_fm_pcd_counters_params_t {
+ ioc_fm_pcd_counters cnt; /**< The requested counter */
+ uint32_t val;/**< The requested value to get/set from/into the counter */
+} ioc_fm_pcd_counters_params_t;
+
+/**
+ @Description structure for FM exception definitions
+*/
+typedef struct ioc_fm_pcd_exception_params_t {
+ ioc_fm_pcd_exceptions exception; /**< The requested exception */
+ bool enable; /**< TRUE to enable interrupt, FALSE to mask it. */
+} ioc_fm_pcd_exception_params_t;
+
+/**
+ @Description A structure for SW parser labels
+ (must be identical to struct t_FmPcdPrsLabelParams defined in fm_pcd_ext.h)
+ */
+typedef struct ioc_fm_pcd_prs_label_params_t {
+ uint32_t instruction_offset;/**< SW parser label instruction offset (2 bytes
+ resolution), relative to Parser RAM. */
+ ioc_net_header_type hdr;/**< The existence of this header will invoke
+ the SW parser code. */
+ uint8_t index_per_hdr; /**< Normally 0, if more than one SW parser
+ attachments for the same header, use this
+ index to distinguish between them. */
+} ioc_fm_pcd_prs_label_params_t;
+
+/**
+ @Description A structure for SW parser
+ (must be identical to struct t_FmPcdPrsSwParams defined in fm_pcd_ext.h)
+ */
+typedef struct ioc_fm_pcd_prs_sw_params_t {
+ bool override; /**< FALSE to invoke a check that nothing else
+ was loaded to this address, including
+ internal patches.
+ TRUE to override any existing code.*/
+ uint32_t size; /**< SW parser code size */
+ uint16_t base; /**< SW parser base (in instruction counts!
+ must be larger than 0x20)*/
+ uint8_t *p_code; /**< SW parser code */
+ uint32_t sw_prs_data_params[IOC_FM_PCD_PRS_NUM_OF_HDRS];
+ /**< SW parser data (parameters) */
+ uint8_t num_of_labels; /**< Number of labels for SW parser. */
+ ioc_fm_pcd_prs_label_params_t labels_table[IOC_FM_PCD_PRS_NUM_OF_LABELS];
+ /**< SW parser labels table, containing num_of_labels entries */
+} ioc_fm_pcd_prs_sw_params_t;
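To illustrate how an application might populate this structure before issuing FM_PCD_IOC_PRS_LOAD_SW, here is a minimal sketch. The `prs_sw_params` mirror struct (labels omitted) and the `prs_sw_params_valid` helper are illustrative stand-ins for this example only, not part of the header:

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for ioc_fm_pcd_prs_sw_params_t (labels omitted);
 * real applications use the structure from this header instead. */
struct prs_sw_params {
	int      override;  /* 0: fail if code already loaded; non-zero: overwrite */
	uint32_t size;      /* SW parser code size in bytes */
	uint16_t base;      /* base in instruction counts; must be > 0x20 */
	uint8_t *p_code;    /* pointer to the SW parser code image */
};

/* Check the documented constraints before handing the struct to the
 * driver: a non-empty code image and a base above 0x20. */
static int prs_sw_params_valid(const struct prs_sw_params *p)
{
	return p->base > 0x20 && p->size > 0 && p->p_code != NULL;
}
```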
+
+/**
+ @Description A structure for setting a KeyGen default value
+ */
+typedef struct ioc_fm_pcd_kg_dflt_value_params_t {
+ uint8_t valueId;/**< 0,1 - one of 2 global default values */
+ uint32_t value; /**< The requested default value */
+} ioc_fm_pcd_kg_dflt_value_params_t;
+
+/**
+ @Function FM_PCD_Enable
+
+ @Description This routine should be called after PCD is initialized for enabling all
+ PCD engines according to their existing configuration.
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only when PCD is disabled.
+*/
+#define FM_PCD_IOC_ENABLE _IO(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(1))
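As a sketch of the usage pattern: configuration ioctls are issued while PCD is disabled, and FM_PCD_IOC_ENABLE is called last. The device path and the fallback request code below are assumptions for illustration only; real code includes this header and uses the actual device node:

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

#ifndef FM_PCD_IOC_ENABLE
/* Placeholder request code so the sketch compiles stand-alone; real
 * code gets the definition from this header. */
#define FM_PCD_IOC_ENABLE _IO('t', 1)
#endif

/* Open the FM PCD device node (path is an assumption), perform any
 * configuration while the PCD is still disabled, then enable it. */
static int pcd_bring_up(const char *dev_path)
{
	int fd = open(dev_path, O_RDWR);

	if (fd < 0)
		return -1;
	/* ... FM_PCD_IOC_PRS_LOAD_SW, FM_PCD_IOC_SET_EXCEPTION, ... */
	if (ioctl(fd, FM_PCD_IOC_ENABLE) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}
```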
+
+/**
+ @Function FM_PCD_Disable
+
+ @Description This routine may be called when PCD is enabled in order to
+ disable all PCD engines. It may be called
+ only when none of the ports in the system are using the PCD.
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only when PCD is enabled.
+*/
+#define FM_PCD_IOC_DISABLE _IO(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(2))
+
+/**
+ @Function FM_PCD_PrsLoadSw
+
+ @Description This routine loads software parser code into the parser
+ RAM, allowing user-supplied parsing logic to extend the
+ hard-coded parser (see ioc_fm_pcd_prs_sw_params_t for the
+ code image layout).
+
+ @Param[in] ioc_fm_pcd_prs_sw_params_t
+ A pointer to the image of the software parser code.
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only when PCD is disabled.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_PRS_LOAD_SW_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(3), ioc_compat_fm_pcd_prs_sw_params_t)
+#endif
+#define FM_PCD_IOC_PRS_LOAD_SW _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(3), ioc_fm_pcd_prs_sw_params_t)
+
+/**
+ @Function FM_PCD_KgSetDfltValue
+
+ @Description Calling this routine sets a global default value to be used
+ by the KeyGen when the parser does not recognize a required
+ field/header.
+ By default, these values are 0.
+
+ @Param[in] ioc_fm_pcd_kg_dflt_value_params_t A pointer to a structure with the relevant parameters
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only when PCD is disabled.
+*/
+#define FM_PCD_IOC_KG_SET_DFLT_VALUE _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(6), ioc_fm_pcd_kg_dflt_value_params_t)
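A hedged sketch of issuing this ioctl: the `kg_dflt_value_params` mirror struct and the fallback request code are stand-ins for illustration; note that only value IDs 0 and 1 are valid, per the structure's documentation:

```c
#include <stdint.h>
#include <sys/ioctl.h>

/* Simplified stand-in for ioc_fm_pcd_kg_dflt_value_params_t. */
struct kg_dflt_value_params {
	uint8_t  value_id;  /* 0 or 1: selects one of the two global defaults */
	uint32_t value;     /* value the KeyGen substitutes for missing fields */
};

#ifndef FM_PCD_IOC_KG_SET_DFLT_VALUE
/* Placeholder request code for this sketch only. */
#define FM_PCD_IOC_KG_SET_DFLT_VALUE \
	_IOW('t', 6, struct kg_dflt_value_params)
#endif

/* Validate the id locally (only two global registers exist) before
 * issuing the ioctl; on a bad fd the ioctl itself fails. */
static int kg_set_default(int fd, uint8_t id, uint32_t value)
{
	struct kg_dflt_value_params p = { .value_id = id, .value = value };

	if (id > 1)
		return -1;
	return ioctl(fd, FM_PCD_IOC_KG_SET_DFLT_VALUE, &p);
}
```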
+
+/**
+ @Function FM_PCD_KgSetAdditionalDataAfterParsing
+
+ @Description Calling this routine allows the KeyGen to access data beyond
+ the point where the parser finished.
+
+ @Param[in] uint8_t payload offset; the number of bytes beyond the parser location.
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only when PCD is disabled.
+*/
+#define FM_PCD_IOC_KG_SET_ADDITIONAL_DATA_AFTER_PARSING _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(7), uint8_t)
+
+/**
+ @Function FM_PCD_SetException
+
+ @Description Calling this routine enables/disables PCD interrupts.
+
+ @Param[in] ioc_fm_pcd_exception_params_t
+ Arguments struct with exception to be enabled/disabled.
+
+ @Return 0 on success; Error code otherwise.
+*/
+#define FM_PCD_IOC_SET_EXCEPTION _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(8), ioc_fm_pcd_exception_params_t)
+
+/**
+ @Function FM_PCD_GetCounter
+
+ @Description Reads one of the FM PCD counters.
+
+ @Param[in,out] ioc_fm_pcd_counters_params_t The requested counter parameters.
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Note that it is the user's responsibility to call this routine
+ only for enabled counters; there is no indication when a
+ disabled counter is accessed.
+*/
+#define FM_PCD_IOC_GET_COUNTER _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(9), ioc_fm_pcd_counters_params_t)
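A minimal sketch of reading a counter with this read/write ioctl; the `pcd_counter_params` mirror struct and fallback request code are illustrative assumptions. As the caution above notes, the caller must know the counter is enabled:

```c
#include <stdint.h>
#include <sys/ioctl.h>

/* Simplified stand-in for ioc_fm_pcd_counters_params_t. */
struct pcd_counter_params {
	int      cnt;  /* which ioc_fm_pcd_counters value to read */
	uint32_t val;  /* filled in by the driver on success */
};

#ifndef FM_PCD_IOC_GET_COUNTER
/* Placeholder request code for this sketch only. */
#define FM_PCD_IOC_GET_COUNTER _IOWR('t', 9, struct pcd_counter_params)
#endif

/* Read one PCD counter; *out is left untouched on failure. */
static int pcd_read_counter(int fd, int cnt, uint32_t *out)
{
	struct pcd_counter_params p = { .cnt = cnt, .val = 0 };

	if (ioctl(fd, FM_PCD_IOC_GET_COUNTER, &p) < 0)
		return -1;
	*out = p.val;
	return 0;
}
```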
+
+/**
+
+ @Function FM_PCD_KgSchemeGetCounter
+
+ @Description Reads scheme packet counter.
+
+ @Param[in] h_Scheme scheme handle as returned by FM_PCD_KgSchemeSet().
+
+ @Return Counter's current value.
+
+ @Cautions Allowed only following FM_PCD_Init() & FM_PCD_KgSchemeSet().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_KG_SCHEME_GET_CNTR_COMPAT _IOR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(4), ioc_compat_fm_pcd_kg_scheme_spc_t)
+#endif
+#define FM_PCD_IOC_KG_SCHEME_GET_CNTR _IOR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(4), ioc_fm_pcd_kg_scheme_spc_t)
+
+#if 0
+TODO: unused IOCTL
+/**
+ @Function FM_PCD_ModifyCounter
+
+ @Description Writes a value to an enabled counter. Use "0" to reset the counter.
+
+ @Param[in] ioc_fm_pcd_counters_params_t - The requested counter parameters.
+
+ @Return 0 on success; Error code otherwise.
+*/
+#define FM_PCD_IOC_MODIFY_COUNTER _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(10), ioc_fm_pcd_counters_params_t)
+#define FM_PCD_IOC_SET_COUNTER FM_PCD_IOC_MODIFY_COUNTER
+#endif
+
+/**
+ @Function FM_PCD_ForceIntr
+
+ @Description Causes an interrupt event on the requested source.
+
+ @Param[in] ioc_fm_pcd_exceptions - An exception to be forced.
+
+ @Return 0 on success; error code if the exception is not enabled,
+ or is not able to create interrupt.
+*/
+#define FM_PCD_IOC_FORCE_INTR _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(11), ioc_fm_pcd_exceptions)
+
+/**
+ @Collection Definitions of coarse classification parameters as required by KeyGen
+ (when coarse classification is the next engine after this scheme).
+*/
+#define IOC_FM_PCD_MAX_NUM_OF_CC_TREES 8
+#define IOC_FM_PCD_MAX_NUM_OF_CC_GROUPS 16
+#define IOC_FM_PCD_MAX_NUM_OF_CC_UNITS 4
+#define IOC_FM_PCD_MAX_NUM_OF_KEYS 256
+#define IOC_FM_PCD_MAX_NUM_OF_FLOWS (4 * KILOBYTE)
+#define IOC_FM_PCD_MAX_SIZE_OF_KEY 56
+#define IOC_FM_PCD_MAX_NUM_OF_CC_ENTRIES_IN_GRP 16
+#define IOC_FM_PCD_LAST_KEY_INDEX 0xffff
+#define IOC_FM_PCD_MANIP_DSCP_VALUES 64
+/* @} */
+
+/**
+ @Collection A set of definitions to allow protocol
+ special option description.
+*/
+typedef uint32_t ioc_protocol_opt_t;
+ /**< A general type to define a protocol option. */
+
+typedef ioc_protocol_opt_t ioc_eth_protocol_opt_t;
+ /**< Ethernet protocol options. */
+#define IOC_ETH_BROADCAST 0x80000000 /**< Ethernet Broadcast. */
+#define IOC_ETH_MULTICAST 0x40000000 /**< Ethernet Multicast. */
+
+typedef ioc_protocol_opt_t ioc_vlan_protocol_opt_t;
+ /**< Vlan protocol options. */
+#define IOC_VLAN_STACKED 0x20000000 /**< Stacked VLAN. */
+
+typedef ioc_protocol_opt_t ioc_mpls_protocol_opt_t;
+ /**< MPLS protocol options. */
+#define IOC_MPLS_STACKED 0x10000000 /**< Stacked MPLS. */
+
+typedef ioc_protocol_opt_t ioc_ipv4_protocol_opt_t;
+ /**< IPv4 protocol options. */
+#define IOC_IPV4_BROADCAST_1 0x08000000 /**< IPv4 Broadcast. */
+#define IOC_IPV4_MULTICAST_1 0x04000000 /**< IPv4 Multicast. */
+#define IOC_IPV4_UNICAST_2 0x02000000 /**< Tunneled IPv4 - Unicast. */
+#define IOC_IPV4_MULTICAST_BROADCAST_2 0x01000000
+ /**< Tunneled IPv4 - Broadcast/Multicast. */
+
+#define IOC_IPV4_FRAG_1 0x00000008 /**< IPV4 reassembly option.
+ IPV4 Reassembly manipulation requires network
+ environment with IPV4 header and IPV4_FRAG_1 option */
+
+typedef ioc_protocol_opt_t ioc_ipv6_protocol_opt_t;
+ /**< IPv6 protocol options. */
+#define IOC_IPV6_MULTICAST_1 0x00800000 /**< IPv6 Multicast. */
+#define IOC_IPV6_UNICAST_2 0x00400000
+ /**< Tunneled IPv6 - Unicast. */
+#define IOC_IPV6_MULTICAST_2 0x00200000
+ /**< Tunneled IPv6 - Multicast. */
+
+#define IOC_IPV6_FRAG_1 0x00000004 /**< IPV6 reassembly option.
+ IPV6 Reassembly manipulation requires network
+ environment with IPV6 header and IPV6_FRAG_1 option */
+#if (DPAA_VERSION >= 11)
+typedef ioc_protocol_opt_t ioc_capwap_protocol_opt_t;
+ /**< CAPWAP protocol options. */
+#define CAPWAP_FRAG_1 0x00000008 /**< CAPWAP reassembly option.
+ CAPWAP Reassembly manipulation requires network
+ environment with CAPWAP header and CAPWAP_FRAG_1 option;
+ in case a fragment is found, the fragment-extension offset
+ may be found at 'shim2' (in parser-result). */
+#endif /* (DPAA_VERSION >= 11) */
+
+/* @} */
+
+#define IOC_FM_PCD_MANIP_MAX_HDR_SIZE 256
+#define IOC_FM_PCD_MANIP_DSCP_TO_VLAN_TRANS 64
+/**
+ @Collection A set of definitions to support Header Manipulation selection.
+*/
+typedef uint32_t ioc_hdr_manip_flags_t;
+ /**< A general type to define a HMan update command flags. */
+
+typedef ioc_hdr_manip_flags_t ioc_ipv4_hdr_manip_update_flags_t;
+ /**< IPv4 protocol HMan update command flags. */
+
+#define IOC_HDR_MANIP_IPV4_TOS 0x80000000
+ /**< update TOS with the given value ('tos' field
+ of ioc_fm_pcd_manip_hdr_field_update_ipv4_t) */
+#define IOC_HDR_MANIP_IPV4_ID 0x40000000
+ /**< update IP ID with the given value ('id' field
+ of ioc_fm_pcd_manip_hdr_field_update_ipv4_t) */
+#define IOC_HDR_MANIP_IPV4_TTL 0x20000000 /**< Decrement TTL by 1 */
+#define IOC_HDR_MANIP_IPV4_SRC 0x10000000
+ /**< update IP source address with the given value
+ ('src' field of ioc_fm_pcd_manip_hdr_field_update_ipv4_t) */
+#define IOC_HDR_MANIP_IPV4_DST 0x08000000
+ /**< update IP destination address with the given value
+ ('dst' field of ioc_fm_pcd_manip_hdr_field_update_ipv4_t) */
+
+typedef ioc_hdr_manip_flags_t ioc_ipv6_hdr_manip_update_flags_t;
+ /**< IPv6 protocol HMan update command flags. */
+
+#define IOC_HDR_MANIP_IPV6_TC 0x80000000
+ /**< update Traffic Class address with the given value
+ ('traffic_class' field of ioc_fm_pcd_manip_hdr_field_update_ipv6_t) */
+#define IOC_HDR_MANIP_IPV6_HL 0x40000000 /**< Decrement Hop Limit by 1 */
+#define IOC_HDR_MANIP_IPV6_SRC 0x20000000
+ /**< update IP source address with the given value
+ ('src' field of ioc_fm_pcd_manip_hdr_field_update_ipv6_t) */
+#define IOC_HDR_MANIP_IPV6_DST 0x10000000
+ /**< update IP destination address with the given value
+ ('dst' field of ioc_fm_pcd_manip_hdr_field_update_ipv6_t) */
+
+typedef ioc_hdr_manip_flags_t ioc_tcp_udp_hdr_manip_update_flags_t;
+ /**< TCP/UDP protocol HMan update command flags. */
+
+#define IOC_HDR_MANIP_TCP_UDP_SRC 0x80000000
+ /**< update TCP/UDP source address with the given value
+ ('src' field of ioc_fm_pcd_manip_hdr_field_update_tcp_udp_t) */
+#define IOC_HDR_MANIP_TCP_UDP_DST 0x40000000
+ /**< update TCP/UDP destination address with the given value
+ ('dst' field of ioc_fm_pcd_manip_hdr_field_update_tcp_udp_t) */
+#define IOC_HDR_MANIP_TCP_UDP_CHECKSUM 0x20000000
+ /**< update TCP/UDP checksum */
+
+/* @} */
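These update-command flags are combined with bitwise OR into a single flags word. A small sketch, using flag values copied from the IPv4 definitions above (the `ipv4_nat_update_flags` helper name is illustrative):

```c
#include <stdint.h>

/* Flag values copied from the IOC_HDR_MANIP_IPV4_* definitions above. */
#define HDR_MANIP_IPV4_TTL 0x20000000u  /* decrement TTL by 1 */
#define HDR_MANIP_IPV4_SRC 0x10000000u  /* rewrite the source address */
#define HDR_MANIP_IPV4_DST 0x08000000u  /* rewrite the destination address */

/* A NAT-style IPv4 rewrite ORs the address-update flags and the TTL
 * decrement into one update-flags word. */
static uint32_t ipv4_nat_update_flags(void)
{
	return HDR_MANIP_IPV4_SRC | HDR_MANIP_IPV4_DST | HDR_MANIP_IPV4_TTL;
}
```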
+
+/**
+ @Description A type used for returning the order of the key extraction.
+ Each value in this array represents the index of the extraction
+ command as defined by the user in the initialization extraction array.
+ The valid size of this array is the user-defined number of extractions
+ required (also marked by the second '0' in this array).
+*/
+typedef uint8_t ioc_fm_pcd_kg_key_order_t [IOC_FM_PCD_KG_MAX_NUM_OF_EXTRACTS_PER_KEY];
+
+/**
+ @Description All PCD engines
+ (must match enum e_FmPcdEngine defined in fm_pcd_ext.h)
+*/
+typedef enum ioc_fm_pcd_engine {
+ e_IOC_FM_PCD_INVALID = 0, /**< Invalid PCD engine */
+ e_IOC_FM_PCD_DONE, /**< No PCD Engine indicated */
+ e_IOC_FM_PCD_KG, /**< KeyGen */
+ e_IOC_FM_PCD_CC, /**< Coarse Classifier */
+ e_IOC_FM_PCD_PLCR, /**< Policer */
+ e_IOC_FM_PCD_PRS, /**< Parser */
+#if DPAA_VERSION >= 11
+ e_IOC_FM_PCD_FR, /**< Frame Replicator */
+#endif /* DPAA_VERSION >= 11 */
+ e_IOC_FM_PCD_HASH /**< Hash Table */
+} ioc_fm_pcd_engine;
+
+/**
+ @Description An enum for selecting extraction by header types
+ (Must match enum e_FmPcdExtractByHdrType defined in fm_pcd_ext.h)
+*/
+typedef enum ioc_fm_pcd_extract_by_hdr_type {
+ e_IOC_FM_PCD_EXTRACT_FROM_HDR, /**< Extract bytes from header */
+ e_IOC_FM_PCD_EXTRACT_FROM_FIELD,/**< Extract bytes from header field */
+ e_IOC_FM_PCD_EXTRACT_FULL_FIELD /**< Extract a full field */
+} ioc_fm_pcd_extract_by_hdr_type;
+
+/**
+ @Description An enum for selecting extraction source (when it is not the header)
+ (Must match enum e_FmPcdExtractFrom defined in fm_pcd_ext.h)
+*/
+typedef enum ioc_fm_pcd_extract_from {
+ e_IOC_FM_PCD_EXTRACT_FROM_FRAME_START,
+ /**< KG & CC: Extract from beginning of frame */
+ e_IOC_FM_PCD_EXTRACT_FROM_DFLT_VALUE,
+ /**< KG only: Extract from a default value */
+ e_IOC_FM_PCD_EXTRACT_FROM_CURR_END_OF_PARSE,
+ /**< KG only: Extract from the point where parsing had finished */
+ e_IOC_FM_PCD_EXTRACT_FROM_KEY, /**< CC only: Field where saved KEY */
+ e_IOC_FM_PCD_EXTRACT_FROM_HASH, /**< CC only: Field where saved HASH */
+ e_IOC_FM_PCD_EXTRACT_FROM_PARSE_RESULT,
+ /**< KG & CC: Extract from the parser result */
+ e_IOC_FM_PCD_EXTRACT_FROM_ENQ_FQID,
+ /**< KG & CC: Extract from enqueue FQID */
+ e_IOC_FM_PCD_EXTRACT_FROM_FLOW_ID
+ /**< CC only: Field where saved Dequeue FQID */
+} ioc_fm_pcd_extract_from;
+
+/**
+ @Description An enum for selecting extraction type
+*/
+typedef enum ioc_fm_pcd_extract_type {
+ e_IOC_FM_PCD_EXTRACT_BY_HDR, /**< Extract according to header */
+ e_IOC_FM_PCD_EXTRACT_NON_HDR, /**< Extract from data that is not the header */
+ e_IOC_FM_PCD_KG_EXTRACT_PORT_PRIVATE_INFO
+ /**< Extract private info as specified by user */
+} ioc_fm_pcd_extract_type;
+
+/**
+ @Description An enum for selecting a default value
+*/
+typedef enum ioc_fm_pcd_kg_extract_dflt_select {
+ e_IOC_FM_PCD_KG_DFLT_GBL_0, /**< Default selection is KG register 0 */
+ e_IOC_FM_PCD_KG_DFLT_GBL_1, /**< Default selection is KG register 1 */
+ e_IOC_FM_PCD_KG_DFLT_PRIVATE_0, /**< Default selection is a per scheme register 0 */
+ e_IOC_FM_PCD_KG_DFLT_PRIVATE_1, /**< Default selection is a per scheme register 1 */
+ e_IOC_FM_PCD_KG_DFLT_ILLEGAL /**< Illegal selection */
+} ioc_fm_pcd_kg_extract_dflt_select;
+
+/**
+ @Description Enumeration type defining all default groups - each group shares
+ a default value, one of four user-initialized values.
+*/
+typedef enum ioc_fm_pcd_kg_known_fields_dflt_types {
+ e_IOC_FM_PCD_KG_MAC_ADDR, /**< MAC Address */
+ e_IOC_FM_PCD_KG_TCI, /**< TCI field */
+ e_IOC_FM_PCD_KG_ENET_TYPE, /**< ENET Type */
+ e_IOC_FM_PCD_KG_PPP_SESSION_ID, /**< PPP Session id */
+ e_IOC_FM_PCD_KG_PPP_PROTOCOL_ID, /**< PPP Protocol id */
+ e_IOC_FM_PCD_KG_MPLS_LABEL, /**< MPLS label */
+ e_IOC_FM_PCD_KG_IP_ADDR, /**< IP addr */
+ e_IOC_FM_PCD_KG_PROTOCOL_TYPE, /**< Protocol type */
+ e_IOC_FM_PCD_KG_IP_TOS_TC, /**< TOS or TC */
+ e_IOC_FM_PCD_KG_IPV6_FLOW_LABEL, /**< IPV6 flow label */
+ e_IOC_FM_PCD_KG_IPSEC_SPI, /**< IPSEC SPI */
+ e_IOC_FM_PCD_KG_L4_PORT, /**< L4 Port */
+ e_IOC_FM_PCD_KG_TCP_FLAG, /**< TCP Flag */
+ e_IOC_FM_PCD_KG_GENERIC_FROM_DATA, /**< grouping implemented by SW,
+ any data extraction that is not the full
+ field described above */
+ e_IOC_FM_PCD_KG_GENERIC_FROM_DATA_NO_V, /**< grouping implemented by SW,
+ any data extraction without validation */
+ e_IOC_FM_PCD_KG_GENERIC_NOT_FROM_DATA /**< grouping implemented by SW,
+ extraction from parser result or
+ direct use of default value */
+} ioc_fm_pcd_kg_known_fields_dflt_types;
+
+/**
+ @Description Enumeration type for defining header index for scenarios with
+ multiple (tunneled) headers
+*/
+typedef enum ioc_fm_pcd_hdr_index {
+ e_IOC_FM_PCD_HDR_INDEX_NONE = 0,
+ /**< used when multiple headers are not used; also
+ to specify regular IP (not tunneled). */
+ e_IOC_FM_PCD_HDR_INDEX_1,/**< may be used for VLAN, MPLS, tunneled IP */
+ e_IOC_FM_PCD_HDR_INDEX_2, /**< may be used for MPLS, tunneled IP */
+ e_IOC_FM_PCD_HDR_INDEX_3, /**< may be used for MPLS */
+ e_IOC_FM_PCD_HDR_INDEX_LAST = 0xFF /**< may be used for VLAN, MPLS */
+} ioc_fm_pcd_hdr_index;
+
+/**
+ @Description Enumeration type for selecting the policer profile functional type
+*/
+typedef enum ioc_fm_pcd_profile_type_selection {
+ e_IOC_FM_PCD_PLCR_PORT_PRIVATE, /**< Port dedicated profile */
+ e_IOC_FM_PCD_PLCR_SHARED /**< Shared profile (shared within partition) */
+} ioc_fm_pcd_profile_type_selection;
+
+/**
+ @Description Enumeration type for selecting the policer profile algorithm
+*/
+typedef enum ioc_fm_pcd_plcr_algorithm_selection {
+ e_IOC_FM_PCD_PLCR_PASS_THROUGH, /**< Policer pass through */
+ e_IOC_FM_PCD_PLCR_RFC_2698, /**< Policer algorithm RFC 2698 */
+ e_IOC_FM_PCD_PLCR_RFC_4115 /**< Policer algorithm RFC 4115 */
+} ioc_fm_pcd_plcr_algorithm_selection;
+
+/**
+ @Description Enumeration type for selecting a policer profile color mode
+*/
+typedef enum ioc_fm_pcd_plcr_color_mode {
+ e_IOC_FM_PCD_PLCR_COLOR_BLIND, /**< Color blind */
+ e_IOC_FM_PCD_PLCR_COLOR_AWARE /**< Color aware */
+} ioc_fm_pcd_plcr_color_mode;
+
+/**
+ @Description Enumeration type for selecting a policer profile color
+*/
+typedef enum ioc_fm_pcd_plcr_color {
+ e_IOC_FM_PCD_PLCR_GREEN, /**< Green */
+ e_IOC_FM_PCD_PLCR_YELLOW, /**< Yellow */
+ e_IOC_FM_PCD_PLCR_RED, /**< Red */
+ e_IOC_FM_PCD_PLCR_OVERRIDE /**< Color override */
+} ioc_fm_pcd_plcr_color;
+
+/**
+ @Description Enumeration type for selecting the policer profile packet frame length selector
+*/
+typedef enum ioc_fm_pcd_plcr_frame_length_select {
+ e_IOC_FM_PCD_PLCR_L2_FRM_LEN, /**< L2 frame length */
+ e_IOC_FM_PCD_PLCR_L3_FRM_LEN, /**< L3 frame length */
+ e_IOC_FM_PCD_PLCR_L4_FRM_LEN, /**< L4 frame length */
+ e_IOC_FM_PCD_PLCR_FULL_FRM_LEN /**< Full frame length */
+} ioc_fm_pcd_plcr_frame_length_select;
+
+/**
+ @Description Enumeration type for selecting roll-back frame
+*/
+typedef enum ioc_fm_pcd_plcr_roll_back_frame_select {
+ e_IOC_FM_PCD_PLCR_ROLLBACK_L2_FRM_LEN, /**< Rollback L2 frame length */
+ e_IOC_FM_PCD_PLCR_ROLLBACK_FULL_FRM_LEN /**< Rollback Full frame length */
+} ioc_fm_pcd_plcr_roll_back_frame_select;
+
+/**
+ @Description Enumeration type for selecting the policer profile packet or byte mode
+*/
+typedef enum ioc_fm_pcd_plcr_rate_mode {
+ e_IOC_FM_PCD_PLCR_BYTE_MODE, /**< Byte mode */
+ e_IOC_FM_PCD_PLCR_PACKET_MODE /**< Packet mode */
+} ioc_fm_pcd_plcr_rate_mode;
+
+/**
+ @Description Enumeration type for defining action of frame
+*/
+typedef enum ioc_fm_pcd_done_action {
+ e_IOC_FM_PCD_ENQ_FRAME = 0, /**< Enqueue frame */
+ e_IOC_FM_PCD_DROP_FRAME /**< Drop frame */
+} ioc_fm_pcd_done_action;
+
+/**
+ @Description Enumeration type for selecting the policer counter
+*/
+typedef enum ioc_fm_pcd_plcr_profile_counters {
+ e_IOC_FM_PCD_PLCR_PROFILE_GREEN_PACKET_TOTAL_COUNTER, /**< Green packets counter */
+ e_IOC_FM_PCD_PLCR_PROFILE_YELLOW_PACKET_TOTAL_COUNTER, /**< Yellow packets counter */
+ e_IOC_FM_PCD_PLCR_PROFILE_RED_PACKET_TOTAL_COUNTER, /**< Red packets counter */
+ e_IOC_FM_PCD_PLCR_PROFILE_RECOLOURED_YELLOW_PACKET_TOTAL_COUNTER, /**< Recolored yellow packets counter */
+ e_IOC_FM_PCD_PLCR_PROFILE_RECOLOURED_RED_PACKET_TOTAL_COUNTER /**< Recolored red packets counter */
+} ioc_fm_pcd_plcr_profile_counters;
+
+/**
+ @Description Enumeration type for selecting the PCD action after extraction
+*/
+typedef enum ioc_fm_pcd_action {
+ e_IOC_FM_PCD_ACTION_NONE, /**< NONE */
+ e_IOC_FM_PCD_ACTION_EXACT_MATCH, /**< Exact match on the selected extraction*/
+ e_IOC_FM_PCD_ACTION_INDEXED_LOOKUP /**< Indexed lookup on the selected extraction*/
+} ioc_fm_pcd_action;
+
+/**
+ @Description Enumeration type for selecting type of insert manipulation
+*/
+typedef enum ioc_fm_pcd_manip_hdr_insrt_type {
+ e_IOC_FM_PCD_MANIP_INSRT_GENERIC, /**< Insert according to offset & size */
+ e_IOC_FM_PCD_MANIP_INSRT_BY_HDR, /**< Insert according to protocol */
+#if (defined(FM_CAPWAP_SUPPORT) && (DPAA_VERSION == 10))
+ e_IOC_FM_PCD_MANIP_INSRT_BY_TEMPLATE /**< Insert template to start of frame */
+#endif /* FM_CAPWAP_SUPPORT */
+} ioc_fm_pcd_manip_hdr_insrt_type;
+
+/**
+ @Description Enumeration type for selecting type of remove manipulation
+*/
+typedef enum ioc_fm_pcd_manip_hdr_rmv_type {
+ e_IOC_FM_PCD_MANIP_RMV_GENERIC, /**< Remove according to offset & size */
+ e_IOC_FM_PCD_MANIP_RMV_BY_HDR /**< Remove according to protocol */
+} ioc_fm_pcd_manip_hdr_rmv_type;
+
+/**
+ @Description An enum for selecting specific L2 fields removal
+*/
+typedef enum ioc_fm_pcd_manip_hdr_rmv_specific_l2 {
+ e_IOC_FM_PCD_MANIP_HDR_RMV_ETHERNET, /**< Ethernet/802.3 MAC */
+ e_IOC_FM_PCD_MANIP_HDR_RMV_STACKED_QTAGS, /**< stacked QTags */
+ e_IOC_FM_PCD_MANIP_HDR_RMV_ETHERNET_AND_MPLS,
+ /**< MPLS and Ethernet/802.3 MAC header until
+ the header which follows the MPLS header */
+ e_IOC_FM_PCD_MANIP_HDR_RMV_MPLS /**< Remove MPLS header (Unlimited MPLS labels) */
+} ioc_fm_pcd_manip_hdr_rmv_specific_l2;
+
+/**
+ @Description Enumeration type for selecting specific fields updates
+*/
+typedef enum ioc_fm_pcd_manip_hdr_field_update_type {
+ e_IOC_FM_PCD_MANIP_HDR_FIELD_UPDATE_VLAN, /**< VLAN updates */
+ e_IOC_FM_PCD_MANIP_HDR_FIELD_UPDATE_IPV4, /**< IPV4 updates */
+ e_IOC_FM_PCD_MANIP_HDR_FIELD_UPDATE_IPV6, /**< IPV6 updates */
+ e_IOC_FM_PCD_MANIP_HDR_FIELD_UPDATE_TCP_UDP, /**< TCP_UDP updates */
+} ioc_fm_pcd_manip_hdr_field_update_type;
+
+/**
+ @Description Enumeration type for selecting VLAN updates
+*/
+typedef enum ioc_fm_pcd_manip_hdr_field_update_vlan {
+ e_IOC_FM_PCD_MANIP_HDR_FIELD_UPDATE_VLAN_VPRI, /**< Replace VPri of outer most VLAN tag. */
+ e_IOC_FM_PCD_MANIP_HDR_FIELD_UPDATE_DSCP_TO_VLAN /**< DSCP to VLAN priority bits translation */
+} ioc_fm_pcd_manip_hdr_field_update_vlan;
+
+/**
+ @Description Enumeration type for selecting specific L2 fields removal
+*/
+typedef enum ioc_fm_pcd_manip_hdr_insrt_specific_l2 {
+ e_IOC_FM_PCD_MANIP_HDR_INSRT_MPLS /**< Insert MPLS header (Unlimited MPLS labels) */
+} ioc_fm_pcd_manip_hdr_insrt_specific_l2;
+
+#if (DPAA_VERSION >= 11)
+/**
+ @Description Enumeration type for selecting QoS mapping mode
+
+ Note: In all cases except 'e_FM_PCD_MANIP_HDR_QOS_MAPPING_NONE',
+ the user should instruct the port to read the parser-result.
+*/
+typedef enum ioc_fm_pcd_manip_hdr_qos_mapping_mode {
+ e_IOC_FM_PCD_MANIP_HDR_QOS_MAPPING_NONE = 0, /**< No mapping, QoS field will not be changed */
+ e_IOC_FM_PCD_MANIP_HDR_QOS_MAPPING_AS_IS, /**< QoS field will be overwritten by the last byte in the parser-result. */
+} ioc_fm_pcd_manip_hdr_qos_mapping_mode;
+
+/**
+ @Description Enumeration type for selecting QoS source
+
+ Note: In all cases except 'e_FM_PCD_MANIP_HDR_QOS_SRC_NONE',
+ the user should leave room for the parser-result in the input/output buffer
+ and instruct the port to read/write the parser-result to the buffer (RPD should be set).
+*/
+typedef enum ioc_fm_pcd_manip_hdr_qos_src {
+ e_IOC_FM_PCD_MANIP_HDR_QOS_SRC_NONE = 0, /**< TODO */
+ e_IOC_FM_PCD_MANIP_HDR_QOS_SRC_USER_DEFINED, /**< QoS will be taken from the last byte in the parser-result. */
+} ioc_fm_pcd_manip_hdr_qos_src;
+#endif /* (DPAA_VERSION >= 11) */
+
+/**
+ @Description Enumeration type for selecting type of header insertion
+*/
+typedef enum ioc_fm_pcd_manip_hdr_insrt_by_hdr_type {
+ e_IOC_FM_PCD_MANIP_INSRT_BY_HDR_SPECIFIC_L2,/**< Specific L2 fields insertion */
+#if (DPAA_VERSION >= 11)
+ e_IOC_FM_PCD_MANIP_INSRT_BY_HDR_IP, /**< IP insertion */
+ e_IOC_FM_PCD_MANIP_INSRT_BY_HDR_UDP, /**< UDP insertion */
+ e_IOC_FM_PCD_MANIP_INSRT_BY_HDR_UDP_LITE, /**< UDP lite insertion */
+ e_IOC_FM_PCD_MANIP_INSRT_BY_HDR_CAPWAP /**< CAPWAP insertion */
+#endif /* (DPAA_VERSION >= 11) */
+} ioc_fm_pcd_manip_hdr_insrt_by_hdr_type;
+
+/**
+ @Description Enumeration type for selecting specific custom command
+*/
+typedef enum ioc_fm_pcd_manip_hdr_custom_type {
+ e_IOC_FM_PCD_MANIP_HDR_CUSTOM_IP_REPLACE, /**< Replace IPv4/IPv6 */
+ e_IOC_FM_PCD_MANIP_HDR_CUSTOM_GEN_FIELD_REPLACE,
+} ioc_fm_pcd_manip_hdr_custom_type;
+
+/**
+ @Description Enumeration type for selecting specific custom command
+*/
+typedef enum ioc_fm_pcd_manip_hdr_custom_ip_replace {
+ e_IOC_FM_PCD_MANIP_HDR_CUSTOM_REPLACE_IPV4_BY_IPV6, /**< Replace IPv4 by IPv6 */
+ e_IOC_FM_PCD_MANIP_HDR_CUSTOM_REPLACE_IPV6_BY_IPV4 /**< Replace IPv6 by IPv4 */
+} ioc_fm_pcd_manip_hdr_custom_ip_replace;
+
+/**
+ @Description Enumeration type for selecting type of header removal
+*/
+typedef enum ioc_fm_pcd_manip_hdr_rmv_by_hdr_type {
+ e_IOC_FM_PCD_MANIP_RMV_BY_HDR_SPECIFIC_L2 = 0,/**< Specific L2 fields removal */
+#if (DPAA_VERSION >= 11)
+ e_IOC_FM_PCD_MANIP_RMV_BY_HDR_CAPWAP, /**< CAPWAP removal */
+#endif /* (DPAA_VERSION >= 11) */
+#if (DPAA_VERSION >= 11) || ((DPAA_VERSION == 10) && defined(FM_CAPWAP_SUPPORT))
+ e_IOC_FM_PCD_MANIP_RMV_BY_HDR_FROM_START,/**< Locate from data that is not the header */
+#endif /* (DPAA_VERSION >= 11) || ((DPAA_VERSION == 10) && defined(FM_CAPWAP_SUPPORT)) */
+} ioc_fm_pcd_manip_hdr_rmv_by_hdr_type;
+
+/**
+ @Description Enumeration type for selecting type of timeout mode
+*/
+typedef enum ioc_fm_pcd_manip_reassem_time_out_mode {
+ e_IOC_FM_PCD_MANIP_TIME_OUT_BETWEEN_FRAMES,
+ /**< Limits the time of the reassembly process
+ from the first fragment to the last */
+ e_IOC_FM_PCD_MANIP_TIME_OUT_BETWEEN_FRAG
+ /**< Limits the time of receiving the fragment */
+} ioc_fm_pcd_manip_reassem_time_out_mode;
+
+/**
+ @Description Enumeration type for selecting type of WaysNumber mode
+*/
+typedef enum ioc_fm_pcd_manip_reassem_ways_number {
+ e_IOC_FM_PCD_MANIP_ONE_WAY_HASH = 1, /**< One way hash */
+ e_IOC_FM_PCD_MANIP_TWO_WAYS_HASH, /**< Two ways hash */
+ e_IOC_FM_PCD_MANIP_THREE_WAYS_HASH, /**< Three ways hash */
+ e_IOC_FM_PCD_MANIP_FOUR_WAYS_HASH, /**< Four ways hash */
+ e_IOC_FM_PCD_MANIP_FIVE_WAYS_HASH, /**< Five ways hash */
+ e_IOC_FM_PCD_MANIP_SIX_WAYS_HASH, /**< Six ways hash */
+ e_IOC_FM_PCD_MANIP_SEVEN_WAYS_HASH, /**< Seven ways hash */
+ e_IOC_FM_PCD_MANIP_EIGHT_WAYS_HASH /**< Eight ways hash */
+} ioc_fm_pcd_manip_reassem_ways_number;
+
+#if (defined(FM_CAPWAP_SUPPORT) && (DPAA_VERSION == 10))
+/**
+ @Description Enumeration type for selecting type of statistics mode
+*/
+typedef enum ioc_fm_pcd_stats {
+ e_IOC_FM_PCD_STATS_PER_FLOWID = 0 /**< Flow ID is used as index for getting statistics */
+} ioc_fm_pcd_stats;
+#endif
+
+/**
+ @Description Enumeration type for selecting manipulation type
+*/
+typedef enum ioc_fm_pcd_manip_type {
+ e_IOC_FM_PCD_MANIP_HDR = 0, /**< Header manipulation */
+ e_IOC_FM_PCD_MANIP_REASSEM, /**< Reassembly */
+ e_IOC_FM_PCD_MANIP_FRAG, /**< Fragmentation */
+ e_IOC_FM_PCD_MANIP_SPECIAL_OFFLOAD /**< Special Offloading */
+} ioc_fm_pcd_manip_type;
+
+/**
+ @Description Enumeration type for selecting type of statistics mode
+*/
+typedef enum ioc_fm_pcd_cc_stats_mode {
+ e_IOC_FM_PCD_CC_STATS_MODE_NONE = 0, /**< No statistics support */
+ e_IOC_FM_PCD_CC_STATS_MODE_FRAME, /**< Frame count statistics */
+ e_IOC_FM_PCD_CC_STATS_MODE_BYTE_AND_FRAME, /**< Byte and frame count statistics */
+#if (DPAA_VERSION >= 11)
+ e_IOC_FM_PCD_CC_STATS_MODE_RMON,/**< Byte and frame length range count statistics */
+#endif /* (DPAA_VERSION >= 11) */
+} ioc_fm_pcd_cc_stats_mode;
+
+/**
+ @Description Enumeration type for determining the action in case an IP packet
+ is larger than MTU but its DF (Don't Fragment) bit is set.
+*/
+typedef enum ioc_fm_pcd_manip_dont_frag_action {
+ e_IOC_FM_PCD_MANIP_DISCARD_PACKET = 0, /**< Discard packet */
+ e_IOC_FM_PCD_MANIP_ENQ_TO_ERR_Q_OR_DISCARD_PACKET = e_IOC_FM_PCD_MANIP_DISCARD_PACKET,
+ /**< Obsolete, cannot enqueue to error queue;
+ In practice, selects to discard packets;
+ Will be removed in the future */
+ e_IOC_FM_PCD_MANIP_FRAGMENT_PACKECT, /**< Fragment packet and continue normal processing */
+ e_IOC_FM_PCD_MANIP_CONTINUE_WITHOUT_FRAG /**< Continue normal processing without fragmenting the packet */
+} ioc_fm_pcd_manip_dont_frag_action;
+
+/**
+ @Description Enumeration type for selecting type of special offload manipulation
+*/
+typedef enum ioc_fm_pcd_manip_special_offload_type {
+ e_IOC_FM_PCD_MANIP_SPECIAL_OFFLOAD_IPSEC,/**< IPSec offload manipulation */
+#if (DPAA_VERSION >= 11)
+ e_IOC_FM_PCD_MANIP_SPECIAL_OFFLOAD_CAPWAP/**< CAPWAP offload manipulation */
+#endif /* (DPAA_VERSION >= 11) */
+} ioc_fm_pcd_manip_special_offload_type;
+
+/**
+ @Description A union of protocol dependent special options
+ (Must match union u_FmPcdHdrProtocolOpt defined in fm_pcd_ext.h)
+*/
+typedef union ioc_fm_pcd_hdr_protocol_opt_u {
+ ioc_eth_protocol_opt_t eth_opt; /**< Ethernet options */
+ ioc_vlan_protocol_opt_t vlan_opt; /**< Vlan options */
+ ioc_mpls_protocol_opt_t mpls_opt; /**< MPLS options */
+ ioc_ipv4_protocol_opt_t ipv4_opt; /**< IPv4 options */
+ ioc_ipv6_protocol_opt_t ipv6_opt; /**< IPv6 options */
+#if (DPAA_VERSION >= 11)
+ ioc_capwap_protocol_opt_t capwap_opt; /**< CAPWAP options */
+#endif /* (DPAA_VERSION >= 11) */
+} ioc_fm_pcd_hdr_protocol_opt_u;
+
+/**
+ @Description A union holding all known protocol fields
+*/
+typedef union ioc_fm_pcd_fields_u {
+ ioc_header_field_eth_t eth; /**< Ethernet*/
+ ioc_header_field_vlan_t vlan; /**< VLAN*/
+ ioc_header_field_llc_snap_t llc_snap; /**< LLC SNAP*/
+ ioc_header_field_pppoe_t pppoe; /**< PPPoE*/
+ ioc_header_field_mpls_t mpls; /**< MPLS*/
+ ioc_header_field_ip_t ip; /**< IP */
+ ioc_header_field_ipv4_t ipv4; /**< IPv4*/
+ ioc_header_field_ipv6_t ipv6; /**< IPv6*/
+ ioc_header_field_udp_t udp; /**< UDP */
+ ioc_header_field_udp_lite_t udp_lite; /**< UDP_Lite*/
+ ioc_header_field_tcp_t tcp; /**< TCP */
+ ioc_header_field_sctp_t sctp; /**< SCTP*/
+ ioc_header_field_dccp_t dccp; /**< DCCP*/
+ ioc_header_field_gre_t gre; /**< GRE */
+ ioc_header_field_minencap_t minencap;/**< Minimal Encapsulation */
+ ioc_header_field_ipsec_ah_t ipsec_ah; /**< IPSec AH*/
+ ioc_header_field_ipsec_esp_t ipsec_esp; /**< IPSec ESP*/
+ ioc_header_field_udp_encap_esp_t udp_encap_esp;
+ /**< UDP Encapsulation ESP */
+} ioc_fm_pcd_fields_u;
+
+/**
+ @Description Parameters for defining header extraction for key generation
+*/
+typedef struct ioc_fm_pcd_from_hdr_t {
+ uint8_t size; /**< Size in bytes */
+ uint8_t offset; /**< Byte offset */
+} ioc_fm_pcd_from_hdr_t;
+
+/**
+ @Description Parameters for defining field extraction for key generation
+*/
+typedef struct ioc_fm_pcd_from_field_t {
+ ioc_fm_pcd_fields_u field; /**< Field selection */
+ uint8_t size; /**< Size in bytes */
+ uint8_t offset; /**< Byte offset */
+} ioc_fm_pcd_from_field_t;
+
+/**
+ @Description Parameters for defining a single network environment unit
+ A distinction unit should be defined if it will later be used
+ by one or more PCD engines to distinguish between flows.
+ (Must match struct t_FmPcdDistinctionUnit defined in fm_pcd_ext.h)
+*/
+typedef struct ioc_fm_pcd_distinction_unit_t {
+ struct {
+ ioc_net_header_type hdr;/**< One of the headers supported by the FM */
+ ioc_fm_pcd_hdr_protocol_opt_u opt; /**< Select only one option! */
+ } hdrs[IOC_FM_PCD_MAX_NUM_OF_INTERCHANGEABLE_HDRS];
+} ioc_fm_pcd_distinction_unit_t;
+
+/**
+ @Description Parameters for defining all different distinction units supported
+ by a specific PCD Network Environment Characteristics module.
+
+ Each unit represents a protocol or a group of protocols that may
+ be used later by the different PCD engines to distinguish between flows.
+ (Must match struct t_FmPcdNetEnvParams defined in fm_pcd_ext.h)
+*/
+struct fm_pcd_net_env_params_t {
+ uint8_t num_of_distinction_units;
+ /**< Number of different units to be identified */
+ ioc_fm_pcd_distinction_unit_t
+ units[IOC_FM_PCD_MAX_NUM_OF_DISTINCTION_UNITS];
+ /**< An array of num_of_distinction_units of the
+ different units to be identified */
+};
+
+typedef struct ioc_fm_pcd_net_env_params_t {
+ struct fm_pcd_net_env_params_t param;
+ void *id;
+ /**< Output parameter; Returns the net-env Id to be used */
+} ioc_fm_pcd_net_env_params_t;
+
+/**
+ @Description Parameters for defining a single extraction action when
+ creating a key
+*/
+typedef struct ioc_fm_pcd_extract_entry_t {
+ ioc_fm_pcd_extract_type type; /**< Extraction type select */
+ union {
+ struct {
+ ioc_net_header_type hdr; /**< Header selection */
+ bool ignore_protocol_validation;
+ /**< Ignore protocol validation */
+ ioc_fm_pcd_hdr_index hdr_index;
+ /**< Relevant only for MPLS, VLAN and tunneled
+ IP. Otherwise should be cleared.*/
+ ioc_fm_pcd_extract_by_hdr_type type;
+ /**< Header extraction type select */
+ union {
+ ioc_fm_pcd_from_hdr_t from_hdr;
+ /**< Extract bytes from header parameters */
+ ioc_fm_pcd_from_field_t from_field;
+ /**< Extract bytes from field parameters */
+ ioc_fm_pcd_fields_u full_field;
+ /**< Extract full field parameters */
+ } extract_by_hdr_type;
+ } extract_by_hdr;/**< Used when type = e_IOC_FM_PCD_KG_EXTRACT_BY_HDR */
+ struct {
+ ioc_fm_pcd_extract_from src; /**< Non-header extraction source */
+ ioc_fm_pcd_action action; /**< Relevant for CC Only */
+ uint16_t ic_indx_mask; /**< Relevant only for CC when
+ action = e_IOC_FM_PCD_ACTION_INDEXED_LOOKUP;
+ Note that the number of bits that are set within
+ this mask must be log2 of the CC-node 'num_of_keys'.
+ Note that the mask cannot be set on the lower bits. */
+ uint8_t offset; /**< Byte offset */
+ uint8_t size; /**< Size in bytes */
+ } extract_non_hdr;
+ /**< Used when type = e_IOC_FM_PCD_KG_EXTRACT_NON_HDR */
+ } extract_params;
+} ioc_fm_pcd_extract_entry_t;
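The comment on 'ic_indx_mask' above states that the number of bits set in the mask must be log2 of the CC-node 'num_of_keys'. A minimal sketch of that constraint check, with the hypothetical helper name 'ic_indx_mask_valid' (not part of this API):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: returns non-zero when the number of bits set
 * in ic_indx_mask is log2 of the CC-node's num_of_keys, as required
 * by the ic_indx_mask comment above. */
static int ic_indx_mask_valid(uint16_t mask, uint16_t num_of_keys)
{
	unsigned int bits = 0;
	uint16_t m = mask;

	while (m) {		/* popcount of the mask */
		bits += m & 1;
		m >>= 1;
	}
	/* num_of_keys must equal 2^bits */
	return num_of_keys == (uint16_t)(1u << bits);
}
```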
+
+/**
+ @Description A structure for defining masks for each extracted
+ field in the key.
+*/
+typedef struct ioc_fm_pcd_kg_extract_mask_t {
+ uint8_t extract_array_index; /**< Index in the extraction array, as initialized by user */
+ uint8_t offset; /**< Byte offset */
+ uint8_t mask; /**< A byte mask (selected bits will be ignored) */
+} ioc_fm_pcd_kg_extract_mask_t;
+
+/**
+ @Description A structure for defining default selection per groups
+ of fields
+*/
+typedef struct ioc_fm_pcd_kg_extract_dflt_t {
+ ioc_fm_pcd_kg_known_fields_dflt_types type; /**< Default type select*/
+ ioc_fm_pcd_kg_extract_dflt_select dflt_select; /**< Default register select */
+} ioc_fm_pcd_kg_extract_dflt_t;
+
+
+/**
+ @Description A structure for defining all parameters needed for
+ generating a key and using a hash function
+*/
+typedef struct ioc_fm_pcd_kg_key_extract_and_hash_params_t {
+ uint32_t private_dflt0; /**< Scheme default register 0 */
+ uint32_t private_dflt1; /**< Scheme default register 1 */
+ uint8_t num_of_used_extracts; /**< defines the valid size of the following array */
+ ioc_fm_pcd_extract_entry_t extract_array[IOC_FM_PCD_KG_MAX_NUM_OF_EXTRACTS_PER_KEY];
+ /**< An array of extraction definitions. */
+ uint8_t num_of_used_dflts; /**< defines the valid size of the following array */
+ ioc_fm_pcd_kg_extract_dflt_t dflts[IOC_FM_PCD_KG_NUM_OF_DEFAULT_GROUPS];
+ /**< For each extraction used in this scheme, specify the required
+ default register to be used when header is not found.
+ types not specified in this array will get undefined value. */
+ uint8_t num_of_used_masks; /**< Defines the valid size of the following array */
+ ioc_fm_pcd_kg_extract_mask_t masks[IOC_FM_PCD_KG_NUM_OF_EXTRACT_MASKS];
+ uint8_t hash_shift; /**< Hash result right shift.
+ Selects the 24 bits out of the 64 hash result.
+ 0 means using the 24 LSB's, otherwise use the
+ 24 LSB's after shifting right.*/
+ uint32_t hash_distribution_num_of_fqids; /**< must be > 1 and a power of 2. Represents the range
+ of queues for the key and hash functionality */
+ uint8_t hash_distribution_fqids_shift; /**< selects the FQID bits that will be effected by the hash */
+ bool symmetric_hash; /**< TRUE to generate the same hash for frames with swapped source and
+ destination fields on all layers; If TRUE, driver will check that for
+ all layers, if SRC extraction is selected, DST extraction must also be
+ selected, and vice versa. */
+} ioc_fm_pcd_kg_key_extract_and_hash_params_t;
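The comments above impose two numeric constraints: 'hash_distribution_num_of_fqids' must be > 1 and a power of 2 (with the base FQID aligned to it), and 'hash_shift' selects 24 bits out of the 64-bit hash result. A minimal sketch of both, using hypothetical helper names:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical check of the documented constraints:
 * num_of_fqids > 1, power of two, and base_fqid aligned to it. */
static int hash_distribution_valid(uint32_t num_of_fqids, uint32_t base_fqid)
{
	if (num_of_fqids <= 1)
		return 0;
	if (num_of_fqids & (num_of_fqids - 1))	/* power-of-2 test */
		return 0;
	return (base_fqid % num_of_fqids) == 0;	/* alignment test */
}

/* The 24 hash bits actually used, per the hash_shift comment:
 * 0 keeps the 24 LSBs, otherwise shift right first. */
static uint32_t hash_bits_used(uint64_t hash64, uint8_t hash_shift)
{
	return (uint32_t)(hash64 >> hash_shift) & 0xFFFFFFu;
}
```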
+
+/**
+ @Description A structure of parameters for defining a single
+ Qid mask (extracted OR).
+*/
+typedef struct ioc_fm_pcd_kg_extracted_or_params_t {
+ ioc_fm_pcd_extract_type type; /**< Extraction type select */
+ union {
+ struct { /**< used when type = e_IOC_FM_PCD_KG_EXTRACT_BY_HDR */
+ ioc_net_header_type hdr;
+ ioc_fm_pcd_hdr_index hdr_index; /**< Relevant only for MPLS, VLAN and tunneled
+ IP. Otherwise should be cleared.*/
+ bool ignore_protocol_validation;
+
+ } extract_by_hdr;
+ ioc_fm_pcd_extract_from src; /**< used when type = e_IOC_FM_PCD_KG_EXTRACT_NON_HDR */
+ } extract_params;
+ uint8_t extraction_offset; /**< Offset for extraction */
+ ioc_fm_pcd_kg_extract_dflt_select dflt_value; /**< Select register from which extraction is taken if
+ field not found */
+ uint8_t mask; /**< Mask LSB byte of extraction (specified bits are ignored) */
+ uint8_t bit_offset_in_fqid;
+ /**< 0-31, Selects which bits of the 24 FQID bits to affect using
+ the extracted byte; Assume the byte is placed as the 8 MSBs in
+ a 32 bit word where the lower bits
+ are the FQID; i.e. if bitOffsetInFqid=1 then its LSB
+ will affect the FQID MSB, if bitOffsetInFqid=24 then the
+ extracted byte will affect the 8 LSBs of the FQID,
+ if bitOffsetInFqid=31 then the byte's MSB will affect
+ the FQID's LSB; 0 means no effect on FQID;
+ Note that one, and only one, of
+ bitOffsetInFqid or bitOffsetInPlcrProfile must be set (i.e.,
+ the extracted byte must affect either FQID or Policer profile).*/
+ uint8_t bit_offset_in_plcr_profile;
+ /**< 0-15, Selects which bits of the 8 policer profile id bits to
+ affect using the extracted byte; Assume the byte is placed
+ as the 8 MSBs in a 16 bit word where the lower bits
+ are the policer profile id; i.e. if bitOffsetInPlcrProfile=1
+ then its LSB will affect the profile MSB, if bitOffsetInPlcrProfile=8
+ then the extracted byte will affect the whole policer profile id,
+ if bitOffsetInPlcrProfile=15 then the byte's MSB will affect
+ the Policer Profile id's LSB;
+ 0 means no effect on policer profile; Note that one, and only one, of
+ bitOffsetInFqid or bitOffsetInPlcrProfile must be set (i.e.,
+ the extracted byte must affect either FQID or Policer profile).*/
+} ioc_fm_pcd_kg_extracted_or_params_t;
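The 'bit_offset_in_fqid' comment above can be modeled arithmetically: place the extracted byte as the 8 MSBs of a 32-bit word, shift it right by the offset, and OR the result into the 24-bit FQID. A minimal sketch under that reading (the helper name 'apply_extracted_or' is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the bit_offset_in_fqid rule above: offset 1
 * makes the byte's LSB land on the FQID MSB, offset 24 makes the
 * whole byte land on the FQID's 8 LSBs, offset 31 makes the byte's
 * MSB land on the FQID LSB; offset 0 means no effect. */
static uint32_t apply_extracted_or(uint32_t fqid, uint8_t byte,
				   uint8_t bit_offset_in_fqid)
{
	uint32_t word = (uint32_t)byte << 24;	/* byte as the 8 MSBs */

	if (bit_offset_in_fqid == 0)		/* 0: no effect on FQID */
		return fqid;
	return (fqid | (word >> bit_offset_in_fqid)) & 0xFFFFFFu;
}
```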
+
+/**
+ @Description A structure for configuring scheme counter
+*/
+typedef struct ioc_fm_pcd_kg_scheme_counter_t {
+ bool update; /**< FALSE to keep the current counter state
+ and continue from that point, TRUE to update/reset
+ the counter when the scheme is written. */
+ uint32_t value; /**< If update=TRUE, this value will be written into the
+ counter; clear this field to reset the counter. */
+} ioc_fm_pcd_kg_scheme_counter_t;
+
+
+/**
+ @Description A structure for retrieving FMKG_SE_SPC
+*/
+typedef struct ioc_fm_pcd_kg_scheme_spc_t {
+ uint32_t val; /**< return value */
+ void *id; /**< scheme handle */
+} ioc_fm_pcd_kg_scheme_spc_t;
+
+/**
+ @Description A structure for defining policer profile parameters as required by keygen
+ (when policer is the next engine after this scheme).
+ (Must match struct t_FmPcdKgPlcrProfile defined in fm_pcd_ext.h)
+*/
+typedef struct ioc_fm_pcd_kg_plcr_profile_t {
+ bool shared_profile; /**< TRUE if this profile is shared between ports
+ (i.e. managed by the master partition); may not be TRUE
+ if the profile is after Coarse Classification*/
+ bool direct; /**< If TRUE, direct_relative_profile_id only selects the profile
+ id, if FALSE fqid_offset_relative_profile_id_base is used
+ together with fqid_offset_shift and num_of_profiles
+ parameters, to define a range of profiles from
+ which the KeyGen result will determine the
+ destination policer profile. */
+ union {
+ uint16_t direct_relative_profile_id; /**< Used if 'direct' is TRUE, to select policer profile.
+ This parameter should indicate the policer profile offset within the port's
+ policer profiles or SHARED window. */
+ struct {
+ uint8_t fqid_offset_shift; /**< Shift of KG results without the qid base */
+ uint8_t fqid_offset_relative_profile_id_base;
+ /**< OR of KG results without the qid base
+ This parameter should indicate the policer profile
+ offset within the port's policer profiles window
+ or SHARED window depends on shared_profile */
+ uint8_t num_of_profiles; /**< Range of profiles starting at base */
+ } indirect_profile; /**< Indirect profile parameters */
+ } profile_select; /**< Direct/indirect profile selection and parameters */
+} ioc_fm_pcd_kg_plcr_profile_t;
+
+#if DPAA_VERSION >= 11
+/**
+ @Description Parameters for configuring a storage profile for a KeyGen scheme.
+*/
+typedef struct ioc_fm_pcd_kg_storage_profile_t {
+ bool direct;
+ /**< If TRUE, directRelativeProfileId only selects the
+ profile id;
+ If FALSE, fqidOffsetRelativeProfileIdBase is used
+ together with fqidOffsetShift and numOfProfiles
+ parameters to define a range of profiles from which
+ the KeyGen result will determine the destination
+ storage profile. */
+ union {
+ uint16_t direct_relative_profileId;
+ /**< Used when 'direct' is TRUE, to select a storage profile;
+ should indicate the storage profile offset within the
+ port's storage profiles window. */
+ struct {
+ uint8_t fqid_offset_shift;
+ /**< Shift of KeyGen results without the FQID base */
+ uint8_t fqid_offset_relative_profile_id_base;
+ /**< OR of KeyGen results without the FQID base;
+ should indicate the policer profile offset within the
+ port's storage profiles window. */
+ uint8_t num_of_profiles;
+ /**< Range of profiles starting at base. */
+ } indirect_profile;
+ /**< Indirect profile parameters. */
+ } profile_select;
+ /**< Direct/indirect profile selection and parameters. */
+} ioc_fm_pcd_kg_storage_profile_t;
+#endif /* DPAA_VERSION >= 11 */
+
+/**
+ @Description Parameters for defining CC as the next engine after KeyGen
+ (Must match struct t_FmPcdKgCc defined in fm_pcd_ext.h)
+*/
+typedef struct ioc_fm_pcd_kg_cc_t {
+ void *tree_id; /**< CC Tree id */
+ uint8_t grp_id; /**< CC group id within the CC tree */
+ bool plcr_next; /**< TRUE if after CC, in case of data frame,
+ policing is required. */
+ bool bypass_plcr_profile_generation;
+ /**< TRUE to bypass KeyGen policer profile generation;
+ selected profile is the one set at port initialization. */
+ ioc_fm_pcd_kg_plcr_profile_t plcr_profile; /**< Valid only if plcr_next = TRUE and
+ bypass_plcr_profile_generation = FALSE */
+} ioc_fm_pcd_kg_cc_t;
+
+/**
+ @Description Parameters for defining and initializing a KeyGen scheme
+ (Must match struct t_FmPcdKgSchemeParams defined in fm_pcd_ext.h)
+*/
+struct fm_pcd_kg_scheme_params_t {
+ bool modify; /**< TRUE to change an existing scheme */
+ union {
+ uint8_t relative_scheme_id;
+ /**< if modify=FALSE: partition-relative scheme id */
+ void *scheme_id;
+ /**< if modify=TRUE: the id of an existing scheme */
+ } scm_id;
+ bool always_direct; /**< This scheme is reached only directly,
+ i.e. no need for match vector;
+ KeyGen will ignore it when matching */
+ struct {
+ /**< Relevant only if always_direct = FALSE */
+ void *net_env_id;
+ /**< The id of the Network Environment as returned
+ by FM_PCD_NetEnvCharacteristicsSet() */
+ uint8_t num_of_distinction_units;
+ /**< Number of NetEnv units listed in unit_ids array */
+ uint8_t unit_ids[IOC_FM_PCD_MAX_NUM_OF_DISTINCTION_UNITS];
+ /**< Indexes as passed to SetNetEnvCharacteristics array */
+ } net_env_params;
+ bool use_hash;
+ /**< use the KG Hash functionality */
+ ioc_fm_pcd_kg_key_extract_and_hash_params_t key_extract_and_hash_params;
+ /**< used only if use_hash = TRUE */
+ bool bypass_fqid_generation;
+ /**< Normally - FALSE, TRUE to avoid FQID update in the IC;
+ In such a case FQID after KG will be the default FQID
+ defined for the relevant port, or the FQID defined by CC
+ in cases where CC was the previous engine. */
+ uint32_t base_fqid;
+ /**< Base FQID; Relevant only if bypass_fqid_generation = FALSE;
+ If hash is used and an even distribution is expected
+ according to hash_distribution_num_of_fqids, base_fqid must be aligned to
+ hash_distribution_num_of_fqids. */
+ uint8_t num_of_used_extracted_ors;
+ /**< Number of FQID masks listed in extracted_ors array*/
+ ioc_fm_pcd_kg_extracted_or_params_t
+ extracted_ors[IOC_FM_PCD_KG_NUM_OF_GENERIC_REGS];
+ /**< IOC_FM_PCD_KG_NUM_OF_GENERIC_REGS
+ registers are shared between qid_masks
+ functionality and some of the extraction
+ actions; Normally only some will be used
+ for qid_mask. Driver will return error if
+ resource is full at initialization time. */
+#if DPAA_VERSION >= 11
+ bool override_storage_profile;
+ /**< TRUE if KeyGen overrides the previously decided storage profile */
+ ioc_fm_pcd_kg_storage_profile_t storage_profile;
+ /**< Used when override_storage_profile=TRUE */
+#endif /* DPAA_VERSION >= 11 */
+ ioc_fm_pcd_engine next_engine;
+ /**< may be BMI, PLCR or CC */
+ union {
+ /**< depends on nextEngine */
+ ioc_fm_pcd_done_action done_action;
+ /**< Used when next engine is BMI (done) */
+ ioc_fm_pcd_kg_plcr_profile_t plcr_profile;
+ /**< Used when next engine is PLCR */
+ ioc_fm_pcd_kg_cc_t cc;
+ /**< Used when next engine is CC */
+ } kg_next_engine_params;
+ ioc_fm_pcd_kg_scheme_counter_t scheme_counter;
+ /**< A structure of parameters for updating the scheme counter */
+};
+
+typedef struct ioc_fm_pcd_kg_scheme_params_t {
+ struct fm_pcd_kg_scheme_params_t param;
+ void *id;
+ /**< Returns the scheme Id to be used */
+} ioc_fm_pcd_kg_scheme_params_t;
+
+/**
+ @Collection
+*/
+#if DPAA_VERSION >= 11
+#define IOC_FM_PCD_CC_STATS_MAX_NUM_OF_FLR 10 /* Maximal supported number of frame length ranges */
+#define IOC_FM_PCD_CC_STATS_FLR_SIZE 2 /* Size in bytes of a frame length range limit */
+#endif /* DPAA_VERSION >= 11 */
+#define IOC_FM_PCD_CC_STATS_FLR_COUNT_SIZE 4 /* Size in bytes of a frame length range counter */
+/* @} */
+
+/**
+ @Description Parameters for defining CC as the next engine after a CC node.
+ (Must match struct t_FmPcdCcNextCcParams defined in fm_pcd_ext.h)
+*/
+typedef struct ioc_fm_pcd_cc_next_cc_params_t {
+ void *cc_node_id; /**< Id of the next CC node */
+} ioc_fm_pcd_cc_next_cc_params_t;
+
+#if DPAA_VERSION >= 11
+/**
+ @Description A structure for defining Frame Replicator as the next engine after a CC node.
+ (Must match struct t_FmPcdCcNextFrParams defined in fm_pcd_ext.h)
+*/
+typedef struct ioc_fm_pcd_cc_next_fr_params_t {
+ void *frm_replic_id; /**< The id of the next frame replicator group */
+} ioc_fm_pcd_cc_next_fr_params_t;
+#endif /* DPAA_VERSION >= 11 */
+
+/**
+ @Description A structure for defining PLCR params when PLCR is the
+ next engine after a CC node
+ (Must match struct t_FmPcdCcNextPlcrParams defined in fm_pcd_ext.h)
+*/
+typedef struct ioc_fm_pcd_cc_next_plcr_params_t {
+ bool override_params; /**< TRUE if CC overrides previously decided parameters*/
+ bool shared_profile; /**< Relevant only if override_params=TRUE:
+ TRUE if this profile is shared between ports */
+ uint16_t new_relative_profile_id; /**< Relevant only if override_params=TRUE
+ (otherwise the profile id is taken from KeyGen);
+ This parameter should indicate the policer
+ profile offset within the port's
+ policer profiles or SHARED window.*/
+ uint32_t new_fqid; /**< Relevant only if override_params=TRUE:
+ FQID for enqueuing the frame;
+ In earlier chips if policer next engine is KEYGEN,
+ this parameter can be 0, because the KEYGEN always decides
+ the enqueue FQID.*/
+#if DPAA_VERSION >= 11
+ uint8_t new_relative_storage_profile_id;
+ /**< Indicates the relative storage profile offset within
+ the port's storage profiles window;
+ Relevant only if the port was configured with VSP. */
+#endif /* DPAA_VERSION >= 11 */
+} ioc_fm_pcd_cc_next_plcr_params_t;
+
+/**
+ @Description A structure for defining enqueue params when BMI is the
+ next engine after a CC node
+ (Must match struct t_FmPcdCcNextEnqueueParams defined in fm_pcd_ext.h)
+*/
+typedef struct ioc_fm_pcd_cc_next_enqueue_params_t {
+ ioc_fm_pcd_done_action action; /**< Action - when next engine is BMI (done) */
+ bool override_fqid; /**< TRUE if CC overrides the previously decided FQID and VSPID,
+ relevant if action = e_IOC_FM_PCD_ENQ_FRAME */
+ uint32_t new_fqid; /**< Valid if override_fqid=TRUE, FQID for enqueuing the frame
+ (otherwise FQID is taken from KeyGen),
+ relevant if action = e_IOC_FM_PCD_ENQ_FRAME*/
+#if DPAA_VERSION >= 11
+ uint8_t new_relative_storage_profile_id;
+ /**< Valid if override_fqid=TRUE, Indicates the relative virtual
+ storage profile offset within the port's storage profiles
+ window; Relevant only if the port was configured with VSP. */
+#endif /* DPAA_VERSION >= 11 */
+
+} ioc_fm_pcd_cc_next_enqueue_params_t;
+
+/**
+ @Description A structure for defining KG params when KG is the next engine after a CC node
+ (Must match struct t_FmPcdCcNextKgParams defined in fm_pcd_ext.h)
+*/
+typedef struct ioc_fm_pcd_cc_next_kg_params_t {
+ bool override_fqid; /**< TRUE if CC overrides the previously decided FQID and VSPID;
+ Note - these parameters are irrelevant for earlier chips */
+ uint32_t new_fqid; /**< Valid if override_fqid=TRUE, FQID for enqueuing the frame
+ (otherwise FQID is taken from KeyGen);
+ Note - these parameters are irrelevant for earlier chips */
+#if DPAA_VERSION >= 11
+ uint8_t new_relative_storage_profile_id;
+ /**< Valid if override_fqid=TRUE, Indicates the relative virtual
+ storage profile offset within the port's storage profiles
+ window; Relevant only if the port was configured with VSP. */
+#endif /* DPAA_VERSION >= 11 */
+ void *p_direct_scheme; /**< Direct scheme id to go to. */
+} ioc_fm_pcd_cc_next_kg_params_t;
+
+/**
+ @Description Parameters for defining the next engine after a CC node.
+ (Must match struct ioc_fm_pcd_cc_next_engine_params_t defined in fm_pcd_ext.h)
+*/
+typedef struct ioc_fm_pcd_cc_next_engine_params_t {
+ ioc_fm_pcd_engine next_engine; /**< User has to initialize parameters
+ according to nextEngine definition */
+ union {
+ ioc_fm_pcd_cc_next_cc_params_t cc_params; /**< Parameters in case next engine is CC */
+ ioc_fm_pcd_cc_next_plcr_params_t plcr_params; /**< Parameters in case next engine is PLCR */
+ ioc_fm_pcd_cc_next_enqueue_params_t enqueue_params; /**< Parameters in case next engine is BMI */
+ ioc_fm_pcd_cc_next_kg_params_t kg_params; /**< Parameters in case next engine is KG */
+#if DPAA_VERSION >= 11
+ ioc_fm_pcd_cc_next_fr_params_t fr_params; /**< Parameters in case next engine is FR */
+#endif /* DPAA_VERSION >= 11 */
+ } params; /**< Union used for all the next-engine parameters options */
+ void *manip_id; /**< Handle to Manipulation object.
+ Relevant if next engine is of type result
+ (e_IOC_FM_PCD_PLCR, e_IOC_FM_PCD_KG, e_IOC_FM_PCD_DONE) */
+ bool statistics_en; /**< If TRUE, statistics counters are incremented
+ for each frame passing through this
+ Coarse Classification entry. */
+} ioc_fm_pcd_cc_next_engine_params_t;
+
+/**
+ @Description Parameters for defining a single CC key
+*/
+typedef struct ioc_fm_pcd_cc_key_params_t {
+ uint8_t *p_key; /**< pointer to the key of the size defined in key_size */
+ uint8_t *p_mask; /**< pointer to the mask per key, of the size defined
+ in key_size; p_key and p_mask (if defined) have to be
+ of the same size, as defined in key_size */
+ ioc_fm_pcd_cc_next_engine_params_t cc_next_engine_params;
+ /**< parameters for the next for the defined Key in p_key */
+
+} ioc_fm_pcd_cc_key_params_t;
+
+/**
+ @Description Parameters for defining CC keys parameters
+ The driver supports two methods for CC node allocation: dynamic and static.
+ Static mode was created in order to prevent runtime alloc/free
+ of FMan memory (MURAM), which may cause fragmentation; in this mode,
+ the driver automatically allocates the memory according to
+ 'max_num_of_keys' parameter. The driver calculates the maximal memory
+ size that may be used for this CC-Node taking into consideration
+ 'mask_support' and 'statistics_mode' parameters.
+ When 'action' = e_IOC_FM_PCD_ACTION_INDEXED_LOOKUP in the extraction
+ parameters of this node, 'max_num_of_keys' must be equal to 'num_of_keys'.
+ In dynamic mode, 'max_num_of_keys' must be zero. At initialization,
+ all required structures are allocated according to 'num_of_keys'
+ parameter. During runtime modification, these structures are
+ re-allocated according to the updated number of keys.
+
+ Please note that 'action' and 'ic_indx_mask' mentioned in the
+ specific parameter explanations are passed in the extraction
+ parameters of the node (fields of extract_cc_params.extract_non_hdr).
+*/
+typedef struct ioc_keys_params_t {
+ uint16_t max_num_of_keys;/**< Maximum number of keys that will (ever) be used in this CC-Node;
+ A value of zero may be used for dynamic memory allocation. */
+ bool mask_support; /**< This parameter is relevant only if a node is initialized with
+ action = e_IOC_FM_PCD_ACTION_EXACT_MATCH and max_num_of_keys > 0;
+ Should be TRUE to reserve table memory for key masks, even if
+ initial keys do not contain masks, or if the node was initialized
+ as 'empty' (without keys); this will allow user to add keys with
+ masks at runtime. */
+ ioc_fm_pcd_cc_stats_mode statistics_mode;/**< Determines the supported statistics mode for all node's keys.
+ To enable statistics gathering, statistics should be enabled per
+ every key, using 'statistics_en' in next engine parameters structure
+ of that key;
+ If 'max_num_of_keys' is set, all required structures will be
+ preallocated for all keys. */
+#if (DPAA_VERSION >= 11)
+ uint16_t frame_length_ranges[IOC_FM_PCD_CC_STATS_MAX_NUM_OF_FLR];
+ /**< Relevant only for 'RMON' statistics mode
+ (this feature is supported only on B4860 device);
+ Holds a list of programmable thresholds. For each received frame,
+ its length in bytes is examined against these range thresholds and
+ the appropriate counter is incremented by 1. For example, to belong
+ to range i, the following should hold:
+ range i-1 threshold < frame length <= range i threshold
+ Each range threshold must be larger than its preceding range
+ threshold. The last range threshold must be 0xFFFF. */
+#endif /* (DPAA_VERSION >= 11) */
+ uint16_t num_of_keys; /**< Number of initial keys;
+ Note that in case of 'action' = e_IOC_FM_PCD_ACTION_INDEXED_LOOKUP,
+ this field should be power-of-2 of the number of bits that are
+ set in 'ic_indx_mask'. */
+ uint8_t key_size; /**< Size of key - for extraction of type FULL_FIELD, 'key_size' has
+ to be the standard size of the selected key; For other extraction
+ types, 'key_size' has to be as size of extraction; When 'action' =
+ e_IOC_FM_PCD_ACTION_INDEXED_LOOKUP, 'key_size' must be 2. */
+ ioc_fm_pcd_cc_key_params_t key_params[IOC_FM_PCD_MAX_NUM_OF_KEYS];
+ /**< An array with 'num_of_keys' entries, each entry specifies the
+ corresponding key parameters;
+ When 'action' = e_IOC_FM_PCD_ACTION_EXACT_MATCH, this value must not
+ exceed 255 (IOC_FM_PCD_MAX_NUM_OF_KEYS-1) as the last entry is saved
+ for the 'miss' entry. */
+ ioc_fm_pcd_cc_next_engine_params_t cc_next_engine_params_for_miss;
+ /**< Parameters for defining the next engine when a key is not matched;
+ Not relevant if action = e_IOC_FM_PCD_ACTION_INDEXED_LOOKUP. */
+} ioc_keys_params_t;
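The 'frame_length_ranges' comment above defines the RMON bucketing rule: a frame of length L belongs to range i when threshold[i-1] < L <= threshold[i], with the last threshold fixed at 0xFFFF. A minimal sketch of that selection, with a hypothetical helper 'flr_bucket' and an illustrative threshold list:

```c
#include <assert.h>
#include <stdint.h>

#define NUM_FLR 10	/* mirrors IOC_FM_PCD_CC_STATS_MAX_NUM_OF_FLR */

/* Illustrative thresholds only; the last one must be 0xFFFF. */
static const uint16_t example_thr[NUM_FLR] = {
	64, 128, 256, 512, 1024, 1518, 2048, 4096, 9000, 0xFFFF
};

/* Hypothetical sketch of the bucketing rule above: returns the index
 * of the first threshold that the frame length does not exceed. */
static int flr_bucket(const uint16_t thresholds[NUM_FLR], uint16_t len)
{
	int i;

	for (i = 0; i < NUM_FLR; i++)
		if (len <= thresholds[i])
			return i;
	return -1;	/* unreachable when the last threshold is 0xFFFF */
}
```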
+
+/**
+ @Description Parameters for defining a CC node
+*/
+struct fm_pcd_cc_node_params_t {
+ ioc_fm_pcd_extract_entry_t extract_cc_params;
+ /**< Extraction parameters */
+ ioc_keys_params_t keys_params;
+ /**< Keys definition matching the selected extraction */
+};
+
+typedef struct ioc_fm_pcd_cc_node_params_t {
+ struct fm_pcd_cc_node_params_t param;
+ void *id;
+ /**< Output parameter; returns the CC node Id to be used */
+} ioc_fm_pcd_cc_node_params_t;
+
+/**
+ @Description Parameters for defining a hash table
+ (Must match struct ioc_fm_pcd_hash_table_params_t defined in fm_pcd_ext.h)
+*/
+struct fm_pcd_hash_table_params_t {
+ uint16_t max_num_of_keys;
+ /**< Maximum Number Of Keys that will (ever) be used in this Hash-table */
+ ioc_fm_pcd_cc_stats_mode statistics_mode;
+ /**< If not e_IOC_FM_PCD_CC_STATS_MODE_NONE, the required structures for the
+ requested statistics mode will be allocated according to max_num_of_keys. */
+ uint8_t kg_hash_shift;
+ /**< KG-Hash-shift as it was configured in the KG-scheme
+ that leads to this hash-table. */
+ uint16_t hash_res_mask;
+ /**< Mask that will be used on the hash-result;
+ The number-of-sets for this hash will be calculated
+ as (2^(number of bits set in 'hash_res_mask'));
+ The 4 lower bits must be cleared. */
+ uint8_t hash_shift;
+ /**< Byte offset from the beginning of the KeyGen hash result to the
+ 2-bytes to be used as hash index. */
+ uint8_t match_key_size;
+ /**< Size of the exact match keys held by the hash buckets */
+
+ ioc_fm_pcd_cc_next_engine_params_t cc_next_engine_params_for_miss;
+ /**< Parameters for defining the next engine when a key is not matched */
+};
+
+typedef struct ioc_fm_pcd_hash_table_params_t {
+ struct fm_pcd_hash_table_params_t param;
+ void *id;
+} ioc_fm_pcd_hash_table_params_t;
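The 'hash_res_mask' comment above says the number-of-sets is 2^(number of bits set in the mask) and that the 4 lower bits must be cleared. A minimal sketch of that derivation (the helper name 'hash_res_mask_sets' is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: returns the number-of-sets implied by
 * hash_res_mask, or -1 when the 4 lower bits are not cleared,
 * per the hash_res_mask comment above. */
static int hash_res_mask_sets(uint16_t mask)
{
	unsigned int bits = 0;
	uint16_t m;

	if (mask & 0x000F)	/* the 4 lower bits must be cleared */
		return -1;
	for (m = mask; m; m >>= 1)
		bits += m & 1;	/* popcount of the mask */
	return 1 << bits;	/* number-of-sets = 2^popcount */
}
```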
+
+/**
+ @Description A structure with the arguments for the FM_PCD_HashTableAddKey ioctl() call
+*/
+typedef struct ioc_fm_pcd_hash_table_add_key_params_t {
+ void *p_hash_tbl;
+ uint8_t key_size;
+ ioc_fm_pcd_cc_key_params_t key_params;
+} ioc_fm_pcd_hash_table_add_key_params_t;
+
+/**
+ @Description Parameters for defining a CC tree group.
+
+This structure defines a CC group in terms of NetEnv units
+and the action to be taken in each case. The unit_ids list must
+be given in order from low to high indices.
+
+ioc_fm_pcd_cc_next_engine_params_t is a list of 2^num_of_distinction_units
+structures where each defines the next action to be taken for
+each unit combination. For example:
+num_of_distinction_units = 2
+unit_ids = {1,3}
+next_engine_per_entries_in_grp[0] = ioc_fm_pcd_cc_next_engine_params_t for the case that
+ unit 1 - not found; unit 3 - not found;
+next_engine_per_entries_in_grp[1] = ioc_fm_pcd_cc_next_engine_params_t for the case that
+ unit 1 - not found; unit 3 - found;
+next_engine_per_entries_in_grp[2] = ioc_fm_pcd_cc_next_engine_params_t for the case that
+ unit 1 - found; unit 3 - not found;
+next_engine_per_entries_in_grp[3] = ioc_fm_pcd_cc_next_engine_params_t for the case that
+ unit 1 - found; unit 3 - found;
+*/
+typedef struct ioc_fm_pcd_cc_grp_params_t {
+ uint8_t num_of_distinction_units; /**< Up to 4 */
+ uint8_t unit_ids[IOC_FM_PCD_MAX_NUM_OF_CC_UNITS];
+ /**< Indexes of the units as defined in FM_PCD_NetEnvCharacteristicsSet() */
+ ioc_fm_pcd_cc_next_engine_params_t next_engine_per_entries_in_grp[IOC_FM_PCD_MAX_NUM_OF_CC_ENTRIES_IN_GRP];
+ /**< Maximum entries per group is 16 */
+} ioc_fm_pcd_cc_grp_params_t;
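The indexing into `next_engine_per_entries_in_grp[]` shown in the example above (with `unit_ids = {1,3}`, unit 1 selects the high bit and unit 3 the low bit) can be computed as below. This helper is ours, for illustration, not part of the API:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative only: compute the entry index into
 * next_engine_per_entries_in_grp[] from per-unit match results, given in
 * the same low-to-high unit_ids order as in the struct. The first listed
 * unit contributes the most significant bit of the index. */
static unsigned int cc_grp_entry_index(const bool *found,
				       unsigned int num_units)
{
	unsigned int i, idx = 0;

	for (i = 0; i < num_units; i++)
		idx = (idx << 1) | (found[i] ? 1 : 0);
	return idx;
}
```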
+
+/**
+ @Description Parameters for defining the CC tree groups
+ (Must match struct ioc_fm_pcd_cc_tree_params_t defined in fm_pcd_ext.h)
+*/
+typedef struct ioc_fm_pcd_cc_tree_params_t {
+ void *net_env_id; /**< Id of the Network Environment as returned
+ by FM_PCD_NetEnvCharacteristicsSet() */
+ uint8_t num_of_groups; /**< Number of CC groups within the CC tree */
+ ioc_fm_pcd_cc_grp_params_t fm_pcd_cc_group_params[IOC_FM_PCD_MAX_NUM_OF_CC_GROUPS];
+ /**< Parameters for each group. */
+ void *id; /**< Output parameter; Returns the tree Id to be used */
+} ioc_fm_pcd_cc_tree_params_t;
+
+/**
+ @Description Parameters for defining policer byte rate
+*/
+typedef struct ioc_fm_pcd_plcr_byte_rate_mode_param_t {
+ ioc_fm_pcd_plcr_frame_length_select frame_length_selection; /**< Frame length selection */
+ ioc_fm_pcd_plcr_roll_back_frame_select roll_back_frame_selection; /**< Relevant only for the e_IOC_FM_PCD_PLCR_L2_FRM_LEN and
+ e_IOC_FM_PCD_PLCR_FULL_FRM_LEN options */
+} ioc_fm_pcd_plcr_byte_rate_mode_param_t;
+
+/**
+ @Description Parameters for defining the policer profile (based on
+ RFC-2698 or RFC-4115 attributes).
+*/
+typedef struct ioc_fm_pcd_plcr_non_passthrough_alg_param_t {
+ ioc_fm_pcd_plcr_rate_mode rate_mode; /**< Byte / Packet */
+ ioc_fm_pcd_plcr_byte_rate_mode_param_t byte_mode_param; /**< Valid for Byte mode; NULL for Packet mode */
+ uint32_t committed_info_rate; /**< KBits/Sec or Packets/Sec */
+ uint32_t committed_burst_size; /**< KBits or Packets */
+ uint32_t peak_or_excess_info_rate; /**< KBits/Sec or Packets/Sec */
+ uint32_t peak_or_excess_burst_size; /**< KBits or Packets */
+} ioc_fm_pcd_plcr_non_passthrough_alg_param_t;
+
+/**
+ @Description Parameters for defining the next engine after policer
+*/
+typedef union ioc_fm_pcd_plcr_next_engine_params_u {
+ ioc_fm_pcd_done_action action; /**< Action - when next engine is BMI (done) */
+ void *p_profile; /**< Policer profile handle - used when next engine
+ is PLCR, must be a SHARED profile */
+ void *p_direct_scheme; /**< Direct scheme select - when next engine is Keygen */
+} ioc_fm_pcd_plcr_next_engine_params_u;
+
+typedef struct ioc_fm_pcd_port_params_t {
+ ioc_fm_port_type port_type; /**< Type of port for this profile */
+ uint8_t port_id; /**< FM-Port id of port for this profile */
+} ioc_fm_pcd_port_params_t;
+
+/**
+ @Description Parameters for defining the policer profile entry
+ (Must match struct ioc_fm_pcd_plcr_profile_params_t defined in fm_pcd_ext.h)
+*/
+struct fm_pcd_plcr_profile_params_t {
+ bool modify;
+ /**< TRUE to change an existing profile */
+ union {
+ struct {
+ ioc_fm_pcd_profile_type_selection profile_type;
+ /**< Type of policer profile */
+ ioc_fm_pcd_port_params_t *p_fm_port;
+ /**< Relevant for per-port profiles only */
+ uint16_t relative_profile_id;
+ /**< Profile id - relative to shared group or to port */
+ } new_params;
+ /**< Use it when modify = FALSE */
+ void *p_profile;
+ /**< A handle to a profile - use it when modify=TRUE */
+ } profile_select;
+ ioc_fm_pcd_plcr_algorithm_selection alg_selection;
+ /**< Profile Algorithm PASS_THROUGH, RFC_2698, RFC_4115 */
+ ioc_fm_pcd_plcr_color_mode color_mode;
+ /**< COLOR_BLIND, COLOR_AWARE */
+
+ union {
+ ioc_fm_pcd_plcr_color dflt_color;
+ /**< For Color-Blind Pass-Through mode;
+ the policer will re-color
+ any incoming packet with the default value. */
+ ioc_fm_pcd_plcr_color override;
+ /**< For Color-Aware modes; the profile response to a
+ pre-color value of 2'b11. */
+ } color;
+
+ ioc_fm_pcd_plcr_non_passthrough_alg_param_t
+ non_passthrough_alg_param;
+ /**< RFC2698 or RFC4115 parameters */
+
+ ioc_fm_pcd_engine next_engine_on_green;
+ /**< Next engine for green-colored frames */
+ ioc_fm_pcd_plcr_next_engine_params_u params_on_green;
+ /**< Next engine parameters for green-colored frames */
+
+ ioc_fm_pcd_engine next_engine_on_yellow;
+ /**< Next engine for yellow-colored frames */
+ ioc_fm_pcd_plcr_next_engine_params_u params_on_yellow;
+ /**< Next engine parameters for yellow-colored frames */
+
+ ioc_fm_pcd_engine next_engine_on_red;
+ /**< Next engine for red-colored frames */
+ ioc_fm_pcd_plcr_next_engine_params_u params_on_red;
+ /**< Next engine parameters for red-colored frames */
+
+ bool trap_profile_on_flow_A; /**< Obsolete - do not use */
+ bool trap_profile_on_flow_B; /**< Obsolete - do not use */
+ bool trap_profile_on_flow_C; /**< Obsolete - do not use */
+};
+
+typedef struct ioc_fm_pcd_plcr_profile_params_t {
+ struct fm_pcd_plcr_profile_params_t param;
+ void *id;
+ /**< output parameter; Returns the profile Id to be used */
+} ioc_fm_pcd_plcr_profile_params_t;
+
+/**
+ @Description A structure for modifying CC tree next engine
+*/
+typedef struct ioc_fm_pcd_cc_tree_modify_next_engine_params_t {
+ void *id; /**< CC tree Id to be used */
+ uint8_t grp_indx; /**< A Group index in the tree */
+ uint8_t indx; /**< Entry index in the group defined by grp_index */
+ ioc_fm_pcd_cc_next_engine_params_t cc_next_engine_params;
+ /**< Next engine parameters for the entry defined by grp_indx and indx */
+} ioc_fm_pcd_cc_tree_modify_next_engine_params_t;
+
+/**
+ @Description A structure for modifying CC node next engine
+*/
+typedef struct ioc_fm_pcd_cc_node_modify_next_engine_params_t {
+ void *id; /**< CC node Id to be used */
+ uint16_t key_indx; /**< Key index for Next Engine Params modifications;
+ NOTE: This parameter is IGNORED for miss-key! */
+ uint8_t key_size; /**< Key size of added key */
+ ioc_fm_pcd_cc_next_engine_params_t cc_next_engine_params;
+ /**< Next engine parameters for the key selected by key_indx */
+} ioc_fm_pcd_cc_node_modify_next_engine_params_t;
+
+/**
+ @Description A structure for removing a CC node key
+*/
+typedef struct ioc_fm_pcd_cc_node_remove_key_params_t {
+ void *id; /**< CC node Id to be used */
+ uint16_t key_indx; /**< Key index for Next Engine Params modifications;
+ NOTE: This parameter is IGNORED for miss-key! */
+} ioc_fm_pcd_cc_node_remove_key_params_t;
+
+/**
+ @Description A structure for modifying CC node key and next engine
+*/
+typedef struct ioc_fm_pcd_cc_node_modify_key_and_next_engine_params_t {
+ void *id; /**< CC node Id to be used */
+ uint16_t key_indx; /**< Key index for Next Engine Params modifications;
+ NOTE: This parameter is IGNORED for miss-key! */
+ uint8_t key_size; /**< Key size of added key */
+ ioc_fm_pcd_cc_key_params_t key_params; /**< Key and next engine parameters
+ for the modified entry */
+} ioc_fm_pcd_cc_node_modify_key_and_next_engine_params_t;
+
+/**
+ @Description A structure for modifying CC node key
+*/
+typedef struct ioc_fm_pcd_cc_node_modify_key_params_t {
+ void *id; /**< CC node Id to be used */
+ uint16_t key_indx; /**< Key index for Next Engine Params modifications;
+ NOTE: This parameter is IGNORED for miss-key! */
+ uint8_t key_size; /**< Key size of added key */
+ uint8_t *p_key; /**< Pointer to the key of the size defined in key_size */
+ uint8_t *p_mask; /**< Pointer to the mask for the key; p_key and p_mask
+ (if defined) must both be of the size defined in
+ key_size */
+} ioc_fm_pcd_cc_node_modify_key_params_t;
+
+/**
+ @Description A structure with the arguments for the FM_PCD_HashTableRemoveKey ioctl() call
+*/
+typedef struct ioc_fm_pcd_hash_table_remove_key_params_t {
+ void *p_hash_tbl; /**< The id of the hash table */
+ uint8_t key_size; /**< The size of the key to remove */
+ uint8_t *p_key; /**< Pointer to the key to remove */
+} ioc_fm_pcd_hash_table_remove_key_params_t;
+
+/**
+ @Description Parameters for selecting a location for requested manipulation
+*/
+typedef struct ioc_fm_manip_hdr_info_t {
+ ioc_net_header_type hdr; /**< Header selection */
+ ioc_fm_pcd_hdr_index hdr_index; /**< Relevant only for MPLS, VLAN and tunneled IP. Otherwise should be cleared. */
+ bool by_field; /**< TRUE if the location of manipulation is according to some field in the specific header*/
+ ioc_fm_pcd_fields_u full_field; /**< Relevant only when by_field = TRUE: Extract field */
+} ioc_fm_manip_hdr_info_t;
+
+/**
+ @Description Parameters for defining header removal by header type
+*/
+typedef struct ioc_fm_pcd_manip_hdr_rmv_by_hdr_params_t {
+ ioc_fm_pcd_manip_hdr_rmv_by_hdr_type type; /**< Selection of header removal location */
+ union {
+#if ((DPAA_VERSION == 10) && defined(FM_CAPWAP_SUPPORT))
+ struct {
+ bool include;/**< If FALSE, remove until the specified header (not including the header);
+ If TRUE, remove also the specified header. */
+ ioc_fm_manip_hdr_info_t hdr_info;
+ } from_start_by_hdr; /**< Relevant when type = e_IOC_FM_PCD_MANIP_RMV_BY_HDR_FROM_START */
+#endif /* FM_CAPWAP_SUPPORT */
+#if (DPAA_VERSION >= 11)
+ ioc_fm_manip_hdr_info_t hdr_info; /**< Relevant when type = e_FM_PCD_MANIP_RMV_BY_HDR_FROM_START */
+#endif /* (DPAA_VERSION >= 11) */
+ ioc_fm_pcd_manip_hdr_rmv_specific_l2 specific_l2;/**< Relevant when type = e_IOC_FM_PCD_MANIP_BY_HDR_SPECIFIC_L2;
+ Defines which L2 headers to remove. */
+ } u;
+} ioc_fm_pcd_manip_hdr_rmv_by_hdr_params_t;
+
+/**
+ @Description Parameters for configuring IP fragmentation manipulation
+*/
+typedef struct ioc_fm_pcd_manip_frag_ip_params_t {
+ uint16_t size_for_fragmentation; /**< If length of the frame is greater than this value,
+ IP fragmentation will be executed.*/
+#if DPAA_VERSION == 10
+ uint8_t scratch_bpid; /**< Absolute buffer pool id according to BM configuration.*/
+#endif /* DPAA_VERSION == 10 */
+ bool sg_bpid_en; /**< Enable a dedicated buffer pool id for the Scatter/Gather buffer allocation;
+ If disabled, the Scatter/Gather buffer will be allocated from the same pool as the
+ received frame's buffer. */
+ uint8_t sg_bpid; /**< Scatter/Gather buffer pool id;
+ This parameter is relevant when 'sg_bpid_en=TRUE';
+ Same LIODN number is used for these buffers as for the received frames buffers, so buffers
+ of this pool need to be allocated in the same memory area as the received buffers.
+ If the received buffers arrive from different sources, the Scatter/Gather BP id should be
+ common to all these sources. */
+ ioc_fm_pcd_manip_dont_frag_action dont_frag_action; /**< Don't-Fragment (DF) action - if an IP packet is larger
+ than MTU and its DF bit is set, then this field will
+ determine the action to be taken.*/
+} ioc_fm_pcd_manip_frag_ip_params_t;
+
+/**
+ @Description Parameters for configuring IP reassembly manipulation.
+
+ This is a common structure for both IPv4 and IPv6 reassembly
+ manipulation. For reassembly of both IPv4 and IPv6, make sure to
+ set the 'hdr' field in ioc_fm_pcd_manip_reassem_params_t to IOC_HEADER_TYPE_IPv6.
+*/
+typedef struct ioc_fm_pcd_manip_reassem_ip_params_t {
+ uint8_t relative_scheme_id[2]; /**< Partition relative scheme id:
+ relativeSchemeId[0] - Relative scheme ID for IPV4 Reassembly manipulation;
+ relativeSchemeId[1] - Relative scheme ID for IPV6 Reassembly manipulation;
+ NOTE: The following comment is relevant only for FMAN v2 devices:
+ Relative scheme ID for IPv4/IPv6 Reassembly manipulation must be smaller than
+ the user scheme IDs, to ensure that the reassembly schemes are matched first.
+ The remaining schemes, if defined, should have higher relative scheme ID. */
+#if DPAA_VERSION >= 11
+ uint32_t non_consistent_sp_fqid; /**< If other fragments of the frame correspond to a different storage
+ profile than the opening fragment (Non-Consistent-SP state),
+ then one of two possible scenarios occurs:
+ if 'nonConsistentSpFqid != 0', the reassembled frame will be enqueued to
+ this fqid, otherwise a 'Non Consistent SP' bit will be set in the FD[status].*/
+#else
+ uint8_t sg_bpid; /**< Buffer pool id for the S/G frame created by the reassembly process */
+#endif /* DPAA_VERSION >= 11 */
+ uint8_t data_mem_id; /**< Memory partition ID for the IPR's external tables structure */
+ uint16_t data_liodn_offset; /**< LIODN offset for access the IPR's external tables structure. */
+ uint16_t min_frag_size[2]; /**< Minimum fragment size:
+ minFragSize[0] - for ipv4, minFragSize[1] - for ipv6 */
+ ioc_fm_pcd_manip_reassem_ways_number num_of_frames_per_hash_entry[2];
+ /**< Number of frames per hash entry needed for reassembly process:
+ numOfFramesPerHashEntry[0] - for ipv4 (max value is e_IOC_FM_PCD_MANIP_EIGHT_WAYS_HASH);
+ numOfFramesPerHashEntry[1] - for ipv6 (max value is e_IOC_FM_PCD_MANIP_SIX_WAYS_HASH). */
+ uint16_t max_num_frames_in_process;/**< Number of frames which can be processed by Reassembly at the same time;
+ Must be power of 2;
+ In the case numOfFramesPerHashEntry == e_IOC_FM_PCD_MANIP_FOUR_WAYS_HASH,
+ maxNumFramesInProcess has to be in the range of 4 - 512;
+ In the case numOfFramesPerHashEntry == e_IOC_FM_PCD_MANIP_EIGHT_WAYS_HASH,
+ maxNumFramesInProcess has to be in the range of 8 - 2048. */
+ ioc_fm_pcd_manip_reassem_time_out_mode time_out_mode; /**< Expiration delay initialized by Reassembly process */
+ uint32_t fqid_for_time_out_frames;/**< FQID in which time out frames will enqueue during Time Out Process */
+ uint32_t timeout_threshold_for_reassm_process;
+ /**< The time interval in microseconds after which an open frame
+ (at least one fragment processed, but not all) is considered too old */
+} ioc_fm_pcd_manip_reassem_ip_params_t;
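The `max_num_frames_in_process` constraints quoted above (power of 2; 4-512 for four-ways hash, 8-2048 for eight-ways hash) can be expressed as a small validation sketch. The numeric ways values here are assumptions for illustration, not the actual enum encoding:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of the range rules documented above; '4' and '8'
 * stand in for the four-ways/eight-ways hash settings and are NOT the
 * real enum values. */
static bool is_pow2(uint16_t v)
{
	return v != 0 && (v & (v - 1)) == 0;
}

static bool reassem_frames_in_process_ok(unsigned int ways,
					 uint16_t max_frames)
{
	if (!is_pow2(max_frames)) /* must be a power of 2 */
		return false;
	if (ways == 4)
		return max_frames >= 4 && max_frames <= 512;
	if (ways == 8)
		return max_frames >= 8 && max_frames <= 2048;
	return false;
}
```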
+
+/**
+ @Description Parameters for defining IPSEC manipulation
+*/
+typedef struct ioc_fm_pcd_manip_special_offload_ipsec_params_t {
+ bool decryption; /**< TRUE if being used in decryption direction;
+ FALSE if being used in encryption direction. */
+ bool ecn_copy; /**< TRUE to copy the ECN bits from inner/outer to outer/inner
+ (direction depends on the 'decryption' field). */
+ bool dscp_copy; /**< TRUE to copy the DSCP bits from inner/outer to outer/inner
+ (direction depends on the 'decryption' field). */
+ bool variable_ip_hdr_len; /**< TRUE for supporting variable IP header length in decryption. */
+ bool variable_ip_version; /**< TRUE for supporting both IP version on the same SA in encryption */
+ uint8_t outer_ip_hdr_len; /**< If 'variable_ip_version == TRUE' then this field must be set to a non-zero value;
+ it specifies the length of the outer IP header that was configured in the
+ corresponding SA. */
+ uint16_t arw_size; /**< If non-zero, an ARW check is performed for this SA;
+ the value must be a multiple of 16 */
+ void *arw_addr; /**< If arw_size is non-zero, this field must be set to a non-zero value;
+ MUST be allocated from the FMAN MURAM to which the post-SEC OP-port belongs,
+ and must be 4-byte aligned. Required MURAM size is '(NEXT_POWER_OF_2(arw_size+32))/8+4' bytes */
+} ioc_fm_pcd_manip_special_offload_ipsec_params_t;
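The MURAM size formula quoted in the `arw_addr` comment, `(NEXT_POWER_OF_2(arw_size+32))/8+4` bytes, can be evaluated as below. The helper names are ours, not FMlib's:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: round up to the next power of 2 (identity for values
 * that already are powers of 2). */
static uint32_t next_pow2(uint32_t v)
{
	uint32_t p = 1;

	while (p < v)
		p <<= 1;
	return p;
}

/* Required MURAM bytes for the anti-replay window, per the formula in the
 * arw_addr comment above. */
static uint32_t arw_muram_bytes(uint16_t arw_size)
{
	return next_pow2((uint32_t)arw_size + 32) / 8 + 4;
}
```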
+
+#if (DPAA_VERSION >= 11)
+/**
+ @Description Parameters for configuring CAPWAP fragmentation manipulation
+
+ Restrictions:
+ - Maximum number of fragments per frame is 16.
+ - Transmit confirmation is not supported.
+ - Fragmentation nodes must be set as the last PCD action (i.e. the
+ corresponding CC node key must have next engine set to e_FM_PCD_DONE).
+ - Only BMan buffers shall be used for frames to be fragmented.
+ - NOTE: The following comment is relevant only for FMAN v3 devices: IPF
+ does not support VSP. Therefore, on the same port where we have IPF we
+ cannot support VSP.
+*/
+typedef struct ioc_fm_pcd_manip_frag_capwap_params_t {
+ uint16_t size_for_fragmentation; /**< If length of the frame is greater than this value,
+ CAPWAP fragmentation will be executed.*/
+ bool sg_bpid_en; /**< Enable a dedicated buffer pool id for the Scatter/Gather buffer allocation;
+ If disabled, the Scatter/Gather buffer will be allocated from the same pool as the
+ received frame's buffer. */
+ uint8_t sg_bpid; /**< Scatter/Gather buffer pool id;
+ This parameter is relevant when 'sg_bpid_en=TRUE';
+ Same LIODN number is used for these buffers as for the received frames buffers, so buffers
+ of this pool need to be allocated in the same memory area as the received buffers.
+ If the received buffers arrive from different sources, the Scatter/Gather BP id should be
+ common to all these sources. */
+ bool compress_mode_en; /**< CAPWAP Header Options Compress Enable mode;
+ When this mode is enabled, only the first fragment includes the CAPWAP header options
+ field (if the user provides it in the input frame) and all other fragments exclude the CAPWAP
+ options field (the CAPWAP header is updated accordingly). */
+} ioc_fm_pcd_manip_frag_capwap_params_t;
+
+/**
+ @Description Parameters for configuring CAPWAP reassembly manipulation.
+
+ Restrictions:
+ - Application must define one scheme to catch the reassembled frames.
+ - Maximum number of fragments per frame is 16.
+
+*/
+typedef struct ioc_fm_pcd_manip_reassem_capwap_params_t {
+ uint8_t relative_scheme_id; /**< Partition relative scheme id;
+ NOTE: this id must be smaller than the user scheme IDs to ensure that the reassembly scheme is matched first;
+ The remaining schemes, if defined, should have higher relative scheme IDs. */
+ uint8_t data_mem_id; /**< Memory partition ID for the IPR's external tables structure */
+ uint16_t data_liodn_offset; /**< LIODN offset for access the IPR's external tables structure. */
+ uint16_t max_reassembled_frame_length;/**< The maximum CAPWAP reassembled frame length in bytes;
+ If max_reassembled_frame_length == 0, any successfully reassembled frame length is
+ considered valid;
+ if max_reassembled_frame_length > 0, a successfully reassembled frame whose length
+ exceeds this value is considered an error frame (the FD status[CRE] bit is set). */
+ ioc_fm_pcd_manip_reassem_ways_number num_of_frames_per_hash_entry;
+ /**< Number of frames per hash entry needed for reassembly process */
+ uint16_t max_num_frames_in_process; /**< Number of frames which can be processed by reassembly at the same time;
+ Must be power of 2;
+ In the case numOfFramesPerHashEntry == e_FM_PCD_MANIP_FOUR_WAYS_HASH,
+ maxNumFramesInProcess has to be in the range of 4 - 512;
+ In the case numOfFramesPerHashEntry == e_FM_PCD_MANIP_EIGHT_WAYS_HASH,
+ maxNumFramesInProcess has to be in the range of 8 - 2048. */
+ ioc_fm_pcd_manip_reassem_time_out_mode time_out_mode; /**< Expiration delay initialized by Reassembly process */
+ uint32_t fqid_for_time_out_frames; /**< FQID in which time out frames will enqueue during Time Out Process;
+ Recommended value for this field is 0; in this way timed-out frames will be discarded */
+ uint32_t timeout_threshold_for_reassm_process;
+ /**< The time interval in microseconds after which an open frame
+ (at least one fragment processed, but not all) is considered too old */
+} ioc_fm_pcd_manip_reassem_capwap_params_t;
+
+/**
+ @Description structure for defining CAPWAP manipulation
+*/
+typedef struct ioc_fm_pcd_manip_special_offload_capwap_params_t {
+ bool dtls; /**< TRUE if continue to SEC DTLS encryption */
+ ioc_fm_pcd_manip_hdr_qos_src qos_src; /**< TODO */
+} ioc_fm_pcd_manip_special_offload_capwap_params_t;
+
+#endif /* (DPAA_VERSION >= 11) */
+
+/**
+ @Description Parameters for defining special offload manipulation
+*/
+typedef struct ioc_fm_pcd_manip_special_offload_params_t {
+ ioc_fm_pcd_manip_special_offload_type type;
+ /**< Type of special offload manipulation */
+ union {
+ ioc_fm_pcd_manip_special_offload_ipsec_params_t ipsec;
+ /**< Parameters for IPSec; Relevant when type = e_IOC_FM_PCD_MANIP_SPECIAL_OFFLOAD_IPSEC */
+
+ ioc_fm_pcd_manip_special_offload_capwap_params_t capwap;
+ /**< Parameters for CAPWAP; Relevant when type = e_FM_PCD_MANIP_SPECIAL_OFFLOAD_CAPWAP */
+ } u;
+} ioc_fm_pcd_manip_special_offload_params_t;
+
+/**
+ @Description Parameters for defining generic removal manipulation
+*/
+typedef struct ioc_fm_pcd_manip_hdr_rmv_generic_params_t {
+ uint8_t offset;
+ /**< Offset from beginning of header to the start location of the removal */
+ uint8_t size; /**< Size of removed section */
+} ioc_fm_pcd_manip_hdr_rmv_generic_params_t;
+
+/**
+ @Description Parameters for defining insertion manipulation
+*/
+typedef struct ioc_fm_pcd_manip_hdr_insrt_t {
+ uint8_t size; /**< size of inserted section */
+ uint8_t *p_data; /**< data to be inserted */
+} ioc_fm_pcd_manip_hdr_insrt_t;
+
+/**
+ @Description Parameters for defining generic insertion manipulation
+*/
+typedef struct ioc_fm_pcd_manip_hdr_insrt_generic_params_t {
+ uint8_t offset; /**< Offset from beginning of header to the start
+ location of the insertion */
+ uint8_t size; /**< Size of inserted section */
+ bool replace; /**< TRUE to override (replace) existing data at
+ 'offset', FALSE to insert */
+ uint8_t *p_data; /**< Pointer to data to be inserted */
+} ioc_fm_pcd_manip_hdr_insrt_generic_params_t;
+
+/**
+ @Description Parameters for defining header manipulation VLAN DSCP To Vpri translation
+*/
+typedef struct ioc_fm_pcd_manip_hdr_field_update_vlan_dscp_to_vpri_t {
+ uint8_t dscp_to_vpri_table[IOC_FM_PCD_MANIP_DSCP_TO_VLAN_TRANS];
+ /**< A table of VPri values for each DSCP value;
+ The index is the D_SCP value (0-0x3F) and the
+ value is the corresponding VPRI (0-15). */
+ uint8_t vpri_def_val;
+ /**< 0-7, Relevant only if update_type =
+ e_IOC_FM_PCD_MANIP_HDR_FIELD_UPDATE_DSCP_TO_VLAN,
+ this field is the Q Tag default value if the IP header is not found. */
+} ioc_fm_pcd_manip_hdr_field_update_vlan_dscp_to_vpri_t;
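The DSCP-to-VPri table above is indexed by the 6-bit DSCP value and stores the corresponding VPri. A minimal lookup sketch follows; the 64-entry size constant and the helper are assumptions for illustration, not the FMlib definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: 64 entries, one per 6-bit DSCP value (0-0x3F); the
 * real table size comes from IOC_FM_PCD_MANIP_DSCP_TO_VLAN_TRANS. */
#define DSCP_TO_VPRI_ENTRIES 64

/* Look up the VPri for a DSCP value, masking to the 4-bit VPri range
 * documented above. */
static uint8_t dscp_to_vpri(const uint8_t table[DSCP_TO_VPRI_ENTRIES],
			    uint8_t dscp)
{
	assert(dscp < DSCP_TO_VPRI_ENTRIES);
	return table[dscp] & 0xF;
}
```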
+
+/**
+ @Description Parameters for defining header manipulation VLAN fields updates
+*/
+typedef struct ioc_fm_pcd_manip_hdr_field_update_vlan_t {
+ ioc_fm_pcd_manip_hdr_field_update_vlan update_type; /**< Selects VLAN update type */
+ union {
+ uint8_t vpri; /**< 0-7, Relevant only if update_type =
+ e_IOC_FM_PCD_MANIP_HDR_FIELD_UPDATE_VLAN_PRI, this
+ is the new VLAN pri. */
+ ioc_fm_pcd_manip_hdr_field_update_vlan_dscp_to_vpri_t dscp_to_vpri;
+ /**< Parameters structure, Relevant only if update_type =
+ e_IOC_FM_PCD_MANIP_HDR_FIELD_UPDATE_DSCP_TO_VLAN. */
+ } u;
+} ioc_fm_pcd_manip_hdr_field_update_vlan_t;
+
+/**
+ @Description Parameters for defining header manipulation IPV4 fields updates
+*/
+typedef struct ioc_fm_pcd_manip_hdr_field_update_ipv4_t {
+ ioc_ipv4_hdr_manip_update_flags_t valid_updates; /**< ORed flag, selecting the required updates */
+ uint8_t tos; /**< 8 bit New TOS; Relevant if valid_updates contains
+ IOC_HDR_MANIP_IPV4_TOS */
+ uint16_t id; /**< 16 bit New IP ID; Relevant only if valid_updates
+ contains IOC_HDR_MANIP_IPV4_ID */
+ uint32_t src; /**< 32 bit New IP SRC; Relevant only if valid_updates
+ contains IOC_HDR_MANIP_IPV4_SRC */
+ uint32_t dst; /**< 32 bit New IP DST; Relevant only if valid_updates
+ contains IOC_HDR_MANIP_IPV4_DST */
+} ioc_fm_pcd_manip_hdr_field_update_ipv4_t;
+
+/**
+ @Description Parameters for defining header manipulation IPV6 fields updates
+*/
+typedef struct ioc_fm_pcd_manip_hdr_field_update_ipv6_t {
+ ioc_ipv6_hdr_manip_update_flags_t valid_updates; /**< ORed flag, selecting the required updates */
+ uint8_t traffic_class; /**< 8 bit New Traffic Class; Relevant if valid_updates contains
+ IOC_HDR_MANIP_IPV6_TC */
+ uint8_t src[IOC_NET_HEADER_FIELD_IPv6_ADDR_SIZE];
+ /**< 16 byte new IP SRC; Relevant only if valid_updates
+ contains IOC_HDR_MANIP_IPV6_SRC */
+ uint8_t dst[IOC_NET_HEADER_FIELD_IPv6_ADDR_SIZE];
+ /**< 16 byte new IP DST; Relevant only if valid_updates
+ contains IOC_HDR_MANIP_IPV6_DST */
+} ioc_fm_pcd_manip_hdr_field_update_ipv6_t;
+
+/**
+ @Description Parameters for defining header manipulation TCP/UDP fields updates
+*/
+typedef struct ioc_fm_pcd_manip_hdr_field_update_tcp_udp_t {
+ ioc_tcp_udp_hdr_manip_update_flags_t valid_updates; /**< ORed flag, selecting the required updates */
+ uint16_t src; /**< 16 bit New TCP/UDP SRC; Relevant only if valid_updates
+ contains IOC_HDR_MANIP_TCP_UDP_SRC */
+ uint16_t dst; /**< 16 bit New TCP/UDP DST; Relevant only if valid_updates
+ contains IOC_HDR_MANIP_TCP_UDP_DST */
+} ioc_fm_pcd_manip_hdr_field_update_tcp_udp_t;
+
+/**
+ @Description Parameters for defining header manipulation fields updates
+*/
+typedef struct ioc_fm_pcd_manip_hdr_field_update_params_t {
+ ioc_fm_pcd_manip_hdr_field_update_type type; /**< Type of header field update manipulation */
+ union {
+ ioc_fm_pcd_manip_hdr_field_update_vlan_t vlan; /**< Parameters for VLAN update. Relevant when
+ type = e_IOC_FM_PCD_MANIP_HDR_FIELD_UPDATE_VLAN */
+ ioc_fm_pcd_manip_hdr_field_update_ipv4_t ipv4; /**< Parameters for IPv4 update. Relevant when
+ type = e_IOC_FM_PCD_MANIP_HDR_FIELD_UPDATE_IPV4 */
+ ioc_fm_pcd_manip_hdr_field_update_ipv6_t ipv6; /**< Parameters for IPv6 update. Relevant when
+ type = e_IOC_FM_PCD_MANIP_HDR_FIELD_UPDATE_IPV6 */
+ ioc_fm_pcd_manip_hdr_field_update_tcp_udp_t tcp_udp;/**< Parameters for TCP/UDP update. Relevant when
+ type = e_IOC_FM_PCD_MANIP_HDR_FIELD_UPDATE_TCP_UDP */
+ } u;
+} ioc_fm_pcd_manip_hdr_field_update_params_t;
+
+/**
+ @Description Parameters for defining custom header manipulation for IP replacement
+*/
+typedef struct ioc_fm_pcd_manip_hdr_custom_ip_hdr_replace_t {
+ ioc_fm_pcd_manip_hdr_custom_ip_replace replace_type; /**< Selects replace update type */
+ bool dec_ttl_hl; /**< Decrement TTL (IPV4) or Hop limit (IPV6) by 1 */
+ bool update_ipv4_id; /**< Relevant when replace_type =
+ e_IOC_FM_PCD_MANIP_HDR_CUSTOM_REPLACE_IPV6_BY_IPV4 */
+ uint16_t id; /**< 16 bit New IP ID; Relevant only if
+ update_ipv4_id = TRUE */
+ uint8_t hdr_size; /**< The size of the new IP header */
+ uint8_t hdr[IOC_FM_PCD_MANIP_MAX_HDR_SIZE]; /**< The new IP header */
+} ioc_fm_pcd_manip_hdr_custom_ip_hdr_replace_t;
+
+/**
+ @Description Parameters for defining custom header manipulation
+*/
+typedef struct ioc_fm_pcd_manip_hdr_custom_params_t {
+ ioc_fm_pcd_manip_hdr_custom_type type;
+ /**< Type of header field update manipulation */
+ union {
+ ioc_fm_pcd_manip_hdr_custom_ip_hdr_replace_t ip_hdr_replace;
+ /**< Parameters for IP header replacement */
+ } u;
+} ioc_fm_pcd_manip_hdr_custom_params_t;
+
+/**
+ @Description Parameters for defining specific L2 insertion manipulation
+*/
+typedef struct ioc_fm_pcd_manip_hdr_insrt_specific_l2_params_t {
+ ioc_fm_pcd_manip_hdr_insrt_specific_l2 specific_l2; /**< Selects which L2 headers to insert */
+ bool update; /**< TRUE to update MPLS header */
+ uint8_t size; /**< size of inserted section */
+ uint8_t *p_data; /**< data to be inserted */
+} ioc_fm_pcd_manip_hdr_insrt_specific_l2_params_t;
+
+#if (DPAA_VERSION >= 11)
+/**
+ @Description Parameters for defining IP insertion manipulation
+*/
+typedef struct ioc_fm_pcd_manip_hdr_insrt_ip_params_t {
+ bool calc_l4_checksum; /**< Calculate L4 checksum. */
+ ioc_fm_pcd_manip_hdr_qos_mapping_mode mapping_mode; /**< TODO */
+ uint8_t last_pid_offset; /**< the offset of the last Protocol within
+ the inserted header */
+ uint16_t id; /**< 16 bit New IP ID */
+ bool dont_frag_overwrite;
+ /**< IPv4 only. DF is overwritten with the hash-result next-to-last byte.
+ * This byte is configured to be overwritten when RPD is set. */
+ uint8_t last_dst_offset;
+ /**< IPv6 only. If a routing extension exists, the user should set the offset of the destination address
+ * in order to calculate the UDP checksum pseudo-header;
+ * otherwise, set it to '0'. */
+ ioc_fm_pcd_manip_hdr_insrt_t insrt; /**< size and data to be inserted. */
+} ioc_fm_pcd_manip_hdr_insrt_ip_params_t;
+#endif /* (DPAA_VERSION >= 11) */
+
+/**
+ @Description Parameters for defining header insertion manipulation by header type
+*/
+typedef struct ioc_fm_pcd_manip_hdr_insrt_by_hdr_params_t {
+ ioc_fm_pcd_manip_hdr_insrt_by_hdr_type type; /**< Selects manipulation type */
+ union {
+ ioc_fm_pcd_manip_hdr_insrt_specific_l2_params_t specific_l2_params;
+ /**< Used when type = e_IOC_FM_PCD_MANIP_INSRT_BY_HDR_SPECIFIC_L2:
+ Selects which L2 headers to insert */
+#if (DPAA_VERSION >= 11)
+ ioc_fm_pcd_manip_hdr_insrt_ip_params_t ip_params; /**< Used when type = e_FM_PCD_MANIP_INSRT_BY_HDR_IP */
+ ioc_fm_pcd_manip_hdr_insrt_t insrt; /**< Used when type is one of e_FM_PCD_MANIP_INSRT_BY_HDR_UDP,
+ e_FM_PCD_MANIP_INSRT_BY_HDR_UDP_LITE, or
+ e_FM_PCD_MANIP_INSRT_BY_HDR_CAPWAP */
+#endif /* (DPAA_VERSION >= 11) */
+ } u;
+} ioc_fm_pcd_manip_hdr_insrt_by_hdr_params_t;
+
+/**
+ @Description Parameters for defining header insertion manipulation
+*/
+typedef struct ioc_fm_pcd_manip_hdr_insrt_params_t {
+ ioc_fm_pcd_manip_hdr_insrt_type type; /**< Type of insertion manipulation */
+ union {
+ ioc_fm_pcd_manip_hdr_insrt_by_hdr_params_t by_hdr; /**< Parameters for defining header insertion manipulation by header type,
+ relevant if 'type' = e_IOC_FM_PCD_MANIP_INSRT_BY_HDR */
+ ioc_fm_pcd_manip_hdr_insrt_generic_params_t generic;/**< Parameters for defining generic header insertion manipulation,
+ relevant if type = e_IOC_FM_PCD_MANIP_INSRT_GENERIC */
+#if (defined(FM_CAPWAP_SUPPORT) && (DPAA_VERSION == 10))
+ ioc_fm_pcd_manip_hdr_insrt_by_template_params_t by_template;
+ /**< Parameters for defining header insertion manipulation by template,
+ relevant if 'type' = e_IOC_FM_PCD_MANIP_INSRT_BY_TEMPLATE */
+#endif /* FM_CAPWAP_SUPPORT */
+ } u;
+} ioc_fm_pcd_manip_hdr_insrt_params_t;
+
+/**
+ @Description Parameters for defining header removal manipulation
+*/
+typedef struct ioc_fm_pcd_manip_hdr_rmv_params_t {
+ ioc_fm_pcd_manip_hdr_rmv_type type; /**< Type of header removal manipulation */
+ union {
+ ioc_fm_pcd_manip_hdr_rmv_by_hdr_params_t by_hdr; /**< Parameters for defining header removal manipulation by header type,
+ relevant if type = e_IOC_FM_PCD_MANIP_RMV_BY_HDR */
+ ioc_fm_pcd_manip_hdr_rmv_generic_params_t generic; /**< Parameters for defining generic header removal manipulation,
+ relevant if type = e_IOC_FM_PCD_MANIP_RMV_GENERIC */
+ } u;
+} ioc_fm_pcd_manip_hdr_rmv_params_t;
+
+/**
+ @Description Parameters for defining header manipulation node
+*/
+typedef struct ioc_fm_pcd_manip_hdr_params_t {
+ bool rmv; /**< TRUE, to define removal manipulation */
+ ioc_fm_pcd_manip_hdr_rmv_params_t rmv_params; /**< Parameters for removal manipulation, relevant if 'rmv' = TRUE */
+
+ bool insrt; /**< TRUE, to define insertion manipulation */
+ ioc_fm_pcd_manip_hdr_insrt_params_t insrt_params; /**< Parameters for insertion manipulation, relevant if 'insrt' = TRUE */
+
+ bool field_update; /**< TRUE, to define field update manipulation */
+ ioc_fm_pcd_manip_hdr_field_update_params_t field_update_params; /**< Parameters for field update manipulation, relevant if 'field_update' = TRUE */
+
+ bool custom; /**< TRUE, to define custom manipulation */
+ ioc_fm_pcd_manip_hdr_custom_params_t custom_params; /**< Parameters for custom manipulation, relevant if 'custom' = TRUE */
+
+ bool dont_parse_after_manip;/**< FALSE to activate the parser a second time after
+ completing the manipulation on the frame */
+} ioc_fm_pcd_manip_hdr_params_t;
+
+/**
+ @Description structure for defining fragmentation manipulation
+*/
+typedef struct ioc_fm_pcd_manip_frag_params_t {
+ ioc_net_header_type hdr; /**< Header selection */
+ union {
+#if (DPAA_VERSION >= 11)
+ ioc_fm_pcd_manip_frag_capwap_params_t capwap_frag; /**< Parameters for defining CAPWAP fragmentation,
+ relevant if 'hdr' = HEADER_TYPE_CAPWAP */
+#endif /* (DPAA_VERSION >= 11) */
+ ioc_fm_pcd_manip_frag_ip_params_t ip_frag; /**< Parameters for defining IP fragmentation,
+ relevant if 'hdr' = HEADER_TYPE_Ipv4 or HEADER_TYPE_Ipv6 */
+ } u;
+} ioc_fm_pcd_manip_frag_params_t;
+
+/**
+ @Description structure for defining reassemble manipulation
+*/
+typedef struct ioc_fm_pcd_manip_reassem_params_t {
+ ioc_net_header_type hdr; /**< Header selection */
+ union {
+#if (DPAA_VERSION >= 11)
+ ioc_fm_pcd_manip_reassem_capwap_params_t capwap_reassem; /**< Parameters for defining CAPWAP reassembly,
+ relevant if 'hdr' = HEADER_TYPE_CAPWAP */
+#endif /* (DPAA_VERSION >= 11) */
+ ioc_fm_pcd_manip_reassem_ip_params_t ip_reassem; /**< Parameters for defining IP reassembly,
+ relevant if 'hdr' = HEADER_TYPE_IPv4 or HEADER_TYPE_IPv6 */
+ } u;
+} ioc_fm_pcd_manip_reassem_params_t;
+
+/**
+ @Description Parameters for defining a manipulation node
+*/
+struct fm_pcd_manip_params_t {
+ ioc_fm_pcd_manip_type type;
+ /**< Selects type of manipulation node */
+ union {
+ ioc_fm_pcd_manip_hdr_params_t hdr;
+ /**< Parameters for defining header manipulation node */
+ ioc_fm_pcd_manip_reassem_params_t reassem;
+ /**< Parameters for defining reassembly manipulation node */
+ ioc_fm_pcd_manip_frag_params_t frag;
+ /**< Parameters for defining fragmentation manipulation node */
+ ioc_fm_pcd_manip_special_offload_params_t special_offload;
+ /**< Parameters for defining special offload manipulation node */
+ } u;
+ void *p_next_manip;
+ /**< Handle to another (previously defined) manipulation node;
+ Allows concatenation of manipulation actions
+ This parameter is optional and may be NULL. */
+#if (defined(FM_CAPWAP_SUPPORT) && (DPAA_VERSION == 10))
+ bool frag_or_reasm;
+ /**< TRUE, if defined fragmentation/reassembly manipulation */
+ ioc_fm_pcd_manip_frag_or_reasm_params_t
+ frag_or_reasm_params;
+ /**< Parameters for fragmentation/reassembly manipulation,
+ relevant if frag_or_reasm = TRUE */
+#endif /* (defined(FM_CAPWAP_SUPPORT) && (DPAA_VERSION == 10)) */
+};
+
+typedef struct ioc_fm_pcd_manip_params_t {
+ struct fm_pcd_manip_params_t param;
+ void *id;
+} ioc_fm_pcd_manip_params_t;
+
+/**
+ @Description Structure for retrieving IP reassembly statistics
+*/
+typedef struct ioc_fm_pcd_manip_reassem_ip_stats_t {
+ /* common counters for both IPv4 and IPv6 */
+ uint32_t timeout; /**< Counts the number of TimeOut occurrences */
+ uint32_t rfd_pool_busy; /**< Counts the number of failed attempts to allocate
+ a Reassembly Frame Descriptor */
+ uint32_t internal_buffer_busy; /**< Counts the number of times an internal buffer busy occurred */
+ uint32_t external_buffer_busy; /**< Counts the number of times external buffer busy occurred */
+ uint32_t sg_fragments; /**< Counts the number of Scatter/Gather fragments */
+ uint32_t dma_semaphore_depletion; /**< Counts the number of failed attempts to allocate a DMA semaphore */
+#if (DPAA_VERSION >= 11)
+ uint32_t non_consistent_sp; /**< Counts the number of Non Consistent Storage Profile events for
+ successfully reassembled frames */
+#endif /* (DPAA_VERSION >= 11) */
+ struct {
+ uint32_t successfully_reassembled; /**< Counts the number of successfully reassembled frames */
+ uint32_t valid_fragments; /**< Counts the total number of valid fragments that
+ have been processed for all frames */
+ uint32_t processed_fragments; /**< Counts the number of processed fragments
+ (valid and error fragments) for all frames */
+ uint32_t malformed_fragments; /**< Counts the number of malformed fragments processed for all frames */
+ uint32_t discarded_fragments; /**< Counts the number of fragments discarded by the reassembly process */
+ uint32_t auto_learn_busy; /**< Counts the number of times a busy condition occurs when attempting
+ to access an IP-Reassembly Automatic Learning Hash set */
+ uint32_t more_than16fragments; /**< Counts the fragment occurrences in which the number of fragments-per-frame
+ exceeds 16 */
+ } specific_hdr_statistics[2]; /**< slot '0' is for IPv4, slot '1' is for IPv6 */
+} ioc_fm_pcd_manip_reassem_ip_stats_t;
+
+/**
+ @Description Structure for retrieving IP fragmentation statistics
+*/
+typedef struct ioc_fm_pcd_manip_frag_ip_stats_t {
+ uint32_t total_frames; /**< Number of frames that passed through the manipulation node */
+ uint32_t fragmented_frames; /**< Number of frames that were fragmented */
+ uint32_t generated_fragments; /**< Number of fragments that were generated */
+} ioc_fm_pcd_manip_frag_ip_stats_t;
+
+#if (DPAA_VERSION >= 11)
+/**
+ @Description Structure for retrieving CAPWAP reassembly statistics
+*/
+typedef struct ioc_fm_pcd_manip_reassem_capwap_stats_t {
+ uint32_t timeout; /**< Counts the number of timeout occurrences */
+ uint32_t rfd_pool_busy; /**< Counts the number of failed attempts to allocate
+ a Reassembly Frame Descriptor */
+ uint32_t internal_buffer_busy; /**< Counts the number of times an internal buffer busy occurred */
+ uint32_t external_buffer_busy; /**< Counts the number of times external buffer busy occurred */
+ uint32_t sg_fragments; /**< Counts the number of Scatter/Gather fragments */
+ uint32_t dma_semaphore_depletion; /**< Counts the number of failed attempts to allocate a DMA semaphore */
+ uint32_t successfully_reassembled; /**< Counts the number of successfully reassembled frames */
+ uint32_t valid_fragments; /**< Counts the total number of valid fragments that
+ have been processed for all frames */
+ uint32_t processed_fragments; /**< Counts the number of processed fragments
+ (valid and error fragments) for all frames */
+ uint32_t malformed_fragments; /**< Counts the number of malformed fragments processed for all frames */
+ uint32_t autoLearn_busy; /**< Counts the number of times a busy condition occurs when attempting
+ to access a Reassembly Automatic Learning Hash set */
+ uint32_t discarded_fragments; /**< Counts the number of fragments discarded by the reassembly process */
+ uint32_t more_than16fragments; /**< Counts the fragment occurrences in which the number of fragments-per-frame
+ exceeds 16 */
+ uint32_t exceed_max_reassembly_frame_len;/**< Counts the number of times that a successfully reassembled frame
+ length exceeds the MaxReassembledFrameLength value */
+} ioc_fm_pcd_manip_reassem_capwap_stats_t;
+
+/**
+ @Description Structure for retrieving CAPWAP fragmentation statistics
+*/
+typedef struct ioc_fm_pcd_manip_frag_capwap_stats_t {
+ uint32_t total_frames; /**< Number of frames that passed through the manipulation node */
+ uint32_t fragmented_frames; /**< Number of frames that were fragmented */
+ uint32_t generated_fragments; /**< Number of fragments that were generated */
+#if (defined(DEBUG_ERRORS) && (DEBUG_ERRORS > 0))
+ uint8_t sg_allocation_failure; /**< Number of allocation failure of s/g buffers */
+#endif /* (defined(DEBUG_ERRORS) && (DEBUG_ERRORS > 0)) */
+} ioc_fm_pcd_manip_frag_capwap_stats_t;
+#endif /* (DPAA_VERSION >= 11) */
+
+/**
+ @Description Structure for retrieving reassembly statistics
+*/
+typedef struct ioc_fm_pcd_manip_reassem_stats_t {
+ union {
+ ioc_fm_pcd_manip_reassem_ip_stats_t ip_reassem; /**< Structure for IP reassembly statistics */
+#if (DPAA_VERSION >= 11)
+ ioc_fm_pcd_manip_reassem_capwap_stats_t capwap_reassem; /**< Structure for CAPWAP reassembly statistics */
+#endif /* (DPAA_VERSION >= 11) */
+ } u;
+} ioc_fm_pcd_manip_reassem_stats_t;
+
+/**
+ @Description Structure for retrieving fragmentation statistics
+*/
+typedef struct ioc_fm_pcd_manip_frag_stats_t {
+ union {
+ ioc_fm_pcd_manip_frag_ip_stats_t ip_frag; /**< Structure for IP fragmentation statistics */
+#if (DPAA_VERSION >= 11)
+ ioc_fm_pcd_manip_frag_capwap_stats_t capwap_frag; /**< Structure for CAPWAP fragmentation statistics */
+#endif /* (DPAA_VERSION >= 11) */
+ } u;
+} ioc_fm_pcd_manip_frag_stats_t;
+
+/**
+ @Description Structure for retrieving manipulation statistics
+*/
+typedef struct ioc_fm_pcd_manip_stats_t {
+ union {
+ ioc_fm_pcd_manip_reassem_stats_t reassem; /**< Structure for reassembly statistics */
+ ioc_fm_pcd_manip_frag_stats_t frag; /**< Structure for fragmentation statistics */
+ } u;
+} ioc_fm_pcd_manip_stats_t;
+
+/**
+ @Description Parameters for acquiring manipulation statistics
+*/
+typedef struct ioc_fm_pcd_manip_get_stats_t {
+ void *id;
+ ioc_fm_pcd_manip_stats_t stats;
+} ioc_fm_pcd_manip_get_stats_t;
+
+#if DPAA_VERSION >= 11
+/**
+ @Description Parameters for defining frame replicator group and its members
+*/
+struct fm_pcd_frm_replic_group_params_t {
+ uint8_t max_num_of_entries; /**< Maximal number of members in the group - must be at least two */
+ uint8_t num_of_entries; /**< Number of members in the group - must be at least 1 */
+ ioc_fm_pcd_cc_next_engine_params_t next_engine_params[IOC_FM_PCD_FRM_REPLIC_MAX_NUM_OF_ENTRIES];
+ /**< Array of members' parameters */
+};
+
+typedef struct ioc_fm_pcd_frm_replic_group_params_t {
+ struct fm_pcd_frm_replic_group_params_t param;
+ void *id;
+} ioc_fm_pcd_frm_replic_group_params_t;
+
+typedef struct ioc_fm_pcd_frm_replic_member_t {
+ void *h_replic_group;
+ uint16_t member_index;
+} ioc_fm_pcd_frm_replic_member_t;
+
+typedef struct ioc_fm_pcd_frm_replic_member_params_t {
+ ioc_fm_pcd_frm_replic_member_t member;
+ ioc_fm_pcd_cc_next_engine_params_t next_engine_params;
+} ioc_fm_pcd_frm_replic_member_params_t;
+#endif /* DPAA_VERSION >= 11 */
+
+
+typedef struct ioc_fm_pcd_cc_key_statistics_t {
+ uint32_t byte_count; /**< This counter reflects byte count of frames that
+ were matched by this key. */
+ uint32_t frame_count; /**< This counter reflects count of frames that
+ were matched by this key. */
+#if (DPAA_VERSION >= 11)
+ uint32_t frame_length_range_count[IOC_FM_PCD_CC_STATS_MAX_NUM_OF_FLR];
+ /**< These counters reflect how many frames matched
+ this key in 'RMON' statistics mode:
+ Each counter holds the number of frames of a
+ specific frames length range, according to the
+ ranges provided at initialization. */
+#endif /* (DPAA_VERSION >= 11) */
+} ioc_fm_pcd_cc_key_statistics_t;
+
+
+typedef struct ioc_fm_pcd_cc_tbl_get_stats_t {
+ void *id;
+ uint16_t key_index;
+ ioc_fm_pcd_cc_key_statistics_t statistics;
+} ioc_fm_pcd_cc_tbl_get_stats_t;
+
+/**
+ @Function FM_PCD_MatchTableGetKeyStatistics
+
+ @Description This routine may be used to get the statistics counters of a
+ specific key in a CC Node.
+
+ If 'e_FM_PCD_CC_STATS_MODE_FRAME' and
+ 'e_FM_PCD_CC_STATS_MODE_BYTE_AND_FRAME' were set for this node,
+ these counters reflect how many frames that passed were matched
+ by this key; the total frame count will be returned in the counter
+ of the first range (as only one frame length range was defined).
+ If 'e_FM_PCD_CC_STATS_MODE_RMON' was set for this node, the total
+ frame count will be separated to frame length counters, based on
+ provided frame length ranges.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] keyIndex Index of the key whose statistics are requested
+ @Param[out] p_KeyStatistics Key statistics counters
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet().
+*/
+
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MATCH_TABLE_GET_KEY_STAT_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(12), ioc_compat_fm_pcd_cc_tbl_get_stats_t)
+#endif
+#define FM_PCD_IOC_MATCH_TABLE_GET_KEY_STAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(12), ioc_fm_pcd_cc_tbl_get_stats_t)
+
+/**
+ @Function FM_PCD_MatchTableGetMissStatistics
+
+ @Description This routine may be used to get statistics counters of miss entry
+ in a CC Node.
+
+ If 'e_FM_PCD_CC_STATS_MODE_FRAME' and
+ 'e_FM_PCD_CC_STATS_MODE_BYTE_AND_FRAME' were set for this node,
+ these counters reflect how many frames were not matched to any
+ existing key and therefore passed through the miss entry; the
+ total frames count will be returned in the counter of the
+ first range (as only one frame length range was defined).
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[out] p_MissStatistics Statistics counters for 'miss'
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet().
+*/
+
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MATCH_TABLE_GET_MISS_STAT_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(13), ioc_compat_fm_pcd_cc_tbl_get_stats_t)
+#endif
+#define FM_PCD_IOC_MATCH_TABLE_GET_MISS_STAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(13), ioc_fm_pcd_cc_tbl_get_stats_t)
+
+/**
+ @Function FM_PCD_HashTableGetMissStatistics
+
+ @Description This routine may be used to get statistics counters of the
+ 'miss' entry of a hash table.
+
+ If 'e_FM_PCD_CC_STATS_MODE_FRAME' and
+ 'e_FM_PCD_CC_STATS_MODE_BYTE_AND_FRAME' were set for this node,
+ these counters reflect how many frames were not matched to any
+ existing key and therefore passed through the miss entry;
+
+ @Param[in] h_HashTbl A handle to a hash table
+ @Param[out] p_MissStatistics Statistics counters for 'miss'
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_HashTableSet().
+*/
+
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_HASH_TABLE_GET_MISS_STAT_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(14), ioc_compat_fm_pcd_cc_tbl_get_stats_t)
+#endif
+#define FM_PCD_IOC_HASH_TABLE_GET_MISS_STAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(14), ioc_fm_pcd_cc_tbl_get_stats_t)
+
+/**
+ @Function FM_PCD_NetEnvCharacteristicsSet
+
+ @Description Define a set of Network Environment Characteristics.
+
+ When setting an environment it is important to understand its
+ purpose. It is not meant to describe the flows that will run
+ on the ports using this environment, but rather what the user intends
+ to do with the PCD mechanisms in order to parse, classify and
+ distribute those frames.
+ By specifying a distinction unit, the user indicates that this option
+ will be used for distinction between frames at either a KeyGen scheme
+ or a coarse classification action descriptor. Using interchangeable
+ headers to define a unit means that the user is indifferent to which of
+ the interchangeable headers is present in the frame, and wants the
+ distinction to be based on the presence of either one of them.
+
+ Depending on context, there are limitations to the use of environments. A
+ port using the PCD functionality is bound to an environment. Some or even
+ all ports may share an environment but also an environment per port is
+ possible. When initializing a scheme, a classification plan group (see below),
+ or a coarse classification tree, one of the initialized environments must be
+ stated and related to. When a port is bound to a scheme, a classification
+ plan group, or a coarse classification tree, it MUST be bound to the same
+ environment.
+
+ The different PCD modules may rely (for flow definition) ONLY on
+ distinction units as defined by their environment. When initializing a
+ scheme, for example, it may not select IPV4 as a match for
+ recognizing flows unless IPV4 was defined in the relating environment. In
+ fact, to guide the user through the configuration of the PCD, each module's
+ characterization in terms of flows is done not with protocol names, but with
+ environment indexes.
+
+ In terms of HW implementation, the list of distinction units sets the LCV
+ vectors and is later used for the match vector, classification plan vectors
+ and coarse classification indexing.
+
+ @Param[in,out] ioc_fm_pcd_net_env_params_t A structure defining the distinction units for this configuration.
+
+ @Return 0 on success; Error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_NET_ENV_CHARACTERISTICS_SET_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(20), ioc_compat_fm_pcd_net_env_params_t)
+#endif
+#define FM_PCD_IOC_NET_ENV_CHARACTERISTICS_SET _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(20), ioc_fm_pcd_net_env_params_t)
+
+/**
+ @Function FM_PCD_NetEnvCharacteristicsDelete
+
+ @Description Deletes a set of Network Environment Characteristics.
+
+ @Param[in] ioc_fm_obj_t - The id of a Network Environment object.
+
+ @Return 0 on success; Error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_NET_ENV_CHARACTERISTICS_DELETE_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(21), ioc_compat_fm_obj_t)
+#endif
+#define FM_PCD_IOC_NET_ENV_CHARACTERISTICS_DELETE _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(21), ioc_fm_obj_t)
+
+/**
+ @Function FM_PCD_KgSchemeSet
+
+ @Description Initializing or modifying and enabling a scheme for the KeyGen.
+ This routine should be called for adding or modifying a scheme.
+ When a scheme needs modifying, the API requires that it be
+ rewritten. In such a case 'modify' should be TRUE. If the
+ routine is called for a valid scheme and 'modify' is FALSE,
+ it will return an error.
+
+ @Param[in,out] ioc_fm_pcd_kg_scheme_params_t A structure of parameters for defining the scheme
+
+ @Return 0 on success; Error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_KG_SCHEME_SET_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(24), ioc_compat_fm_pcd_kg_scheme_params_t)
+#endif
+#define FM_PCD_IOC_KG_SCHEME_SET _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(24), ioc_fm_pcd_kg_scheme_params_t)
+
+/**
+ @Function FM_PCD_KgSchemeDelete
+
+ @Description Deleting an initialized scheme.
+
+ @Param[in] ioc_fm_obj_t scheme id as initialized by application at FM_PCD_IOC_KG_SET_SCHEME
+
+ @Return 0 on success; Error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_KG_SCHEME_DELETE_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(25), ioc_compat_fm_obj_t)
+#endif
+#define FM_PCD_IOC_KG_SCHEME_DELETE _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(25), ioc_fm_obj_t)
+
+/**
+ @Function FM_PCD_CcRootBuild
+
+ @Description This routine must be called to define a complete coarse
+ classification tree. This is the way to define coarse
+ classification to a certain flow - the KeyGen schemes
+ may point only to trees defined in this way.
+
+ @Param[in,out] ioc_fm_pcd_cc_tree_params_t A structure of parameters to define the tree.
+
+ @Return 0 on success; Error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_CC_ROOT_BUILD_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(26), compat_uptr_t)
+#endif
+#define FM_PCD_IOC_CC_ROOT_BUILD _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(26), void *) /* workaround ...*/
+
+/**
+ @Function FM_PCD_CcRootDelete
+
+ @Description Deleting a built tree.
+
+ @Param[in] ioc_fm_obj_t - The id of a CC tree.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_CC_ROOT_DELETE_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(27), ioc_compat_fm_obj_t)
+#endif
+#define FM_PCD_IOC_CC_ROOT_DELETE _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(27), ioc_fm_obj_t)
+
+/**
+ @Function FM_PCD_MatchTableSet
+
+ @Description This routine should be called for each CC (coarse classification)
+ node. The whole CC tree should be built bottom up so that each
+ node points to already defined nodes. p_NodeId returns the node
+ Id to be used by other nodes.
+
+ @Param[in,out] ioc_fm_pcd_cc_node_params_t A structure for defining the CC node params
+
+ @Return 0 on success; Error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MATCH_TABLE_SET_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(28), compat_uptr_t)
+#endif
+#define FM_PCD_IOC_MATCH_TABLE_SET _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(28), void *) /* workaround ...*/
+
+/**
+ @Function FM_PCD_MatchTableDelete
+
+ @Description Deleting a built node.
+
+ @Param[in] ioc_fm_obj_t - The id of a CC node.
+
+ @Return 0 on success; Error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MATCH_TABLE_DELETE_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(29), ioc_compat_fm_obj_t)
+#endif
+#define FM_PCD_IOC_MATCH_TABLE_DELETE _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(29), ioc_fm_obj_t)
+
+/**
+ @Function FM_PCD_CcRootModifyNextEngine
+
+ @Description Modify the Next Engine Parameters in the entry of the tree.
+
+ @Param[in] ioc_fm_pcd_cc_tree_modify_next_engine_params_t - Pointer to a structure with the relevant parameters
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_CcRootBuild().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_CC_ROOT_MODIFY_NEXT_ENGINE_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(30), ioc_compat_fm_pcd_cc_tree_modify_next_engine_params_t)
+#endif
+#define FM_PCD_IOC_CC_ROOT_MODIFY_NEXT_ENGINE _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(30), ioc_fm_pcd_cc_tree_modify_next_engine_params_t)
+
+/**
+ @Function FM_PCD_MatchTableModifyNextEngine
+
+ @Description Modify the Next Engine Parameters in the relevant key entry of the node.
+
+ @Param[in] ioc_fm_pcd_cc_node_modify_next_engine_params_t A pointer to a structure with the relevant parameters
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MATCH_TABLE_MODIFY_NEXT_ENGINE_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(31), ioc_compat_fm_pcd_cc_node_modify_next_engine_params_t)
+#endif
+#define FM_PCD_IOC_MATCH_TABLE_MODIFY_NEXT_ENGINE _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(31), ioc_fm_pcd_cc_node_modify_next_engine_params_t)
+
+/**
+ @Function FM_PCD_MatchTableModifyMissNextEngine
+
+ @Description Modify the Next Engine Parameters of the Miss key case of the node.
+
+ @Param[in] ioc_fm_pcd_cc_node_modify_next_engine_params_t - Pointer to a structure with the relevant parameters
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MATCH_TABLE_MODIFY_MISS_NEXT_ENGINE_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(32), ioc_compat_fm_pcd_cc_node_modify_next_engine_params_t)
+#endif
+#define FM_PCD_IOC_MATCH_TABLE_MODIFY_MISS_NEXT_ENGINE _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(32), ioc_fm_pcd_cc_node_modify_next_engine_params_t)
+
+/**
+ @Function FM_PCD_MatchTableRemoveKey
+
+ @Description Remove the key (including next engine parameters of this key)
+ defined by the index of the relevant node.
+
+ @Param[in] ioc_fm_pcd_cc_node_remove_key_params_t A pointer to a structure with the relevant parameters
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only after FM_PCD_MatchTableSet() has been called for this
+ node and for all of the nodes that lead to it.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MATCH_TABLE_REMOVE_KEY_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(33), ioc_compat_fm_pcd_cc_node_remove_key_params_t)
+#endif
+#define FM_PCD_IOC_MATCH_TABLE_REMOVE_KEY _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(33), ioc_fm_pcd_cc_node_remove_key_params_t)
+
+/**
+ @Function FM_PCD_MatchTableAddKey
+
+ @Description Add the key (including next engine parameters of this key)
+ in the index defined by key_index. Note that 'FM_PCD_LAST_KEY_INDEX'
+ may be used when the user doesn't care about the position of the
+ key in the table - in that case, the key will be automatically
+ added by the driver in the last available entry.
+
+ @Param[in] ioc_fm_pcd_cc_node_modify_key_and_next_engine_params_t A pointer to a structure with the relevant parameters
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only after FM_PCD_MatchTableSet() has been called for this
+ node and for all of the nodes that lead to it.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MATCH_TABLE_ADD_KEY_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(34), ioc_compat_fm_pcd_cc_node_modify_key_and_next_engine_params_t)
+#endif
+#define FM_PCD_IOC_MATCH_TABLE_ADD_KEY _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(34), ioc_fm_pcd_cc_node_modify_key_and_next_engine_params_t)
+
+/**
+ @Function FM_PCD_MatchTableModifyKeyAndNextEngine
+
+ @Description Modify the key and Next Engine Parameters of this key in the index defined by key_index.
+
+ @Param[in] ioc_fm_pcd_cc_node_modify_key_and_next_engine_params_t A pointer to a structure with the relevant parameters
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet(), not only for the
+ relevant node but also for the node that points to this node.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MATCH_TABLE_MODIFY_KEY_AND_NEXT_ENGINE_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(35), ioc_compat_fm_pcd_cc_node_modify_key_and_next_engine_params_t)
+#endif
+#define FM_PCD_IOC_MATCH_TABLE_MODIFY_KEY_AND_NEXT_ENGINE _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(35), ioc_fm_pcd_cc_node_modify_key_and_next_engine_params_t)
+
+/**
+ @Function FM_PCD_MatchTableModifyKey
+
+ @Description Modify the key at the index defined by key_index.
+
+ @Param[in] ioc_fm_pcd_cc_node_modify_key_params_t - Pointer to a structure with the relevant parameters
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only after FM_PCD_MatchTableSet() has been called for this
+ node and for all of the nodes that lead to it.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MATCH_TABLE_MODIFY_KEY_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(36), ioc_compat_fm_pcd_cc_node_modify_key_params_t)
+#endif
+#define FM_PCD_IOC_MATCH_TABLE_MODIFY_KEY _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(36), ioc_fm_pcd_cc_node_modify_key_params_t)
+
+/**
+ @Function FM_PCD_HashTableSet
+
+ @Description This routine initializes a hash table structure.
+ KeyGen hash result determines the hash bucket.
+ Next, KeyGen key is compared against all keys of this
+ bucket (exact match).
+ The number of sets (number of buckets) of the hash equals the
+ number of set bits in 'hash_res_mask' in the provided parameters.
+ Number of hash table ways is then calculated by dividing
+ 'max_num_of_keys' equally between the hash sets. This is the maximal
+ number of keys that a hash bucket may hold.
+ The hash table is initialized empty and keys may be
+ added to it following the initialization. Keys masks are not
+ supported in current hash table implementation.
+ The initialized hash table can be integrated as a node in a
+ CC tree.
+
+ @Param[in,out] ioc_fm_pcd_hash_table_params_t - Pointer to a structure with the relevant parameters
+
+ @Return 0 on success; Error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_HASH_TABLE_SET_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(37), ioc_compat_fm_pcd_hash_table_params_t)
+#endif
+#define FM_PCD_IOC_HASH_TABLE_SET _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(37), ioc_fm_pcd_hash_table_params_t)
+
+/**
+ @Function FM_PCD_HashTableDelete
+
+ @Description This routine deletes the provided hash table and releases all
+ of its allocated resources.
+
+ @Param[in] ioc_fm_obj_t - The ID of a hash table.
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_HashTableSet().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_HASH_TABLE_DELETE_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(38), ioc_compat_fm_obj_t)
+#endif
+#define FM_PCD_IOC_HASH_TABLE_DELETE _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(38), ioc_fm_obj_t)
+
+/**
+ @Function FM_PCD_HashTableAddKey
+
+ @Description This routine adds the provided key (including next engine
+ parameters of this key) to the hash table.
+ The key is added as the last key of the bucket that it is
+ mapped to.
+
+ @Param[in] ioc_fm_pcd_hash_table_add_key_params_t - Pointer to a structure with the relevant parameters
+
+ @Return 0 on success; error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_HashTableSet().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_HASH_TABLE_ADD_KEY_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(39), ioc_compat_fm_pcd_hash_table_add_key_params_t)
+#endif
+#define FM_PCD_IOC_HASH_TABLE_ADD_KEY _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(39), ioc_fm_pcd_hash_table_add_key_params_t)
+
+/**
+ @Function FM_PCD_HashTableRemoveKey
+
+ @Description This routine removes the requested key (including next engine
+ parameters of this key) from the hash table.
+
+ @Param[in] ioc_fm_pcd_hash_table_remove_key_params_t - Pointer to a structure with the relevant parameters
+
+ @Return 0 on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_HashTableSet().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_HASH_TABLE_REMOVE_KEY_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(40), ioc_compat_fm_pcd_hash_table_remove_key_params_t)
+#endif
+#define FM_PCD_IOC_HASH_TABLE_REMOVE_KEY _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(40), ioc_fm_pcd_hash_table_remove_key_params_t)
+
+/**
+ @Function FM_PCD_PlcrProfileSet
+
+ @Description Sets a profile entry in the policer profile table.
+ The routine overrides any existing value.
+
+ @Param[in,out] ioc_fm_pcd_plcr_profile_params_t A structure of parameters for defining a
+ policer profile entry.
+
+ @Return 0 on success; Error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_PLCR_PROFILE_SET_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(41), ioc_compat_fm_pcd_plcr_profile_params_t)
+#endif
+#define FM_PCD_IOC_PLCR_PROFILE_SET _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(41), ioc_fm_pcd_plcr_profile_params_t)
+
+/**
+ @Function FM_PCD_PlcrProfileDelete
+
+ @Description Deletes a profile entry in the policer profile table.
+ The routine sets the entry to invalid.
+
+ @Param[in] ioc_fm_obj_t The id of a policer profile.
+
+ @Return 0 on success; Error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_PLCR_PROFILE_DELETE_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(42), ioc_compat_fm_obj_t)
+#endif
+#define FM_PCD_IOC_PLCR_PROFILE_DELETE _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(42), ioc_fm_obj_t)
+
+/**
+ @Function FM_PCD_ManipNodeSet
+
+ @Description This routine should be called for defining a manipulation
+ node. A manipulation node must be defined before the CC node
+ that precedes it.
+
+ @Param[in] ioc_fm_pcd_manip_params_t - A structure of parameters defining the manipulation
+
+ @Return A handle to the initialized object on success; NULL otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MANIP_NODE_SET_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(43), ioc_compat_fm_pcd_manip_params_t)
+#endif
+#define FM_PCD_IOC_MANIP_NODE_SET _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(43), ioc_fm_pcd_manip_params_t)
+
+/**
+ @Function FM_PCD_ManipNodeReplace
+
+ @Description Change an existing manipulation node to match a new requirement.
+ (Here, it's implemented as a variant of the same IOCTL as for
+ FM_PCD_ManipNodeSet(), and one that when called, the 'id' member
+ in its 'ioc_fm_pcd_manip_params_t' argument is set to contain
+ the manip node's handle)
+
+ @Param[in] ioc_fm_pcd_manip_params_t - A structure of parameters defining the manipulation
+
+ @Return 0 on success; error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_ManipNodeSet().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MANIP_NODE_REPLACE_COMPAT FM_PCD_IOC_MANIP_NODE_SET_COMPAT
+#endif
+#define FM_PCD_IOC_MANIP_NODE_REPLACE FM_PCD_IOC_MANIP_NODE_SET
+
+/**
+ @Function FM_PCD_ManipNodeDelete
+
+ @Description Delete an existing manipulation node.
+
+ @Param[in] ioc_fm_obj_t The id of the manipulation node to delete.
+
+ @Return 0 on success; error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_ManipNodeSet().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MANIP_NODE_DELETE_COMPAT _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(44), ioc_compat_fm_obj_t)
+#endif
+#define FM_PCD_IOC_MANIP_NODE_DELETE _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(44), ioc_fm_obj_t)
+
+/**
+ @Function FM_PCD_ManipGetStatistics
+
+ @Description Retrieve the manipulation statistics.
+
+ @Param[in] h_ManipNode A handle to a manipulation node.
+ @Param[out] p_FmPcdManipStats A structure for retrieving the manipulation statistics
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_ManipNodeSet().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_MANIP_GET_STATS_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(50), ioc_compat_fm_pcd_manip_get_stats_t)
+#endif
+#define FM_PCD_IOC_MANIP_GET_STATS _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(50), ioc_fm_pcd_manip_get_stats_t)
+
+/**
+@Function FM_PCD_SetAdvancedOffloadSupport
+
+@Description This routine must be called in order to support the following features:
+ IP-fragmentation, IP-reassembly, IPsec, Header-manipulation, frame-replicator.
+
+@Param[in] h_FmPcd FM PCD module descriptor.
+
+@Return 0 on success; error code otherwise.
+
+@Cautions Allowed only when PCD is disabled.
+*/
+#define FM_PCD_IOC_SET_ADVANCED_OFFLOAD_SUPPORT _IO(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(45))
+
+#if (DPAA_VERSION >= 11)
+/**
+ @Function FM_PCD_FrmReplicSetGroup
+
+ @Description Initialize a Frame Replicator group.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] p_FrmReplicGroupParam A structure of parameters for the initialization of
+ the frame replicator group.
+
+ @Return A handle to the initialized object on success; NULL otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_FRM_REPLIC_GROUP_SET_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(46), ioc_compat_fm_pcd_frm_replic_group_params_t)
+#endif
+#define FM_PCD_IOC_FRM_REPLIC_GROUP_SET _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(46), ioc_fm_pcd_frm_replic_group_params_t)
+
+/**
+ @Function FM_PCD_FrmReplicDeleteGroup
+
+ @Description Delete a Frame Replicator group.
+
+ @Param[in] h_FrmReplicGroup A handle to the frame replicator group.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_FrmReplicSetGroup().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_FRM_REPLIC_GROUP_DELETE_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(47), ioc_compat_fm_obj_t)
+#endif
+#define FM_PCD_IOC_FRM_REPLIC_GROUP_DELETE _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(47), ioc_fm_obj_t)
+
+/**
+ @Function FM_PCD_FrmReplicAddMember
+
+ @Description Add a member at the index defined by memberIndex.
+
+ @Param[in] h_FrmReplicGroup A handle to the frame replicator group.
+ @Param[in] memberIndex The member index at which to add.
+ @Param[in] p_MemberParams A pointer to the new member parameters.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_FrmReplicSetGroup() of this group.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_FRM_REPLIC_MEMBER_ADD_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(48), ioc_compat_fm_pcd_frm_replic_member_params_t)
+#endif
+#define FM_PCD_IOC_FRM_REPLIC_MEMBER_ADD _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(48), ioc_fm_pcd_frm_replic_member_params_t)
+
+/**
+ @Function FM_PCD_FrmReplicRemoveMember
+
+ @Description Remove the member defined by the index from the relevant group.
+
+ @Param[in] h_FrmReplicGroup A handle to the frame replicator group.
+ @Param[in] memberIndex The member index to remove.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_FrmReplicSetGroup() of this group.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_FRM_REPLIC_MEMBER_REMOVE_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(49), ioc_compat_fm_pcd_frm_replic_member_t)
+#endif
+#define FM_PCD_IOC_FRM_REPLIC_MEMBER_REMOVE _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(49), ioc_fm_pcd_frm_replic_member_t)
+
+#endif
+
+#if (defined(FM_CAPWAP_SUPPORT) && (DPAA_VERSION == 10))
+/**
+ @Function FM_PCD_StatisticsSetNode
+
+ @Description This routine should be called for defining a statistics node.
+
+ @Param[in,out] ioc_fm_pcd_stats_params_t A structure of parameters defining the statistics node.
+
+ @Return 0 on success; Error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PCD_IOC_STATISTICS_SET_NODE_COMPAT _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(45), void *)
+#endif
+#define FM_PCD_IOC_STATISTICS_SET_NODE _IOWR(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(45), void *)
+
+#endif /* FM_CAPWAP_SUPPORT */
+
+/**
+ @Group FM_grp Frame Manager API
+
+ @Description Frame Manager Application Programming Interface
+
+ @{
+*/
+
+/**
+ @Group FM_PCD_grp FM PCD
+
+ @Description Frame Manager PCD (Parse-Classify-Distribute) API.
+
+ The FM PCD module is responsible for the initialization of all
+ global classifying FM modules. This includes the parser general and
+ common registers, the key generator global and common registers,
+ and the policer global and common registers.
+ In addition, the FM PCD SW module will initialize all required
+ key generator schemes, coarse classification flows, and policer
+ profiles. When an FM module is configured to work with one of these
+ entities, it registers to it using the FM PORT API. The PCD
+ module manages the PCD resources - i.e. resource management of
+ KeyGen schemes, etc.
+
+ @{
+*/
+
+/**
+ @Collection General PCD defines
+*/
+#define FM_PCD_MAX_NUM_OF_PRIVATE_HDRS 2 /**< Number of units/headers saved for user */
+
+#define FM_PCD_PRS_NUM_OF_HDRS 16 /**< Number of headers supported by HW parser */
+#define FM_PCD_MAX_NUM_OF_DISTINCTION_UNITS (32 - FM_PCD_MAX_NUM_OF_PRIVATE_HDRS)
+ /**< Number of distinction units is limited by
+ register size (32 bits) minus reserved bits
+ for private headers. */
+#define FM_PCD_MAX_NUM_OF_INTERCHANGEABLE_HDRS 4 /**< Maximum number of interchangeable headers
+ in a distinction unit */
+#define FM_PCD_KG_NUM_OF_GENERIC_REGS FM_KG_NUM_OF_GENERIC_REGS /**< Total number of generic KeyGen registers */
+#define FM_PCD_KG_MAX_NUM_OF_EXTRACTS_PER_KEY 35 /**< Max number allowed on any configuration;
+ For HW implementation reasons, in most
+ cases less than this will be allowed; The
+ driver will return an initialization error
+ if resource is unavailable. */
+#define FM_PCD_KG_NUM_OF_EXTRACT_MASKS 4 /**< Total number of masks allowed on KeyGen extractions. */
+#define FM_PCD_KG_NUM_OF_DEFAULT_GROUPS 16 /**< Number of default value logical groups */
+
+#define FM_PCD_PRS_NUM_OF_LABELS 32 /**< Maximum number of SW parser labels */
+#define FM_SW_PRS_MAX_IMAGE_SIZE (FM_PCD_SW_PRS_SIZE /*- FM_PCD_PRS_SW_OFFSET -FM_PCD_PRS_SW_TAIL_SIZE*/ - FM_PCD_PRS_SW_PATCHES_SIZE)
+ /**< Maximum size of SW parser code */
+
+#define FM_PCD_MAX_MANIP_INSRT_TEMPLATE_SIZE 128 /**< Maximum size of insertion template for
+ insert manipulation */
+
+#if (DPAA_VERSION >= 11)
+#define FM_PCD_FRM_REPLIC_MAX_NUM_OF_ENTRIES 64 /**< Maximum possible entries for frame replicator group */
+#endif /* (DPAA_VERSION >= 11) */
+/* @} */
+
+/**
+ @Group FM_PCD_init_grp FM PCD Initialization Unit
+
+ @Description Frame Manager PCD Initialization Unit API
+
+ @{
+*/
+
+/**
+ @Description Exceptions user callback routine, will be called upon an
+ exception passing the exception identification.
+
+ @Param[in] h_App - User's application descriptor.
+ @Param[in] exception - The exception.
+ */
+typedef void (t_FmPcdExceptionCallback) (t_Handle h_App, ioc_fm_pcd_exceptions exception);
+
+/**
+ @Description Exceptions user callback routine, will be called upon an exception
+ passing the exception identification.
+
+ @Param[in] h_App - User's application descriptor.
+ @Param[in] exception - The exception.
+ @Param[in] index - id of the relevant source (may be scheme or profile id).
+ */
+typedef void (t_FmPcdIdExceptionCallback) (t_Handle h_App,
+ ioc_fm_pcd_exceptions exception,
+ uint16_t index);
+
+/**
+ @Description A callback for enqueuing frame onto a QM queue.
+
+ @Param[in] h_QmArg - Application's handle passed to QM module on enqueue.
+ @Param[in] p_Fd - Frame descriptor for the frame.
+
+ @Return E_OK on success; Error code otherwise.
+ */
+typedef uint32_t (t_FmPcdQmEnqueueCallback) (t_Handle h_QmArg, void *p_Fd);
+
+/**
+ @Description Host-Command parameters structure.
+
+ When using Host command for PCD functionalities, a dedicated port
+ must be used. If this routine is called for a PCD in a single partition
+ environment, or it is the Master partition in a Multi-partition
+ environment, the port will be initialized by the PCD driver
+ initialization routine.
+ */
+typedef struct t_FmPcdHcParams {
+ uintptr_t portBaseAddr; /**< Virtual Address of Host-Command Port memory mapped registers.*/
+ uint8_t portId; /**< Port Id (0-6 relative to Host-Command/Offline-Parsing ports);
+ NOTE: When configuring Host Command port for
+ FMANv3 devices (DPAA_VERSION 11 and higher),
+ portId=0 MUST be used. */
+ uint16_t liodnBase; /**< LIODN base for this port, to be used together with LIODN offset
+ (irrelevant for P4080 revision 1.0) */
+ uint32_t errFqid; /**< Host-Command Port error queue Id. */
+ uint32_t confFqid; /**< Host-Command Port confirmation queue Id. */
+ uint32_t qmChannel; /**< QM channel dedicated to this Host-Command port;
+ will be used by the FM for dequeue. */
+ t_FmPcdQmEnqueueCallback *f_QmEnqueue; /**< Callback routine for enqueuing a frame to the QM */
+ t_Handle h_QmArg; /**< Application's handle passed to QM module on enqueue */
+} t_FmPcdHcParams;
+
+/**
+ @Description The main structure for PCD initialization
+ */
+typedef struct t_FmPcdParams {
+ bool prsSupport; /**< TRUE if Parser will be used for any of the FM ports. */
+ bool ccSupport; /**< TRUE if Coarse Classification will be used for any
+ of the FM ports. */
+ bool kgSupport; /**< TRUE if KeyGen will be used for any of the FM ports. */
+ bool plcrSupport; /**< TRUE if Policer will be used for any of the FM ports. */
+ t_Handle h_Fm; /**< A handle to the FM module. */
+ uint8_t numOfSchemes; /**< Number of schemes dedicated to this partition.
+ This parameter is relevant only if 'kgSupport'=TRUE. */
+ bool useHostCommand; /**< Optional for single partition, Mandatory for Multi partition */
+ t_FmPcdHcParams hc; /**< Host Command parameters, relevant only if 'useHostCommand'=TRUE;
+ Relevant when FM does not run in "guest-mode". */
+
+ t_FmPcdExceptionCallback *f_Exception; /**< Callback routine for general PCD exceptions;
+ Relevant when FM does not run in "guest-mode". */
+ t_FmPcdIdExceptionCallback *f_ExceptionId; /**< Callback routine for specific KeyGen scheme or
+ Policer profile exceptions;
+ Relevant when FM does not run in "guest-mode". */
+ t_Handle h_App; /**< A handle to an application layer object; This handle will
+ be passed by the driver upon calling the above callbacks;
+ Relevant when FM does not run in "guest-mode". */
+ uint8_t partPlcrProfilesBase; /**< The first policer-profile-id dedicated to this partition.
+ This parameter is relevant only if 'plcrSupport'=TRUE.
+ NOTE: this parameter is relevant only when working with multiple partitions. */
+ uint16_t partNumOfPlcrProfiles; /**< Number of policer-profiles dedicated to this partition.
+ This parameter is relevant only if 'plcrSupport'=TRUE.
+ NOTE: this parameter is relevant only when working with multiple partitions. */
+} t_FmPcdParams;
+
+typedef struct t_FmPcdPrsLabelParams {
+ uint32_t instructionOffset;
+ ioc_net_header_type hdr;
+ uint8_t indexPerHdr;
+} t_FmPcdPrsLabelParams;
+
+typedef struct t_FmPcdPrsSwParams {
+ bool override;
+ uint32_t size;
+ uint16_t base;
+ uint8_t *p_Code;
+ uint32_t swPrsDataParams[FM_PCD_PRS_NUM_OF_HDRS];
+ uint8_t numOfLabels;
+ t_FmPcdPrsLabelParams labelsTable[FM_PCD_PRS_NUM_OF_LABELS];
+} t_FmPcdPrsSwParams;
+
+/**
+ @Function FM_PCD_Config
+
+ @Description Basic configuration of the PCD module.
+ Creates a descriptor for the FM PCD module.
+
+ @Param[in] p_FmPcdParams A structure of parameters for the initialization of PCD.
+
+ @Return A handle to the initialized module.
+*/
+t_Handle FM_PCD_Config(t_FmPcdParams *p_FmPcdParams);
+
+/**
+ @Function FM_PCD_Init
+
+ @Description Initialization of the PCD module.
+
+ @Param[in] h_FmPcd - FM PCD module descriptor.
+
+ @Return E_OK on success; Error code otherwise.
+*/
+uint32_t FM_PCD_Init(t_Handle h_FmPcd);
+
+/**
+ @Function FM_PCD_Free
+
+ @Description Frees all resources that were assigned to FM module.
+
+ Calling this routine invalidates the descriptor.
+
+ @Param[in] h_FmPcd - FM PCD module descriptor.
+
+ @Return E_OK on success; Error code otherwise.
+*/
+uint32_t FM_PCD_Free(t_Handle h_FmPcd);
+
+/**
+ @Group FM_PCD_advanced_cfg_grp FM PCD Advanced Configuration Unit
+
+ @Description Frame Manager PCD Advanced Configuration API.
+
+ @{
+*/
+
+/**
+ @Function FM_PCD_ConfigException
+
+ @Description Calling this routine changes the internal driver database
+ from its default selection of enabled exceptions.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] exception The exception to be selected.
+ @Param[in] enable TRUE to enable interrupt, FALSE to mask it.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID)
+*/
+uint32_t FM_PCD_ConfigException(t_Handle h_FmPcd, ioc_fm_pcd_exceptions exception, bool enable);
+
+/**
+ @Function FM_PCD_ConfigHcFramesDataMemory
+
+ @Description Configures memory-partition-id for FMan-Controller Host-Command
+ frames. Calling this routine changes the internal driver
+ database from its default configuration [0].
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] memId Memory partition ID.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions This routine may be called only if 'useHostCommand' was TRUE
+ when FM_PCD_Config() routine was called.
+*/
+uint32_t FM_PCD_ConfigHcFramesDataMemory(t_Handle h_FmPcd, uint8_t memId);
+
+/**
+ @Function FM_PCD_ConfigPlcrNumOfSharedProfiles
+
+ @Description Calling this routine changes the internal driver database
+ from its default number of shared policer profiles
+ [DEFAULT_numOfSharedPlcrProfiles].
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] numOfSharedPlcrProfiles Number of profiles to
+ be shared between ports on this partition
+
+ @Return E_OK on success; Error code otherwise.
+*/
+uint32_t FM_PCD_ConfigPlcrNumOfSharedProfiles(t_Handle h_FmPcd, uint16_t numOfSharedPlcrProfiles);
+
+/**
+ @Function FM_PCD_ConfigPlcrAutoRefreshMode
+
+ @Description Calling this routine changes the internal driver database
+ from its default policer auto-refresh mode.
+ By default, auto-refresh is [DEFAULT_plcrAutoRefresh].
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] enable TRUE to enable, FALSE to disable
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID)
+*/
+uint32_t FM_PCD_ConfigPlcrAutoRefreshMode(t_Handle h_FmPcd, bool enable);
+
+/**
+ @Function FM_PCD_ConfigPrsMaxCycleLimit
+
+ @Description Calling this routine changes the internal data structure for
+ the maximum parsing time from its default value
+ [DEFAULT_MAX_PRS_CYC_LIM].
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] value 0 to disable the mechanism, or new
+ maximum parsing time.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID)
+*/
+uint32_t FM_PCD_ConfigPrsMaxCycleLimit(t_Handle h_FmPcd, uint16_t value);
+
+/** @} */ /* end of FM_PCD_advanced_cfg_grp group */
+/** @} */ /* end of FM_PCD_init_grp group */
+
+/**
+ @Group FM_PCD_Runtime_grp FM PCD Runtime Unit
+
+ @Description Frame Manager PCD Runtime Unit API
+
+ The runtime control allows creation of PCD infrastructure modules
+ such as Network Environment Characteristics, Classification Plan
+ Groups and Coarse Classification Trees.
+ It also allows on-the-fly initialization, modification and removal
+ of PCD modules such as KeyGen schemes, coarse classification nodes
+ and Policer profiles.
+
+ In order to explain the programming model of the PCD driver interface
+ a few terms should be explained, and will be used below.
+ - Distinction Header - One of the 16 protocols supported by the FM parser,
+ or one of the SHIM headers (1 or 2). May be a header with a special
+ option (see below).
+ - Interchangeable Headers Group - This is a group of Headers recognized
+ by either one of them. For example, if in a specific context the user
+ chooses to treat IPv4 and IPv6 in the same way, they may create an
+ interchangeable Headers Unit consisting of these 2 headers.
+ - A Distinction Unit - a Distinction Header or an Interchangeable Headers
+ Group.
+ - Header with special option - applies to Ethernet, MPLS, VLAN, IPv4 and
+ IPv6, includes multicast, broadcast and other protocol specific options.
+ In terms of hardware it relates to the options available in the classification
+ plan.
+ - Network Environment Characteristics - a set of Distinction Units that define
+ the total recognizable header selection for a certain environment. This is
+ NOT the list of all headers that will ever appear in a flow, but rather
+ everything that needs distinction in a flow, where distinction is made by KeyGen
+ schemes and coarse classification action descriptors.
+
+ The PCD runtime modules initialization is done in stages. The first stage after
+ initializing the PCD module itself is to establish a Network Flows Environment
+ Definition. The application may choose to establish one or more such environments.
+ Later, when needed, the application will have to state, for some of its modules,
+ to which single environment it belongs.
+
+ @{
+*/
+
+t_Handle FM_PCD_Open(t_FmPcdParams *p_FmPcdParams);
+void FM_PCD_Close(t_Handle h_FmPcd);
+
+/**
+ @Function FM_PCD_Enable
+
+ @Description This routine should be called after PCD is initialized, to enable all
+ PCD engines according to their existing configuration.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init() and when PCD is disabled.
+*/
+uint32_t FM_PCD_Enable(t_Handle h_FmPcd);
+
+/**
+ @Function FM_PCD_Disable
+
+ @Description This routine may be called when PCD is enabled in order to
+ disable all PCD engines. It may be called
+ only when none of the ports in the system are using the PCD.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init() and when PCD is enabled.
+*/
+uint32_t FM_PCD_Disable(t_Handle h_FmPcd);
+
+/**
+ @Function FM_PCD_GetCounter
+
+ @Description Reads one of the FM PCD counters.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] counter The requested counter.
+
+ @Return Counter's current value.
+
+ @Cautions Allowed only following FM_PCD_Init().
+ Note that it is the user's responsibility to call this routine only
+ for enabled counters, and there will be no indication if a
+ disabled counter is accessed.
+*/
+uint32_t FM_PCD_GetCounter(t_Handle h_FmPcd, ioc_fm_pcd_counters counter);
+
+/**
+@Function FM_PCD_PrsLoadSw
+
+@Description This routine may be called in order to load software parsing code.
+
+@Param[in] h_FmPcd FM PCD module descriptor.
+@Param[in] p_SwPrs A pointer to a structure of software
+ parser parameters, including the software
+ parser image.
+
+@Return E_OK on success; Error code otherwise.
+
+@Cautions Allowed only following FM_PCD_Init() and when PCD is disabled.
+ This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID)
+*/
+uint32_t FM_PCD_PrsLoadSw(t_Handle h_FmPcd, ioc_fm_pcd_prs_sw_params_t *p_SwPrs);
+
+/**
+@Function FM_PCD_SetAdvancedOffloadSupport
+
+@Description This routine must be called in order to support the following features:
+ IP-fragmentation, IP-reassembly, IPsec, Header-manipulation, frame-replicator.
+
+@Param[in] h_FmPcd FM PCD module descriptor.
+
+@Return E_OK on success; Error code otherwise.
+
+@Cautions Allowed only following FM_PCD_Init() and when PCD is disabled.
+ This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID)
+*/
+uint32_t FM_PCD_SetAdvancedOffloadSupport(t_Handle h_FmPcd);
+
+/**
+ @Function FM_PCD_KgSetDfltValue
+
+ @Description Calling this routine sets a global default value to be used
+ by the KeyGen when the parser does not recognize a required
+ field/header.
+ By default, these values are 0.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] valueId 0,1 - one of 2 global default values.
+ @Param[in] value The requested default value.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init() and when PCD is disabled.
+ This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID)
+*/
+uint32_t FM_PCD_KgSetDfltValue(t_Handle h_FmPcd, uint8_t valueId, uint32_t value);
+
+/**
+ @Function FM_PCD_KgSetAdditionalDataAfterParsing
+
+ @Description Calling this routine allows the KeyGen to access data past
+ the parser finishing point.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] payloadOffset the number of bytes beyond the parser location.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init() and when PCD is disabled.
+ This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID)
+*/
+uint32_t FM_PCD_KgSetAdditionalDataAfterParsing(t_Handle h_FmPcd, uint8_t payloadOffset);
+
+/**
+ @Function FM_PCD_SetException
+
+ @Description Calling this routine enables/disables PCD interrupts.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] exception The exception to be selected.
+ @Param[in] enable TRUE to enable interrupt, FALSE to mask it.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+ This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID)
+*/
+uint32_t FM_PCD_SetException(t_Handle h_FmPcd, ioc_fm_pcd_exceptions exception, bool enable);
+
+/**
+ @Function FM_PCD_ModifyCounter
+
+ @Description Sets a value to an enabled counter. Use "0" to reset the counter.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] counter The requested counter.
+ @Param[in] value The requested value to be written into the counter.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+ This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID)
+*/
+uint32_t FM_PCD_ModifyCounter(t_Handle h_FmPcd, ioc_fm_pcd_counters counter, uint32_t value);
+
+/**
+ @Function FM_PCD_SetPlcrStatistics
+
+ @Description This routine may be used to enable/disable policer statistics
+ counters. By default, statistics are enabled.
+
+ @Param[in] h_FmPcd FM PCD module descriptor
+ @Param[in] enable TRUE to enable, FALSE to disable.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+ This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID)
+*/
+uint32_t FM_PCD_SetPlcrStatistics(t_Handle h_FmPcd, bool enable);
+
+/**
+ @Function FM_PCD_SetPrsStatistics
+
+ @Description Defines whether to gather parser statistics for all ports.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] enable TRUE to enable, FALSE to disable.
+
+ @Return None
+
+ @Cautions Allowed only following FM_PCD_Init().
+ This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID)
+*/
+void FM_PCD_SetPrsStatistics(t_Handle h_FmPcd, bool enable);
+
+#if (defined(DEBUG_ERRORS) && (DEBUG_ERRORS > 0))
+/**
+ @Function FM_PCD_DumpRegs
+
+ @Description Dumps all PCD registers
+
+ @Param[in] h_FmPcd A handle to an FM PCD Module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+ NOTE: this routine may be called only for FM in master mode
+ (i.e. 'guestId'=NCSW_MASTER_ID) or in a case that the registers
+ are mapped.
+*/
+uint32_t FM_PCD_DumpRegs(t_Handle h_FmPcd);
+
+/**
+ @Function FM_PCD_KgDumpRegs
+
+ @Description Dumps all PCD KG registers
+
+ @Param[in] h_FmPcd A handle to an FM PCD Module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+ NOTE: this routine may be called only for FM in master mode
+ (i.e. 'guestId'=NCSW_MASTER_ID) or in a case that the registers
+ are mapped.
+*/
+uint32_t FM_PCD_KgDumpRegs(t_Handle h_FmPcd);
+
+/**
+ @Function FM_PCD_PlcrDumpRegs
+
+ @Description Dumps all PCD Policer registers
+
+ @Param[in] h_FmPcd A handle to an FM PCD Module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+ NOTE: this routine may be called only for FM in master mode
+ (i.e. 'guestId'=NCSW_MASTER_ID) or in a case that the registers
+ are mapped.
+*/
+uint32_t FM_PCD_PlcrDumpRegs(t_Handle h_FmPcd);
+
+/**
+ @Function FM_PCD_PlcrProfileDumpRegs
+
+ @Description Dumps all PCD Policer profile registers
+
+ @Param[in] h_Profile A handle to a Policer profile.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+ NOTE: this routine may be called only for FM in master mode
+ (i.e. 'guestId'=NCSW_MASTER_ID) or in a case that the registers
+ are mapped.
+*/
+uint32_t FM_PCD_PlcrProfileDumpRegs(t_Handle h_Profile);
+
+/**
+ @Function FM_PCD_PrsDumpRegs
+
+ @Description Dumps all PCD Parser registers
+
+ @Param[in] h_FmPcd A handle to an FM PCD Module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+ NOTE: this routine may be called only for FM in master mode
+ (i.e. 'guestId'=NCSW_MASTER_ID) or in a case that the registers
+ are mapped.
+*/
+uint32_t FM_PCD_PrsDumpRegs(t_Handle h_FmPcd);
+
+/**
+ @Function FM_PCD_HcDumpRegs
+
+ @Description Dumps HC Port registers
+
+ @Param[in] h_FmPcd A handle to an FM PCD Module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+ NOTE: this routine may be called only for FM in master mode
+ (i.e. 'guestId'=NCSW_MASTER_ID).
+*/
+uint32_t FM_PCD_HcDumpRegs(t_Handle h_FmPcd);
+#endif /* (defined(DEBUG_ERRORS) && ... */
+
+
+/**
+ @Group FM_PCD_Runtime_build_grp FM PCD Runtime Building Unit
+
+ @Description Frame Manager PCD Runtime Building API
+
+ This group contains routines for setting, deleting and modifying
+ PCD resources, for defining the total PCD tree.
+ @{
+*/
+
+/**
+ @Collection Definitions of coarse classification
+ parameters as required by KeyGen (when coarse classification
+ is the next engine after this scheme).
+*/
+#define FM_PCD_MAX_NUM_OF_CC_TREES 8
+#define FM_PCD_MAX_NUM_OF_CC_GROUPS 16
+#define FM_PCD_MAX_NUM_OF_CC_UNITS 4
+#define FM_PCD_MAX_NUM_OF_KEYS 256
+#define FM_PCD_MAX_NUM_OF_FLOWS (4 * KILOBYTE)
+#define FM_PCD_MAX_SIZE_OF_KEY 56
+#define FM_PCD_MAX_NUM_OF_CC_ENTRIES_IN_GRP 16
+#define FM_PCD_LAST_KEY_INDEX 0xffff
+
+#define FM_PCD_MAX_NUM_OF_CC_NODES 255 /* Obsolete, not used - will be removed in the future */
+/* @} */
+
+/**
+ @Collection A set of definitions to allow protocol
+ special option description.
+*/
+typedef uint32_t protocolOpt_t; /**< A general type to define a protocol option. */
+
+typedef protocolOpt_t ethProtocolOpt_t; /**< Ethernet protocol options. */
+#define ETH_BROADCAST 0x80000000 /**< Ethernet Broadcast. */
+#define ETH_MULTICAST 0x40000000 /**< Ethernet Multicast. */
+
+typedef protocolOpt_t vlanProtocolOpt_t; /**< VLAN protocol options. */
+#define VLAN_STACKED 0x20000000 /**< Stacked VLAN. */
+
+typedef protocolOpt_t mplsProtocolOpt_t; /**< MPLS protocol options. */
+#define MPLS_STACKED 0x10000000 /**< Stacked MPLS. */
+
+typedef protocolOpt_t ipv4ProtocolOpt_t; /**< IPv4 protocol options. */
+#define IPV4_BROADCAST_1 0x08000000 /**< IPv4 Broadcast. */
+#define IPV4_MULTICAST_1 0x04000000 /**< IPv4 Multicast. */
+#define IPV4_UNICAST_2 0x02000000 /**< Tunneled IPv4 - Unicast. */
+#define IPV4_MULTICAST_BROADCAST_2 0x01000000 /**< Tunneled IPv4 - Broadcast/Multicast. */
+
+#define IPV4_FRAG_1 0x00000008 /**< IPV4 reassembly option.
+ IPV4 Reassembly manipulation requires network
+ environment with IPV4 header and IPV4_FRAG_1 option */
+
+typedef protocolOpt_t ipv6ProtocolOpt_t; /**< IPv6 protocol options. */
+#define IPV6_MULTICAST_1 0x00800000 /**< IPv6 Multicast. */
+#define IPV6_UNICAST_2 0x00400000 /**< Tunneled IPv6 - Unicast. */
+#define IPV6_MULTICAST_2 0x00200000 /**< Tunneled IPv6 - Multicast. */
+
+#define IPV6_FRAG_1 0x00000004 /**< IPV6 reassembly option.
+ IPV6 Reassembly manipulation requires network
+ environment with IPV6 header and IPV6_FRAG_1 option;
+ if a fragment is found, the fragment-extension offset
+ may be found at 'shim2' (in the parser result). */
+#if (DPAA_VERSION >= 11)
+typedef protocolOpt_t capwapProtocolOpt_t; /**< CAPWAP protocol options. */
+#define CAPWAP_FRAG_1 0x00000008 /**< CAPWAP reassembly option.
+ CAPWAP Reassembly manipulation requires network
+ environment with CAPWAP header and CAPWAP_FRAG_1 option;
+ if a fragment is found, the fragment-extension offset
+ may be found at 'shim2' (in the parser result). */
+#endif /* (DPAA_VERSION >= 11) */
+
+/* @} */
+
+#define FM_PCD_MANIP_MAX_HDR_SIZE 256
+#define FM_PCD_MANIP_DSCP_TO_VLAN_TRANS 64
+
+/**
+ @Collection A set of definitions to support Header Manipulation selection.
+*/
+typedef uint32_t hdrManipFlags_t;
+ /**< A general type to define HMan update command flags. */
+
+typedef hdrManipFlags_t ipv4HdrManipUpdateFlags_t;
+ /**< IPv4 protocol HMan update command flags. */
+
+#define HDR_MANIP_IPV4_TOS 0x80000000
+ /**< update TOS with the given value ('tos' field
+ of t_FmPcdManipHdrFieldUpdateIpv4) */
+#define HDR_MANIP_IPV4_ID 0x40000000
+ /**< update IP ID with the given value ('id' field
+ of t_FmPcdManipHdrFieldUpdateIpv4) */
+#define HDR_MANIP_IPV4_TTL 0x20000000
+ /**< Decrement TTL by 1 */
+#define HDR_MANIP_IPV4_SRC 0x10000000
+ /**< update IP source address with the given value
+ ('src' field of t_FmPcdManipHdrFieldUpdateIpv4) */
+#define HDR_MANIP_IPV4_DST 0x08000000
+ /**< update IP destination address with the given value
+ ('dst' field of t_FmPcdManipHdrFieldUpdateIpv4) */
+
+typedef hdrManipFlags_t ipv6HdrManipUpdateFlags_t; /**< IPv6 protocol HMan update command flags. */
+
+#define HDR_MANIP_IPV6_TC 0x80000000
+ /**< update the Traffic Class field with the given value
+ ('trafficClass' field of t_FmPcdManipHdrFieldUpdateIpv6) */
+#define HDR_MANIP_IPV6_HL 0x40000000
+ /**< Decrement Hop Limit by 1 */
+#define HDR_MANIP_IPV6_SRC 0x20000000
+ /**< update IP source address with the given value
+ ('src' field of t_FmPcdManipHdrFieldUpdateIpv6) */
+#define HDR_MANIP_IPV6_DST 0x10000000
+ /**< update IP destination address with the given value
+ ('dst' field of t_FmPcdManipHdrFieldUpdateIpv6) */
+
+typedef hdrManipFlags_t tcpUdpHdrManipUpdateFlags_t;
+ /**< TCP/UDP protocol HMan update command flags. */
+
+#define HDR_MANIP_TCP_UDP_SRC 0x80000000
+ /**< update the TCP/UDP source port with the given value
+ ('src' field of t_FmPcdManipHdrFieldUpdateTcpUdp) */
+#define HDR_MANIP_TCP_UDP_DST 0x40000000
+ /**< update the TCP/UDP destination port with the given value
+ ('dst' field of t_FmPcdManipHdrFieldUpdateTcpUdp) */
+#define HDR_MANIP_TCP_UDP_CHECKSUM 0x20000000
+ /**< update TCP/UDP checksum */
+
+/* @} */
+
+/**
+ @Description A type used for returning the order of the key extraction.
+ Each value in this array represents the index of the extraction
+ command as defined by the user in the initialization extraction array.
+ The valid size of this array is the user-defined number of extractions
+ required (also marked by the second '0' in this array).
+*/
+typedef uint8_t t_FmPcdKgKeyOrder[FM_PCD_KG_MAX_NUM_OF_EXTRACTS_PER_KEY];
+
+#if ((DPAA_VERSION == 10) && defined(FM_CAPWAP_SUPPORT))
+/**
+ @Description Enumeration type for selecting type of statistics mode
+*/
+typedef enum ioc_fm_pcd_stats_type_t {
+ e_FM_PCD_STATS_PER_FLOWID = 0
+ /**< Flow ID is used as index for getting statistics */
+} ioc_fm_pcd_stats_type_t;
+#endif /* ((DPAA_VERSION == 10) && defined(FM_CAPWAP_SUPPORT)) */
+
+/**
+ @Collection Definitions for CC statistics
+*/
+#if (DPAA_VERSION >= 11)
+#define FM_PCD_CC_STATS_MAX_NUM_OF_FLR 10
+ /* Maximal supported number of frame length ranges */
+#define FM_PCD_CC_STATS_FLR_SIZE 2
+ /* Size in bytes of a frame length range limit */
+#endif /* (DPAA_VERSION >= 11) */
+#define FM_PCD_CC_STATS_COUNTER_SIZE 4
+ /* Size in bytes of a frame length range counter */
+/* @} */
+
+/**
+ @Description Parameters for defining CC keys parameters
+ The driver supports two methods for CC node allocation: dynamic and static.
+ Static mode was created in order to prevent runtime alloc/free
+ of FMan memory (MURAM), which may cause fragmentation; in this mode,
+ the driver automatically allocates the memory according to
+ 'maxNumOfKeys' parameter. The driver calculates the maximal memory
+ size that may be used for this CC-Node taking into consideration
+ 'maskSupport' and 'statisticsMode' parameters.
+ When 'action' = e_FM_PCD_ACTION_INDEXED_LOOKUP in the extraction
+ parameters of this node, 'maxNumOfKeys' must be equal to 'numOfKeys'.
+ In dynamic mode, 'maxNumOfKeys' must be zero. At initialization,
+ all required structures are allocated according to 'numOfKeys'
+ parameter. During runtime modification, these structures are
+ re-allocated according to the updated number of keys.
+
+ Please note that 'action' and 'icIndxMask' mentioned in the
+ specific parameter explanations are passed in the extraction
+ parameters of the node (fields of extractCcParams.extractNonHdr).
+*/
+typedef struct t_KeysParams {
+ uint16_t maxNumOfKeys;
+ /**< Maximum number of keys that will (ever) be used in this CC-Node;
+ A value of zero may be used for dynamic memory allocation. */
+ bool maskSupport;
+ /**< This parameter is relevant only if a node is initialized with
+ 'action' = e_FM_PCD_ACTION_EXACT_MATCH and maxNumOfKeys > 0;
+ Should be TRUE to reserve table memory for key masks, even if
+ initial keys do not contain masks, or if the node was initialized
+ as 'empty' (without keys); this allows the user to add keys with
+ masks at runtime.
+ NOTE that if the user wants to use only global masks (i.e. one common
+ mask for all entries within this table), this parameter should be set to 'FALSE'. */
+ ioc_fm_pcd_cc_stats_mode statisticsMode;
+ /**< Determines the supported statistics mode for all node's keys.
+ To enable statistics gathering, statistics should be enabled per
+ every key, using 'statisticsEn' in next engine parameters structure
+ of that key;
+ If 'maxNumOfKeys' is set, all required structures will be
+ preallocated for all keys. */
+#if (DPAA_VERSION >= 11)
+ uint16_t frameLengthRanges[FM_PCD_CC_STATS_MAX_NUM_OF_FLR];
+ /**< Relevant only for 'RMON' statistics mode
+ (this feature is supported only on B4860 device);
+ Holds a list of programmable thresholds - for each received frame,
+ its length in bytes is examined against these range thresholds and
+ the appropriate counter is incremented by 1 - for example, to belong
+ to range i, the following should hold:
+ range i-1 threshold < frame length <= range i threshold
+ Each range threshold must be larger than its preceding range
+ threshold, and the last range threshold must be 0xFFFF. */
+#endif /* (DPAA_VERSION >= 11) */
+ uint16_t numOfKeys;
+ /**< Number of initial keys;
+ Note that when 'action' = e_FM_PCD_ACTION_INDEXED_LOOKUP,
+ this field should equal 2 raised to the power of the number
+ of bits that are set in 'icIndxMask'. */
+ uint8_t keySize;
+ /**< Size of key - for extraction of type FULL_FIELD, 'keySize' has
+ to be the standard size of the selected key; For other extraction
+ types, 'keySize' has to be as size of extraction; When 'action' =
+ e_FM_PCD_ACTION_INDEXED_LOOKUP, 'keySize' must be 2. */
+ ioc_fm_pcd_cc_key_params_t keyParams[FM_PCD_MAX_NUM_OF_KEYS];
+ /**< An array with 'numOfKeys' entries, each entry specifies the
+ corresponding key parameters;
+ When 'action' = e_FM_PCD_ACTION_EXACT_MATCH, this value must not
+ exceed 255 (FM_PCD_MAX_NUM_OF_KEYS-1) as the last entry is saved
+ for the 'miss' entry. */
+ ioc_fm_pcd_cc_next_engine_params_t ccNextEngineParamsForMiss;
+ /**< Parameters for defining the next engine when a key is not matched;
+ Not relevant if action = e_FM_PCD_ACTION_INDEXED_LOOKUP. */
+} t_KeysParams;
+
+#if ((DPAA_VERSION == 10) && defined(FM_CAPWAP_SUPPORT))
+/**
+ @Description Parameters for defining an insertion manipulation
+ of type e_FM_PCD_MANIP_INSRT_TO_START_OF_FRAME_TEMPLATE
+*/
+typedef struct ioc_fm_pcd_manip_hdr_insrt_by_template_params_t {
+ uint8_t size; /**< Size of insert template to the start of the frame. */
+ uint8_t hdrTemplate[FM_PCD_MAX_MANIP_INSRT_TEMPLATE_SIZE];
+ /**< Array of the insertion template. */
+
+ bool modifyOuterIp;
+ /**< TRUE if the user wants to modify some fields in the outer IP header. */
+ struct {
+ uint16_t ipOuterOffset;
+ /**< Offset of outer IP in the insert template, relevant if modifyOuterIp = TRUE.*/
+ uint16_t dscpEcn;
+ /**< Value of dscpEcn in the outer IP header, relevant if modifyOuterIp = TRUE.
+ In IPv4, dscpEcn is only one byte - it has to be right-aligned. */
+ bool udpPresent;
+ /**< TRUE if UDP is present in the insert template, relevant if modifyOuterIp = TRUE.*/
+ uint8_t udpOffset;
+ /**< Offset in the insert template of UDP, relevant
+ if modifyOuterIp = TRUE and udpPresent=TRUE.*/
+ uint8_t ipIdentGenId;
+ /**< Used by FMan-CTRL to calculate IP-identification field,
+ relevant if modifyOuterIp = TRUE.*/
+ bool recalculateLength;
+ /**< TRUE if recalculate length has to be performed due to the engines in the path
+ which can change the frame later, relevant if modifyOuterIp = TRUE.*/
+ struct {
+ uint8_t blockSize;
+ /**< The CAAM block-size; Used by FMan-CTRL to calculate the IP Total Length field.*/
+ uint8_t extraBytesAddedAlignedToBlockSize;
+ /**< Used by FMan-CTRL to calculate the IP Total Length field and UDP length*/
+ uint8_t extraBytesAddedNotAlignedToBlockSize;
+ /**< Used by FMan-CTRL to calculate the IP Total Length field and UDP length.*/
+ } recalculateLengthParams;
+ /**< Recalculate length parameters - relevant
+ if modifyOuterIp = TRUE and recalculateLength = TRUE */
+ } modifyOuterIpParams;
+ /**< Outer IP modification parameters - ignored if modifyOuterIp is FALSE */
+
+ bool modifyOuterVlan;
+ /**< TRUE if user wants to modify VPri field in the outer VLAN header*/
+ struct {
+ uint8_t vpri; /**< Value of VPri, relevant if modifyOuterVlan = TRUE;
+ VPri is only 3 bits - it has to be right-aligned. */
+ } modifyOuterVlanParams;
+} ioc_fm_pcd_manip_hdr_insrt_by_template_params_t;
+
+/**
+ @Description Parameters for defining CAPWAP fragmentation
+*/
+typedef struct ioc_capwap_fragmentation_params {
+ uint16_t sizeForFragmentation;
+ /**< If the frame length is greater than this value,
+ CAPWAP fragmentation will be executed. */
+ bool headerOptionsCompr;
+ /**< TRUE - the first fragment includes the CAPWAP header options field,
+ and all other fragments exclude the CAPWAP options field;
+ FALSE - all fragments include the CAPWAP header options field. */
+} ioc_capwap_fragmentation_params;
+
+/**
+ @Description Parameters for defining CAPWAP reassembly
+*/
+typedef struct ioc_capwap_reassembly_params {
+ uint16_t maxNumFramesInProcess;
+ /**< Number of frames which can be reassembled concurrently; must be a power of 2.
+ In case numOfFramesPerHashEntry == e_FM_PCD_MANIP_FOUR_WAYS_HASH,
+ maxNumFramesInProcess has to be in the range 4 - 512;
+ in case numOfFramesPerHashEntry == e_FM_PCD_MANIP_EIGHT_WAYS_HASH,
+ maxNumFramesInProcess has to be in the range 8 - 2048 */
+
+ bool haltOnDuplicationFrag;
+ /**< If TRUE, the reassembly process will be halted on a duplicated fragment,
+ and all processed fragments will be enqueued with error indication;
+ If FALSE, only duplicated fragments will be enqueued with error indication. */
+
+ e_FmPcdManipReassemTimeOutMode timeOutMode;
+ /**< Expiration delay initialized by the reassembly process */
+ uint32_t fqidForTimeOutFrames;
+ /**< FQID in which time out frames will enqueue during Time Out Process */
+ uint32_t timeoutRoutineRequestTime;
+ /**< Represents the time interval in microseconds between consecutive
+ timeout routine requests; it has to be a power of 2. */
+ uint32_t timeoutThresholdForReassmProcess;
+ /**< Time interval (microseconds) for marking frames in process as too old;
+ Frames in process are those for which at least one fragment was received
+ but not all fragments. */
+
+ e_FmPcdManipReassemWaysNumber numOfFramesPerHashEntry;
+ /**< Number of frames per hash entry (needed for the reassembly process) */
+} ioc_capwap_reassembly_params;
+
+/**
+ @Description Parameters for defining fragmentation/reassembly manipulation
+*/
+typedef struct ioc_fm_pcd_manip_frag_or_reasm_params_t {
+ bool frag; /**< TRUE if using the structure for fragmentation,
+ otherwise this structure is used for reassembly */
+ uint8_t sgBpid; /**< Scatter/Gather buffer pool id;
+ Same LIODN number is used for these buffers as for
+ the received frames buffers, so buffers of this pool
+ need to be allocated in the same memory area as the
+ received buffers. If the received buffers arrive
+ from different sources, the Scatter/Gather BP id
+ should be common to all these sources. */
+ ioc_net_header_type hdr; /**< Header selection */
+ union {
+ ioc_capwap_fragmentation_params capwapFragParams;
+ /**< Structure for CAPWAP fragmentation,
+ relevant if 'frag' = TRUE, 'hdr' = HEADER_TYPE_CAPWAP */
+ ioc_capwap_reassembly_params capwapReasmParams;
+ /**< Structure for CAPWAP reassembly,
+ relevant if 'frag' = FALSE, 'hdr' = HEADER_TYPE_CAPWAP */
+ } u;
+} ioc_fm_pcd_manip_frag_or_reasm_params_t;
+#endif /* ((DPAA_VERSION == 10) && defined(FM_CAPWAP_SUPPORT)) */
+
+/**
+ @Description Parameters for defining custom header manipulation for generic field replacement
+*/
+typedef struct ioc_fm_pcd_manip_hdr_custom_gen_field_replace {
+ uint8_t srcOffset; /**< Location of new data - Offset from
+ Parse Result (>= 16, srcOffset + size <= 32) */
+ uint8_t dstOffset; /**< Location of data to be overwritten - Offset from
+ start of frame (dstOffset + size <= 256). */
+ uint8_t size; /**< The number of bytes (<=16) to be replaced */
+ uint8_t mask; /**< Optional 1 byte mask. Set to select bits for
+ replacement (1 - bit will be replaced);
+ Clear to use field as is. */
+ uint8_t maskOffset; /**< Relevant if mask != 0;
+ Mask offset within the replaced "size" */
+} ioc_fm_pcd_manip_hdr_custom_gen_field_replace;
+
+#if ((DPAA_VERSION == 10) && defined(FM_CAPWAP_SUPPORT))
+/**
+ @Description structure for defining statistics node
+*/
+typedef struct ioc_fm_pcd_stats_params_t {
+ ioc_fm_pcd_stats_type_t type; /**< type of statistics node */
+} ioc_fm_pcd_stats_params_t;
+#endif /* ((DPAA_VERSION == 10) && defined(FM_CAPWAP_SUPPORT)) */
+
+/**
+ @Function FM_PCD_NetEnvCharacteristicsSet
+
+ @Description Define a set of Network Environment Characteristics.
+
+ When setting an environment it is important to understand its
+ application. It is not meant to describe the flows that will run
+ on the ports using this environment, but rather what the user intends
+ to do with the PCD mechanisms in order to parse, classify and
+ distribute those frames.
+ By specifying a distinction unit, the user indicates that this option
+ will be used for distinction between frames at either a KeyGen scheme
+ or a coarse classification action descriptor. Using interchangeable
+ headers to define a unit means that the user is indifferent to which
+ of the interchangeable headers is present in the frame, and wants the
+ distinction to be based on the presence of either one of them.
+
+ Depending on context, there are limitations to the use of environments. A
+ port using the PCD functionality is bound to an environment. Some or even
+ all ports may share an environment but also an environment per port is
+ possible. When initializing a scheme, a classification plan group (see below),
+ or a coarse classification tree, one of the initialized environments must be
+ stated and related to. When a port is bound to a scheme, a classification
+ plan group, or a coarse classification tree, it MUST be bound to the same
+ environment.
+
+ The different PCD modules may rely (for flow definition) ONLY on
+ distinction units as defined by their environment. When initializing a
+ scheme, for example, it may not select IPV4 as a match for
+ recognizing flows unless it was defined in the relating environment. In
+ fact, to guide the user through the configuration of the PCD, each module's
+ characterization in terms of flows is not done using protocol names, but using
+ environment indexes.
+
+ In terms of HW implementation, the list of distinction units sets the LCV
+ vectors, which are later used for the match vector, classification plan
+ vectors and coarse classification indexing.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] p_NetEnvParams A structure of parameters for the initialization of
+ the network environment.
+
+ @Return A handle to the initialized object on success; NULL code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+t_Handle FM_PCD_NetEnvCharacteristicsSet(t_Handle,
+ ioc_fm_pcd_net_env_params_t *);
+
+/**
+ @Function FM_PCD_NetEnvCharacteristicsDelete
+
+ @Description Deletes a set of Network Environment Characteristics.
+
+ @Param[in] h_NetEnv A handle to the Network environment.
+
+ @Return E_OK on success; Error code otherwise.
+*/
+uint32_t FM_PCD_NetEnvCharacteristicsDelete(t_Handle h_NetEnv);
+
+/**
+ @Function FM_PCD_KgSchemeSet
+
+ @Description Initializes or modifies, and enables, a scheme for the KeyGen.
+ This routine should be called for adding or modifying a scheme.
+ When a scheme needs modifying, the API requires that it be
+ rewritten. In such a case, 'modify' should be TRUE. If the
+ routine is called for a valid scheme and 'modify' is FALSE,
+ an error will be returned.
+
+ @Param[in] h_FmPcd If this is a new scheme - A handle to an FM PCD Module.
+ Otherwise NULL (ignored by driver).
+ @Param[in,out] p_SchemeParams A structure of parameters for defining the scheme
+
+ @Return A handle to the initialized scheme on success; NULL code otherwise.
+ When used as "modify" (rather than for setting a new scheme),
+ p_SchemeParams->id.h_Scheme will return NULL if action fails due to scheme
+ BUSY state.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+t_Handle FM_PCD_KgSchemeSet(t_Handle h_FmPcd,
+ ioc_fm_pcd_kg_scheme_params_t *p_SchemeParams);
+
+/**
+ @Function FM_PCD_KgSchemeDelete
+
+ @Description Deleting an initialized scheme.
+
+ @Param[in] h_Scheme scheme handle as returned by FM_PCD_KgSchemeSet()
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init() & FM_PCD_KgSchemeSet().
+*/
+uint32_t FM_PCD_KgSchemeDelete(t_Handle h_Scheme);
+
+/**
+ @Function FM_PCD_KgSchemeGetCounter
+
+ @Description Reads scheme packet counter.
+
+ @Param[in] h_Scheme scheme handle as returned by FM_PCD_KgSchemeSet().
+
+ @Return Counter's current value.
+
+ @Cautions Allowed only following FM_PCD_Init() & FM_PCD_KgSchemeSet().
+*/
+uint32_t FM_PCD_KgSchemeGetCounter(t_Handle h_Scheme);
+
+/**
+ @Function FM_PCD_KgSchemeSetCounter
+
+ @Description Writes scheme packet counter.
+
+ @Param[in] h_Scheme scheme handle as returned by FM_PCD_KgSchemeSet().
+ @Param[in] value New scheme counter value - typically '0' for
+ resetting the counter.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init() & FM_PCD_KgSchemeSet().
+*/
+uint32_t FM_PCD_KgSchemeSetCounter(t_Handle h_Scheme,
+ uint32_t value);
+
+/**
+ @Function FM_PCD_PlcrProfileSet
+
+ @Description Sets a profile entry in the policer profile table.
+ The routine overrides any existing value.
+
+ @Param[in] h_FmPcd A handle to an FM PCD Module.
+ @Param[in] p_Profile A structure of parameters for defining a
+ policer profile entry.
+
+ @Return A handle to the initialized object on success; NULL code otherwise.
+ When used as "modify" (rather than for setting a new profile),
+ p_Profile->id.h_Profile will return NULL if action fails due to profile
+ BUSY state.
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+t_Handle FM_PCD_PlcrProfileSet(t_Handle h_FmPcd,
+ ioc_fm_pcd_plcr_profile_params_t *p_Profile);
+
+/**
+ @Function FM_PCD_PlcrProfileDelete
+
+ @Description Deletes a profile entry in the policer profile table.
+ The routine sets the entry to invalid.
+
+ @Param[in] h_Profile A handle to the profile.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+uint32_t FM_PCD_PlcrProfileDelete(t_Handle h_Profile);
+
+/**
+ @Function FM_PCD_PlcrProfileGetCounter
+
+ @Description Reads the selected counter of the policer profile.
+
+ @Param[in] h_Profile A handle to the profile.
+ @Param[in] counter Counter selector.
+
+ @Return specific counter value.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+uint32_t FM_PCD_PlcrProfileGetCounter(t_Handle h_Profile,
+ ioc_fm_pcd_plcr_profile_counters counter);
+
+/**
+ @Function FM_PCD_PlcrProfileSetCounter
+
+ @Description Sets the selected counter of the policer profile.
+ The routine overrides any existing value.
+
+ @Param[in] h_Profile A handle to the profile.
+ @Param[in] counter Counter selector.
+ @Param[in] value value to set counter with.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+uint32_t FM_PCD_PlcrProfileSetCounter(t_Handle h_Profile,
+ ioc_fm_pcd_plcr_profile_counters counter,
+ uint32_t value);
+
+/**
+ @Function FM_PCD_CcRootBuild
+
+ @Description This routine must be called to define a complete coarse
+ classification tree. This is the way to define coarse
+ classification to a certain flow - the KeyGen schemes
+ may point only to trees defined in this way.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] p_Params A structure of parameters to define the tree.
+
+ @Return A handle to the initialized object on success; NULL code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+t_Handle FM_PCD_CcRootBuild(t_Handle h_FmPcd,
+ ioc_fm_pcd_cc_tree_params_t *p_Params);
+
+/**
+ @Function FM_PCD_CcRootDelete
+
+ @Description Deleting a built tree.
+
+ @Param[in] h_CcTree A handle to a CC tree.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+uint32_t FM_PCD_CcRootDelete(t_Handle h_CcTree);
+
+/**
+ @Function FM_PCD_CcRootModifyNextEngine
+
+ @Description Modify the Next Engine Parameters in the entry of the tree.
+
+ @Param[in] h_CcTree A handle to the tree
+ @Param[in] grpId A Group index in the tree
+ @Param[in] index Entry index in the group defined by grpId
+ @Param[in] p_FmPcdCcNextEngineParams Pointer to new next engine parameters
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_CcRootBuild().
+*/
+uint32_t FM_PCD_CcRootModifyNextEngine(t_Handle h_CcTree,
+ uint8_t grpId,
+ uint8_t index,
+ ioc_fm_pcd_cc_next_engine_params_t *p_FmPcdCcNextEngineParams);
+
+/**
+ @Function FM_PCD_MatchTableSet
+
+ @Description This routine should be called for each CC (coarse classification)
+ node. The whole CC tree should be built bottom up so that each
+ node points to already defined nodes.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] p_Param A structure of parameters defining the CC node
+
+ @Return A handle to the initialized object on success; NULL code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+t_Handle FM_PCD_MatchTableSet(t_Handle h_FmPcd,
+ ioc_fm_pcd_cc_node_params_t *p_Param);
+
+/**
+ @Function FM_PCD_MatchTableDelete
+
+ @Description Deleting a built node.
+
+ @Param[in] h_CcNode A handle to a CC node.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+uint32_t FM_PCD_MatchTableDelete(t_Handle h_CcNode);
+
+/**
+ @Function FM_PCD_MatchTableModifyMissNextEngine
+
+ @Description Modify the Next Engine Parameters of the Miss key case of the node.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] p_FmPcdCcNextEngineParams Parameters for defining next engine
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet();
+ Not relevant when the node is of type 'INDEXED_LOOKUP'.
+ When configuring nextEngine = e_FM_PCD_CC, note that
+ p_FmPcdCcNextEngineParams->ccParams.h_CcNode must be different
+ from the currently changed table.
+
+*/
+uint32_t FM_PCD_MatchTableModifyMissNextEngine(t_Handle h_CcNode,
+ ioc_fm_pcd_cc_next_engine_params_t *p_FmPcdCcNextEngineParams);
+
+/**
+ @Function FM_PCD_MatchTableRemoveKey
+
+ @Description Remove the key (including next engine parameters of this key)
+ defined by the index of the relevant node.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] keyIndex Key index for removing
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet() was called for this
+ node and the nodes that lead to it.
+*/
+uint32_t FM_PCD_MatchTableRemoveKey(t_Handle h_CcNode,
+ uint16_t keyIndex);
+
+/**
+ @Function FM_PCD_MatchTableAddKey
+
+ @Description Add the key (including next engine parameters of this key)
+ in the index defined by keyIndex. Note that 'FM_PCD_LAST_KEY_INDEX'
+ may be used by a user that does not care about the position of the
+ key in the table - in that case, the key will be automatically
+ added by the driver in the last available entry.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] keyIndex Key index for adding.
+ @Param[in] keySize Key size of added key
+ @Param[in] p_KeyParams A pointer to the parameters that include the
+ new key with Next Engine Parameters
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet() was called for this
+ node and the nodes that lead to it.
+*/
+uint32_t FM_PCD_MatchTableAddKey(t_Handle h_CcNode,
+ uint16_t keyIndex,
+ uint8_t keySize,
+ ioc_fm_pcd_cc_key_params_t *p_KeyParams);
+
+/**
+ @Function FM_PCD_MatchTableModifyNextEngine
+
+ @Description Modify the Next Engine Parameters in the relevant key entry of the node.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] keyIndex Key index for Next Engine modifications
+ @Param[in] p_FmPcdCcNextEngineParams Parameters for defining next engine
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet().
+ When configuring nextEngine = e_FM_PCD_CC, note that
+ p_FmPcdCcNextEngineParams->ccParams.h_CcNode must be different
+ from the currently changed table.
+
+*/
+uint32_t FM_PCD_MatchTableModifyNextEngine(t_Handle h_CcNode,
+ uint16_t keyIndex,
+ ioc_fm_pcd_cc_next_engine_params_t *p_FmPcdCcNextEngineParams);
+
+/**
+ @Function FM_PCD_MatchTableModifyKeyAndNextEngine
+
+ @Description Modify the key and Next Engine Parameters of this key in the
+ index defined by the keyIndex.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] keyIndex Key index for modification
+ @Param[in] keySize Key size of added key
+ @Param[in] p_KeyParams A pointer to the parameters that include the
+ modified key and modified Next Engine Params
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet() was called for this
+ node and the nodes that lead to it.
+ When configuring nextEngine = e_FM_PCD_CC, note that
+ p_FmPcdCcNextEngineParams->ccParams.h_CcNode must be different
+ from the currently changed table.
+*/
+uint32_t FM_PCD_MatchTableModifyKeyAndNextEngine(t_Handle h_CcNode,
+ uint16_t keyIndex,
+ uint8_t keySize,
+ ioc_fm_pcd_cc_key_params_t *p_KeyParams);
+
+/**
+ @Function FM_PCD_MatchTableModifyKey
+
+ @Description Modify the key in the index defined by the keyIndex.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] keyIndex Key index for modification
+ @Param[in] keySize Key size of added key
+ @Param[in] p_Key A pointer to the new key
+ @Param[in] p_Mask A pointer to the new mask if relevant,
+ otherwise pointer to NULL
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet() was called for this
+ node and the nodes that lead to it.
+*/
+uint32_t FM_PCD_MatchTableModifyKey(t_Handle h_CcNode,
+ uint16_t keyIndex,
+ uint8_t keySize,
+ uint8_t *p_Key,
+ uint8_t *p_Mask);
+
+/**
+ @Function FM_PCD_MatchTableFindNRemoveKey
+
+ @Description Remove the key (including next engine parameters of this key)
+ defined by the key and mask. Note that this routine will search
+ the node to locate the index of the required key (& mask) to remove.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] keySize Key size of the one to remove.
+ @Param[in] p_Key A pointer to the requested key to remove.
+ @Param[in] p_Mask A pointer to the mask if relevant,
+ otherwise pointer to NULL
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet() was called for this
+ node and the nodes that lead to it.
+*/
+uint32_t FM_PCD_MatchTableFindNRemoveKey(t_Handle h_CcNode,
+ uint8_t keySize,
+ uint8_t *p_Key,
+ uint8_t *p_Mask);
+
+/**
+ @Function FM_PCD_MatchTableFindNModifyNextEngine
+
+ @Description Modify the Next Engine Parameters in the relevant key entry of
+ the node. Note that this routine will search the node to locate
+ the index of the required key (& mask) to modify.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] keySize Key size of the one to modify.
+ @Param[in] p_Key A pointer to the requested key to modify.
+ @Param[in] p_Mask A pointer to the mask if relevant,
+ otherwise pointer to NULL
+ @Param[in] p_FmPcdCcNextEngineParams Parameters for defining next engine
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet().
+ When configuring nextEngine = e_FM_PCD_CC, note that
+ p_FmPcdCcNextEngineParams->ccParams.h_CcNode must be different
+ from the currently changed table.
+*/
+uint32_t FM_PCD_MatchTableFindNModifyNextEngine(t_Handle h_CcNode,
+ uint8_t keySize,
+ uint8_t *p_Key,
+ uint8_t *p_Mask,
+ ioc_fm_pcd_cc_next_engine_params_t *p_FmPcdCcNextEngineParams);
+
+/**
+ @Function FM_PCD_MatchTableFindNModifyKeyAndNextEngine
+
+ @Description Modify the key and Next Engine Parameters of this key in the
+ index defined by the keyIndex. Note that this routine will search
+ the node to locate the index of the required key (& mask) to modify.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] keySize Key size of the one to modify.
+ @Param[in] p_Key A pointer to the requested key to modify.
+ @Param[in] p_Mask A pointer to the mask if relevant,
+ otherwise pointer to NULL
+ @Param[in] p_KeyParams A pointer to the parameters that include the
+ modified key and modified Next Engine Params
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet() was called for this
+ node and the nodes that lead to it.
+ When configuring nextEngine = e_FM_PCD_CC, note that
+ p_FmPcdCcNextEngineParams->ccParams.h_CcNode must be different
+ from the currently changed table.
+*/
+uint32_t FM_PCD_MatchTableFindNModifyKeyAndNextEngine(
+ t_Handle h_CcNode,
+ uint8_t keySize,
+ uint8_t *p_Key,
+ uint8_t *p_Mask,
+ ioc_fm_pcd_cc_key_params_t *p_KeyParams);
+
+/**
+ @Function FM_PCD_MatchTableFindNModifyKey
+
+ @Description Modify the key in the index defined by the keyIndex. Note that
+ this routine will search the node to locate the index of the
+ required key (& mask) to modify.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] keySize Key size of the one to modify.
+ @Param[in] p_Key A pointer to the requested key to modify.
+ @Param[in] p_Mask A pointer to the mask if relevant,
+ otherwise pointer to NULL
+ @Param[in] p_NewKey A pointer to the new key
+ @Param[in] p_NewMask A pointer to the new mask if relevant,
+ otherwise pointer to NULL
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet() was called for this
+ node and the nodes that lead to it.
+*/
+uint32_t FM_PCD_MatchTableFindNModifyKey(t_Handle h_CcNode,
+ uint8_t keySize,
+ uint8_t *p_Key,
+ uint8_t *p_Mask,
+ uint8_t *p_NewKey,
+ uint8_t *p_NewMask);
+
+/**
+ @Function FM_PCD_MatchTableGetKeyCounter
+
+ @Description This routine may be used to get the counter of a specific key
+ in a CC Node; this counter reflects how many frames that matched
+ this key have passed.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] keyIndex Key index of the requested counter
+
+ @Return The specific key counter.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet().
+*/
+uint32_t FM_PCD_MatchTableGetKeyCounter(t_Handle h_CcNode,
+ uint16_t keyIndex);
+
+/**
+ @Function FM_PCD_MatchTableGetKeyStatistics
+
+ @Description This routine may be used to get statistics counters of specific key
+ in a CC Node.
+
+ If 'e_FM_PCD_CC_STATS_MODE_FRAME' or
+ 'e_FM_PCD_CC_STATS_MODE_BYTE_AND_FRAME' was set for this node,
+ these counters reflect how many frames that matched this key
+ have passed; the total frame count will be returned in the counter
+ of the first range (as only one frame length range was defined).
+ If 'e_FM_PCD_CC_STATS_MODE_RMON' was set for this node, the total
+ frame count will be separated into frame length counters, based on
+ the provided frame length ranges.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] keyIndex Key index of the requested statistics
+ @Param[out] p_KeyStatistics Key statistics counters
+
+ @Return The specific key statistics.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet().
+*/
+uint32_t FM_PCD_MatchTableGetKeyStatistics(t_Handle h_CcNode,
+ uint16_t keyIndex,
+ ioc_fm_pcd_cc_key_statistics_t *p_KeyStatistics);
+
+/**
+ @Function FM_PCD_MatchTableGetMissStatistics
+
+ @Description This routine may be used to get statistics counters of miss entry
+ in a CC Node.
+
+ If 'e_FM_PCD_CC_STATS_MODE_FRAME' or
+ 'e_FM_PCD_CC_STATS_MODE_BYTE_AND_FRAME' was set for this node,
+ these counters reflect how many frames were not matched to any
+ existing key and therefore passed through the miss entry; the
+ total frame count will be returned in the counter of the
+ first range (as only one frame length range was defined).
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[out] p_MissStatistics Statistics counters for 'miss'
+
+ @Return The statistics for 'miss'.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet().
+*/
+uint32_t FM_PCD_MatchTableGetMissStatistics(t_Handle h_CcNode,
+ ioc_fm_pcd_cc_key_statistics_t *p_MissStatistics);
+
+/**
+ @Function FM_PCD_MatchTableFindNGetKeyStatistics
+
+ @Description This routine may be used to get statistics counters of specific key
+ in a CC Node.
+
+ If 'e_FM_PCD_CC_STATS_MODE_FRAME' or
+ 'e_FM_PCD_CC_STATS_MODE_BYTE_AND_FRAME' was set for this node,
+ these counters reflect how many frames that matched this key
+ have passed; the total frame count will be returned in the counter
+ of the first range (as only one frame length range was defined).
+ If 'e_FM_PCD_CC_STATS_MODE_RMON' was set for this node, the total
+ frame count will be separated into frame length counters, based on
+ the provided frame length ranges.
+ Note that this routine will search the node to locate the index
+ of the required key based on received key parameters.
+
+ @Param[in] h_CcNode A handle to the node
+ @Param[in] keySize Size of the requested key
+ @Param[in] p_Key A pointer to the requested key
+ @Param[in] p_Mask A pointer to the mask if relevant,
+ otherwise pointer to NULL
+ @Param[out] p_KeyStatistics Key statistics counters
+
+ @Return The specific key statistics.
+
+ @Cautions Allowed only following FM_PCD_MatchTableSet().
+*/
+uint32_t FM_PCD_MatchTableFindNGetKeyStatistics(t_Handle h_CcNode,
+ uint8_t keySize,
+ uint8_t *p_Key,
+ uint8_t *p_Mask,
+ ioc_fm_pcd_cc_key_statistics_t *p_KeyStatistics);
+
+/**
+ @Function FM_PCD_MatchTableGetNextEngine
+
+ @Description Gets NextEngine of the relevant keyIndex.
+
+ @Param[in] h_CcNode A handle to the node.
+ @Param[in] keyIndex keyIndex in the relevant node.
+ @Param[out] p_FmPcdCcNextEngineParams The next engine parameters of the
+ relevant keyIndex of the CC Node
+ received as parameter to this function
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+uint32_t FM_PCD_MatchTableGetNextEngine(t_Handle h_CcNode,
+ uint16_t keyIndex,
+ ioc_fm_pcd_cc_next_engine_params_t *p_FmPcdCcNextEngineParams);
+
+/**
+ @Function FM_PCD_MatchTableGetIndexedHashBucket
+
+ @Description This routine simulates KeyGen operation on the provided key and
+ calculates to which hash bucket it will be mapped.
+
+ @Param[in] h_CcNode A handle to the node.
+ @Param[in] kgKeySize Key size as it was configured in the KG
+ scheme that leads to this hash.
+ @Param[in] p_KgKey Pointer to the key; must be built like the
+ key that the KG generates, i.e. with the same
+ extraction, and masked if a mask exists.
+ @Param[in] kgHashShift Hash-shift as it was configured in the KG
+ scheme that leads to this hash.
+ @Param[out] p_CcNodeBucketHandle Pointer to the bucket of the provided key.
+ @Param[out] p_BucketIndex Index to the bucket of the provided key
+ @Param[out] p_LastIndex Pointer to last index in the bucket of the
+ provided key.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_HashTableSet()
+*/
+uint32_t FM_PCD_MatchTableGetIndexedHashBucket(t_Handle h_CcNode,
+ uint8_t kgKeySize,
+ uint8_t *p_KgKey,
+ uint8_t kgHashShift,
+ t_Handle *p_CcNodeBucketHandle,
+ uint8_t *p_BucketIndex,
+ uint16_t *p_LastIndex);
+
+/**
+ @Function FM_PCD_HashTableSet
+
+ @Description This routine initializes a hash table structure.
+ The KeyGen hash result determines the hash bucket.
+ Next, the KeyGen key is compared against all keys of this
+ bucket (exact match).
+ The number of sets (number of buckets) of the hash equals the
+ number of 1-s in 'hashResMask' in the provided parameters.
+ Number of hash table ways is then calculated by dividing
+ 'maxNumOfKeys' equally between the hash sets. This is the maximal
+ number of keys that a hash bucket may hold.
+ The hash table is initialized empty and keys may be
+ added to it following the initialization. Key masks are not
+ supported in the current hash table implementation.
+ The initialized hash table can be integrated as a node in a
+ CC tree.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] p_Param A structure of parameters defining the hash table
+
+ @Return A handle to the initialized object on success; NULL otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+t_Handle FM_PCD_HashTableSet(t_Handle h_FmPcd,
+ ioc_fm_pcd_hash_table_params_t *p_Param);
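As a rough illustration of the sizing arithmetic described above, the following standalone sketch computes the per-bucket capacity ("ways"). The helper names are hypothetical, and the bucket count follows the literal reading above that it equals the number of set bits in the hash-result mask:

```c
#include <assert.h>
#include <stdint.h>

/* Count the 1-bits in a 32-bit mask (portable, no compiler builtins). */
static unsigned popcount32(uint32_t v)
{
	unsigned n = 0;

	for (; v != 0; v >>= 1)
		n += v & 1u;
	return n;
}

/* Illustrative only: ways = maxNumOfKeys divided equally between the
 * buckets derived from hashResMask, per the description above. */
static uint16_t hash_table_ways(uint16_t max_num_of_keys, uint32_t hash_res_mask)
{
	unsigned num_buckets = popcount32(hash_res_mask);

	return num_buckets ? (uint16_t)(max_num_of_keys / num_buckets) : 0;
}
```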
+
+/**
+ @Function FM_PCD_HashTableDelete
+
+ @Description This routine deletes the provided hash table and releases all
+ its allocated resources.
+
+ @Param[in] h_HashTbl A handle to a hash table
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_HashTableSet().
+*/
+uint32_t FM_PCD_HashTableDelete(t_Handle h_HashTbl);
+
+/**
+ @Function FM_PCD_HashTableAddKey
+
+ @Description This routine adds the provided key (including next engine
+ parameters of this key) to the hash table.
+ The key is added as the last key of the bucket that it is
+ mapped to.
+
+ @Param[in] h_HashTbl A handle to a hash table
+ @Param[in] keySize Key size of added key
+ @Param[in] p_KeyParams A pointer to a structure that includes the
+ new key and its next engine parameters; the
+ key mask pointer must be NULL, as masks are not
+ supported by the hash table implementation.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_HashTableSet().
+*/
+uint32_t FM_PCD_HashTableAddKey(t_Handle h_HashTbl,
+ uint8_t keySize,
+ ioc_fm_pcd_cc_key_params_t *p_KeyParams);
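The append-to-tail behaviour described above (a new key becomes the last key of its bucket, and insertion fails once the bucket's ways are exhausted) can be modelled with a toy bucket. This is stand-in code only; the sizes and names are illustrative, not fmlib structures:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy model of a hash bucket with a fixed number of ways. */
#define TOY_BUCKET_WAYS 4
#define TOY_KEY_SIZE 6

struct toy_bucket {
	uint8_t keys[TOY_BUCKET_WAYS][TOY_KEY_SIZE];
	unsigned num_keys;
};

/* Append a key as the LAST key of the bucket; fail when it is full. */
static int toy_bucket_add_key(struct toy_bucket *b, const uint8_t *key)
{
	if (b->num_keys >= TOY_BUCKET_WAYS)
		return -1; /* bucket full */
	memcpy(b->keys[b->num_keys], key, TOY_KEY_SIZE);
	b->num_keys++; /* new key sits at the tail of the bucket */
	return 0;
}
```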
+
+/**
+ @Function FM_PCD_HashTableRemoveKey
+
+ @Description This routine removes the requested key (including next engine
+ parameters of this key) from the hash table.
+
+ @Param[in] h_HashTbl A handle to a hash table
+ @Param[in] keySize Key size of the key to remove.
+ @Param[in] p_Key A pointer to the requested key to remove.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_HashTableSet().
+*/
+uint32_t FM_PCD_HashTableRemoveKey(t_Handle h_HashTbl,
+ uint8_t keySize,
+ uint8_t *p_Key);
+
+/**
+ @Function FM_PCD_HashTableModifyNextEngine
+
+ @Description This routine modifies the next engine for the provided key. The
+ key should be previously added to the hash table.
+
+ @Param[in] h_HashTbl A handle to a hash table
+ @Param[in] keySize Key size of the key to modify.
+ @Param[in] p_Key A pointer to the requested key to modify.
+ @Param[in] p_FmPcdCcNextEngineParams A structure for defining new next engine
+ parameters.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_HashTableSet().
+ When configuring nextEngine = e_FM_PCD_CC, note that
+ p_FmPcdCcNextEngineParams->ccParams.h_CcNode must be different
+ from the currently changed table.
+*/
+uint32_t FM_PCD_HashTableModifyNextEngine(t_Handle h_HashTbl,
+ uint8_t keySize,
+ uint8_t *p_Key,
+ ioc_fm_pcd_cc_next_engine_params_t *p_FmPcdCcNextEngineParams);
+
+/**
+ @Function FM_PCD_HashTableModifyMissNextEngine
+
+ @Description This routine modifies the next engine on key match miss.
+
+ @Param[in] h_HashTbl A handle to a hash table
+ @Param[in] p_FmPcdCcNextEngineParams A structure for defining new next engine
+ parameters.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_HashTableSet().
+ When configuring nextEngine = e_FM_PCD_CC, note that
+ p_FmPcdCcNextEngineParams->ccParams.h_CcNode must be different
+ from the currently changed table.
+*/
+uint32_t FM_PCD_HashTableModifyMissNextEngine(t_Handle h_HashTbl,
+ ioc_fm_pcd_cc_next_engine_params_t *p_FmPcdCcNextEngineParams);
+
+/**
+ @Function FM_PCD_HashTableGetMissNextEngine
+
+ @Description Gets NextEngine in case of key match miss.
+
+ @Param[in] h_HashTbl A handle to a hash table
+ @Param[out] p_FmPcdCcNextEngineParams Next engine parameters for the specified
+ hash table.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_HashTableSet().
+*/
+uint32_t FM_PCD_HashTableGetMissNextEngine(t_Handle h_HashTbl,
+ ioc_fm_pcd_cc_next_engine_params_t *p_FmPcdCcNextEngineParams);
+
+/**
+ @Function FM_PCD_HashTableFindNGetKeyStatistics
+
+ @Description This routine may be used to get the statistics counters of a
+ specific key in a hash table.
+
+ If 'e_FM_PCD_CC_STATS_MODE_FRAME' and
+ 'e_FM_PCD_CC_STATS_MODE_BYTE_AND_FRAME' were set for this node,
+ these counters reflect how many frames that matched this key
+ have passed; the total frame count will be returned in the
+ counter of the first range (as only one frame length range was defined).
+ If 'e_FM_PCD_CC_STATS_MODE_RMON' was set for this node, the total
+ frame count will be separated into frame length counters, based on
+ the provided frame length ranges.
+ Note that this routine will identify the bucket of this key in
+ the hash table and will search the bucket to locate the index
+ of the required key based on received key parameters.
+
+ @Param[in] h_HashTbl A handle to a hash table
+ @Param[in] keySize Size of the requested key
+ @Param[in] p_Key A pointer to the requested key
+ @Param[out] p_KeyStatistics Key statistics counters
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_HashTableSet().
+*/
+uint32_t FM_PCD_HashTableFindNGetKeyStatistics(t_Handle h_HashTbl,
+ uint8_t keySize,
+ uint8_t *p_Key,
+ ioc_fm_pcd_cc_key_statistics_t *p_KeyStatistics);
+
+/**
+ @Function FM_PCD_HashTableGetMissStatistics
+
+ @Description This routine may be used to get the statistics counters of the
+ 'miss' entry of a hash table.
+
+ If 'e_FM_PCD_CC_STATS_MODE_FRAME' and
+ 'e_FM_PCD_CC_STATS_MODE_BYTE_AND_FRAME' were set for this node,
+ these counters reflect how many frames were not matched to any
+ existing key and therefore passed through the miss entry.
+
+ @Param[in] h_HashTbl A handle to a hash table
+ @Param[out] p_MissStatistics Statistics counters for 'miss'
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_HashTableSet().
+*/
+uint32_t FM_PCD_HashTableGetMissStatistics(t_Handle h_HashTbl,
+ ioc_fm_pcd_cc_key_statistics_t *p_MissStatistics);
+
+/**
+ @Function FM_PCD_ManipNodeSet
+
+ @Description This routine should be called for defining a manipulation
+ node. A manipulation node must be defined before the CC node
+ that precedes it.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] p_FmPcdManipParams A structure of parameters defining the manipulation
+
+ @Return A handle to the initialized object on success; NULL otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+t_Handle FM_PCD_ManipNodeSet(t_Handle h_FmPcd,
+ ioc_fm_pcd_manip_params_t *p_FmPcdManipParams);
+
+/**
+ @Function FM_PCD_ManipNodeDelete
+
+ @Description Delete an existing manipulation node.
+
+ @Param[in] h_ManipNode A handle to a manipulation node.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_ManipNodeSet().
+*/
+uint32_t FM_PCD_ManipNodeDelete(t_Handle h_ManipNode);
+
+/**
+ @Function FM_PCD_ManipGetStatistics
+
+ @Description Retrieve the manipulation statistics.
+
+ @Param[in] h_ManipNode A handle to a manipulation node.
+ @Param[out] p_FmPcdManipStats A structure for retrieving the manipulation statistics
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_ManipNodeSet().
+*/
+uint32_t FM_PCD_ManipGetStatistics(t_Handle h_ManipNode,
+ ioc_fm_pcd_manip_stats_t *p_FmPcdManipStats);
+
+/**
+ @Function FM_PCD_ManipNodeReplace
+
+ @Description Change an existing manipulation node according to a new requirement.
+
+ @Param[in] h_ManipNode A handle to a manipulation node.
+ @Param[in] p_ManipParams A structure of parameters defining the change requirement
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_ManipNodeSet().
+*/
+uint32_t FM_PCD_ManipNodeReplace(t_Handle h_ManipNode,
+ioc_fm_pcd_manip_params_t *p_ManipParams);
+
+#if (DPAA_VERSION >= 11)
+/**
+ @Function FM_PCD_FrmReplicSetGroup
+
+ @Description Initialize a Frame Replicator group.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] p_FrmReplicGroupParam A structure of parameters for the initialization of
+ the frame replicator group.
+
+ @Return A handle to the initialized object on success; NULL otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+t_Handle FM_PCD_FrmReplicSetGroup(t_Handle h_FmPcd,
+ ioc_fm_pcd_frm_replic_group_params_t *p_FrmReplicGroupParam);
+
+/**
+ @Function FM_PCD_FrmReplicDeleteGroup
+
+ @Description Delete a Frame Replicator group.
+
+ @Param[in] h_FrmReplicGroup A handle to the frame replicator group.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_FrmReplicSetGroup().
+*/
+uint32_t FM_PCD_FrmReplicDeleteGroup(t_Handle h_FrmReplicGroup);
+
+/**
+ @Function FM_PCD_FrmReplicAddMember
+
+ @Description Add the member in the index defined by the memberIndex.
+
+ @Param[in] h_FrmReplicGroup A handle to the frame replicator group.
+ @Param[in] memberIndex member index for adding.
+ @Param[in] p_MemberParams A pointer to the new member parameters.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_FrmReplicSetGroup() of this group.
+*/
+uint32_t FM_PCD_FrmReplicAddMember(t_Handle h_FrmReplicGroup,
+ uint16_t memberIndex,
+ ioc_fm_pcd_cc_next_engine_params_t *p_MemberParams);
+
+/**
+ @Function FM_PCD_FrmReplicRemoveMember
+
+ @Description Remove the member defined by the index from the relevant group.
+
+ @Param[in] h_FrmReplicGroup A handle to the frame replicator group.
+ @Param[in] memberIndex member index for removing.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PCD_FrmReplicSetGroup() of this group.
+*/
+uint32_t FM_PCD_FrmReplicRemoveMember(t_Handle h_FrmReplicGroup,
+ uint16_t memberIndex);
+#endif /* (DPAA_VERSION >= 11) */
+
+#if ((DPAA_VERSION == 10) && defined(FM_CAPWAP_SUPPORT))
+/**
+ @Function FM_PCD_StatisticsSetNode
+
+ @Description This routine should be called for defining a statistics node.
+
+ @Param[in] h_FmPcd FM PCD module descriptor.
+ @Param[in] p_FmPcdstatsParams A structure of parameters defining the statistics
+
+ @Return A handle to the initialized object on success; NULL otherwise.
+
+ @Cautions Allowed only following FM_PCD_Init().
+*/
+t_Handle FM_PCD_StatisticsSetNode(t_Handle h_FmPcd,
+ ioc_fm_pcd_stats_params_t *p_FmPcdstatsParams);
+#endif /* ((DPAA_VERSION == 10) && defined(FM_CAPWAP_SUPPORT)) */
+
+/** @} */ /* end of FM_PCD_Runtime_build_grp group */
+/** @} */ /* end of FM_PCD_Runtime_grp group */
+/** @} */ /* end of FM_PCD_grp group */
+/** @} */ /* end of FM_grp group */
+
+#endif /* __FM_PCD_EXT_H */
diff --git a/drivers/net/dpaa/fmlib/fm_port_ext.h b/drivers/net/dpaa/fmlib/fm_port_ext.h
new file mode 100644
index 000000000..e937eec5b
--- /dev/null
+++ b/drivers/net/dpaa/fmlib/fm_port_ext.h
@@ -0,0 +1,3512 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright 2008-2012 Freescale Semiconductor Inc.
+ * Copyright 2017-2020 NXP
+ */
+
+#ifndef __FM_PORT_EXT_H
+#define __FM_PORT_EXT_H
+
+#include <errno.h>
+#include "ncsw_ext.h"
+#include "fm_pcd_ext.h"
+#include "fm_ext.h"
+#include "net_ext.h"
+#include "dpaa_integration.h"
+
+/**
+ @Description FM Port routines
+*/
+
+/**
+
+ @Group lnx_ioctl_FM_grp Frame Manager Linux IOCTL API
+
+ @Description FM Linux ioctls definitions and enums
+
+ @{
+*/
+
+/**
+ @Group lnx_ioctl_FM_PORT_grp FM Port
+
+ @Description FM Port API
+
+ The FM uses a general module called "port" to represent a Tx port
+ (MAC), an Rx port (MAC), offline parsing flow or host command
+ flow. There may be up to 17 (may change) ports in an FM - 5 Tx
+ ports (4 for the 1G MACs, 1 for the 10G MAC), 5 Rx Ports, and 7
+ Host command/Offline parsing ports. The SW driver manages these
+ ports as sub-modules of the FM, i.e. after an FM is initialized,
+ its ports may be initialized and operated upon.
+
+ The port is initialized aware of its type, but other functions on
+ a port may be indifferent to its type. When necessary, the driver
+ verifies coherency and returns an error if applicable.
+
+ On initialization, the user specifies the port type and its index
+ (relative to the port's type). Host command and Offline parsing
+ ports share the same id range, i.e. the user may not initialize
+ both host command port 0 and offline parsing port 0.
+
+ @{
+*/
+
+/**
+ @Description An enum for defining port PCD modes.
+ (Must match enum e_FmPortPcdSupport defined in fm_port_ext.h)
+
+ This enum defines the superset of PCD engines support - i.e. not
+ all engines have to be used, but all have to be enabled. The real
+ flow of a specific frame depends on the PCD configuration and the
+ frame headers and payload.
+ Note: the first engine and the first engine after the parser (if
+ it exists) should be in order; the order is important as it will
+ define the flow of the port. However, for the remaining engines
+ (the ones that follow), the order is no longer important, as
+ it is defined by the PCD graph itself.
+*/
+typedef enum ioc_fm_port_pcd_support {
+ e_IOC_FM_PORT_PCD_SUPPORT_NONE = 0 /**< BMI to BMI, PCD is not used */
+ , e_IOC_FM_PORT_PCD_SUPPORT_PRS_ONLY /**< Use only Parser */
+ , e_IOC_FM_PORT_PCD_SUPPORT_PLCR_ONLY /**< Use only Policer */
+ , e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_PLCR/**< Use Parser and Policer */
+ , e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_KG /**< Use Parser and Keygen */
+ , e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_KG_AND_CC
+ /**< Use Parser, Keygen and Coarse Classification */
+ , e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_KG_AND_CC_AND_PLCR
+ /**< Use all PCD engines */
+ , e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_KG_AND_PLCR
+ /**< Use Parser, Keygen and Policer */
+ , e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_CC
+ /**< Use Parser and Coarse Classification */
+ , e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_CC_AND_PLCR
+ /**< Use Parser and Coarse Classification and Policer */
+ , e_IOC_FM_PORT_PCD_SUPPORT_CC_ONLY /**< Use only Coarse Classification */
+#if (defined(FM_CAPWAP_SUPPORT) && (DPAA_VERSION == 10))
+ , e_IOC_FM_PORT_PCD_SUPPORT_CC_AND_KG
+ /**< Use Coarse Classification and Keygen */
+ , e_IOC_FM_PORT_PCD_SUPPORT_CC_AND_KG_AND_PLCR
+ /**< Use Coarse Classification, Keygen and Policer */
+#endif /* FM_CAPWAP_SUPPORT */
+} ioc_fm_port_pcd_support;
+
+/**
+ @Collection FM Frame error
+*/
+typedef uint32_t ioc_fm_port_frame_err_select_t;
+ /**< typedef for defining Frame Descriptor errors */
+
+/* @} */
+
+/**
+ @Description An enum for defining Dual Tx rate limiting scale.
+ (Must match e_FmPortDualRateLimiterScaleDown defined in fm_port_ext.h)
+*/
+typedef enum ioc_fm_port_dual_rate_limiter_scale_down {
+ e_IOC_FM_PORT_DUAL_RATE_LIMITER_NONE = 0,
+ /**< Use only single rate limiter*/
+ e_IOC_FM_PORT_DUAL_RATE_LIMITER_SCALE_DOWN_BY_2,
+ /**< Divide high rate limiter by 2 */
+ e_IOC_FM_PORT_DUAL_RATE_LIMITER_SCALE_DOWN_BY_4,
+ /**< Divide high rate limiter by 4 */
+ e_IOC_FM_PORT_DUAL_RATE_LIMITER_SCALE_DOWN_BY_8
+ /**< Divide high rate limiter by 8 */
+} ioc_fm_port_dual_rate_limiter_scale_down;
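Assuming the enum values above encode the divisors 1, 2, 4 and 8 in order, the mapping can be sketched as follows. The enum is re-declared locally only to keep the snippet self-contained, and `rate_limiter_divisor` is a hypothetical helper, not an fmlib API:

```c
#include <assert.h>

/* Local mirror of ioc_fm_port_dual_rate_limiter_scale_down, kept only
 * so this sketch compiles on its own. */
enum dual_rate_limiter_scale_down {
	RATE_LIMITER_NONE = 0,		/* single rate limiter */
	RATE_LIMITER_SCALE_DOWN_BY_2,	/* divide high rate limiter by 2 */
	RATE_LIMITER_SCALE_DOWN_BY_4,	/* divide high rate limiter by 4 */
	RATE_LIMITER_SCALE_DOWN_BY_8	/* divide high rate limiter by 8 */
};

/* Hypothetical helper: enum values 0..3 encode divisors 1, 2, 4, 8. */
static unsigned rate_limiter_divisor(enum dual_rate_limiter_scale_down s)
{
	return 1u << (unsigned)s;
}
```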
+
+/**
+ @Description A structure for defining Tx rate limiting
+ (Must match struct t_FmPortRateLimit defined in fm_port_ext.h)
+*/
+typedef struct ioc_fm_port_rate_limit_t {
+ uint16_t max_burst_size;/**< in KBytes for Tx ports, in frames
+ for offline parsing ports. (note that
+ for early chips burst size is
+ rounded up to a multiple of 1000 frames).*/
+ uint32_t rate_limit;/**< in Kb/sec for Tx ports, in frame/sec for
+ offline parsing ports. Rate limit refers to
+ data rate (rather than line rate). */
+ ioc_fm_port_dual_rate_limiter_scale_down rate_limit_divider;
+ /**< For offline parsing ports only. Not-valid
+ for some earlier chip revisions */
+} ioc_fm_port_rate_limit_t;
+
+
+/**
+ @Group lnx_ioctl_FM_PORT_runtime_control_grp FM Port Runtime Control Unit
+
+ @Description FM Port Runtime control unit API functions, definitions and enums.
+
+ @{
+*/
+
+/**
+ @Description An enum for defining FM Port counters.
+ (Must match enum e_FmPortCounters defined in fm_port_ext.h)
+*/
+typedef enum ioc_fm_port_counters {
+ e_IOC_FM_PORT_COUNTERS_CYCLE, /**< BMI performance counter */
+ e_IOC_FM_PORT_COUNTERS_TASK_UTIL, /**< BMI performance counter */
+ e_IOC_FM_PORT_COUNTERS_QUEUE_UTIL, /**< BMI performance counter */
+ e_IOC_FM_PORT_COUNTERS_DMA_UTIL, /**< BMI performance counter */
+ e_IOC_FM_PORT_COUNTERS_FIFO_UTIL, /**< BMI performance counter */
+ e_IOC_FM_PORT_COUNTERS_RX_PAUSE_ACTIVATION,
+ /**< BMI Rx only performance counter */
+ e_IOC_FM_PORT_COUNTERS_FRAME, /**< BMI statistics counter */
+ e_IOC_FM_PORT_COUNTERS_DISCARD_FRAME, /**< BMI statistics counter */
+ e_IOC_FM_PORT_COUNTERS_DEALLOC_BUF,
+ /**< BMI deallocate buffer statistics counter */
+ e_IOC_FM_PORT_COUNTERS_RX_BAD_FRAME, /**< BMI Rx only statistics counter */
+ e_IOC_FM_PORT_COUNTERS_RX_LARGE_FRAME, /**< BMI Rx only statistics counter */
+ e_IOC_FM_PORT_COUNTERS_RX_FILTER_FRAME,
+ /**< BMI Rx & OP only statistics counter */
+ e_IOC_FM_PORT_COUNTERS_RX_LIST_DMA_ERR,
+ /**< BMI Rx, OP & HC only statistics counter */
+ e_IOC_FM_PORT_COUNTERS_RX_OUT_OF_BUFFERS_DISCARD,
+ /**< BMI Rx, OP & HC statistics counter */
+ e_IOC_FM_PORT_COUNTERS_PREPARE_TO_ENQUEUE_COUNTER,
+ /**< BMI Rx, OP & HC only statistics counter */
+ e_IOC_FM_PORT_COUNTERS_WRED_DISCARD,/**< BMI OP & HC only statistics counter */
+ e_IOC_FM_PORT_COUNTERS_LENGTH_ERR, /**< BMI non-Rx statistics counter */
+ e_IOC_FM_PORT_COUNTERS_UNSUPPRTED_FORMAT,/**< BMI non-Rx statistics counter */
+ e_IOC_FM_PORT_COUNTERS_DEQ_TOTAL,/**< QMI total QM dequeues counter */
+ e_IOC_FM_PORT_COUNTERS_ENQ_TOTAL,/**< QMI total QM enqueues counter */
+ e_IOC_FM_PORT_COUNTERS_DEQ_FROM_DEFAULT,/**< QMI counter */
+ e_IOC_FM_PORT_COUNTERS_DEQ_CONFIRM /**< QMI counter */
+} ioc_fm_port_counters;
+
+typedef struct ioc_fm_port_bmi_stats_t {
+ uint32_t cnt_cycle;
+ uint32_t cnt_task_util;
+ uint32_t cnt_queue_util;
+ uint32_t cnt_dma_util;
+ uint32_t cnt_fifo_util;
+ uint32_t cnt_rx_pause_activation;
+ uint32_t cnt_frame;
+ uint32_t cnt_discard_frame;
+ uint32_t cnt_dealloc_buf;
+ uint32_t cnt_rx_bad_frame;
+ uint32_t cnt_rx_large_frame;
+ uint32_t cnt_rx_filter_frame;
+ uint32_t cnt_rx_list_dma_err;
+ uint32_t cnt_rx_out_of_buffers_discard;
+ uint32_t cnt_wred_discard;
+ uint32_t cnt_length_err;
+ uint32_t cnt_unsupported_format;
+} ioc_fm_port_bmi_stats_t;
+
+/**
+ @Description Structure for Port id parameters.
+ (Description may be inaccurate;
+ must match struct t_FmPortCongestionGrps defined in fm_port_ext.h)
+
+ Fields commented 'IN' are passed by the port module to be used
+ by the FM module.
+ Fields commented 'OUT' will be filled by FM before returning to port.
+*/
+typedef struct ioc_fm_port_congestion_groups_t {
+ uint16_t num_of_congestion_grps_to_consider;
+ /**< The number of required congestion groups
+ to define the size of the following array */
+ uint8_t congestion_grps_to_consider[FM_PORT_NUM_OF_CONGESTION_GRPS];
+ /**< An array of CG indexes;
+ Note that the size of the array should be
+ 'num_of_congestion_grps_to_consider'. */
+#if DPAA_VERSION >= 11
+ bool pfc_priorities_enable[FM_PORT_NUM_OF_CONGESTION_GRPS][FM_MAX_NUM_OF_PFC_PRIORITIES];
+ /**< A matrix that represents the map between the CG ids
+ defined in 'congestion_grps_to_consider' to the priorities
+ mapping array. */
+#endif /* DPAA_VERSION >= 11 */
+} ioc_fm_port_congestion_groups_t;
+
+
+/**
+ @Function FM_PORT_Disable
+
+ @Description Gracefully disable an FM port. The port will not start new
+ tasks after all tasks associated with the port are terminated.
+
+ @Return 0 on success; error code otherwise.
+
+ @Cautions This is a blocking routine; it returns after the port is
+ gracefully stopped, i.e. the port will not accept new frames,
+ but will finish all frames or tasks that were already begun
+*/
+#define FM_PORT_IOC_DISABLE _IO(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(1))
+
+/**
+ @Function FM_PORT_Enable
+
+ @Description A runtime routine provided to allow disabling/enabling of a port.
+
+ @Return 0 on success; error code otherwise.
+*/
+#define FM_PORT_IOC_ENABLE _IO(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(2))
+
+/**
+ @Function FM_PORT_SetRateLimit
+
+ @Description Calling this routine enables the rate limit algorithm.
+ By default, this functionality is disabled.
+
+ Note that the rate-limit mechanism uses the FM time stamp.
+ The rate limit specified here will be
+ rounded DOWN to the nearest 16M.
+
+ May be used for Tx and offline parsing ports only
+
+ @Param[in] ioc_fm_port_rate_limit A structure of rate limit parameters
+
+ @Return 0 on success; error code otherwise.
+*/
+#define FM_PORT_IOC_SET_RATE_LIMIT \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(3), ioc_fm_port_rate_limit_t)
+
+/**
+ @Function FM_PORT_DeleteRateLimit
+
+ @Description Calling this routine disables the previously enabled rate limit.
+
+ May be used for Tx and offline parsing ports only
+
+ @Return 0 on success; error code otherwise.
+*/
+#define FM_PORT_IOC_DELETE_RATE_LIMIT _IO(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(5))
+#define FM_PORT_IOC_REMOVE_RATE_LIMIT FM_PORT_IOC_DELETE_RATE_LIMIT
+
+/**
+ @Function FM_PORT_AddCongestionGrps
+
+ @Description This routine affects the corresponding Tx port.
+ It should be called in order to enable pause
+ frame transmission in case of congestion in one or more
+ of the congestion groups relevant to this port.
+ Each call to this routine may add one or more congestion
+ groups to be considered relevant to this port.
+
+ May be used for Rx, or RX+OP ports only (depending on chip)
+
+ @Param[in] ioc_fm_port_congestion_groups_t - A pointer to an array of
+ congestion group ids to consider.
+
+ @Return 0 on success; error code otherwise.
+*/
+#define FM_PORT_IOC_ADD_CONGESTION_GRPS \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(34), ioc_fm_port_congestion_groups_t)
+
+/**
+ @Function FM_PORT_RemoveCongestionGrps
+
+ @Description This routine affects the corresponding Tx port. It should be
+ called when congestion groups were
+ defined for this port and are no longer relevant, or pause
+ frame transmission is no longer required on their behalf.
+ Each call to this routine may remove one or more congestion
+ groups from being considered relevant to this port.
+
+ May be used for Rx, or RX+OP ports only (depending on chip)
+
+ @Param[in] ioc_fm_port_congestion_groups_t - A pointer to an array of
+ congestion group ids to consider.
+
+ @Return 0 on success; error code otherwise.
+*/
+#define FM_PORT_IOC_REMOVE_CONGESTION_GRPS \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(35), ioc_fm_port_congestion_groups_t)
+
+/**
+ @Function FM_PORT_SetErrorsRoute
+
+ @Description Errors selected for this routine will cause a frame with that
+ error to be enqueued to the error queue.
+ Errors not selected for this routine will cause a frame with that
+ error to be enqueued to one of the other port queues.
+ By default all errors are defined to be enqueued to the error queue.
+ Errors that were configured to be discarded (at initialization)
+ may not be selected here.
+
+ May be used for Rx and offline parsing ports only
+
+ @Param[in] ioc_fm_port_frame_err_select_t A list of errors to enqueue to error queue
+
+ @Return 0 on success; error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+ (Note: this constraint cannot be honored through the ioctl
+ interface, since the ioctl path is only reachable after
+ FM_PORT_Init() has already been called for the port.)
+*/
+#define FM_PORT_IOC_SET_ERRORS_ROUTE \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(4), ioc_fm_port_frame_err_select_t)
+
+/**
+ @Group lnx_ioctl_FM_PORT_pcd_runtime_control_grp FM Port PCD Runtime Control Unit
+
+ @Description FM Port PCD Runtime control unit API functions, definitions and enums.
+
+ @{
+*/
+
+/**
+ @Description A structure defining the KG scheme after the parser.
+ (Must match struct ioc_fm_pcd_kg_scheme_select_t defined in fm_port_ext.h)
+
+ This is relevant only for changing the scheme selection mode (from
+ direct to indirect and vice versa), or, when the scheme is selected
+ directly, for selecting the scheme id.
+
+*/
+typedef struct ioc_fm_pcd_kg_scheme_select_t {
+ bool direct; /**< TRUE to use 'scheme_id' directly, FALSE to use LCV.*/
+ void *scheme_id;/**< Relevant for 'direct'=TRUE only.
+ 'scheme_id' selects the scheme after parser. */
+} ioc_fm_pcd_kg_scheme_select_t;
+
+/**
+ @Description Scheme IDs structure
+ (Must match struct ioc_fm_pcd_port_schemes_params_t defined in fm_port_ext.h)
+*/
+typedef struct ioc_fm_pcd_port_schemes_params_t {
+ uint8_t num_of_schemes; /**< Number of schemes for port to be bound to. */
+ void *scheme_ids[FM_PCD_KG_NUM_OF_SCHEMES];
+ /**< Array of 'num_of_schemes' schemes for the port to be bound to */
+} ioc_fm_pcd_port_schemes_params_t;
+
+/**
+ @Description A union for defining port protocol parameters for parser
+ (Must match union u_FmPcdHdrPrsOpts defined in fm_port_ext.h)
+*/
+typedef union ioc_fm_pcd_hdr_prs_opts_u {
+ /* MPLS */
+ struct {
+ bool label_interpretation_enable;
+ /**< When this bit is set, the last MPLS label will be
+ interpreted as described in the HW spec table. When the bit
+ is cleared, the parser will advance to the MPLS 'next_parse' header */
+ ioc_net_header_type next_parse;/**< must be equal or higher than IPv4 */
+ } mpls_prs_options;
+
+ /* VLAN */
+ struct {
+ uint16_t tag_protocol_id1;
+ /**< User defined Tag Protocol Identifier, to be recognized
+ on VLAN TAG on top of 0x8100 and 0x88A8 */
+ uint16_t tag_protocol_id2;
+ /**< User defined Tag Protocol Identifier, to be recognized
+ on VLAN TAG on top of 0x8100 and 0x88A8 */
+ } vlan_prs_options;
+
+ /* PPP */
+ struct{
+ bool enable_mtu_check;
+ /**< Check validity of MTU according to RFC2516 */
+ } pppoe_prs_options;
+
+ /* IPV6 */
+ struct {
+ bool routing_hdr_disable;
+ /**< Disable routing header */
+ } ipv6_prs_options;
+
+ /* UDP */
+ struct {
+ bool pad_ignore_checksum;
+ /**< TRUE to ignore pad in checksum */
+ } udp_prs_options;
+
+ /* TCP */
+ struct {
+ bool pad_ignore_checksum;
+ /**< TRUE to ignore pad in checksum */
+ } tcp_prs_options;
+} ioc_fm_pcd_hdr_prs_opts_u;
+
+/**
+ @Description A structure for defining each header for the parser
+ (must match struct t_FmPcdPrsAdditionalHdrParams defined in fm_port_ext.h)
+*/
+typedef struct ioc_fm_pcd_prs_additional_hdr_params_t {
+ ioc_net_header_type hdr; /**< Selected header */
+ bool err_disable; /**< TRUE to disable error indication */
+ bool soft_prs_enable;/**< Enable jump to SW parser when this
+ header is recognized by the HW parser. */
+ uint8_t index_per_hdr; /**< Normally 0; if more than one SW parser
+ attachment exists for the same header
+ (in the main SW parser code), use this
+ index to distinguish between them. */
+ bool use_prs_opts; /**< TRUE to use parser options. */
+ ioc_fm_pcd_hdr_prs_opts_u prs_opts;/**< A union, selected according to the
+ header type, defining the parser options.*/
+} ioc_fm_pcd_prs_additional_hdr_params_t;
+
+/**
+ @Description A structure for defining port PCD parameters
+ (Must match t_FmPortPcdPrsParams defined in fm_port_ext.h)
+*/
+typedef struct ioc_fm_port_pcd_prs_params_t {
+ uint8_t prs_res_priv_info;
+ /**< The private info provides a method of inserting
+ port information into the parser result. This information
+ may be extracted by KeyGen and be used for frames
+ distribution when a per-port distinction is required,
+ it may also be used as a port logical id for analyzing
+ incoming frames. */
+ uint8_t parsing_offset;
+ /**< Number of bytes from the beginning of the packet at which to start parsing */
+ ioc_net_header_type first_prs_hdr;
+ /**< The type of the first header expected at 'parsing_offset' */
+ bool include_in_prs_statistics;
+ /**< TRUE to include this port in the parser statistics */
+ uint8_t num_of_hdrs_with_additional_params;
+ /**< Normally 0, some headers may get special parameters */
+ ioc_fm_pcd_prs_additional_hdr_params_t additional_params[IOC_FM_PCD_PRS_NUM_OF_HDRS];
+ /**< 'num_of_hdrs_with_additional_params' structures
+ additional parameters for each header that requires them */
+ bool set_vlan_tpid1;
+ /**< TRUE to configure user selection of Ethertype to
+ indicate a VLAN tag (in addition to the TPID values
+ 0x8100 and 0x88A8). */
+ uint16_t vlan_tpid1;
+ /**< extra tag to use if set_vlan_tpid1=TRUE. */
+ bool set_vlan_tpid2;
+ /**< TRUE to configure user selection of Ethertype to
+ indicate a VLAN tag (in addition to the TPID values
+ 0x8100 and 0x88A8). */
+ uint16_t vlan_tpid2; /**< extra tag to use if set_vlan_tpid2=TRUE. */
+} ioc_fm_port_pcd_prs_params_t;
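The TPID recognition rule implied by the fields above (the standard TPIDs 0x8100 and 0x88A8 are always recognized, plus up to two user-configured values when enabled) can be sketched as follows; `is_recognized_tpid` is an illustrative helper, not part of the fmlib API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative check: does an Ethertype indicate a VLAN tag, given the
 * two always-on TPIDs plus up to two user-configured ones? */
static bool is_recognized_tpid(uint16_t tpid,
			       bool set_tpid1, uint16_t tpid1,
			       bool set_tpid2, uint16_t tpid2)
{
	if (tpid == 0x8100 || tpid == 0x88A8)
		return true;		/* standard TPIDs, always recognized */
	if (set_tpid1 && tpid == tpid1)
		return true;		/* user-configured extra TPID 1 */
	if (set_tpid2 && tpid == tpid2)
		return true;		/* user-configured extra TPID 2 */
	return false;
}
```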
+
+/**
+ @Description A structure for defining coarse classification parameters
+ (Must match t_FmPortPcdCcParams defined in fm_port_ext.h)
+*/
+typedef struct ioc_fm_port_pcd_cc_params_t {
+ void *cc_tree_id; /**< CC tree id */
+} ioc_fm_port_pcd_cc_params_t;
+
+/**
+ @Description A structure for defining keygen parameters
+ (Must match t_FmPortPcdKgParams defined in fm_port_ext.h)
+*/
+typedef struct ioc_fm_port_pcd_kg_params_t {
+ uint8_t num_of_schemes;
+ /**< Number of schemes for port to be bound to. */
+ void *scheme_ids[FM_PCD_KG_NUM_OF_SCHEMES];
+ /**< Array of 'num_of_schemes' schemes for the
+ port to be bound to */
+ bool direct_scheme;
+ /**< TRUE for going from parser to a specific scheme,
+ regardless of parser result */
+ void *direct_scheme_id;
+ /**< Scheme id, as returned by FM_PCD_KgSetScheme;
+ relevant only if direct=TRUE. */
+} ioc_fm_port_pcd_kg_params_t;
+
+/**
+ @Description A structure for defining policer parameters
+ (Must match t_FmPortPcdPlcrParams defined in fm_port_ext.h)
+*/
+typedef struct ioc_fm_port_pcd_plcr_params_t {
+ void *plcr_profile_id;/**< Selected profile handle;
+ relevant in one of the following cases:
+ e_IOC_FM_PORT_PCD_SUPPORT_PLCR_ONLY or
+ e_IOC_FM_PORT_PCD_SUPPORT_PRS_AND_PLCR were selected,
+ or if any flow uses a KG scheme where policer
+ profile is not generated (bypass_plcr_profile_generation selected) */
+} ioc_fm_port_pcd_plcr_params_t;
+
+/**
+ @Description A structure for defining port PCD parameters
+ (Must match struct t_FmPortPcdParams defined in fm_port_ext.h)
+*/
+typedef struct ioc_fm_port_pcd_params_t {
+ ioc_fm_port_pcd_support pcd_support;
+ /**< Relevant for Rx and offline ports only.
+ Describes the active PCD engines for this port. */
+ void *net_env_id; /**< HL Unused in PLCR only mode */
+ ioc_fm_port_pcd_prs_params_t *p_prs_params;
+ /**< Parser parameters for this port */
+ ioc_fm_port_pcd_cc_params_t *p_cc_params;
+ /**< Coarse classification parameters for this port */
+ ioc_fm_port_pcd_kg_params_t *p_kg_params;
+ /**< Keygen parameters for this port */
+ ioc_fm_port_pcd_plcr_params_t *p_plcr_params;
+ /**< Policer parameters for this port */
+ void *p_ip_reassembly_manip;/**< IP Reassembly manipulation */
+#if (DPAA_VERSION >= 11)
+ void *p_capwap_reassembly_manip;
+ /**< CAPWAP Reassembly manipulation */
+#endif /* (DPAA_VERSION >= 11) */
+} ioc_fm_port_pcd_params_t;
+
+/**
+ @Description A structure for defining the Parser starting point
+ (Must match struct ioc_fm_pcd_prs_start_t defined in fm_port_ext.h)
+*/
+typedef struct ioc_fm_pcd_prs_start_t {
+ uint8_t parsing_offset; /**< Number of bytes from beginning of packet to
+ start parsing */
+ ioc_net_header_type first_prs_hdr;/**< The type of the first header expected at
+ 'parsing_offset' */
+} ioc_fm_pcd_prs_start_t;
+
+/**
+ @Description FQID parameters structure
+*/
+typedef struct ioc_fm_port_pcd_fqids_params_t {
+ uint32_t num_fqids; /**< Number of fqids to be allocated for the port */
+ uint8_t alignment; /**< Alignment required for this port */
+ uint32_t base_fqid; /**< output parameter - the base fqid */
+} ioc_fm_port_pcd_fqids_params_t;
+
+/**
+ @Function FM_PORT_IOC_ALLOC_PCD_FQIDS
+
+ @Description Allocates FQIDs
+
+ May be used for Rx and offline parsing ports only
+
+ @Param[in,out] ioc_fm_port_pcd_fqids_params_t Parameters for allocating FQIDs
+
+ @Return 0 on success; error code otherwise.
+*/
+#define FM_PORT_IOC_ALLOC_PCD_FQIDS \
+ _IOWR(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(19), ioc_fm_port_pcd_fqids_params_t)
+
+/**
+ @Function FM_PORT_IOC_FREE_PCD_FQIDS
+
+ @Description Frees previously-allocated FQIDs
+
+ May be used for Rx and offline parsing ports only
+
+ @Param[in] uint32_t Base FQID of previously allocated range.
+
+ @Return 0 on success; error code otherwise.
+*/
+#define FM_PORT_IOC_FREE_PCD_FQIDS \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(19), uint32_t)
+
+/**
+ @Function FM_PORT_SetPCD
+
+ @Description Calling this routine defines the port's PCD configuration.
+ It changes it from its default configuration which is PCD
+ disabled (BMI to BMI) and configures it according to the passed
+ parameters.
+
+ May be used for Rx and offline parsing ports only
+
+ @Param[in] ioc_fm_port_pcd_params_t
+ A Structure of parameters defining the port's PCD configuration.
+
+ @Return 0 on success; error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PORT_IOC_SET_PCD_COMPAT \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(20), ioc_compat_fm_port_pcd_params_t)
+#endif
+#define FM_PORT_IOC_SET_PCD \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(20), ioc_fm_port_pcd_params_t)
+
+/**
+ @Function FM_PORT_DeletePCD
+
+ @Description Calling this routine releases the port's PCD configuration.
+ The port returns to its default configuration which is PCD
+ disabled (BMI to BMI) and all PCD configuration is removed.
+
+ May be used for Rx and offline parsing ports which are
+ in PCD mode only
+
+ @Return 0 on success; error code otherwise.
+*/
+#define FM_PORT_IOC_DELETE_PCD _IO(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(21))
+
+/**
+ @Function FM_PORT_AttachPCD
+
+ @Description This routine may be called after FM_PORT_DetachPCD was called,
+ to return to the originally configured PCD support flow.
+ This pair of routines allows PCD configuration changes
+ that require that the PCD not be used while the changes take place.
+
+ May be used for Rx and offline parsing ports which are
+ in PCD mode only
+
+ @Return 0 on success; error code otherwise.
+*/
+#define FM_PORT_IOC_ATTACH_PCD _IO(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(23))
+
+/**
+ @Function FM_PORT_DetachPCD
+
+ @Description Calling this routine detaches the port from its PCD functionality.
+ The port returns to its default flow which is BMI to BMI.
+
+ May be used for Rx and offline parsing ports which are
+ in PCD mode only
+
+ @Return 0 on success; error code otherwise.
+*/
+#define FM_PORT_IOC_DETACH_PCD _IO(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(22))
+
+/**
+ @Function FM_PORT_PcdPlcrAllocProfiles
+
+ @Description This routine may be called only for ports that use the Policer in
+ order to allocate private policer profiles.
+
+ @Param[in] uint16_t The number of required policer profiles
+
+ @Return 0 on success; error code otherwise.
+
+ @Cautions Allowed before FM_PORT_SetPCD() only.
+*/
+#define FM_PORT_IOC_PCD_PLCR_ALLOC_PROFILES \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(24), uint16_t)
+
+/**
+ @Function FM_PORT_PcdPlcrFreeProfiles
+
+ @Description This routine should be called for freeing private policer profiles.
+
+ @Return 0 on success; error code otherwise.
+
+ @Cautions Allowed before FM_PORT_SetPCD() only.
+*/
+#define FM_PORT_IOC_PCD_PLCR_FREE_PROFILES \
+ _IO(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(25))
+
+/**
+ @Function FM_PORT_PcdKgModifyInitialScheme
+
+ @Description This routine may be called only for ports that use the keygen,
+ in order to change the initial scheme a frame should be routed to.
+ The change may be of a scheme id (in case of direct mode),
+ from direct to indirect, or from indirect to direct - specifying the scheme id.
+
+ @Param[in] ioc_fm_pcd_kg_scheme_select_t
+ A structure of parameters for defining whether
+ a scheme is direct/indirect, and if direct - scheme id.
+
+ @Return 0 on success; error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PORT_IOC_PCD_KG_MODIFY_INITIAL_SCHEME_COMPAT \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(26), ioc_compat_fm_pcd_kg_scheme_select_t)
+#endif
+#define FM_PORT_IOC_PCD_KG_MODIFY_INITIAL_SCHEME \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(26), ioc_fm_pcd_kg_scheme_select_t)
+
+/**
+ @Function FM_PORT_PcdPlcrModifyInitialProfile
+
+ @Description This routine may be called only for ports with flows
+ e_IOC_FM_PCD_SUPPORT_PLCR_ONLY or e_IOC_FM_PCD_SUPPORT_PRS_AND_PLCR,
+ to change the initial policer profile a frame should be routed to.
+ The change may be of the profile and/or the absolute/direct mode selection.
+
+ @Param[in] ioc_fm_obj_t Policer profile Id as returned from FM_PCD_PlcrSetProfile.
+
+ @Return 0 on success; error code otherwise.
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PORT_IOC_PCD_PLCR_MODIFY_INITIAL_PROFILE_COMPAT \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(27), ioc_compat_fm_obj_t)
+#endif
+#define FM_PORT_IOC_PCD_PLCR_MODIFY_INITIAL_PROFILE \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(27), ioc_fm_obj_t)
+
+/**
+ @Function FM_PORT_PcdCcModifyTree
+
+ @Description This routine may be called to change this port's connection to
+ a pre-initialized coarse classification tree.
+
+ @Param[in] ioc_fm_obj_t Id of new coarse classification tree selected for this port.
+
+ @Return 0 on success; error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_SetPCD() and FM_PORT_DetachPCD()
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PORT_IOC_PCD_CC_MODIFY_TREE_COMPAT \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(28), ioc_compat_fm_obj_t)
+#endif
+#define FM_PORT_IOC_PCD_CC_MODIFY_TREE \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(28), ioc_fm_obj_t)
+
+/**
+ @Function FM_PORT_PcdKgBindSchemes
+
+ @Description This routine may be called to modify the binding of ports
+ to schemes. The scheme itself is not added,
+ just this specific port starts using it.
+
+ @Param[in] ioc_fm_pcd_port_schemes_params_t Scheme parameters structure
+
+ @Return 0 on success; error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_SetPCD().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PORT_IOC_PCD_KG_BIND_SCHEMES_COMPAT \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(30), ioc_compat_fm_pcd_port_schemes_params_t)
+#endif
+#define FM_PORT_IOC_PCD_KG_BIND_SCHEMES \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(30), ioc_fm_pcd_port_schemes_params_t)
+
+/**
+ @Function FM_PORT_PcdKgUnbindSchemes
+
+ @Description This routine may be called to modify the binding of ports
+ to schemes. The scheme itself is not removed or invalidated,
+ just this specific port stops using it.
+
+ @Param[in] ioc_fm_pcd_port_schemes_params_t Scheme parameters structure
+
+ @Return 0 on success; error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_SetPCD().
+*/
+#if defined(CONFIG_COMPAT)
+#define FM_PORT_IOC_PCD_KG_UNBIND_SCHEMES_COMPAT \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(31), ioc_compat_fm_pcd_port_schemes_params_t)
+#endif
+#define FM_PORT_IOC_PCD_KG_UNBIND_SCHEMES \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(31), ioc_fm_pcd_port_schemes_params_t)
+
+#define ENET_NUM_OCTETS_PER_ADDRESS 6
+ /**< Number of octets (8-bit bytes) in an ethernet address */
+typedef struct ioc_fm_port_mac_addr_params_t {
+ uint8_t addr[ENET_NUM_OCTETS_PER_ADDRESS];
+} ioc_fm_port_mac_addr_params_t;
+
+/**
+ @Function FM_MAC_AddHashMacAddr
+
+ @Description Add an address to the hash table. This is for filter purposes only.
+
+ @Param[in] ioc_fm_port_mac_addr_params_t - Ethernet Mac address
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_MAC_Init(). It is a filter-only address.
+ @Cautions Some addresses need to be filtered out in upper FM blocks.
+*/
+#define FM_PORT_IOC_ADD_RX_HASH_MAC_ADDR \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(36), ioc_fm_port_mac_addr_params_t)
+
+/**
+ @Function FM_MAC_RemoveHashMacAddr
+
+ @Description Delete an address from the hash table. This is for filter purposes only.
+
+ @Param[in] ioc_fm_port_mac_addr_params_t - Ethernet Mac address
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_MAC_Init().
+*/
+#define FM_PORT_IOC_REMOVE_RX_HASH_MAC_ADDR \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(37), ioc_fm_port_mac_addr_params_t)
+
+typedef struct ioc_fm_port_tx_pause_frames_params_t {
+ uint8_t priority;
+ uint16_t pause_time;
+ uint16_t thresh_time;
+} ioc_fm_port_tx_pause_frames_params_t;
+
+/**
+ @Function FM_MAC_SetTxPauseFrames
+
+ @Description Enable/Disable transmission of Pause-Frames.
+ The routine changes the default configuration:
+ pause-time - [0xf000]
+ threshold-time - [0]
+
+ @Param[in] ioc_fm_port_tx_pause_frames_params_t
+ A structure holding the required parameters.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_MAC_Init().
+ PFC is supported only on the newer mEMAC; for MACs that don't have
+ PFC support (10G-MAC and dTSEC), the user should use 'FM_MAC_NO_PFC'
+ in the 'priority' field.
+*/
+#define FM_PORT_IOC_SET_TX_PAUSE_FRAMES \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(40), ioc_fm_port_tx_pause_frames_params_t)
+
+typedef struct ioc_fm_port_mac_statistics_t {
+ /* RMON */
+ uint64_t e_stat_pkts_64; /**< r-10G tr-DT 64 byte frame counter */
+ uint64_t e_stat_pkts_65_to_127;/**< r-10G 65 to 127 byte frame counter */
+ uint64_t e_stat_pkts_128_to_255;/**< r-10G 128 to 255 byte frame counter */
+ uint64_t e_stat_pkts_256_to_511;/**< r-10G 256 to 511 byte frame counter */
+ uint64_t e_stat_pkts_512_to_1023;/**< r-10G 512 to 1023 byte frame counter*/
+ uint64_t e_stat_pkts_1024_to_1518;
+ /**< r-10G 1024 to 1518 byte frame counter */
+ uint64_t e_stat_pkts_1519_to_1522;
+ /**< r-10G 1519 to 1522 byte good frame count */
+ /* */
+ uint64_t e_stat_fragments;
+ /**< Total number of packets that were less than 64 octets long with a wrong CRC.*/
+ uint64_t e_stat_jabbers;
+ /**< Total number of packets longer than valid maximum length octets */
+ uint64_t e_stat_drop_events;
+ /**< number of dropped packets due to internal errors of the MAC Client
+ (during receive). */
+ uint64_t e_stat_CRC_align_errors;
+ /**< Incremented when frames of correct length but with CRC error are received.*/
+ uint64_t e_stat_undersize_pkts;
+ /**< Incremented for frames under 64 bytes with a valid FCS and otherwise
+ well formed; This count does not include range length errors */
+ uint64_t e_stat_oversize_pkts;
+ /**< Incremented for frames which exceed 1518 (non VLAN) or 1522 (VLAN)
+ and contains a valid FCS and otherwise well formed */
+ /* Pause */
+ uint64_t te_stat_pause; /**< Pause MAC Control received */
+ uint64_t re_stat_pause; /**< Pause MAC Control sent */
+ /* MIB II */
+ uint64_t if_in_octets; /**< Total number of bytes received. */
+ uint64_t if_in_pkts; /**< Total number of packets received.*/
+ uint64_t if_in_ucast_pkts; /**< Total number of unicast frames received;
+ NOTE: this counter is not supported on dTSEC MAC */
+ uint64_t if_in_mcast_pkts;/**< Total number of multicast frames received*/
+ uint64_t if_in_bcast_pkts;/**< Total number of broadcast frames received */
+ uint64_t if_in_discards;
+ /**< Frames received, but discarded due to problems within the MAC RX. */
+ uint64_t if_in_errors; /**< Number of frames received with error:
+ - FIFO Overflow Error
+ - CRC Error
+ - Frame Too Long Error
+ - Alignment Error
+ - The dedicated Error Code (0xfe, not a code error) was received */
+ uint64_t if_out_octets; /**< Total number of bytes sent. */
+ uint64_t if_out_pkts; /**< Total number of packets sent.*/
+ uint64_t if_out_ucast_pkts; /**< Total number of unicast frames sent;
+ NOTE: this counter is not supported on dTSEC MAC */
+ uint64_t if_out_mcast_pkts; /**< Total number of multicast frames sent */
+ uint64_t if_out_bcast_pkts; /**< Total number of broadcast frames sent */
+ uint64_t if_out_discards;
+ /**< Frames received, but discarded due to problems within the MAC TX N/A!.*/
+ uint64_t if_out_errors; /**< Number of frames transmitted with error:
+ - FIFO Overflow Error
+ - FIFO Underflow Error
+ - Other */
+} ioc_fm_port_mac_statistics_t;
+
+/**
+ @Function FM_MAC_GetStatistics
+
+ @Description get all MAC statistics counters
+
+ @Param[out] ioc_fm_port_mac_statistics_t A structure holding the statistics
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_Init().
+*/
+#define FM_PORT_IOC_GET_MAC_STATISTICS \
+ _IOR(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(41), ioc_fm_port_mac_statistics_t)
+
+/**
+ @Function FM_PORT_GetBmiCounters
+
+ @Description Read port's BMI stat counters and place them into
+ a designated structure of counters.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[out] p_BmiStats counters structure
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+
+#define FM_PORT_IOC_GET_BMI_COUNTERS \
+ _IOR(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(42), ioc_fm_port_bmi_stats_t)
+
+/** @} */ /* end of lnx_ioctl_FM_PORT_pcd_runtime_control_grp group */
+/** @} */ /* end of lnx_ioctl_FM_PORT_runtime_control_grp group */
+
+/** @} */ /* end of lnx_ioctl_FM_PORT_grp group */
+/** @} */ /* end of lnx_ioctl_FM_grp group */
+
+
+/**
+ @Group gen_id General Drivers Utilities
+
+ @Description External routines.
+
+ @{
+*/
+
+/**
+ @Group gen_error_id Errors, Events and Debug
+
+ @Description External routines.
+
+ @{
+*/
+
+/**
+The scheme below provides the bits description for error codes:
+
+ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
+| Reserved (should be zero) | Module ID |
+
+ 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
+| Error Type |
+*/
+
+#define ERROR_CODE(_err) ((((uint32_t)_err) & 0x0000FFFF) | __ERR_MODULE__)
+
+#define GET_ERROR_TYPE(_errcode) ((_errcode) & 0x0000FFFF)
+ /**< Extract error type (#e_ErrorType) from
+ error code (#uint32_t) */
+
+#define GET_ERROR_MODULE(_errcode) ((_errcode) & 0x00FF0000)
+ /**< Extract module code from error code (#uint32_t) */
+
+#define RETURN_ERROR(_level, _err, _vmsg) { \
+ return ERROR_CODE(_err); \
+}
+
+/**
+ @Description Error Type Enumeration
+*/
+typedef enum e_ErrorType {
+ E_OK = 0 /* Never use "RETURN_ERROR" with E_OK; Use "return E_OK;"*/
+ , E_WRITE_FAILED = EIO /**< Write access failed on memory/device.*/
+ /* String: none, or device name.*/
+ , E_NO_DEVICE = ENXIO /**< The associated device is not initialized.*/
+ /* String: none.*/
+ , E_NOT_AVAILABLE = EAGAIN
+ /**< Resource is unavailable.*/
+ /* String: none, unless the operation is not the main goal
+ of the function (in this case add resource description). */
+ , E_NO_MEMORY = ENOMEM /**< External memory allocation failed.*/
+ /* String: description of item for which allocation failed. */
+ , E_INVALID_ADDRESS = EFAULT
+ /**< Invalid address.*/
+ /* String: description of the specific violation.*/
+ , E_BUSY = EBUSY /**< Resource or module is busy.*/
+ /* String: none, unless the operation is not the main goal
+ of the function (in this case add resource description). */
+ , E_ALREADY_EXISTS = EEXIST
+ /**< Requested resource or item already exists.*/
+ /* Use when resource duplication or sharing are not allowed.
+ String: none, unless the operation is not the main goal
+ of the function (in this case add item description).*/
+ , E_INVALID_OPERATION = ENODEV
+ /**< The operation/command is invalid (unrecognized).*/
+ /* String: none.*/
+ , E_INVALID_VALUE = EDOM /**< Invalid value.*/
+ /* Use for non-enumeration parameters, and
+ only when other error types are not suitable.
+ String: parameter description + "(should be <attribute>)",
+ e.g: "Maximum Rx buffer length (should be divisible by 8)",
+ "Channel number (should be even)".*/
+ , E_NOT_IN_RANGE = ERANGE/**< Parameter value is out of range.*/
+ /* Don't use this error for enumeration parameters.
+ String: parameter description + "(should be %d-%d)",
+ e.g: "Number of pad characters (should be 0-15)".*/
+ , E_NOT_SUPPORTED = ENOSYS
+ /**< The function is not supported or not implemented.*/
+ /* String: none.*/
+ , E_INVALID_STATE /**< The operation is not allowed in current module state.*/
+ /* String: none.*/
+ , E_INVALID_HANDLE /**< Invalid handle of module or object.*/
+ /* String: none, unless the function takes in more than one
+ handle (in this case add the handle description)*/
+ , E_INVALID_ID /**< Invalid module ID (usually enumeration or index).*/
+ /* String: none, unless the function takes in more than one
+ ID (in this case add the ID description)*/
+ , E_NULL_POINTER /**< Unexpected NULL pointer.*/
+ /* String: pointer description.*/
+ , E_INVALID_SELECTION /**< Invalid selection or mode.*/
+ /* Use for enumeration values, only when other error types
+ are not suitable.
+ String: parameter description.*/
+ , E_INVALID_COMM_MODE /**< Invalid communication mode.*/
+ /* String: none, unless the function takes in more than one
+ communication mode indications (in this case add
+ parameter description).*/
+ , E_INVALID_MEMORY_TYPE /**< Invalid memory type.*/
+ /* String: none, unless the function takes in more than one
+ memory types (in this case add memory description,
+ e.g: "Data memory", "Buffer descriptors memory").*/
+ , E_INVALID_CLOCK /**< Invalid clock.*/
+ /* String: none, unless the function takes in more than one
+ clocks (in this case add clock description,
+ e.g: "Rx clock", "Tx clock").*/
+ , E_CONFLICT /**< Some setting conflicts with another setting.*/
+ /* String: description of the conflicting settings.*/
+ , E_NOT_ALIGNED /**< Non-aligned address.*/
+ /* String: parameter description + "(should be %d-bytes aligned)",
+ e.g: "Rx data buffer (should be 32-bytes aligned)".*/
+ , E_NOT_FOUND /**< Requested resource or item was not found.*/
+ /* Use only when the resource/item is uniquely identified.
+ String: none, unless the operation is not the main goal
+ of the function (in this case add item description).*/
+ , E_FULL /**< Resource is full.*/
+ /* String: none, unless the operation is not the main goal
+ of the function (in this case add resource description). */
+ , E_EMPTY /**< Resource is empty.*/
+ /* String: none, unless the operation is not the main goal
+ of the function (in this case add resource description). */
+ , E_ALREADY_FREE /**< Specified resource or item is already free or deleted.*/
+ /* String: none, unless the operation is not the main goal
+ of the function (in this case add item description).*/
+ , E_READ_FAILED /**< Read access failed on memory/device.*/
+ /* String: none, or device name.*/
+ , E_INVALID_FRAME /**< Invalid frame object (NULL handle or missing buffers).*/
+ /* String: none.*/
+ , E_SEND_FAILED /**< Send operation failed on device.*/
+ /* String: none, or device name.*/
+ , E_RECEIVE_FAILED /**< Receive operation failed on device.*/
+ /* String: none, or device name.*/
+ , E_TIMEOUT/* = ETIMEDOUT*/ /**< The operation timed out.*/
+ /* String: none.*/
+
+ , E_DUMMY_LAST /* NEVER USED */
+
+} e_ErrorType;
+
+/**
+
+ @Group FM_grp Frame Manager API
+
+ @Description FM API functions, definitions and enums
+
+ @{
+*/
+
+/**
+ @Group FM_PORT_grp FM Port
+
+ @Description FM Port API
+
+ The FM uses a general module called "port" to represent a Tx port
+ (MAC), an Rx port (MAC) or Offline Parsing port.
+ The number of ports in an FM varies between SOCs.
+ The SW driver manages these ports as sub-modules of the FM, i.e.
+ after an FM is initialized, its ports may be initialized and
+ operated upon.
+
+ The port is initialized with awareness of its type, but other
+ functions on a port may be indifferent to its type. When necessary,
+ the driver verifies coherence and returns an error if applicable.
+
+ On initialization, the user specifies the port type and its index
+ (relative to the port's type) - always starting at 0.
+
+ @{
+*/
+
+/**
+ @Description An enum for defining port PCD modes.
+ This enum defines the superset of PCD engines support - i.e. not
+ all engines have to be used, but all have to be enabled. The real
+ flow of a specific frame depends on the PCD configuration and the
+ frame headers and payload.
+ Note: the first engine and the first engine after the parser (if
+ one exists) must be in order; the order is important as it
+ defines the flow of the port. However, for the remaining engines
+ (the ones that follow), the order is no longer important as
+ it is defined by the PCD graph itself.
+*/
+typedef enum e_FmPortPcdSupport {
+ e_FM_PORT_PCD_SUPPORT_NONE = 0 /**< BMI to BMI, PCD is not used */
+ , e_FM_PORT_PCD_SUPPORT_PRS_ONLY /**< Use only Parser */
+ , e_FM_PORT_PCD_SUPPORT_PLCR_ONLY /**< Use only Policer */
+ , e_FM_PORT_PCD_SUPPORT_PRS_AND_PLCR /**< Use Parser and Policer */
+ , e_FM_PORT_PCD_SUPPORT_PRS_AND_KG /**< Use Parser and Keygen */
+ , e_FM_PORT_PCD_SUPPORT_PRS_AND_KG_AND_CC
+ /**< Use Parser, Keygen and Coarse Classification */
+ , e_FM_PORT_PCD_SUPPORT_PRS_AND_KG_AND_CC_AND_PLCR
+ /**< Use all PCD engines */
+ , e_FM_PORT_PCD_SUPPORT_PRS_AND_KG_AND_PLCR
+ /**< Use Parser, Keygen and Policer */
+ , e_FM_PORT_PCD_SUPPORT_PRS_AND_CC
+ /**< Use Parser and Coarse Classification */
+ , e_FM_PORT_PCD_SUPPORT_PRS_AND_CC_AND_PLCR
+ /**< Use Parser and Coarse Classification and Policer */
+ , e_FM_PORT_PCD_SUPPORT_CC_ONLY /**< Use only Coarse Classification */
+#ifdef FM_CAPWAP_SUPPORT
+ , e_FM_PORT_PCD_SUPPORT_CC_AND_KG
+ /**< Use Coarse Classification,and Keygen */
+ , e_FM_PORT_PCD_SUPPORT_CC_AND_KG_AND_PLCR
+ /**< Use Coarse Classification, Keygen and Policer */
+#endif /* FM_CAPWAP_SUPPORT */
+} e_FmPortPcdSupport;
+
+/**
+ @Description Port interrupts
+*/
+typedef enum e_FmPortExceptions {
+ e_FM_PORT_EXCEPTION_IM_BUSY /**< Independent-Mode Rx-BUSY */
+} e_FmPortExceptions;
+
+/**
+ @Collection General FM Port defines
+*/
+#define FM_PORT_PRS_RESULT_NUM_OF_WORDS 8
+ /**< Number of 4 bytes words in parser result */
+/* @} */
+
+/**
+ @Collection FM Frame error
+*/
+typedef uint32_t fmPortFrameErrSelect_t;
+ /**< typedef for defining Frame Descriptor errors */
+
+#define FM_PORT_FRM_ERR_UNSUPPORTED_FORMAT FM_FD_ERR_UNSUPPORTED_FORMAT
+ /**< Not for Rx-Port! Unsupported Format */
+#define FM_PORT_FRM_ERR_LENGTH FM_FD_ERR_LENGTH
+ /**< Not for Rx-Port! Length Error */
+#define FM_PORT_FRM_ERR_DMA FM_FD_ERR_DMA /**< DMA Data error */
+#define FM_PORT_FRM_ERR_NON_FM FM_FD_RX_STATUS_ERR_NON_FM
+ /**< non Frame-Manager error; probably came from a SEC engine
+ that was chained to the FM */
+
+#define FM_PORT_FRM_ERR_IPRE (FM_FD_ERR_IPR & ~FM_FD_IPR)
+ /**< IPR error */
+#define FM_PORT_FRM_ERR_IPR_NCSP (FM_FD_ERR_IPR_NCSP & ~FM_FD_IPR)
+ /**< IPR non-consistent-sp */
+
+#define FM_PORT_FRM_ERR_IPFE 0
+ /**< Obsolete; will be removed in the future */
+
+#ifdef FM_CAPWAP_SUPPORT
+#define FM_PORT_FRM_ERR_CRE FM_FD_ERR_CRE
+#define FM_PORT_FRM_ERR_CHE FM_FD_ERR_CHE
+#endif /* FM_CAPWAP_SUPPORT */
+
+#define FM_PORT_FRM_ERR_PHYSICAL FM_FD_ERR_PHYSICAL
+ /**< Rx FIFO overflow, FCS error, code error, running disparity
+ error (SGMII and TBI modes), FIFO parity error. PHY
+ Sequence error, PHY error control character detected. */
+#define FM_PORT_FRM_ERR_SIZE FM_FD_ERR_SIZE
+ /**< Frame too long OR Frame size exceeds max_length_frame*/
+#define FM_PORT_FRM_ERR_CLS_DISCARD FM_FD_ERR_CLS_DISCARD
+ /**< indicates a classifier "drop" operation */
+#define FM_PORT_FRM_ERR_EXTRACTION FM_FD_ERR_EXTRACTION
+ /**< Extract Out of Frame */
+#define FM_PORT_FRM_ERR_NO_SCHEME FM_FD_ERR_NO_SCHEME
+ /**< No Scheme Selected */
+#define FM_PORT_FRM_ERR_KEYSIZE_OVERFLOW FM_FD_ERR_KEYSIZE_OVERFLOW
+ /**< Keysize Overflow */
+#define FM_PORT_FRM_ERR_COLOR_RED FM_FD_ERR_COLOR_RED
+ /**< Frame color is red */
+#define FM_PORT_FRM_ERR_COLOR_YELLOW FM_FD_ERR_COLOR_YELLOW
+ /**< Frame color is yellow */
+#define FM_PORT_FRM_ERR_ILL_PLCR FM_FD_ERR_ILL_PLCR
+ /**< Illegal Policer Profile selected */
+#define FM_PORT_FRM_ERR_PLCR_FRAME_LEN FM_FD_ERR_PLCR_FRAME_LEN
+ /**< Policer frame length error */
+#define FM_PORT_FRM_ERR_PRS_TIMEOUT FM_FD_ERR_PRS_TIMEOUT
+ /**< Parser Time out Exceed */
+#define FM_PORT_FRM_ERR_PRS_ILL_INSTRUCT FM_FD_ERR_PRS_ILL_INSTRUCT
+ /**< Invalid Soft Parser instruction */
+#define FM_PORT_FRM_ERR_PRS_HDR_ERR FM_FD_ERR_PRS_HDR_ERR
+ /**< Header error was identified during parsing */
+#define FM_PORT_FRM_ERR_BLOCK_LIMIT_EXCEEDED FM_FD_ERR_BLOCK_LIMIT_EXCEEDED
+ /**< Frame parsed beyond the first 256 bytes */
+#define FM_PORT_FRM_ERR_PROCESS_TIMEOUT 0x00000001
+ /**< FPM Frame Processing Timeout Exceeded */
+/* @} */
+
+
+/**
+ @Group FM_PORT_init_grp FM Port Initialization Unit
+
+ @Description FM Port Initialization Unit
+
+ @{
+*/
+
+/**
+ @Description Exceptions user callback routine, will be called upon an
+ exception passing the exception identification.
+
+ @Param[in] h_App - User's application descriptor.
+ @Param[in] exception - The exception.
+*/
+typedef void (t_FmPortExceptionCallback) (t_Handle h_App,
+ e_FmPortExceptions exception);
+
+/**
+ @Description User callback function called by driver with received data.
+
+ User provides this function. Driver invokes it.
+
+ @Param[in] h_App Application's handle originally specified to
+ the API Config function
+ @Param[in] p_Data A pointer to data received
+ @Param[in] length length of received data
+ @Param[in] status receive status and errors
+ @Param[in] position position of buffer in frame
+ @Param[in] h_BufContext A handle of the user associated with this buffer
+
+ @Retval e_RX_STORE_RESPONSE_CONTINUE - order the driver to continue Rx
+ operation for all ready data.
+ @Retval e_RX_STORE_RESPONSE_PAUSE - order the driver to stop Rx operation.
+*/
+typedef e_RxStoreResponse(t_FmPortImRxStoreCallback) (t_Handle h_App,
+ uint8_t *p_Data,
+ uint16_t length,
+ uint16_t status,
+ uint8_t position,
+ t_Handle h_BufContext);
+
+/**
+ @Description User callback function called by driver when transmit completed.
+
+ User provides this function. Driver invokes it.
+
+ @Param[in] h_App Application's handle originally specified to
+ the API Config function
+ @Param[in] p_Data A pointer to data received
+ @Param[in] status transmit status and errors
+ @Param[in] lastBuffer is last buffer in frame
+ @Param[in] h_BufContext A handle of the user associated with this buffer
+ */
+typedef void (t_FmPortImTxConfCallback) (t_Handle h_App,
+ uint8_t *p_Data,
+ uint16_t status,
+ t_Handle h_BufContext);
+
+/**
+ @Description A structure for additional Rx port parameters
+*/
+typedef struct t_FmPortRxParams {
+ uint32_t errFqid; /**< Error Queue Id. */
+ uint32_t dfltFqid; /**< Default Queue Id.*/
+ uint16_t liodnOffset; /**< Port's LIODN offset. */
+ t_FmExtPools extBufPools;/**< Which external buffer pools are used
+ (up to FM_PORT_MAX_NUM_OF_EXT_POOLS), and their sizes. */
+} t_FmPortRxParams;
+
+/**
+ @Description A structure for additional non-Rx port parameters
+*/
+typedef struct t_FmPortNonRxParams {
+ uint32_t errFqid; /**< Error Queue Id. */
+ uint32_t dfltFqid;/**< For Tx - Default Confirmation queue,
+ 0 means no Tx confirmation for processed
+ frames. For OP port - default Rx queue. */
+ uint32_t qmChannel;
+ /**< QM-channel dedicated to this port; will be used by the FM for dequeue. */
+} t_FmPortNonRxParams;
+
+/**
+ @Description A structure for additional Rx port parameters
+*/
+typedef struct t_FmPortImRxTxParams {
+ t_Handle h_FmMuram;
+ /**< A handle of the FM-MURAM partition */
+ uint16_t liodnOffset;
+ /**< For Rx ports only. Port's LIODN Offset. */
+ uint8_t dataMemId;
+ /**< Memory partition ID for data buffers */
+ uint32_t dataMemAttributes;
+ /**< Memory attributes for data buffers */
+ t_BufferPoolInfo rxPoolParams; /**< For Rx ports only. */
+ t_FmPortImRxStoreCallback *f_RxStore; /**< For Rx ports only. */
+ t_FmPortImTxConfCallback *f_TxConf; /**< For Tx ports only. */
+} t_FmPortImRxTxParams;
+
+/**
+ @Description A union for additional parameters depending on port type
+*/
+typedef union u_FmPortSpecificParams {
+ t_FmPortImRxTxParams imRxTxParams;
+ /**< Rx/Tx Independent-Mode port parameter structure */
+ t_FmPortRxParams rxParams; /**< Rx port parameters structure */
+ t_FmPortNonRxParams nonRxParams;/**< Non-Rx port parameters structure */
+} u_FmPortSpecificParams;
+
+/**
+ @Description A structure representing FM initialization parameters
+*/
+typedef struct t_FmPortParams {
+ uintptr_t baseAddr;
+ /**< Virtual Address of memory mapped FM Port registers.*/
+ t_Handle h_Fm;
+ /**< A handle to the FM object this port related to */
+ e_FmPortType portType; /**< Port type */
+ uint8_t portId; /**< Port Id - relative to type;
+ NOTE: When configuring Offline Parsing port for
+ FMANv3 devices (DPAA_VERSION 11 and higher),
+ it is highly recommended NOT to use portId=0 due to lack
+ of HW resources on portId=0. */
+ bool independentModeEnable;
+ /**< This port is Independent-Mode - Used for Rx/Tx ports only!*/
+ uint16_t liodnBase;
+ /**< Irrelevant for P4080 rev 1. LIODN base for this port, to be
+ used together with LIODN offset. */
+ u_FmPortSpecificParams specificParams;
+ /**< Additional parameters depending on port type. */
+
+ t_FmPortExceptionCallback *f_Exception;
+ /**< Relevant for IM only Callback routine to be called on BUSY exception */
+ t_Handle h_App;
+ /**< A handle to an application layer object; This handle will
+ be passed by the driver upon calling the above callbacks */
+} t_FmPortParams;
+
+/**
+ @Function FM_PORT_Config
+
+ @Description Creates a descriptor for the FM PORT module.
+
+ The routine returns a handle(descriptor) to the FM PORT object.
+ This descriptor must be passed as first parameter to all other
+ FM PORT function calls.
+
+ No actual initialization or configuration of FM hardware is
+ done by this routine.
+
+ @Param[in] p_FmPortParams - Pointer to data structure of parameters
+
+ @Retval Handle to FM object, or NULL on failure.
+*/
+t_Handle FM_PORT_Config(t_FmPortParams *p_FmPortParams);
+
+/**
+ @Function FM_PORT_Init
+
+ @Description Initializes the FM PORT module by defining the software structure
+ and configuring the hardware registers.
+
+ @Param[in] h_FmPort - FM PORT module descriptor
+
+ @Return E_OK on success; Error code otherwise.
+*/
+uint32_t FM_PORT_Init(t_Handle h_FmPort);
+
+/**
+ @Function FM_PORT_Free
+
+ @Description Frees all resources that were assigned to FM PORT module.
+
+ Calling this routine invalidates the descriptor.
+
+ @Param[in] h_FmPort - FM PORT module descriptor
+
+ @Return E_OK on success; Error code otherwise.
+*/
+uint32_t FM_PORT_Free(t_Handle h_FmPort);
+
+t_Handle FM_PORT_Open(t_FmPortParams *p_FmPortParams);
+void FM_PORT_Close(t_Handle h_FmPort);
+
+
+/**
+ @Group FM_PORT_advanced_init_grp FM Port Advanced Configuration Unit
+
+ @Description Configuration functions used to change default values.
+
+ @{
+*/
+
+/**
+ @Description Enumeration for defining the QM frame dequeue type
+*/
+typedef enum e_FmPortDeqType {
+ e_FM_PORT_DEQ_TYPE1, /**< Dequeue from the SP channel - with priority precedence,
+ and Intra-Class Scheduling respected. */
+ e_FM_PORT_DEQ_TYPE2, /**< Dequeue from the SP channel - with active FQ precedence,
+ and Intra-Class Scheduling respected. */
+ e_FM_PORT_DEQ_TYPE3 /**< Dequeue from the SP channel - with active FQ precedence,
+ and override Intra-Class Scheduling */
+} e_FmPortDeqType;
+
+/**
+ @Description Enumeration for defining the QM frame dequeue prefetch option
+*/
+typedef enum e_FmPortDeqPrefetchOption {
+ e_FM_PORT_DEQ_NO_PREFETCH, /**< QMI performs a dequeue action for a single frame
+ only when a dedicated portID Tnum is waiting. */
+ e_FM_PORT_DEQ_PARTIAL_PREFETCH, /**< QMI performs a dequeue action for 3 frames
+ when one dedicated portId tnum is waiting. */
+ e_FM_PORT_DEQ_FULL_PREFETCH /**< QMI performs a dequeue action for 3 frames when
+ no dedicated portId tnums are waiting. */
+
+} e_FmPortDeqPrefetchOption;
+
+/**
+ @Description enum for defining port default color
+*/
+typedef enum e_FmPortColor {
+ e_FM_PORT_COLOR_GREEN, /**< Default port color is green */
+ e_FM_PORT_COLOR_YELLOW, /**< Default port color is yellow */
+ e_FM_PORT_COLOR_RED, /**< Default port color is red */
+ e_FM_PORT_COLOR_OVERRIDE/**< Ignore color */
+} e_FmPortColor;
+
+/**
+ @Description Enumeration for defining the Dual Tx rate limiting scale-down factor
+*/
+typedef enum e_FmPortDualRateLimiterScaleDown {
+ e_FM_PORT_DUAL_RATE_LIMITER_NONE = 0, /**< Use only single rate limiter*/
+ e_FM_PORT_DUAL_RATE_LIMITER_SCALE_DOWN_BY_2,
+ /**< Divide high rate limiter by 2 */
+ e_FM_PORT_DUAL_RATE_LIMITER_SCALE_DOWN_BY_4,
+ /**< Divide high rate limiter by 4 */
+ e_FM_PORT_DUAL_RATE_LIMITER_SCALE_DOWN_BY_8
+ /**< Divide high rate limiter by 8 */
+} e_FmPortDualRateLimiterScaleDown;
+
+/**
+ @Description A structure for defining FM port resources
+*/
+typedef struct t_FmPortRsrc {
+ uint32_t num; /**< Committed required resource */
+ uint32_t extra; /**< Extra (not committed) required resource */
+} t_FmPortRsrc;
+
+/**
+ @Description A structure for defining observed pool depletion
+*/
+typedef struct t_FmPortObservedBufPoolDepletion {
+ t_FmBufPoolDepletion poolDepletionParams;
+ /**< parameters to define pool depletion */
+ t_FmExtPools poolsParams;
+ /**< Which external buffer pools are observed
+ (up to FM_PORT_MAX_NUM_OF_OBSERVED_EXT_POOLS),
+ and their sizes. */
+} t_FmPortObservedBufPoolDepletion;
+
+/**
+ @Description A structure for defining Tx rate limiting
+*/
+typedef struct t_FmPortRateLimit {
+ uint16_t maxBurstSize;
+ /**< in KBytes for Tx ports, in frames
+ for OP ports. (note that
+ for early chips burst size is
+ rounded up to a multiple of 1000 frames).*/
+ uint32_t rateLimit;
+ /**< in Kb/sec for Tx ports, in frame/sec for
+ OP ports. Rate limit refers to
+ data rate (rather than line rate). */
+ e_FmPortDualRateLimiterScaleDown rateLimitDivider;
+ /**< For OP ports only. Not-valid
+ for some earlier chip revisions */
+} t_FmPortRateLimit;
+
+/**
+ @Description A structure for defining the parameters of
+ the Rx port performance counters
+*/
+typedef struct t_FmPortPerformanceCnt {
+ uint8_t taskCompVal; /**< Task compare value */
+ uint8_t queueCompVal; /**< Rx queue/Tx confirm queue compare
+ value (unused for H/O) */
+ uint8_t dmaCompVal; /**< Dma compare value */
+ uint32_t fifoCompVal; /**< Fifo compare value (in bytes) */
+} t_FmPortPerformanceCnt;
+
+/**
+ @Description A structure for defining the sizes of the Deep Sleep
+ Auto Response tables
+*/
+typedef struct t_FmPortDsarTablesSizes {
+ uint16_t maxNumOfArpEntries;
+ uint16_t maxNumOfEchoIpv4Entries;
+ uint16_t maxNumOfNdpEntries;
+ uint16_t maxNumOfEchoIpv6Entries;
+ uint16_t maxNumOfSnmpIPV4Entries;
+ uint16_t maxNumOfSnmpIPV6Entries;
+ uint16_t maxNumOfSnmpOidEntries;
+ uint16_t maxNumOfSnmpOidChar;
+ /* total number of characters needed for the SNMP table */
+ uint16_t maxNumOfIpProtFiltering;
+ uint16_t maxNumOfTcpPortFiltering;
+ uint16_t maxNumOfUdpPortFiltering;
+} t_FmPortDsarTablesSizes;
+
+/**
+ @Function FM_PORT_ConfigDsarSupport
+
+ @Description Allocates the amount of MURAM needed for the given maximum
+ number of Deep Sleep Auto Response table entries.
+ It calculates all the MURAM required for auto response,
+ including the necessary common structures.
+
+ @Param[in] h_FmPortRx A handle to a FM Rx Port module.
+ @Param[in] params A pointer to a structure containing the maximum
+ sizes of the auto response tables
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigDsarSupport(t_Handle h_FmPortRx,
+ t_FmPortDsarTablesSizes *params);
+
+/**
+ @Function FM_PORT_ConfigNumOfOpenDmas
+
+ @Description Calling this routine changes the max number of open DMAs
+ available for this port. It changes this parameter in the
+ internal driver data base from its default configuration
+ [OP: 1]
+ [1G-RX, 1G-TX: 1 (+1)]
+ [10G-RX, 10G-TX: 8 (+8)]
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_OpenDmas A pointer to a structure of parameters defining
+ the open DMA allocation.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigNumOfOpenDmas(t_Handle h_FmPort,
+ t_FmPortRsrc *p_OpenDmas);
+
+/**
+ @Function FM_PORT_ConfigNumOfTasks
+
+ @Description Calling this routine changes the max number of tasks
+ available for this port. It changes this parameter in the
+ internal driver data base from its default configuration
+ [OP: 1]
+ [1G-RX, 1G-TX: 3 (+2)]
+ [10G-RX, 10G-TX: 16 (+8)]
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_NumOfTasks A pointer to a structure of parameters defining
+ the tasks allocation.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigNumOfTasks(t_Handle h_FmPort,
+ t_FmPortRsrc *p_NumOfTasks);
+
+/**
+ @Function FM_PORT_ConfigSizeOfFifo
+
+ @Description Calling this routine changes the max FIFO size configured for this port.
+
+ This function changes the internal driver data base from its
+ default configuration. Please refer to the driver's User Guide for
+ information on default FIFO sizes in the various devices.
+ [OP: 2KB]
+ [1G-RX, 1G-TX: 11KB]
+ [10G-RX, 10G-TX: 12KB]
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_SizeOfFifo A pointer to a structure of parameters defining
+ the FIFO allocation.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigSizeOfFifo(t_Handle h_FmPort,
+ t_FmPortRsrc *p_SizeOfFifo);
+
+/**
+ @Function FM_PORT_ConfigDeqHighPriority
+
+ @Description Calling this routine changes the dequeue priority in the
+ internal driver data base from its default configuration
+ 1G: [DEFAULT_PORT_deqHighPriority_1G]
+ 10G: [DEFAULT_PORT_deqHighPriority_10G]
+
+ May be used for non-Rx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] highPri TRUE to select high priority, FALSE for normal operation.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigDeqHighPriority(t_Handle h_FmPort, bool highPri);
+
+/**
+ @Function FM_PORT_ConfigDeqType
+
+ @Description Calling this routine changes the dequeue type parameter in the
+ internal driver data base from its default configuration
+ [DEFAULT_PORT_deqType].
+
+ May be used for non-Rx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] deqType According to QM definition.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigDeqType(t_Handle h_FmPort,
+ e_FmPortDeqType deqType);
+
+/**
+ @Function FM_PORT_ConfigDeqPrefetchOption
+
+ @Description Calling this routine changes the dequeue prefetch option parameter in the
+ internal driver data base from its default configuration
+ [DEFAULT_PORT_deqPrefetchOption]
+ Note: Available for some chips only
+
+ May be used for non-Rx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] deqPrefetchOption New option
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigDeqPrefetchOption(t_Handle h_FmPort,
+ e_FmPortDeqPrefetchOption deqPrefetchOption);
+
+/**
+ @Function FM_PORT_ConfigDeqByteCnt
+
+ @Description Calling this routine changes the dequeue byte count parameter in
+ the internal driver data base from its default configuration
+ 1G:[DEFAULT_PORT_deqByteCnt_1G].
+ 10G:[DEFAULT_PORT_deqByteCnt_10G].
+
+ May be used for non-Rx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] deqByteCnt New byte count
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigDeqByteCnt(t_Handle h_FmPort,
+ uint16_t deqByteCnt);
+
+/**
+ @Function FM_PORT_ConfigBufferPrefixContent
+
+ @Description Defines the structure, size and content of the application buffer.
+ On Rx, the FM will save the first 'privDataSize' bytes, then,
+ depending on 'passPrsResult' and 'passTimeStamp', copy the parse
+ result and the timestamp, and the packet itself (in this order),
+ to the application buffer at their respective offsets.
+ On Tx ports, if 'passPrsResult' is set, the application should
+ write the parse result at its offset in the prefix.
+ Calling this routine changes the buffer margins definitions
+ in the internal driver data base from its default
+ configuration: Data size: [DEFAULT_PORT_bufferPrefixContent_privDataSize]
+ Pass Parser result: [DEFAULT_PORT_bufferPrefixContent_passPrsResult].
+ Pass timestamp: [DEFAULT_PORT_bufferPrefixContent_passTimeStamp].
+
+ May be used for all ports
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in,out] p_FmBufferPrefixContent A structure of parameters describing the
+ structure of the buffer.
+ Out parameter: Start margin - offset
+ of data from start of external buffer.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigBufferPrefixContent(t_Handle h_FmPort,
+ t_FmBufferPrefixContent *p_FmBufferPrefixContent);
+
+/**
+ @Function FM_PORT_ConfigCheksumLastBytesIgnore
+
+ @Description Calling this routine changes the number of checksum bytes to ignore
+ parameter in the internal driver data base from its default configuration
+ [DEFAULT_PORT_cheksumLastBytesIgnore]
+
+ May be used by Tx & Rx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] cheksumLastBytesIgnore New value
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigCheksumLastBytesIgnore(t_Handle h_FmPort,
+ uint8_t cheksumLastBytesIgnore);
+
+/**
+ @Function FM_PORT_ConfigCutBytesFromEnd
+
+ @Description Calling this routine changes the number of bytes to cut from a
+ frame's end parameter in the internal driver data base
+ from its default configuration [DEFAULT_PORT_cutBytesFromEnd]
+ Note that if the result of (frame length before chop - cutBytesFromEnd) is
+ less than 14 bytes, the chop operation is not executed.
+
+ May be used for Rx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] cutBytesFromEnd New value
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigCutBytesFromEnd(t_Handle h_FmPort,
+ uint8_t cutBytesFromEnd);
+
+/**
+ @Function FM_PORT_ConfigPoolDepletion
+
+ @Description Calling this routine enables pause frame generation depending on the
+ depletion status of BM pools. It also defines the conditions to activate
+ this functionality. By default, this functionality is disabled.
+
+ May be used for Rx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_BufPoolDepletion A structure of pool depletion parameters
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigPoolDepletion(t_Handle h_FmPort,
+ t_FmBufPoolDepletion *p_BufPoolDepletion);
+
+/**
+ @Function FM_PORT_ConfigObservedPoolDepletion
+
+ @Description Calling this routine enables a mechanism to stop port enqueue
+ depending on the depletion status of selected BM pools.
+ It also defines the conditions to activate
+ this functionality. By default, this functionality is disabled.
+
+ Note: Available for some chips only
+
+ May be used for OP ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_FmPortObservedBufPoolDepletion
+ A structure of parameters for pool depletion.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigObservedPoolDepletion(t_Handle h_FmPort,
+ t_FmPortObservedBufPoolDepletion *p_FmPortObservedBufPoolDepletion);
+
+/**
+ @Function FM_PORT_ConfigExtBufPools
+
+ @Description This routine should be called for OP ports
+ that internally use BM buffer pools. In such cases, e.g. for fragmentation and
+ re-assembly, the FM needs new BM buffers. By calling this routine the user
+ specifies the BM buffer pools that should be used.
+
+ Note: Available for some chips only
+
+ May be used for OP ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_FmExtPools A structure of parameters for the external pools.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigExtBufPools(t_Handle h_FmPort,
+ t_FmExtPools *p_FmExtPools);
+
+/**
+ @Function FM_PORT_ConfigBackupPools
+
+ @Description Calling this routine allows the configuration of some of the BM pools
+ defined for this port as backup pools.
+ A pool configured to be a backup pool will be used only if all other
+ enabled non - backup pools are depleted.
+
+ May be used for Rx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_FmPortBackupBmPools An array of pool id's. All pools specified here will
+ be defined as backup pools.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigBackupPools(t_Handle h_FmPort,
+ t_FmBackupBmPools *p_FmPortBackupBmPools);
+
+/**
+ @Function FM_PORT_ConfigFrmDiscardOverride
+
+ @Description Calling this routine changes the error frames destination parameter
+ in the internal driver data base from its default configuration :
+ override =[DEFAULT_PORT_frmDiscardOverride]
+
+ May be used for Rx and OP ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] override TRUE to override discarding of error frames and
+ enqueueing them to error queue.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigFrmDiscardOverride(t_Handle h_FmPort,
+ bool override);
+
+/**
+ @Function FM_PORT_ConfigErrorsToDiscard
+
+ @Description Calling this routine changes the behaviour on error parameter
+ in the internal driver data base from its default configuration :
+ [DEFAULT_PORT_errorsToDiscard].
+ If a requested error was previously defined as "ErrorsToEnqueue", its
+ definition will change and the frame will be discarded.
+ Errors that were defined neither as "ErrorsToEnqueue" nor as
+ "ErrorsToDiscard" will be forwarded to the CPU.
+
+ May be used for Rx and OP ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] errs A list of errors to discard
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigErrorsToDiscard(t_Handle h_FmPort,
+ fmPortFrameErrSelect_t errs);
+
+/**
+ @Function FM_PORT_ConfigDmaSwapData
+
+ @Description Calling this routine changes the DMA swap data parameter
+ in the internal driver data base from its default
+ configuration[DEFAULT_PORT_dmaSwapData]
+
+ May be used for all port types
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] swapData New selection
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigDmaSwapData(t_Handle h_FmPort,
+ e_FmDmaSwapOption swapData);
+
+/**
+ @Function FM_PORT_ConfigDmaIcCacheAttr
+
+ @Description Calling this routine changes the internal context cache
+ attribute parameter in the internal driver data base
+ from its default configuration[DEFAULT_PORT_dmaIntContextCacheAttr]
+
+ May be used for all port types
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] intContextCacheAttr New selection
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigDmaIcCacheAttr(t_Handle h_FmPort,
+ e_FmDmaCacheOption intContextCacheAttr);
+
+/**
+ @Function FM_PORT_ConfigDmaHdrAttr
+
+ @Description Calling this routine changes the header cache
+ attribute parameter in the internal driver data base
+ from its default configuration[DEFAULT_PORT_dmaHeaderCacheAttr]
+
+ May be used for all port types
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] headerCacheAttr New selection
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigDmaHdrAttr(t_Handle h_FmPort,
+ e_FmDmaCacheOption headerCacheAttr);
+
+/**
+ @Function FM_PORT_ConfigDmaScatterGatherAttr
+
+ @Description Calling this routine changes the scatter gather cache
+ attribute parameter in the internal driver data base
+ from its default configuration[DEFAULT_PORT_dmaScatterGatherCacheAttr]
+
+ May be used for all port types
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] scatterGatherCacheAttr New selection
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigDmaScatterGatherAttr(t_Handle h_FmPort,
+ e_FmDmaCacheOption scatterGatherCacheAttr);
+
+/**
+ @Function FM_PORT_ConfigDmaWriteOptimize
+
+ @Description Calling this routine changes the write optimization
+ parameter in the internal driver data base from its default
+ configuration: optimize = [DEFAULT_PORT_dmaWriteOptimize].
+ Note:
+ 1. For head optimization, data alignment must be >= 16
+ (supported by default).
+ 2. For tail optimization, the optimization is performed by
+ extending the write transaction of the frame payload at the
+ tail as needed to achieve optimal bus transfers, so that the
+ last write is extended to a 16/64-byte aligned block
+ (chip dependent).
+
+ Relevant for non-Tx port types only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] optimize TRUE to enable optimization, FALSE for normal operation
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigDmaWriteOptimize(t_Handle h_FmPort,
+ bool optimize);
+
+/**
+ @Function FM_PORT_ConfigNoScatherGather
+
+ @Description Calling this routine changes the noScatherGather parameter in
+ the internal driver data base from its default configuration.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] noScatherGather
+ (TRUE - the frame is discarded if it cannot be stored in a single buffer;
+ FALSE - the frame can be stored in scatter/gather (S/G) format).
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigNoScatherGather(t_Handle h_FmPort,
+ bool noScatherGather);
+
+/**
+ @Function FM_PORT_ConfigDfltColor
+
+ @Description Calling this routine changes the internal default color parameter
+ in the internal driver data base
+ from its default configuration[DEFAULT_PORT_color]
+
+ May be used for all port types
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] color New selection
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigDfltColor(t_Handle h_FmPort, e_FmPortColor color);
+
+/**
+ @Function FM_PORT_ConfigSyncReq
+
+ @Description Calling this routine changes the synchronization attribute parameter
+ in the internal driver data base from its default configuration :
+ syncReq =[DEFAULT_PORT_syncReq]
+
+ May be used for all port types
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] syncReq TRUE to request synchronization, FALSE otherwise.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigSyncReq(t_Handle h_FmPort, bool syncReq);
+
+/**
+ @Function FM_PORT_ConfigForwardReuseIntContext
+
+ @Description This routine is relevant for Rx ports that are routed to OP port.
+ It changes the internal context reuse option in the internal
+ driver data base from its default configuration :
+ reuse =[DEFAULT_PORT_forwardIntContextReuse]
+
+ May be used for Rx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] reuse TRUE to reuse internal context on frames
+ forwarded to OP port.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigForwardReuseIntContext(t_Handle h_FmPort,
+ bool reuse);
+
+/**
+ @Function FM_PORT_ConfigDontReleaseTxBufToBM
+
+ @Description This routine should be called if no Tx confirmation
+ is done, and yet buffers should not be released to the BM.
+
+ Normally, buffers are returned using the Tx confirmation
+ process. When Tx confirmation is not used (defFqid = 0),
+ buffers are typically released to the BM. This routine
+ may be called to avoid this behavior and not release the
+ buffers.
+
+ May be used for Tx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigDontReleaseTxBufToBM(t_Handle h_FmPort);
+
+/**
+ @Function FM_PORT_ConfigIMMaxRxBufLength
+
+ @Description Changes the maximum receive buffer length from its default
+ configuration: Closest rounded down power of 2 value of the
+ data buffer size.
+
+ The maximum receive buffer length directly affects the structure
+ of received frames (single- or multi-buffered) and the performance
+ of both the FM and the driver.
+
+ The selection between single- or multi-buffered frames should be
+ done according to the characteristics of the specific application.
+ The recommended mode is to use a single data buffer per packet,
+ as this mode provides the best performance. However, the user can
+ select to use multiple data buffers per packet.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] newVal Maximum receive buffer length (in bytes).
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+ This routine is to be used only if Independent-Mode is enabled.
+*/
+uint32_t FM_PORT_ConfigIMMaxRxBufLength(t_Handle h_FmPort,
+ uint16_t newVal);
+
+/**
+ @Function FM_PORT_ConfigIMRxBdRingLength
+
+ @Description Changes the receive BD ring length from its default
+ configuration:[DEFAULT_PORT_rxBdRingLength]
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] newVal The desired BD ring length.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+ This routine is to be used only if Independent-Mode is enabled.
+*/
+uint32_t FM_PORT_ConfigIMRxBdRingLength(t_Handle h_FmPort,
+ uint16_t newVal);
+
+/**
+ @Function FM_PORT_ConfigIMTxBdRingLength
+
+ @Description Changes the transmit BD ring length from its default
+ configuration:[DEFAULT_PORT_txBdRingLength]
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] newVal The desired BD ring length.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+ This routine is to be used only if Independent-Mode is enabled.
+*/
+uint32_t FM_PORT_ConfigIMTxBdRingLength(t_Handle h_FmPort,
+ uint16_t newVal);
+
+/**
+ @Function FM_PORT_ConfigIMFmanCtrlExternalStructsMemory
+
+ @Description Configures memory partition and attributes for FMan-Controller
+ data structures (e.g. BD rings).
+ Calling this routine changes the internal driver data base
+ from its default configuration
+ [DEFAULT_PORT_ImfwExtStructsMemId,
+ DEFAULT_PORT_ImfwExtStructsMemAttr].
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] memId Memory partition ID.
+ @Param[in] memAttributes Memory attributes mask
+ (a combination of MEMORY_ATTR_x flags).
+
+ @Return E_OK on success; Error code otherwise.
+*/
+uint32_t FM_PORT_ConfigIMFmanCtrlExternalStructsMemory(
+ t_Handle h_FmPort,
+ uint8_t memId,
+ uint32_t memAttributes);
+
+/**
+ @Function FM_PORT_ConfigIMPolling
+
+ @Description Changes the Rx flow from interrupt driven (default) to polling.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+ This routine is to be used only if Independent-Mode is enabled.
+*/
+uint32_t FM_PORT_ConfigIMPolling(t_Handle h_FmPort);
+
+/**
+ @Function FM_PORT_ConfigMaxFrameLength
+
+ @Description Changes the definition of the max size of frame that should be
+ transmitted/received on this port from
+ its default value [DEFAULT_PORT_maxFrameLength].
+ This parameter is used for confirmation of the minimum Fifo
+ size calculations and only for Tx ports or ports working in
+ independent mode. This should be larger than the maximum possible
+ MTU that will be used for this port (i.e. its MAC).
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] length Max size of frame
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+ This routine is to be used only if Independent-Mode is enabled.
+*/
+uint32_t FM_PORT_ConfigMaxFrameLength(t_Handle h_FmPort,
+ uint16_t length);
+
+/*
+ @Function FM_PORT_ConfigTxFifoMinFillLevel
+
+ @Description Calling this routine changes the fifo minimum
+ fill level parameter in the internal driver data base
+ from its default configuration[DEFAULT_PORT_txFifoMinFillLevel]
+
+ May be used for Tx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] minFillLevel New value
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigTxFifoMinFillLevel(t_Handle h_FmPort,
+ uint32_t minFillLevel);
+
+/*
+ @Function FM_PORT_ConfigFifoDeqPipelineDepth
+
+ @Description Calling this routine changes the fifo dequeue
+ pipeline depth parameter in the internal driver data base
+
+ from its default configuration :
+ 1G ports : [DEFAULT_PORT_fifoDeqPipelineDepth_1G],
+ 10G port : [DEFAULT_PORT_fifoDeqPipelineDepth_10G],
+ OP port : [DEFAULT_PORT_fifoDeqPipelineDepth_OH]
+
+ May be used for Tx / OP ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] deqPipelineDepth New value
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigFifoDeqPipelineDepth(t_Handle h_FmPort,
+ uint8_t deqPipelineDepth);
+
+/*
+ @Function FM_PORT_ConfigTxFifoLowComfLevel
+
+ @Description Calling this routine changes the fifo low comfort level
+ parameter in internal driver data base
+ from its default configuration[DEFAULT_PORT_txFifoLowComfLevel]
+
+ May be used for Tx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] fifoLowComfLevel New value
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigTxFifoLowComfLevel(t_Handle h_FmPort,
+ uint32_t fifoLowComfLevel);
+
+/*
+ @Function FM_PORT_ConfigRxFifoThreshold
+
+ @Description Calling this routine changes the threshold of the FIFO
+ fill level parameter in the internal driver data base
+ from its default configuration[DEFAULT_PORT_rxFifoThreshold]
+
+ If the total number of buffers which are
+ currently in use and associated with the
+ specific RX port exceeds this threshold, the
+ BMI will signal the MAC to send a pause frame
+ over the link.
+
+ May be used for Rx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] fifoThreshold New value
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigRxFifoThreshold(t_Handle h_FmPort,
+ uint32_t fifoThreshold);
+
+/*
+ @Function FM_PORT_ConfigRxFifoPriElevationLevel
+
+ @Description Calling this routine changes the priority elevation level
+ parameter in the internal driver data base from its default
+ configuration[DEFAULT_PORT_rxFifoPriElevationLevel]
+
+ If the total number of buffers which are currently in use and
+ associated with the specific RX port exceeds the amount specified
+ in priElevationLevel, the BMI will signal the main FM's DMA to
+ elevate the FM priority on the system bus.
+
+ May be used for Rx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] priElevationLevel New value
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigRxFifoPriElevationLevel(t_Handle h_FmPort,
+ uint32_t priElevationLevel);
+
+#ifdef FM_HEAVY_TRAFFIC_HANG_ERRATA_FMAN_A005669
+/*
+ @Function FM_PORT_ConfigBCBWorkaround
+
+ @Description Configures BCB errata workaround.
+
+ When BCB errata is applicable, the workaround is always
+ performed by the FM Controller. Thus, this function doesn't
+ actually enable errata workaround but rather allows driver
+ to perform adjustments required due to errata workaround
+ execution in FM controller.
+
+ Applying BCB workaround also configures FM_PORT_FRM_ERR_PHYSICAL
+ errors to be discarded. Thus FM_PORT_FRM_ERR_PHYSICAL can't be
+ set by FM_PORT_SetErrorsRoute() function.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigBCBWorkaround(t_Handle h_FmPort);
+#endif /* FM_HEAVY_TRAFFIC_HANG_ERRATA_FMAN_A005669 */
+
+#if (DPAA_VERSION >= 11)
+/*
+ @Function FM_PORT_ConfigInternalBuffOffset
+
+ @Description Configures internal buffer offset.
+
+ May be used for Rx and OP ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] val New value
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_ConfigInternalBuffOffset(t_Handle h_FmPort, uint8_t val);
+#endif /* (DPAA_VERSION >= 11) */
+
+/** @} */ /* end of FM_PORT_advanced_init_grp group */
+/** @} */ /* end of FM_PORT_init_grp group */
+
+/**
+ @Group FM_PORT_runtime_control_grp FM Port Runtime Control Unit
+
+ @Description FM Port Runtime control unit API functions, definitions and enums.
+
+ @{
+*/
+
+/**
+ @Description enum for defining FM Port counters
+*/
+typedef enum e_FmPortCounters {
+ e_FM_PORT_COUNTERS_CYCLE, /**< BMI performance counter */
+ e_FM_PORT_COUNTERS_TASK_UTIL, /**< BMI performance counter */
+ e_FM_PORT_COUNTERS_QUEUE_UTIL, /**< BMI performance counter */
+ e_FM_PORT_COUNTERS_DMA_UTIL, /**< BMI performance counter */
+ e_FM_PORT_COUNTERS_FIFO_UTIL, /**< BMI performance counter */
+ e_FM_PORT_COUNTERS_RX_PAUSE_ACTIVATION,
+ /**< BMI Rx only performance counter */
+ e_FM_PORT_COUNTERS_FRAME, /**< BMI statistics counter */
+ e_FM_PORT_COUNTERS_DISCARD_FRAME, /**< BMI statistics counter */
+ e_FM_PORT_COUNTERS_DEALLOC_BUF, /**< BMI deallocate buffer statistics counter */
+ e_FM_PORT_COUNTERS_RX_BAD_FRAME, /**< BMI Rx only statistics counter */
+ e_FM_PORT_COUNTERS_RX_LARGE_FRAME, /**< BMI Rx only statistics counter */
+ e_FM_PORT_COUNTERS_RX_FILTER_FRAME,/**< BMI Rx & OP only statistics counter */
+ e_FM_PORT_COUNTERS_RX_LIST_DMA_ERR,
+ /**< BMI Rx, OP & HC only statistics counter */
+ e_FM_PORT_COUNTERS_RX_OUT_OF_BUFFERS_DISCARD,
+ /**< BMI Rx, OP & HC statistics counter */
+ e_FM_PORT_COUNTERS_PREPARE_TO_ENQUEUE_COUNTER,
+ /**< BMI Rx, OP & HC only statistics counter */
+ e_FM_PORT_COUNTERS_WRED_DISCARD,/**< BMI OP & HC only statistics counter */
+ e_FM_PORT_COUNTERS_LENGTH_ERR, /**< BMI non-Rx statistics counter */
+ e_FM_PORT_COUNTERS_UNSUPPRTED_FORMAT, /**< BMI non-Rx statistics counter */
+ e_FM_PORT_COUNTERS_DEQ_TOTAL, /**< QMI total QM dequeues counter */
+ e_FM_PORT_COUNTERS_ENQ_TOTAL, /**< QMI total QM enqueues counter */
+ e_FM_PORT_COUNTERS_DEQ_FROM_DEFAULT, /**< QMI counter */
+ e_FM_PORT_COUNTERS_DEQ_CONFIRM /**< QMI counter */
+} e_FmPortCounters;
+
+typedef struct t_FmPortBmiStats {
+ uint32_t cntCycle;
+ uint32_t cntTaskUtil;
+ uint32_t cntQueueUtil;
+ uint32_t cntDmaUtil;
+ uint32_t cntFifoUtil;
+ uint32_t cntRxPauseActivation;
+ uint32_t cntFrame;
+ uint32_t cntDiscardFrame;
+ uint32_t cntDeallocBuf;
+ uint32_t cntRxBadFrame;
+ uint32_t cntRxLargeFrame;
+ uint32_t cntRxFilterFrame;
+ uint32_t cntRxListDmaErr;
+ uint32_t cntRxOutOfBuffersDiscard;
+ uint32_t cntWredDiscard;
+ uint32_t cntLengthErr;
+ uint32_t cntUnsupportedFormat;
+} t_FmPortBmiStats;
+
+/**
+ @Description A structure for defining the congestion groups
+ relevant to a port.
+typedef struct t_FmPortCongestionGrps {
+ uint16_t numOfCongestionGrpsToConsider;
+ /**< The number of required CGs
+ to define the size of the following array */
+ uint8_t congestionGrpsToConsider[FM_PORT_NUM_OF_CONGESTION_GRPS];
+ /**< An array of CG indexes;
+ Note that the size of the array should be
+ 'numOfCongestionGrpsToConsider'. */
+#if (DPAA_VERSION >= 11)
+ bool pfcPrioritiesEn[FM_PORT_NUM_OF_CONGESTION_GRPS][FM_MAX_NUM_OF_PFC_PRIORITIES];
+ /**< A matrix that represents the map between the CG ids
+ defined in 'congestionGrpsToConsider' and the priorities
+ mapping array. */
+#endif /* (DPAA_VERSION >= 11) */
+} t_FmPortCongestionGrps;
+
+/**
+ @Description Structure for Deep Sleep Auto Response ARP Entry
+*/
+typedef struct t_FmPortDsarArpEntry {
+ uint32_t ipAddress;
+ uint8_t mac[6];
+ bool isVlan;
+ uint16_t vid;
+} t_FmPortDsarArpEntry;
+
+/**
+ @Description Structure for Deep Sleep Auto Response ARP info
+*/
+typedef struct t_FmPortDsarArpInfo {
+ uint8_t tableSize;
+ t_FmPortDsarArpEntry *p_AutoResTable;
+ bool enableConflictDetection;
+ /* When TRUE, Conflict Detection is performed and the host is woken if needed */
+} t_FmPortDsarArpInfo;
+
+/**
+ @Description Structure for Deep Sleep Auto Response NDP Entry
+*/
+typedef struct t_FmPortDsarNdpEntry {
+ uint32_t ipAddress[4];
+ uint8_t mac[6];
+ bool isVlan;
+ uint16_t vid;
+} t_FmPortDsarNdpEntry;
+
+/**
+ @Description Structure for Deep Sleep Auto Response NDP info
+*/
+typedef struct t_FmPortDsarNdpInfo {
+ uint32_t multicastGroup;
+
+ uint8_t tableSizeAssigned;
+ t_FmPortDsarNdpEntry *p_AutoResTableAssigned;
+ /* This list refers to solicitation IP addresses.
+ Note that all IP addresses must be from the same multicast group.
+ This is checked, and if not, the operation fails. */
+ uint8_t tableSizeTmp;
+ t_FmPortDsarNdpEntry *p_AutoResTableTmp;
+ /* This list refers to temp IP addresses.
+ Note that all temp IP addresses must be from the same multicast group.
+ This is checked, and if not, the operation fails. */
+
+ bool enableConflictDetection;
+ /* When TRUE, Conflict Detection is performed and the host is woken if needed */
+
+} t_FmPortDsarNdpInfo;
+
+/**
+ @Description Structure for Deep Sleep Auto Response ICMPV4 info
+*/
+typedef struct t_FmPortDsarEchoIpv4Info {
+ uint8_t tableSize;
+ t_FmPortDsarArpEntry *p_AutoResTable;
+} t_FmPortDsarEchoIpv4Info;
+
+/**
+ @Description Structure for Deep Sleep Auto Response ICMPV6 info
+*/
+typedef struct t_FmPortDsarEchoIpv6Info {
+ uint8_t tableSize;
+ t_FmPortDsarNdpEntry *p_AutoResTable;
+} t_FmPortDsarEchoIpv6Info;
+
+/**
+@Description Deep Sleep Auto Response SNMP OIDs table entry
+
+*/
+typedef struct {
+ uint16_t oidSize;
+ uint8_t *oidVal; /* only the oid string */
+ uint16_t resSize;
+ uint8_t *resVal; /* resVal will be the entire reply,
+ i.e. "Type|Length|Value" */
+} t_FmPortDsarOidsEntry;
+
+/**
+ @Description Deep Sleep Auto Response SNMP IPv4 Addresses Table Entry
+ Refer to the FMan Controller spec for more details.
+*/
+typedef struct {
+ uint32_t ipv4Addr; /*!< 32 bit IPv4 Address. */
+ bool isVlan;
+ uint16_t vid; /*!< 12 bits VLAN ID. The 4 left-most bits should be cleared*/
+ /*!< This field should be 0x0000 for an entry with no VLAN tag or a null VLAN ID. */
+} t_FmPortDsarSnmpIpv4AddrTblEntry;
+
+/**
+ @Description Deep Sleep Auto Response SNMP IPv6 Addresses Table Entry
+ Refer to the FMan Controller spec for more details.
+*/
+typedef struct {
+ uint32_t ipv6Addr[4]; /*!< 4 * 32 bit IPv6 Address.*/
+ bool isVlan;
+ uint16_t vid; /*!< 12 bits VLAN ID. The 4 left-most bits should be cleared*/
+ /*!< This field should be 0x0000 for an entry with no VLAN tag or a null VLAN ID. */
+} t_FmPortDsarSnmpIpv6AddrTblEntry;
+
+/**
+ @Description Deep Sleep Auto Response SNMP Descriptor
+
+*/
+typedef struct {
+ uint16_t control; /**< Control bits [0-15]. */
+ uint16_t maxSnmpMsgLength;/**< Maximal allowed SNMP message length. */
+ uint16_t numOfIpv4Addresses; /**< Number of entries in IPv4 addresses table. */
+ uint16_t numOfIpv6Addresses; /**< Number of entries in IPv6 addresses table. */
+ t_FmPortDsarSnmpIpv4AddrTblEntry *p_Ipv4AddrTbl;
+ /**< Pointer to IPv4 addresses table. */
+ t_FmPortDsarSnmpIpv6AddrTblEntry *p_Ipv6AddrTbl;
+ /**< Pointer to IPv6 addresses table. */
+ uint8_t *p_RdOnlyCommunityStr;
+ /**< Pointer to the Read Only Community String. */
+ uint8_t *p_RdWrCommunityStr;/**< Pointer to the Read Write Community String. */
+ t_FmPortDsarOidsEntry *p_OidsTbl;/**< Pointer to OIDs table. */
+ uint32_t oidsTblSize; /**< Number of entries in OIDs table. */
+} t_FmPortDsarSnmpInfo;
+
+/**
+ @Description Structure for Deep Sleep Auto Response filtering Entry
+*/
+typedef struct t_FmPortDsarFilteringEntry {
+ uint16_t srcPort;
+ uint16_t dstPort;
+ uint16_t srcPortMask;
+ uint16_t dstPortMask;
+} t_FmPortDsarFilteringEntry;
+
+/**
+ @Description Structure for Deep Sleep Auto Response filtering info
+*/
+typedef struct t_FmPortDsarFilteringInfo {
+ /* IP protocol filtering parameters */
+ uint8_t ipProtTableSize;
+ uint8_t *p_IpProtTablePtr;
+ bool ipProtPassOnHit;
+ /* When TRUE, a miss in the table causes the packet to be dropped;
+ a hit passes the packet to the UDP/TCP filters if needed, and
+ otherwise to the classification tree. If the classification tree
+ passes the packet to a queue, a wake interrupt is raised.
+ When FALSE, the behavior is reversed. */
+ /* UDP port filtering parameters */
+ uint8_t udpPortsTableSize;
+ t_FmPortDsarFilteringEntry *p_UdpPortsTablePtr;
+ bool udpPortPassOnHit;
+ /* When TRUE, a miss in the table causes the packet to be dropped;
+ a hit passes the packet to the classification tree.
+ If the classification tree passes the packet to a queue,
+ a wake interrupt is raised.
+ When FALSE, the behavior is reversed. */
+ /* TCP port filtering parameters */
+ uint16_t tcpFlagsMask;
+ uint8_t tcpPortsTableSize;
+ t_FmPortDsarFilteringEntry *p_TcpPortsTablePtr;
+ bool tcpPortPassOnHit;
+ /* When TRUE, a miss in the table causes the packet to be dropped;
+ a hit passes the packet to the classification tree.
+ If the classification tree passes the packet to a queue,
+ a wake interrupt is raised.
+ When FALSE, the behavior is reversed. */
+} t_FmPortDsarFilteringInfo;
+
+/**
+ @Description Structure for Deep Sleep Auto Response parameters
+*/
+typedef struct t_FmPortDsarParams {
+ t_Handle h_FmPortTx;
+ t_FmPortDsarArpInfo *p_AutoResArpInfo;
+ t_FmPortDsarEchoIpv4Info *p_AutoResEchoIpv4Info;
+ t_FmPortDsarNdpInfo *p_AutoResNdpInfo;
+ t_FmPortDsarEchoIpv6Info *p_AutoResEchoIpv6Info;
+ t_FmPortDsarSnmpInfo *p_AutoResSnmpInfo;
+ t_FmPortDsarFilteringInfo *p_AutoResFilteringInfo;
+} t_FmPortDsarParams;
+
+/**
+ @Function FM_PORT_EnterDsar
+
+ @Description Enter Deep Sleep Auto Response mode.
+ This function writes the appropriate values into the relevant
+ tables in the MURAM.
+
+ @Param[in] h_FmPortRx - FM PORT module descriptor
+ @Param[in] params - Auto Response parameters
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_EnterDsar(t_Handle h_FmPortRx,
+ t_FmPortDsarParams *params);
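The setup above can be sketched as follows. This is a minimal, self-contained illustration: the struct shapes mirror the header, but `t_FmPortDsarParams` is trimmed to its ARP member, the driver call is replaced by a stub (real code links against the FM driver and passes a live port handle), and the MAC/IP values are arbitrary.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef void *t_Handle;

/* Struct shapes mirror the header above; driver call is stubbed so the
 * sketch runs standalone. */
typedef struct t_FmPortDsarArpEntry {
	uint32_t ipAddress;
	uint8_t  mac[6];
	bool     isVlan;
	uint16_t vid;
} t_FmPortDsarArpEntry;

typedef struct t_FmPortDsarArpInfo {
	uint8_t               tableSize;
	t_FmPortDsarArpEntry *p_AutoResTable;
	bool                  enableConflictDetection;
} t_FmPortDsarArpInfo;

typedef struct {
	t_FmPortDsarArpInfo *p_AutoResArpInfo; /* other DSAR members omitted */
} t_FmPortDsarParams_sketch;

static uint32_t FM_PORT_EnterDsar_stub(t_Handle h_FmPortRx,
				       t_FmPortDsarParams_sketch *params)
{
	(void)h_FmPortRx;
	return (params->p_AutoResArpInfo &&
		params->p_AutoResArpInfo->tableSize > 0) ? 0 : 1; /* 0 == E_OK */
}

/* Build a one-entry ARP auto-response table and enter DSAR mode. */
static uint32_t dsar_arp_setup(t_Handle h_FmPortRx)
{
	static t_FmPortDsarArpEntry arp_tbl[1];
	static t_FmPortDsarArpInfo arp_info;
	t_FmPortDsarParams_sketch params = { &arp_info };
	const uint8_t mac[6] = { 0x00, 0x04, 0x9f, 0x01, 0x02, 0x03 };

	arp_tbl[0].ipAddress = 0xc0a80102u; /* 192.168.1.2 */
	memcpy(arp_tbl[0].mac, mac, sizeof(mac));
	arp_tbl[0].isVlan = false;
	arp_tbl[0].vid = 0;

	arp_info.tableSize = 1;
	arp_info.p_AutoResTable = arp_tbl;
	arp_info.enableConflictDetection = true;

	return FM_PORT_EnterDsar_stub(h_FmPortRx, &params);
}
```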
+
+/**
+ @Function FM_PORT_EnterDsarFinal
+
+ @Description Enter Deep Sleep Auto Response mode.
+ This function sets the Tx port in independent mode as needed
+ and redirects the receive flow to go through the
+ Dsar Fman-ctrl code.
+
+ @Param[in] h_DsarRxPort - FM Rx PORT module descriptor
+ @Param[in] h_DsarTxPort - FM Tx PORT module descriptor
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_EnterDsarFinal(t_Handle h_DsarRxPort,
+ t_Handle h_DsarTxPort);
+
+/**
+ @Function FM_PORT_ExitDsar
+
+ @Description Exit Deep Sleep Auto Response mode.
+ This function reverses the AR mode and puts the ports back into
+ their original wake mode.
+
+ @Param[in] h_FmPortRx - FM PORT Rx module descriptor
+ @Param[in] h_FmPortTx - FM PORT Tx module descriptor
+
+ @Return None.
+
+ @Cautions Allowed only following FM_PORT_EnterDsar().
+*/
+void FM_PORT_ExitDsar(t_Handle h_FmPortRx, t_Handle h_FmPortTx);
+
+/**
+ @Function FM_PORT_IsInDsar
+
+ @Description This function returns TRUE if the port was set as Auto Response
+ and FALSE if not. Once the port exits AR mode it returns FALSE
+ until re-enabled once more.
+
+ @Param[in] h_FmPort - FM PORT module descriptor
+
+ @Return TRUE if the port is in Auto Response mode; FALSE otherwise.
+*/
+bool FM_PORT_IsInDsar(t_Handle h_FmPort);
+
+typedef struct t_FmPortDsarStats {
+ uint32_t arpArCnt;
+ uint32_t echoIcmpv4ArCnt;
+ uint32_t ndpArCnt;
+ uint32_t echoIcmpv6ArCnt;
+ uint32_t snmpGetCnt;
+ uint32_t snmpGetNextCnt;
+} t_FmPortDsarStats;
+
+/**
+ @Function FM_PORT_GetDsarStats
+
+ @Description Return statistics for Deep Sleep Auto Response
+
+ @Param[in] h_FmPortRx - FM PORT module descriptor
+ @Param[out] stats - structure containing the statistics counters
+
+ @Return E_OK on success; Error code otherwise.
+*/
+uint32_t FM_PORT_GetDsarStats(t_Handle h_FmPortRx,
+ t_FmPortDsarStats *stats);
+
+#if (defined(DEBUG_ERRORS) && (DEBUG_ERRORS > 0))
+/**
+ @Function FM_PORT_DumpRegs
+
+ @Description Dump all regs.
+
+ Calling this routine invalidates the descriptor.
+
+ @Param[in] h_FmPort - FM PORT module descriptor
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_DumpRegs(t_Handle h_FmPort);
+#endif /* (defined(DEBUG_ERRORS) && ... */
+
+/**
+ @Function FM_PORT_GetBufferDataOffset
+
+ @Description Relevant for Rx ports.
+ Returns the data offset from the beginning of the data buffer
+
+ @Param[in] h_FmPort - FM PORT module descriptor
+
+ @Return data offset.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_GetBufferDataOffset(t_Handle h_FmPort);
+
+/**
+ @Function FM_PORT_GetBufferICInfo
+
+ @Description Returns the Internal Context offset from the beginning of the data buffer
+
+ @Param[in] h_FmPort - FM PORT module descriptor
+ @Param[in] p_Data - A pointer to the data buffer.
+
+ @Return Internal context info pointer on success, NULL if 'allOtherInfo' was not
+ configured for this port.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint8_t *FM_PORT_GetBufferICInfo(t_Handle h_FmPort, char *p_Data);
+
+/**
+ @Function FM_PORT_GetBufferPrsResult
+
+ @Description Returns the pointer to the parse result in the data buffer.
+ In Rx ports this is relevant after reception, if parse
+ result is configured to be part of the data passed to the
+ application. For non Rx ports it may be used to get the pointer
+ of the area in the buffer where parse result should be
+ initialized - if so configured.
+ See FM_PORT_ConfigBufferPrefixContent for data buffer prefix
+ configuration.
+
+ @Param[in] h_FmPort - FM PORT module descriptor
+ @Param[in] p_Data - A pointer to the data buffer.
+
+ @Return Parse result pointer on success, NULL if parse result was not
+ configured for this port.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+t_FmPrsResult *FM_PORT_GetBufferPrsResult(t_Handle h_FmPort,
+ char *p_Data);
+
+/**
+ @Function FM_PORT_GetBufferTimeStamp
+
+ @Description Returns the time stamp in the data buffer.
+ Relevant for Rx ports for getting the buffer time stamp.
+ See FM_PORT_ConfigBufferPrefixContent for data buffer prefix
+ configuration.
+
+ @Param[in] h_FmPort - FM PORT module descriptor
+ @Param[in] p_Data - A pointer to the data buffer.
+
+ @Return A pointer to the time stamp on success, NULL otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint64_t *FM_PORT_GetBufferTimeStamp(t_Handle h_FmPort, char *p_Data);
+
+/**
+ @Function FM_PORT_GetBufferHashResult
+
+ @Description Given a data buffer, on the condition that hash result was defined
+ as a part of the buffer content (see FM_PORT_ConfigBufferPrefixContent),
+ this routine will return the pointer to the hash result location in the
+ buffer prefix.
+
+ @Param[in] h_FmPort - FM PORT module descriptor
+ @Param[in] p_Data - A pointer to the data buffer.
+
+ @Return A pointer to the hash result on success, NULL otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint8_t *FM_PORT_GetBufferHashResult(t_Handle h_FmPort, char *p_Data);
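The buffer-prefix accessors above are usually combined when parsing a raw Rx buffer: get the data offset to find the payload, then fetch the time stamp from the prefix. A minimal sketch with stubbed accessors; the 64-byte prefix and 8-byte time-stamp offset are illustrative numbers only, since the real layout comes from FM_PORT_ConfigBufferPrefixContent:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef void *t_Handle;

/* Stubs standing in for the accessors above: a port configured with a
 * 64-byte prefix where the time stamp sits at offset 8 (assumed layout). */
static uint32_t FM_PORT_GetBufferDataOffset_stub(t_Handle h)
{
	(void)h;
	return 64;
}

static uint64_t *FM_PORT_GetBufferTimeStamp_stub(t_Handle h, char *p_Data)
{
	(void)h;
	return (uint64_t *)(void *)(p_Data + 8);
}

/* Locate frame payload and time stamp within a raw Rx buffer. */
static uint64_t read_rx_timestamp(t_Handle h_FmPort, char *buf)
{
	char *payload = buf + FM_PORT_GetBufferDataOffset_stub(h_FmPort);
	uint64_t *ts = FM_PORT_GetBufferTimeStamp_stub(h_FmPort, buf);

	(void)payload; /* frame data starts here */
	return ts ? *ts : 0;
}
```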
+
+/**
+ @Function FM_PORT_Disable
+
+ @Description Gracefully disable an FM port. The port stops starting new tasks,
+ and the routine returns after all tasks associated with the port
+ have terminated.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+ This is a blocking routine; it returns after the port is
+ gracefully stopped, i.e. the port will not accept new frames,
+ but it will finish all frames or tasks which were already begun.
+*/
+uint32_t FM_PORT_Disable(t_Handle h_FmPort);
+
+/**
+ @Function FM_PORT_Enable
+
+ @Description A runtime routine provided to allow disable/enable of port.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_Enable(t_Handle h_FmPort);
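Disable/Enable are typically paired around a runtime change that must not race with traffic. The sketch below captures that pattern with stubs that only track an enabled flag (real code calls the driver against a live port handle); since FM_PORT_Disable() blocks until in-flight work finishes, the change callback runs with no frames in the port:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef void *t_Handle;

/* Tiny stand-in port state so the sequence runs standalone. */
struct sketch_port { bool enabled; };

static uint32_t FM_PORT_Disable_stub(t_Handle h)
{
	((struct sketch_port *)h)->enabled = false;
	return 0; /* E_OK */
}

static uint32_t FM_PORT_Enable_stub(t_Handle h)
{
	((struct sketch_port *)h)->enabled = true;
	return 0;
}

static uint32_t noop_change(t_Handle h)
{
	(void)h;
	return 0;
}

/* Stop the port, apply a runtime change, then restart it. */
static uint32_t with_port_stopped(t_Handle h, uint32_t (*change)(t_Handle))
{
	uint32_t err = FM_PORT_Disable_stub(h);

	if (err != 0)
		return err;
	err = change(h);
	/* re-enable even if the change failed, to restore traffic */
	FM_PORT_Enable_stub(h);
	return err;
}
```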
+
+/**
+ @Function FM_PORT_SetRateLimit
+
+ @Description Calling this routine enables rate limit algorithm.
+ By default, this functionality is disabled.
+
+ Note that the rate-limit mechanism uses the FM time stamp.
+ The selected rate limit specified here would be
+ rounded DOWN to the nearest 16M.
+
+ May be used for Tx and OP ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_RateLimit A structure of rate limit parameters
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+ If a rate limit is set on a port that needs to send PFC frames,
+ it might violate the stop-transmit timing.
+*/
+uint32_t FM_PORT_SetRateLimit(t_Handle h_FmPort,
+ t_FmPortRateLimit *p_RateLimit);
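Because the comment above says the configured rate is rounded down to the nearest 16M, a caller may want to predict the effective rate before calling FM_PORT_SetRateLimit(). A sketch of that arithmetic only; it assumes "16M" means decimal 16,000,000 (the header does not say whether the granularity is decimal or binary), so treat the constant as illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed granularity; the header only says "nearest 16M". */
#define FM_RATE_GRANULARITY_SKETCH 16000000u

/* Predict the effective rate after the driver's round-down. */
static uint32_t effective_rate_limit(uint32_t requested)
{
	return (requested / FM_RATE_GRANULARITY_SKETCH) *
	       FM_RATE_GRANULARITY_SKETCH;
}
```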
+
+/**
+ @Function FM_PORT_DeleteRateLimit
+
+ @Description Calling this routine disables and clears rate limit
+ initialization.
+
+ May be used for Tx and OP ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_DeleteRateLimit(t_Handle h_FmPort);
+
+/**
+ @Function FM_PORT_SetPfcPrioritiesMappingToQmanWQ
+
+ @Description Calling this routine maps each PFC received priority to the transmit WQ.
+ This WQ will be blocked upon receiving a PFC frame with this priority.
+
+ May be used for Tx ports only.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] prio PFC priority (0 - 7).
+ @Param[in] wq Work Queue (0 - 7).
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_SetPfcPrioritiesMappingToQmanWQ(t_Handle h_FmPort,
+ uint8_t prio, uint8_t wq);
+
+/**
+ @Function FM_PORT_SetStatisticsCounters
+
+ @Description Calling this routine enables/disables port's statistics counters.
+ By default, counters are enabled.
+
+ May be used for all port types
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] enable TRUE to enable, FALSE to disable.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_SetStatisticsCounters(t_Handle h_FmPort, bool enable);
+
+/**
+ @Function FM_PORT_SetFrameQueueCounters
+
+ @Description Calling this routine enables/disables port's enqueue/dequeue counters.
+ By default, counters are enabled.
+
+ May be used for all ports
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] enable TRUE to enable, FALSE to disable.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_SetFrameQueueCounters(t_Handle h_FmPort,
+ bool enable);
+
+/**
+ @Function FM_PORT_AnalyzePerformanceParams
+
+ @Description The user may call this routine so that the driver will analyze
+ whether the basic performance parameters are correct; the driver
+ may also suggest improvements. The basic parameters are FIFO sizes,
+ number of DMAs and number of TNUMs for the port.
+
+ May be used for all port types
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_AnalyzePerformanceParams(t_Handle h_FmPort);
+
+/**
+ @Function FM_PORT_SetAllocBufCounter
+
+ @Description Calling this routine enables/disables BM pool allocate
+ buffer counters.
+ By default, counters are enabled.
+
+ May be used for Rx ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] poolId BM pool id.
+ @Param[in] enable TRUE to enable, FALSE to disable.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_SetAllocBufCounter(t_Handle h_FmPort,
+ uint8_t poolId, bool enable);
+
+/**
+ @Function FM_PORT_GetBmiCounters
+
+ @Description Read port's BMI stat counters and place them into
+ a designated structure of counters.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[out] p_BmiStats counters structure
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_GetBmiCounters(t_Handle h_FmPort,
+ t_FmPortBmiStats *p_BmiStats);
+
+/**
+ @Function FM_PORT_GetCounter
+
+ @Description Reads one of the FM PORT counters.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] fmPortCounter The requested counter.
+
+ @Return Counter's current value.
+
+ @Cautions Allowed only following FM_PORT_Init().
+ Note that it is user's responsibility to call this routine only
+ for enabled counters, and there will be no indication if a
+ disabled counter is accessed.
+*/
+uint32_t FM_PORT_GetCounter(t_Handle h_FmPort,
+ e_FmPortCounters fmPortCounter);
+
+/**
+ @Function FM_PORT_ModifyCounter
+
+ @Description Sets a value to an enabled counter. Use "0" to reset the counter.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] fmPortCounter The requested counter.
+ @Param[in] value The requested value to be written into the counter.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_ModifyCounter(t_Handle h_FmPort,
+ e_FmPortCounters fmPortCounter, uint32_t value);
+
+/**
+ @Function FM_PORT_GetAllocBufCounter
+
+ @Description Reads one of the FM PORT buffer counters.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] poolId The requested pool.
+
+ @Return Counter's current value.
+
+ @Cautions Allowed only following FM_PORT_Init().
+ Note that it is user's responsibility to call this routine only
+ for enabled counters, and there will be no indication if a
+ disabled counter is accessed.
+*/
+uint32_t FM_PORT_GetAllocBufCounter(t_Handle h_FmPort,
+ uint8_t poolId);
+
+/**
+ @Function FM_PORT_ModifyAllocBufCounter
+
+ @Description Sets a value to an enabled counter. Use "0" to reset the counter.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] poolId The requested pool.
+ @Param[in] value The requested value to be written into the counter.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_ModifyAllocBufCounter(t_Handle h_FmPort,
+ uint8_t poolId, uint32_t value);
+
+/**
+ @Function FM_PORT_AddCongestionGrps
+
+ @Description This routine affects the corresponding Tx port.
+ It should be called in order to enable pause
+ frame transmission in case of congestion in one or more
+ of the congestion groups relevant to this port.
+ Each call to this routine may add one or more congestion
+ groups to be considered relevant to this port.
+
+ May be used for Rx, or RX + OP ports only (depending on chip)
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_CongestionGrps A pointer to an array of congestion groups
+ id's to consider.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_AddCongestionGrps(t_Handle h_FmPort,
+ t_FmPortCongestionGrps *p_CongestionGrps);
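Filling t_FmPortCongestionGrps amounts to writing the CG ids into the front of the array and setting the count. A sketch with the struct trimmed (the DPAA >= 11 PFC matrix is omitted), the array size chosen arbitrarily for illustration, and the driver call stubbed:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef void *t_Handle;

/* Illustrative size; the real FM_PORT_NUM_OF_CONGESTION_GRPS comes
 * from the driver headers. */
#define FM_PORT_NUM_OF_CONGESTION_GRPS_SK 256

/* Trimmed mirror of t_FmPortCongestionGrps. */
typedef struct {
	uint16_t numOfCongestionGrpsToConsider;
	uint8_t  congestionGrpsToConsider[FM_PORT_NUM_OF_CONGESTION_GRPS_SK];
} t_FmPortCongestionGrps_sketch;

static uint32_t FM_PORT_AddCongestionGrps_stub(t_Handle h,
		t_FmPortCongestionGrps_sketch *p)
{
	(void)h;
	return (p->numOfCongestionGrpsToConsider > 0) ? 0 : 1; /* 0 == E_OK */
}

/* Register CG ids so congestion in any of them triggers pause-frame
 * transmission on the corresponding Tx port. */
static uint32_t add_cgs(t_Handle h_FmPort, const uint8_t *ids, uint16_t n)
{
	t_FmPortCongestionGrps_sketch grps;

	if (n > FM_PORT_NUM_OF_CONGESTION_GRPS_SK)
		return 1;
	memset(&grps, 0, sizeof(grps));
	grps.numOfCongestionGrpsToConsider = n;
	memcpy(grps.congestionGrpsToConsider, ids, n);
	return FM_PORT_AddCongestionGrps_stub(h_FmPort, &grps);
}
```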
+
+/**
+ @Function FM_PORT_RemoveCongestionGrps
+
+ @Description This routine affects the corresponding Tx port. It should be
+ called when congestion groups were
+ defined for this port and are no longer relevant, or pause
+ frame transmission is not required on their behalf.
+ Each call to this routine may remove one or more congestion
+ groups from being considered relevant to this port.
+
+ May be used for Rx, or RX + OP ports only (depending on chip)
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_CongestionGrps A pointer to an array of congestion groups
+ id's to consider.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_RemoveCongestionGrps(t_Handle h_FmPort,
+ t_FmPortCongestionGrps *p_CongestionGrps);
+
+/**
+ @Function FM_PORT_IsStalled
+
+ @Description A routine for checking whether the specified port is stalled.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Return TRUE if the port is stalled, FALSE otherwise
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+bool FM_PORT_IsStalled(t_Handle h_FmPort);
+
+/**
+ @Function FM_PORT_ReleaseStalled
+
+ @Description This routine may be called in case the port was stalled and may
+ now be released.
+ Note that this routine is available only on older FMan revisions
+ (FMan v2, DPAA v1.0 only).
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_ReleaseStalled(t_Handle h_FmPort);
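IsStalled/ReleaseStalled naturally form a check-then-recover pair (recalling that ReleaseStalled exists only on FMan v2 / DPAA 1.0). A minimal sketch with stubs that track a single stalled flag in place of real hardware state:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef void *t_Handle;

/* Stand-in state so the recovery flow runs standalone. */
struct stall_sketch { bool stalled; };

static bool FM_PORT_IsStalled_stub(t_Handle h)
{
	return ((struct stall_sketch *)h)->stalled;
}

static uint32_t FM_PORT_ReleaseStalled_stub(t_Handle h)
{
	((struct stall_sketch *)h)->stalled = false;
	return 0; /* E_OK */
}

/* Recover a stalled port; returns 0 if the port ends up running. */
static uint32_t unstall_port(t_Handle h_FmPort)
{
	if (!FM_PORT_IsStalled_stub(h_FmPort))
		return 0; /* nothing to do */
	if (FM_PORT_ReleaseStalled_stub(h_FmPort) != 0)
		return 1;
	return FM_PORT_IsStalled_stub(h_FmPort) ? 1 : 0;
}
```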
+
+/**
+ @Function FM_PORT_SetRxL4ChecksumVerify
+
+ @Description This routine is relevant for Rx ports (1G and 10G). The routine
+ sets/clears the L3/L4 checksum verification (on the Rx side).
+ Note that this takes effect only if the HW parser is enabled!
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] l4Checksum boolean indicates whether to do L3/L4 checksum
+ on frames or not.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_SetRxL4ChecksumVerify(t_Handle h_FmPort,
+ bool l4Checksum);
+
+/**
+ @Function FM_PORT_SetErrorsRoute
+
+ @Description Errors selected for this routine will cause a frame with that
+ error to be enqueued to the error queue.
+ Errors not selected for this routine will cause a frame with that
+ error to be enqueued to one of the other port queues.
+ By default all errors are defined to be enqueued to the error queue.
+ Errors that were configured to be discarded (at initialization)
+ may not be selected here.
+
+ May be used for Rx and OP ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] errs A list of errors to enqueue to error queue
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Config() and before FM_PORT_Init().
+*/
+uint32_t FM_PORT_SetErrorsRoute(t_Handle h_FmPort,
+ fmPortFrameErrSelect_t errs);
+
+/**
+ @Function FM_PORT_SetIMExceptions
+
+ @Description Calling this routine enables/disables FM PORT interrupts.
+
+ @Param[in] h_FmPort FM PORT module descriptor.
+ @Param[in] exception The exception to be selected.
+ @Param[in] enable TRUE to enable interrupt, FALSE to mask it.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+ This routine should NOT be called from guest-partition
+ (i.e. guestId != NCSW_MASTER_ID)
+*/
+uint32_t FM_PORT_SetIMExceptions(t_Handle h_FmPort,
+ e_FmPortExceptions exception, bool enable);
+
+/*
+ @Function FM_PORT_SetPerformanceCounters
+
+ @Description Calling this routine enables/disables port's performance counters.
+ By default, counters are enabled.
+
+ May be used for all port types
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] enable TRUE to enable, FALSE to disable.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_SetPerformanceCounters(t_Handle h_FmPort,
+ bool enable);
+
+/*
+ @Function FM_PORT_SetPerformanceCountersParams
+
+ @Description Calling this routine defines port's performance
+ counters parameters.
+
+ May be used for all port types
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_FmPortPerformanceCnt A pointer to a structure of performance
+ counters parameters.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_SetPerformanceCountersParams(t_Handle h_FmPort,
+ t_FmPortPerformanceCnt *p_FmPortPerformanceCnt);
+
+/**
+ @Group FM_PORT_pcd_runtime_control_grp FM Port PCD Runtime Control Unit
+
+ @Description FM Port PCD Runtime control unit API functions, definitions and enums.
+
+ @Function FM_PORT_SetPCD
+
+ @Description Calling this routine defines the port's PCD configuration.
+ It changes it from its default configuration which is PCD
+ disabled (BMI to BMI) and configures it according to the passed
+ parameters.
+
+ May be used for Rx and OP ports only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_FmPortPcd A Structure of parameters defining the port's PCD
+ configuration.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_SetPCD(t_Handle h_FmPort,
+ ioc_fm_port_pcd_params_t *p_FmPortPcd);
+
+/**
+ @Function FM_PORT_DeletePCD
+
+ @Description Calling this routine releases the port's PCD configuration.
+ The port returns to its default configuration which is PCD
+ disabled (BMI to BMI) and all PCD configuration is removed.
+
+ May be used for Rx and OP ports which are
+ in PCD mode only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_DeletePCD(t_Handle h_FmPort);
+
+/**
+ @Function FM_PORT_AttachPCD
+
+ @Description This routine may be called after FM_PORT_DetachPCD was called,
+ to return to the originally configured PCD support flow.
+ The couple of routines are used to allow PCD configuration changes
+ that demand that PCD will not be used while changes take place.
+
+ May be used for Rx and OP ports which are
+ in PCD mode only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+*/
+uint32_t FM_PORT_AttachPCD(t_Handle h_FmPort);
+
+/**
+ @Function FM_PORT_DetachPCD
+
+ @Description Calling this routine detaches the port from its PCD functionality.
+ The port returns to its default flow which is BMI to BMI.
+
+ May be used for Rx and OP ports which are
+ in PCD mode only
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_AttachPCD().
+*/
+uint32_t FM_PORT_DetachPCD(t_Handle h_FmPort);
+
+/**
+ @Function FM_PORT_PcdPlcrAllocProfiles
+
+ @Description This routine may be called only for ports that use the Policer in
+ order to allocate private policer profiles.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] numOfProfiles The number of required policer profiles
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init() and FM_PCD_Init(),
+ and before FM_PORT_SetPCD().
+*/
+uint32_t FM_PORT_PcdPlcrAllocProfiles(t_Handle h_FmPort,
+ uint16_t numOfProfiles);
+
+/**
+ @Function FM_PORT_PcdPlcrFreeProfiles
+
+ @Description This routine should be called for freeing private policer profiles.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init() and FM_PCD_Init(),
+ and before FM_PORT_SetPCD().
+*/
+uint32_t FM_PORT_PcdPlcrFreeProfiles(t_Handle h_FmPort);
+
+/**
+ @Function FM_PORT_PcdKgModifyInitialScheme
+
+ @Description This routine may be called only for ports that use the keygen in
+ order to change the initial scheme frame should be routed to.
+ The change may be of a scheme id(in case of direct mode),
+ from direct to indirect, or from indirect to direct - specifying the scheme id.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_FmPcdKgScheme A structure of parameters for defining whether
+ a scheme is direct / indirect, and if direct - scheme id.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init() and FM_PORT_SetPCD().
+*/
+uint32_t FM_PORT_PcdKgModifyInitialScheme(t_Handle h_FmPort,
+ ioc_fm_pcd_kg_scheme_select_t *p_FmPcdKgScheme);
+
+/**
+ @Function FM_PORT_PcdPlcrModifyInitialProfile
+
+ @Description This routine may be called for ports with flows
+ e_FM_PORT_PCD_SUPPORT_PLCR_ONLY or e_FM_PORT_PCD_SUPPORT_PRS_AND_PLCR
+ only, to change the initial Policer profile that frames are
+ routed to. The change may be of a profile and / or absolute / direct
+ mode selection.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] h_Profile Policer profile handle
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init() and FM_PORT_SetPCD().
+*/
+uint32_t FM_PORT_PcdPlcrModifyInitialProfile(t_Handle h_FmPort,
+ t_Handle h_Profile);
+
+/**
+ @Function FM_PORT_PcdCcModifyTree
+
+ @Description This routine may be called for ports that use a coarse
+ classification tree, if the user wishes to replace the tree.
+ The routine may not be called while the port is receiving packets
+ using the PCD functionality; therefore, the port must first be
+ detached from the PCD, only then may this routine be called, and
+ then the port may be attached to the PCD again.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] h_CcTree A CC tree that was already built. The tree id as returned from
+ the BuildTree routine.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init(), FM_PORT_SetPCD() and FM_PORT_DetachPCD()
+*/
+uint32_t FM_PORT_PcdCcModifyTree(t_Handle h_FmPort, t_Handle h_CcTree);
+
+/**
+ @Function FM_PORT_PcdKgBindSchemes
+
+ @Description This routine may be called to bind additional schemes
+ to the port. The selected schemes are not added;
+ this specific port just starts using them.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_PortScheme A structure defining the list of schemes to be added.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init() and FM_PORT_SetPCD().
+*/
+uint32_t FM_PORT_PcdKgBindSchemes(t_Handle h_FmPort,
+ ioc_fm_pcd_port_schemes_params_t *p_PortScheme);
+
+/**
+ @Function FM_PORT_PcdKgUnbindSchemes
+
+ @Description This routine may be called to unbind schemes from
+ the port. The selected schemes are not removed or invalidated;
+ this specific port just stops using them.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_PortScheme A structure defining the list of schemes to be removed.
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init() and FM_PORT_SetPCD().
+*/
+uint32_t FM_PORT_PcdKgUnbindSchemes(t_Handle h_FmPort,
+ ioc_fm_pcd_port_schemes_params_t *p_PortScheme);
+
+/**
+ @Function FM_PORT_GetIPv4OptionsCount
+
+ @Description Retrieves the port's IPv4 options counter.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[out] p_Ipv4OptionsCount will hold the counter value
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init()
+*/
+uint32_t FM_PORT_GetIPv4OptionsCount(t_Handle h_FmPort,
+ uint32_t *p_Ipv4OptionsCount);
+
+/** @} */ /* end of FM_PORT_pcd_runtime_control_grp group */
+/** @} */ /* end of FM_PORT_runtime_control_grp group */
+
+/**
+ @Group FM_PORT_runtime_data_grp FM Port Runtime Data-path Unit
+
+ @Description FM Port Runtime data unit API functions, definitions and enums.
+ This API is valid only if working in Independent-Mode.
+
+ @{
+*/
+
+/**
+ @Function FM_PORT_ImTx
+
+ @Description Tx function, called to transmit a data buffer on the port.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+ @Param[in] p_Data A pointer to an LCP data buffer.
+ @Param[in] length Size of data for transmission.
+ @Param[in] lastBuffer Buffer position - TRUE for the last buffer
+ of a frame, including a single buffer frame
+ @Param[in] h_BufContext A handle of the user associated with this buffer
+
+ @Return E_OK on success; Error code otherwise.
+
+ @Cautions Allowed only following FM_PORT_Init().
+ NOTE - This routine can be used only when working in
+ Independent-Mode.
+*/
+uint32_t FM_PORT_ImTx(t_Handle h_FmPort,
+ uint8_t *p_Data,
+ uint16_t length,
+ bool lastBuffer,
+ t_Handle h_BufContext);
+
+/**
+ @Function FM_PORT_ImTxConf
+
+ @Description Tx port confirmation routine (optional); may be called to
+ verify transmission of all frames. The action performed by this
+ routine happens automatically on the next buffer transmission,
+ but if desired, calling this routine invokes it on demand.
+
+ @Param[in] h_FmPort A handle to a FM Port module.
+
+ @Cautions Allowed only following FM_PORT_Init().
+ NOTE - This routine can be used only when working in
+ Independent-Mode.
+*/
+void FM_PORT_ImTxConf(t_Handle h_FmPort);
+
+uint32_t FM_PORT_ImRx(t_Handle h_FmPort);
+
+/** @} */ /* end of FM_PORT_runtime_data_grp group */
+/** @} */ /* end of FM_PORT_grp group */
+/** @} */ /* end of FM_grp group */
+#endif /* __FM_PORT_EXT_H */
diff --git a/drivers/net/dpaa/fmlib/ncsw_ext.h b/drivers/net/dpaa/fmlib/ncsw_ext.h
new file mode 100644
index 000000000..319107c53
--- /dev/null
+++ b/drivers/net/dpaa/fmlib/ncsw_ext.h
@@ -0,0 +1,153 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright 2008-2012 Freescale Semiconductor Inc.
+ * Copyright 2017-2020 NXP
+ */
+
+#ifndef __NCSW_EXT_H
+#define __NCSW_EXT_H
+
+#include <stdint.h>
+
+#define PTR_TO_UINT(_ptr) ((uintptr_t)(_ptr))
+#define UINT_TO_PTR(_val) ((void *)(uintptr_t)(_val))
+
+/* physAddress_t should be uintptr_t */
+typedef uint64_t physAddress_t;
+
+/**
+ @Description Possible RxStore callback responses.
+*/
+typedef enum e_RxStoreResponse {
+ e_RX_STORE_RESPONSE_PAUSE
+ /**< Pause invoking callback with received data;
+ in polling mode, start again invoking callback
+ only next time user invokes the receive routine;
+ in interrupt mode, start again invoking callback
+ only next time a receive event triggers an interrupt;
+ in all cases, received data that are pending are not
+ lost, rather, their processing is temporarily deferred;
+ in all cases, received data are processed in the order
+ in which they were received. */
+ , e_RX_STORE_RESPONSE_CONTINUE
+ /**< Continue invoking callback with received data. */
+} e_RxStoreResponse;
+
+
+/**
+ @Description General Handle
+*/
+typedef void *t_Handle; /**< handle, used as object's descriptor */
+
+/* @} */
+
+/**
+ @Function t_GetBufFunction
+
+ @Description User callback function called by driver to get data buffer.
+
+ User provides this function. Driver invokes it.
+
+ @Param[in] h_BufferPool - A handle to buffer pool manager
+ @Param[out] p_BufContextHandle - Returns the user's private context that
+ should be associated with the buffer
+
+ @Return Pointer to data buffer, NULL if error
+ */
+typedef uint8_t * (t_GetBufFunction)(t_Handle h_BufferPool,
+ t_Handle *p_BufContextHandle);
+
+/**
+ @Function t_PutBufFunction
+
+ @Description User callback function called by driver to return data buffer.
+
+ User provides this function. Driver invokes it.
+
+ @Param[in] h_BufferPool - A handle to buffer pool manager
+ @Param[in] p_Buffer - A pointer to buffer to return
+ @Param[in] h_BufContext - The user's private context associated with
+ the returned buffer
+
+ @Return E_OK on success; Error code otherwise
+ */
+typedef uint32_t (t_PutBufFunction)(t_Handle h_BufferPool,
+ uint8_t *p_Buffer,
+ t_Handle h_BufContext);
+
+/**
+ @Function t_PhysToVirt
+
+ @Description Translates a physical address to the matching virtual address.
+
+ @Param[in] addr - The physical address to translate.
+
+ @Return Virtual address.
+*/
+typedef void *t_PhysToVirt(physAddress_t addr);
+
+/**
+ @Function t_VirtToPhys
+
+ @Description Translates a virtual address to the matching physical address.
+
+ @Param[in] addr - The virtual address to translate.
+
+ @Return Physical address.
+*/
+typedef physAddress_t t_VirtToPhys(void *addr);
+
+/**
+ @Description Buffer Pool Information Structure.
+*/
+typedef struct t_BufferPoolInfo {
+ t_Handle h_BufferPool; /**< A handle to the buffer pool mgr */
+ t_GetBufFunction *f_GetBuf; /**< User callback to get a free buffer */
+ t_PutBufFunction *f_PutBuf; /**< User callback to return a buffer */
+ uint16_t bufferSize; /**< Buffer size (in bytes) */
+
+ t_PhysToVirt *f_PhysToVirt; /**< User callback to translate pool buffers
+ physical addresses to virtual addresses */
+ t_VirtToPhys *f_VirtToPhys; /**< User callback to translate pool buffers
+ virtual addresses to physical addresses */
+} t_BufferPoolInfo;
+
+/**
+ @Description User callback function called by driver with receive data.
+
+ User provides this function. Driver invokes it.
+
+ @Param[in] h_App - Application's handle, as was provided to the
+ driver by the user
+ @Param[in] queueId - Receive queue ID
+ @Param[in] p_Data - Pointer to the buffer with received data
+ @Param[in] h_BufContext - The user's private context associated with
+ the given data buffer
+ @Param[in] length - Length of received data
+ @Param[in] status - Receive status and errors
+ @Param[in] position - Position of buffer in frame
+ @Param[in] flags - Driver-dependent information
+
+ @Retval e_RX_STORE_RESPONSE_CONTINUE - order the driver to continue Rx
+ operation for all ready data.
+ @Retval e_RX_STORE_RESPONSE_PAUSE- order the driver to stop Rx ops.
+ */
+typedef e_RxStoreResponse(t_RxStoreFunction)(t_Handle h_App,
+ uint32_t queueId,
+ uint8_t *p_Data,
+ t_Handle h_BufContext,
+ uint32_t length,
+ uint16_t status,
+ uint8_t position,
+ uint32_t flags);
+
+typedef struct t_Device {
+ uintptr_t id; /**< the device id */
+ int fd; /**< the device file descriptor */
+ t_Handle h_UserPriv;
+ uint32_t owners;
+} t_Device;
+
+t_Handle CreateDevice(t_Handle h_UserPriv, t_Handle h_DevId);
+t_Handle GetDeviceId(t_Handle h_Dev);
+
+#endif /* __NCSW_EXT_H */
diff --git a/drivers/net/dpaa/fmlib/net_ext.h b/drivers/net/dpaa/fmlib/net_ext.h
new file mode 100644
index 000000000..12e4bc7cc
--- /dev/null
+++ b/drivers/net/dpaa/fmlib/net_ext.h
@@ -0,0 +1,383 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright 2008-2012 Freescale Semiconductor Inc.
+ * Copyright 2017-2019 NXP
+ */
+
+#ifndef __NET_EXT_H
+#define __NET_EXT_H
+
+#include "ncsw_ext.h"
+
+/**
+ @Description This file contains common and general NetComm header definitions.
+*/
+
+typedef uint8_t ioc_header_field_ppp_t;
+
+#define IOC_NET_HEADER_FIELD_PPP_PID (1)
+#define IOC_NET_HEADER_FIELD_PPP_COMPRESSED (IOC_NET_HEADER_FIELD_PPP_PID << 1)
+#define IOC_NET_HEADER_FIELD_PPP_ALL_FIELDS ((IOC_NET_HEADER_FIELD_PPP_PID << 2) - 1)
+
+typedef uint8_t ioc_header_field_pppoe_t;
+
+#define IOC_NET_HEADER_FIELD_PPPoE_VER (1)
+#define IOC_NET_HEADER_FIELD_PPPoE_TYPE (IOC_NET_HEADER_FIELD_PPPoE_VER << 1)
+#define IOC_NET_HEADER_FIELD_PPPoE_CODE (IOC_NET_HEADER_FIELD_PPPoE_VER << 2)
+#define IOC_NET_HEADER_FIELD_PPPoE_SID (IOC_NET_HEADER_FIELD_PPPoE_VER << 3)
+#define IOC_NET_HEADER_FIELD_PPPoE_LEN (IOC_NET_HEADER_FIELD_PPPoE_VER << 4)
+#define IOC_NET_HEADER_FIELD_PPPoE_SESSION (IOC_NET_HEADER_FIELD_PPPoE_VER << 5)
+#define IOC_NET_HEADER_FIELD_PPPoE_PID (IOC_NET_HEADER_FIELD_PPPoE_VER << 6)
+#define IOC_NET_HEADER_FIELD_PPPoE_ALL_FIELDS ((IOC_NET_HEADER_FIELD_PPPoE_VER << 7) - 1)
+
+#define IOC_NET_HEADER_FIELD_PPPMUX_PID (1)
+#define IOC_NET_HEADER_FIELD_PPPMUX_CKSUM (IOC_NET_HEADER_FIELD_PPPMUX_PID << 1)
+#define IOC_NET_HEADER_FIELD_PPPMUX_COMPRESSED (IOC_NET_HEADER_FIELD_PPPMUX_PID << 2)
+#define IOC_NET_HEADER_FIELD_PPPMUX_ALL_FIELDS ((IOC_NET_HEADER_FIELD_PPPMUX_PID << 3) - 1)
+
+#define IOC_NET_HEADER_FIELD_PPPMUX_SUBFRAME_PFF (1)
+#define IOC_NET_HEADER_FIELD_PPPMUX_SUBFRAME_LXT (IOC_NET_HEADER_FIELD_PPPMUX_SUBFRAME_PFF << 1)
+#define IOC_NET_HEADER_FIELD_PPPMUX_SUBFRAME_LEN (IOC_NET_HEADER_FIELD_PPPMUX_SUBFRAME_PFF << 2)
+#define IOC_NET_HEADER_FIELD_PPPMUX_SUBFRAME_PID (IOC_NET_HEADER_FIELD_PPPMUX_SUBFRAME_PFF << 3)
+#define IOC_NET_HEADER_FIELD_PPPMUX_SUBFRAME_USE_PID (IOC_NET_HEADER_FIELD_PPPMUX_SUBFRAME_PFF << 4)
+#define IOC_NET_HEADER_FIELD_PPPMUX_SUBFRAME_ALL_FIELDS ((IOC_NET_HEADER_FIELD_PPPMUX_SUBFRAME_PFF << 5) - 1)
+
+typedef uint8_t ioc_header_field_eth_t;
+
+#define IOC_NET_HEADER_FIELD_ETH_DA (1)
+#define IOC_NET_HEADER_FIELD_ETH_SA (IOC_NET_HEADER_FIELD_ETH_DA << 1)
+#define IOC_NET_HEADER_FIELD_ETH_LENGTH (IOC_NET_HEADER_FIELD_ETH_DA << 2)
+#define IOC_NET_HEADER_FIELD_ETH_TYPE (IOC_NET_HEADER_FIELD_ETH_DA << 3)
+#define IOC_NET_HEADER_FIELD_ETH_FINAL_CKSUM (IOC_NET_HEADER_FIELD_ETH_DA << 4)
+#define IOC_NET_HEADER_FIELD_ETH_PADDING (IOC_NET_HEADER_FIELD_ETH_DA << 5)
+#define IOC_NET_HEADER_FIELD_ETH_ALL_FIELDS ((IOC_NET_HEADER_FIELD_ETH_DA << 6) - 1)
+
+#define IOC_NET_HEADER_FIELD_ETH_ADDR_SIZE 6
+
+typedef uint16_t ioc_header_field_ip_t;
+
+#define IOC_NET_HEADER_FIELD_IP_VER (1)
+#define IOC_NET_HEADER_FIELD_IP_DSCP (IOC_NET_HEADER_FIELD_IP_VER << 2)
+#define IOC_NET_HEADER_FIELD_IP_ECN (IOC_NET_HEADER_FIELD_IP_VER << 3)
+#define IOC_NET_HEADER_FIELD_IP_PROTO (IOC_NET_HEADER_FIELD_IP_VER << 4)
+
+#define IOC_NET_HEADER_FIELD_IP_PROTO_SIZE 1
+
+typedef uint16_t ioc_header_field_ipv4_t;
+
+#define IOC_NET_HEADER_FIELD_IPv4_VER (1)
+#define IOC_NET_HEADER_FIELD_IPv4_HDR_LEN (IOC_NET_HEADER_FIELD_IPv4_VER << 1)
+#define IOC_NET_HEADER_FIELD_IPv4_TOS (IOC_NET_HEADER_FIELD_IPv4_VER << 2)
+#define IOC_NET_HEADER_FIELD_IPv4_TOTAL_LEN (IOC_NET_HEADER_FIELD_IPv4_VER << 3)
+#define IOC_NET_HEADER_FIELD_IPv4_ID (IOC_NET_HEADER_FIELD_IPv4_VER << 4)
+#define IOC_NET_HEADER_FIELD_IPv4_FLAG_D (IOC_NET_HEADER_FIELD_IPv4_VER << 5)
+#define IOC_NET_HEADER_FIELD_IPv4_FLAG_M (IOC_NET_HEADER_FIELD_IPv4_VER << 6)
+#define IOC_NET_HEADER_FIELD_IPv4_OFFSET (IOC_NET_HEADER_FIELD_IPv4_VER << 7)
+#define IOC_NET_HEADER_FIELD_IPv4_TTL (IOC_NET_HEADER_FIELD_IPv4_VER << 8)
+#define IOC_NET_HEADER_FIELD_IPv4_PROTO (IOC_NET_HEADER_FIELD_IPv4_VER << 9)
+#define IOC_NET_HEADER_FIELD_IPv4_CKSUM (IOC_NET_HEADER_FIELD_IPv4_VER << 10)
+#define IOC_NET_HEADER_FIELD_IPv4_SRC_IP (IOC_NET_HEADER_FIELD_IPv4_VER << 11)
+#define IOC_NET_HEADER_FIELD_IPv4_DST_IP (IOC_NET_HEADER_FIELD_IPv4_VER << 12)
+#define IOC_NET_HEADER_FIELD_IPv4_OPTS (IOC_NET_HEADER_FIELD_IPv4_VER << 13)
+#define IOC_NET_HEADER_FIELD_IPv4_OPTS_COUNT (IOC_NET_HEADER_FIELD_IPv4_VER << 14)
+#define IOC_NET_HEADER_FIELD_IPv4_ALL_FIELDS ((IOC_NET_HEADER_FIELD_IPv4_VER << 15) - 1)
+
+#define IOC_NET_HEADER_FIELD_IPv4_ADDR_SIZE 4
+#define IOC_NET_HEADER_FIELD_IPv4_PROTO_SIZE 1
+
+typedef uint8_t ioc_header_field_ipv6_t;
+
+#define IOC_NET_HEADER_FIELD_IPv6_VER (1)
+#define IOC_NET_HEADER_FIELD_IPv6_TC (IOC_NET_HEADER_FIELD_IPv6_VER << 1)
+#define IOC_NET_HEADER_FIELD_IPv6_SRC_IP (IOC_NET_HEADER_FIELD_IPv6_VER << 2)
+#define IOC_NET_HEADER_FIELD_IPv6_DST_IP (IOC_NET_HEADER_FIELD_IPv6_VER << 3)
+#define IOC_NET_HEADER_FIELD_IPv6_NEXT_HDR (IOC_NET_HEADER_FIELD_IPv6_VER << 4)
+#define IOC_NET_HEADER_FIELD_IPv6_FL (IOC_NET_HEADER_FIELD_IPv6_VER << 5)
+#define IOC_NET_HEADER_FIELD_IPv6_HOP_LIMIT (IOC_NET_HEADER_FIELD_IPv6_VER << 6)
+#define IOC_NET_HEADER_FIELD_IPv6_ALL_FIELDS ((IOC_NET_HEADER_FIELD_IPv6_VER << 7) - 1)
+
+#define IOC_NET_HEADER_FIELD_IPv6_ADDR_SIZE 16
+#define IOC_NET_HEADER_FIELD_IPv6_NEXT_HDR_SIZE 1
+
+#define IOC_NET_HEADER_FIELD_ICMP_TYPE (1)
+#define IOC_NET_HEADER_FIELD_ICMP_CODE (IOC_NET_HEADER_FIELD_ICMP_TYPE << 1)
+#define IOC_NET_HEADER_FIELD_ICMP_CKSUM (IOC_NET_HEADER_FIELD_ICMP_TYPE << 2)
+#define IOC_NET_HEADER_FIELD_ICMP_ID (IOC_NET_HEADER_FIELD_ICMP_TYPE << 3)
+#define IOC_NET_HEADER_FIELD_ICMP_SQ_NUM (IOC_NET_HEADER_FIELD_ICMP_TYPE << 4)
+#define IOC_NET_HEADER_FIELD_ICMP_ALL_FIELDS ((IOC_NET_HEADER_FIELD_ICMP_TYPE << 5) - 1)
+
+#define IOC_NET_HEADER_FIELD_ICMP_CODE_SIZE 1
+#define IOC_NET_HEADER_FIELD_ICMP_TYPE_SIZE 1
+
+#define IOC_NET_HEADER_FIELD_IGMP_VERSION (1)
+#define IOC_NET_HEADER_FIELD_IGMP_TYPE (IOC_NET_HEADER_FIELD_IGMP_VERSION << 1)
+#define IOC_NET_HEADER_FIELD_IGMP_CKSUM (IOC_NET_HEADER_FIELD_IGMP_VERSION << 2)
+#define IOC_NET_HEADER_FIELD_IGMP_DATA (IOC_NET_HEADER_FIELD_IGMP_VERSION << 3)
+#define IOC_NET_HEADER_FIELD_IGMP_ALL_FIELDS ((IOC_NET_HEADER_FIELD_IGMP_VERSION << 4) - 1)
+
+typedef uint16_t ioc_header_field_tcp_t;
+
+#define IOC_NET_HEADER_FIELD_TCP_PORT_SRC (1)
+#define IOC_NET_HEADER_FIELD_TCP_PORT_DST (IOC_NET_HEADER_FIELD_TCP_PORT_SRC << 1)
+#define IOC_NET_HEADER_FIELD_TCP_SEQ (IOC_NET_HEADER_FIELD_TCP_PORT_SRC << 2)
+#define IOC_NET_HEADER_FIELD_TCP_ACK (IOC_NET_HEADER_FIELD_TCP_PORT_SRC << 3)
+#define IOC_NET_HEADER_FIELD_TCP_OFFSET (IOC_NET_HEADER_FIELD_TCP_PORT_SRC << 4)
+#define IOC_NET_HEADER_FIELD_TCP_FLAGS (IOC_NET_HEADER_FIELD_TCP_PORT_SRC << 5)
+#define IOC_NET_HEADER_FIELD_TCP_WINDOW (IOC_NET_HEADER_FIELD_TCP_PORT_SRC << 6)
+#define IOC_NET_HEADER_FIELD_TCP_CKSUM (IOC_NET_HEADER_FIELD_TCP_PORT_SRC << 7)
+#define IOC_NET_HEADER_FIELD_TCP_URGPTR (IOC_NET_HEADER_FIELD_TCP_PORT_SRC << 8)
+#define IOC_NET_HEADER_FIELD_TCP_OPTS (IOC_NET_HEADER_FIELD_TCP_PORT_SRC << 9)
+#define IOC_NET_HEADER_FIELD_TCP_OPTS_COUNT (IOC_NET_HEADER_FIELD_TCP_PORT_SRC << 10)
+#define IOC_NET_HEADER_FIELD_TCP_ALL_FIELDS ((IOC_NET_HEADER_FIELD_TCP_PORT_SRC << 11) - 1)
+
+#define IOC_NET_HEADER_FIELD_TCP_PORT_SIZE 2
+
+typedef uint8_t ioc_header_field_sctp_t;
+
+#define IOC_NET_HEADER_FIELD_SCTP_PORT_SRC (1)
+#define IOC_NET_HEADER_FIELD_SCTP_PORT_DST (IOC_NET_HEADER_FIELD_SCTP_PORT_SRC << 1)
+#define IOC_NET_HEADER_FIELD_SCTP_VER_TAG (IOC_NET_HEADER_FIELD_SCTP_PORT_SRC << 2)
+#define IOC_NET_HEADER_FIELD_SCTP_CKSUM (IOC_NET_HEADER_FIELD_SCTP_PORT_SRC << 3)
+#define IOC_NET_HEADER_FIELD_SCTP_ALL_FIELDS ((IOC_NET_HEADER_FIELD_SCTP_PORT_SRC << 4) - 1)
+
+#define IOC_NET_HEADER_FIELD_SCTP_PORT_SIZE 2
+
+typedef uint8_t ioc_header_field_dccp_t;
+
+#define IOC_NET_HEADER_FIELD_DCCP_PORT_SRC (1)
+#define IOC_NET_HEADER_FIELD_DCCP_PORT_DST (IOC_NET_HEADER_FIELD_DCCP_PORT_SRC << 1)
+#define IOC_NET_HEADER_FIELD_DCCP_ALL_FIELDS ((IOC_NET_HEADER_FIELD_DCCP_PORT_SRC << 2) - 1)
+
+#define IOC_NET_HEADER_FIELD_DCCP_PORT_SIZE 2
+
+typedef uint8_t ioc_header_field_udp_t;
+
+#define IOC_NET_HEADER_FIELD_UDP_PORT_SRC (1)
+#define IOC_NET_HEADER_FIELD_UDP_PORT_DST (IOC_NET_HEADER_FIELD_UDP_PORT_SRC << 1)
+#define IOC_NET_HEADER_FIELD_UDP_LEN (IOC_NET_HEADER_FIELD_UDP_PORT_SRC << 2)
+#define IOC_NET_HEADER_FIELD_UDP_CKSUM (IOC_NET_HEADER_FIELD_UDP_PORT_SRC << 3)
+#define IOC_NET_HEADER_FIELD_UDP_ALL_FIELDS ((IOC_NET_HEADER_FIELD_UDP_PORT_SRC << 4) - 1)
+
+#define IOC_NET_HEADER_FIELD_UDP_PORT_SIZE 2
+
+typedef uint8_t ioc_header_field_udp_lite_t;
+
+#define IOC_NET_HEADER_FIELD_UDP_LITE_PORT_SRC (1)
+#define IOC_NET_HEADER_FIELD_UDP_LITE_PORT_DST (IOC_NET_HEADER_FIELD_UDP_LITE_PORT_SRC << 1)
+#define IOC_NET_HEADER_FIELD_UDP_LITE_ALL_FIELDS ((IOC_NET_HEADER_FIELD_UDP_LITE_PORT_SRC << 2) - 1)
+
+#define IOC_NET_HEADER_FIELD_UDP_LITE_PORT_SIZE 2
+
+typedef uint8_t ioc_header_field_udp_encap_esp_t;
+
+#define IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_PORT_SRC (1)
+#define IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_PORT_DST (IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_PORT_SRC << 1)
+#define IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_LEN (IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_PORT_SRC << 2)
+#define IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_CKSUM (IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_PORT_SRC << 3)
+#define IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_SPI (IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_PORT_SRC << 4)
+#define IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_SEQUENCE_NUM (IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_PORT_SRC << 5)
+#define IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_ALL_FIELDS ((IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_PORT_SRC << 6) - 1)
+
+#define IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_PORT_SIZE 2
+#define IOC_NET_HEADER_FIELD_UDP_ENCAP_ESP_SPI_SIZE 4
+
+#define IOC_NET_HEADER_FIELD_IPHC_CID (1)
+#define IOC_NET_HEADER_FIELD_IPHC_CID_TYPE (IOC_NET_HEADER_FIELD_IPHC_CID << 1)
+#define IOC_NET_HEADER_FIELD_IPHC_HCINDEX (IOC_NET_HEADER_FIELD_IPHC_CID << 2)
+#define IOC_NET_HEADER_FIELD_IPHC_GEN (IOC_NET_HEADER_FIELD_IPHC_CID << 3)
+#define IOC_NET_HEADER_FIELD_IPHC_D_BIT (IOC_NET_HEADER_FIELD_IPHC_CID << 4)
+#define IOC_NET_HEADER_FIELD_IPHC_ALL_FIELDS ((IOC_NET_HEADER_FIELD_IPHC_CID << 5) - 1)
+
+#define IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_TYPE (1)
+#define IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_FLAGS (IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_TYPE << 1)
+#define IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_LENGTH (IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_TYPE << 2)
+#define IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_TSN (IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_TYPE << 3)
+#define IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_STREAM_ID (IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_TYPE << 4)
+#define IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_STREAM_SQN (IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_TYPE << 5)
+#define IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_PAYLOAD_PID (IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_TYPE << 6)
+#define IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_UNORDERED (IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_TYPE << 7)
+#define IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_BEGGINING (IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_TYPE << 8)
+#define IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_END (IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_TYPE << 9)
+#define IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_ALL_FIELDS ((IOC_NET_HEADER_FIELD_SCTP_CHUNK_DATA_TYPE << 10) - 1)
+
+#define IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT (1)
+#define IOC_NET_HEADER_FIELD_L2TPv2_LENGTH_BIT (IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT << 1)
+#define IOC_NET_HEADER_FIELD_L2TPv2_SEQUENCE_BIT (IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT << 2)
+#define IOC_NET_HEADER_FIELD_L2TPv2_OFFSET_BIT (IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT << 3)
+#define IOC_NET_HEADER_FIELD_L2TPv2_PRIORITY_BIT (IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT << 4)
+#define IOC_NET_HEADER_FIELD_L2TPv2_VERSION (IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT << 5)
+#define IOC_NET_HEADER_FIELD_L2TPv2_LEN (IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT << 6)
+#define IOC_NET_HEADER_FIELD_L2TPv2_TUNNEL_ID (IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT << 7)
+#define IOC_NET_HEADER_FIELD_L2TPv2_SESSION_ID (IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT << 8)
+#define IOC_NET_HEADER_FIELD_L2TPv2_NS (IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT << 9)
+#define IOC_NET_HEADER_FIELD_L2TPv2_NR (IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT << 10)
+#define IOC_NET_HEADER_FIELD_L2TPv2_OFFSET_SIZE (IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT << 11)
+#define IOC_NET_HEADER_FIELD_L2TPv2_FIRST_BYTE (IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT << 12)
+#define IOC_NET_HEADER_FIELD_L2TPv2_ALL_FIELDS ((IOC_NET_HEADER_FIELD_L2TPv2_TYPE_BIT << 13) - 1)
+
+#define IOC_NET_HEADER_FIELD_L2TPv3_CTRL_TYPE_BIT (1)
+#define IOC_NET_HEADER_FIELD_L2TPv3_CTRL_LENGTH_BIT (IOC_NET_HEADER_FIELD_L2TPv3_CTRL_TYPE_BIT << 1)
+#define IOC_NET_HEADER_FIELD_L2TPv3_CTRL_SEQUENCE_BIT (IOC_NET_HEADER_FIELD_L2TPv3_CTRL_TYPE_BIT << 2)
+#define IOC_NET_HEADER_FIELD_L2TPv3_CTRL_VERSION (IOC_NET_HEADER_FIELD_L2TPv3_CTRL_TYPE_BIT << 3)
+#define IOC_NET_HEADER_FIELD_L2TPv3_CTRL_LENGTH (IOC_NET_HEADER_FIELD_L2TPv3_CTRL_TYPE_BIT << 4)
+#define IOC_NET_HEADER_FIELD_L2TPv3_CTRL_CONTROL (IOC_NET_HEADER_FIELD_L2TPv3_CTRL_TYPE_BIT << 5)
+#define IOC_NET_HEADER_FIELD_L2TPv3_CTRL_SENT (IOC_NET_HEADER_FIELD_L2TPv3_CTRL_TYPE_BIT << 6)
+#define IOC_NET_HEADER_FIELD_L2TPv3_CTRL_RECV (IOC_NET_HEADER_FIELD_L2TPv3_CTRL_TYPE_BIT << 7)
+#define IOC_NET_HEADER_FIELD_L2TPv3_CTRL_FIRST_BYTE (IOC_NET_HEADER_FIELD_L2TPv3_CTRL_TYPE_BIT << 8)
+#define IOC_NET_HEADER_FIELD_L2TPv3_CTRL_ALL_FIELDS ((IOC_NET_HEADER_FIELD_L2TPv3_CTRL_TYPE_BIT << 9) - 1)
+
+#define IOC_NET_HEADER_FIELD_L2TPv3_SESS_TYPE_BIT (1)
+#define IOC_NET_HEADER_FIELD_L2TPv3_SESS_VERSION (IOC_NET_HEADER_FIELD_L2TPv3_SESS_TYPE_BIT << 1)
+#define IOC_NET_HEADER_FIELD_L2TPv3_SESS_ID (IOC_NET_HEADER_FIELD_L2TPv3_SESS_TYPE_BIT << 2)
+#define IOC_NET_HEADER_FIELD_L2TPv3_SESS_COOKIE (IOC_NET_HEADER_FIELD_L2TPv3_SESS_TYPE_BIT << 3)
+#define IOC_NET_HEADER_FIELD_L2TPv3_SESS_ALL_FIELDS ((IOC_NET_HEADER_FIELD_L2TPv3_SESS_TYPE_BIT << 4) - 1)
+
+typedef uint8_t ioc_header_field_vlan_t;
+
+#define IOC_NET_HEADER_FIELD_VLAN_VPRI (1)
+#define IOC_NET_HEADER_FIELD_VLAN_CFI (IOC_NET_HEADER_FIELD_VLAN_VPRI << 1)
+#define IOC_NET_HEADER_FIELD_VLAN_VID (IOC_NET_HEADER_FIELD_VLAN_VPRI << 2)
+#define IOC_NET_HEADER_FIELD_VLAN_LENGTH (IOC_NET_HEADER_FIELD_VLAN_VPRI << 3)
+#define IOC_NET_HEADER_FIELD_VLAN_TYPE (IOC_NET_HEADER_FIELD_VLAN_VPRI << 4)
+#define IOC_NET_HEADER_FIELD_VLAN_ALL_FIELDS ((IOC_NET_HEADER_FIELD_VLAN_VPRI << 5) - 1)
+
+#define IOC_NET_HEADER_FIELD_VLAN_TCI (IOC_NET_HEADER_FIELD_VLAN_VPRI | \
+ IOC_NET_HEADER_FIELD_VLAN_CFI | \
+ IOC_NET_HEADER_FIELD_VLAN_VID)
+
+typedef uint8_t ioc_header_field_llc_t;
+
+#define IOC_NET_HEADER_FIELD_LLC_DSAP (1)
+#define IOC_NET_HEADER_FIELD_LLC_SSAP (IOC_NET_HEADER_FIELD_LLC_DSAP << 1)
+#define IOC_NET_HEADER_FIELD_LLC_CTRL (IOC_NET_HEADER_FIELD_LLC_DSAP << 2)
+#define IOC_NET_HEADER_FIELD_LLC_ALL_FIELDS ((IOC_NET_HEADER_FIELD_LLC_DSAP << 3) - 1)
+
+#define IOC_NET_HEADER_FIELD_NLPID_NLPID (1)
+#define IOC_NET_HEADER_FIELD_NLPID_ALL_FIELDS ((IOC_NET_HEADER_FIELD_NLPID_NLPID << 1) - 1)
+
+typedef uint8_t ioc_header_field_snap_t;
+
+#define IOC_NET_HEADER_FIELD_SNAP_OUI (1)
+#define IOC_NET_HEADER_FIELD_SNAP_PID (IOC_NET_HEADER_FIELD_SNAP_OUI << 1)
+#define IOC_NET_HEADER_FIELD_SNAP_ALL_FIELDS ((IOC_NET_HEADER_FIELD_SNAP_OUI << 2) - 1)
+
+typedef uint8_t ioc_header_field_llc_snap_t;
+
+#define IOC_NET_HEADER_FIELD_LLC_SNAP_TYPE (1)
+#define IOC_NET_HEADER_FIELD_LLC_SNAP_ALL_FIELDS ((IOC_NET_HEADER_FIELD_LLC_SNAP_TYPE << 1) - 1)
+
+#define IOC_NET_HEADER_FIELD_ARP_HTYPE (1)
+#define IOC_NET_HEADER_FIELD_ARP_PTYPE (IOC_NET_HEADER_FIELD_ARP_HTYPE << 1)
+#define IOC_NET_HEADER_FIELD_ARP_HLEN (IOC_NET_HEADER_FIELD_ARP_HTYPE << 2)
+#define IOC_NET_HEADER_FIELD_ARP_PLEN (IOC_NET_HEADER_FIELD_ARP_HTYPE << 3)
+#define IOC_NET_HEADER_FIELD_ARP_OPER (IOC_NET_HEADER_FIELD_ARP_HTYPE << 4)
+#define IOC_NET_HEADER_FIELD_ARP_SHA (IOC_NET_HEADER_FIELD_ARP_HTYPE << 5)
+#define IOC_NET_HEADER_FIELD_ARP_SPA (IOC_NET_HEADER_FIELD_ARP_HTYPE << 6)
+#define IOC_NET_HEADER_FIELD_ARP_THA (IOC_NET_HEADER_FIELD_ARP_HTYPE << 7)
+#define IOC_NET_HEADER_FIELD_ARP_TPA (IOC_NET_HEADER_FIELD_ARP_HTYPE << 8)
+#define IOC_NET_HEADER_FIELD_ARP_ALL_FIELDS ((IOC_NET_HEADER_FIELD_ARP_HTYPE << 9) - 1)
+
+#define IOC_NET_HEADER_FIELD_RFC2684_LLC (1)
+#define IOC_NET_HEADER_FIELD_RFC2684_NLPID (IOC_NET_HEADER_FIELD_RFC2684_LLC << 1)
+#define IOC_NET_HEADER_FIELD_RFC2684_OUI (IOC_NET_HEADER_FIELD_RFC2684_LLC << 2)
+#define IOC_NET_HEADER_FIELD_RFC2684_PID (IOC_NET_HEADER_FIELD_RFC2684_LLC << 3)
+#define IOC_NET_HEADER_FIELD_RFC2684_VPN_OUI (IOC_NET_HEADER_FIELD_RFC2684_LLC << 4)
+#define IOC_NET_HEADER_FIELD_RFC2684_VPN_IDX (IOC_NET_HEADER_FIELD_RFC2684_LLC << 5)
+#define IOC_NET_HEADER_FIELD_RFC2684_ALL_FIELDS ((IOC_NET_HEADER_FIELD_RFC2684_LLC << 6) - 1)
+
+#define IOC_NET_HEADER_FIELD_USER_DEFINED_SRCPORT (1)
+#define IOC_NET_HEADER_FIELD_USER_DEFINED_PCDID (IOC_NET_HEADER_FIELD_USER_DEFINED_SRCPORT << 1)
+#define IOC_NET_HEADER_FIELD_USER_DEFINED_ALL_FIELDS ((IOC_NET_HEADER_FIELD_USER_DEFINED_SRCPORT << 2) - 1)
+
+#define IOC_NET_HEADER_FIELD_PAYLOAD_BUFFER (1)
+#define IOC_NET_HEADER_FIELD_PAYLOAD_SIZE (IOC_NET_HEADER_FIELD_PAYLOAD_BUFFER << 1)
+#define IOC_NET_HEADER_FIELD_MAX_FRM_SIZE (IOC_NET_HEADER_FIELD_PAYLOAD_BUFFER << 2)
+#define IOC_NET_HEADER_FIELD_MIN_FRM_SIZE (IOC_NET_HEADER_FIELD_PAYLOAD_BUFFER << 3)
+#define IOC_NET_HEADER_FIELD_PAYLOAD_TYPE (IOC_NET_HEADER_FIELD_PAYLOAD_BUFFER << 4)
+#define IOC_NET_HEADER_FIELD_FRAME_SIZE (IOC_NET_HEADER_FIELD_PAYLOAD_BUFFER << 5)
+#define IOC_NET_HEADER_FIELD_PAYLOAD_ALL_FIELDS ((IOC_NET_HEADER_FIELD_PAYLOAD_BUFFER << 6) - 1)
+
+typedef uint8_t ioc_header_field_gre_t;
+
+#define IOC_NET_HEADER_FIELD_GRE_TYPE (1)
+#define IOC_NET_HEADER_FIELD_GRE_ALL_FIELDS ((IOC_NET_HEADER_FIELD_GRE_TYPE << 1) - 1)
+
+typedef uint8_t ioc_header_field_minencap_t;
+
+#define IOC_NET_HEADER_FIELD_MINENCAP_SRC_IP (1)
+#define IOC_NET_HEADER_FIELD_MINENCAP_DST_IP (IOC_NET_HEADER_FIELD_MINENCAP_SRC_IP << 1)
+#define IOC_NET_HEADER_FIELD_MINENCAP_TYPE (IOC_NET_HEADER_FIELD_MINENCAP_SRC_IP << 2)
+#define IOC_NET_HEADER_FIELD_MINENCAP_ALL_FIELDS ((IOC_NET_HEADER_FIELD_MINENCAP_SRC_IP << 3) - 1)
+
+typedef uint8_t ioc_header_field_ipsec_ah_t;
+
+#define IOC_NET_HEADER_FIELD_IPSEC_AH_SPI (1)
+#define IOC_NET_HEADER_FIELD_IPSEC_AH_NH (IOC_NET_HEADER_FIELD_IPSEC_AH_SPI << 1)
+#define IOC_NET_HEADER_FIELD_IPSEC_AH_ALL_FIELDS ((IOC_NET_HEADER_FIELD_IPSEC_AH_SPI << 2) - 1)
+
+typedef uint8_t ioc_header_field_ipsec_esp_t;
+
+#define IOC_NET_HEADER_FIELD_IPSEC_ESP_SPI (1)
+#define IOC_NET_HEADER_FIELD_IPSEC_ESP_SEQUENCE_NUM (IOC_NET_HEADER_FIELD_IPSEC_ESP_SPI << 1)
+#define IOC_NET_HEADER_FIELD_IPSEC_ESP_ALL_FIELDS ((IOC_NET_HEADER_FIELD_IPSEC_ESP_SPI << 2) - 1)
+
+#define IOC_NET_HEADER_FIELD_IPSEC_ESP_SPI_SIZE 4
+
+
+typedef uint8_t ioc_header_field_mpls_t;
+
+#define IOC_NET_HEADER_FIELD_MPLS_LABEL_STACK (1)
+#define IOC_NET_HEADER_FIELD_MPLS_LABEL_STACK_ALL_FIELDS ((IOC_NET_HEADER_FIELD_MPLS_LABEL_STACK << 1) - 1)
+
+typedef uint8_t ioc_header_field_macsec_t;
+
+#define IOC_NET_HEADER_FIELD_MACSEC_SECTAG (1)
+#define IOC_NET_HEADER_FIELD_MACSEC_ALL_FIELDS ((IOC_NET_HEADER_FIELD_MACSEC_SECTAG << 1) - 1)
+
+typedef enum {
+ HEADER_TYPE_NONE = 0,
+ HEADER_TYPE_PAYLOAD,
+ HEADER_TYPE_ETH,
+ HEADER_TYPE_VLAN,
+ HEADER_TYPE_IPv4,
+ HEADER_TYPE_IPv6,
+ HEADER_TYPE_IP,
+ HEADER_TYPE_TCP,
+ HEADER_TYPE_UDP,
+ HEADER_TYPE_UDP_LITE,
+ HEADER_TYPE_IPHC,
+ HEADER_TYPE_SCTP,
+ HEADER_TYPE_SCTP_CHUNK_DATA,
+ HEADER_TYPE_PPPoE,
+ HEADER_TYPE_PPP,
+ HEADER_TYPE_PPPMUX,
+ HEADER_TYPE_PPPMUX_SUBFRAME,
+ HEADER_TYPE_L2TPv2,
+ HEADER_TYPE_L2TPv3_CTRL,
+ HEADER_TYPE_L2TPv3_SESS,
+ HEADER_TYPE_LLC,
+ HEADER_TYPE_LLC_SNAP,
+ HEADER_TYPE_NLPID,
+ HEADER_TYPE_SNAP,
+ HEADER_TYPE_MPLS,
+ HEADER_TYPE_IPSEC_AH,
+ HEADER_TYPE_IPSEC_ESP,
+ HEADER_TYPE_UDP_ENCAP_ESP, /* RFC 3948 */
+ HEADER_TYPE_MACSEC,
+ HEADER_TYPE_GRE,
+ HEADER_TYPE_MINENCAP,
+ HEADER_TYPE_DCCP,
+ HEADER_TYPE_ICMP,
+ HEADER_TYPE_IGMP,
+ HEADER_TYPE_ARP,
+ HEADER_TYPE_CAPWAP,
+ HEADER_TYPE_CAPWAP_DTLS,
+ HEADER_TYPE_RFC2684,
+ HEADER_TYPE_USER_DEFINED_L2,
+ HEADER_TYPE_USER_DEFINED_L3,
+ HEADER_TYPE_USER_DEFINED_L4,
+ HEADER_TYPE_USER_DEFINED_SHIM1,
+ HEADER_TYPE_USER_DEFINED_SHIM2,
+ MAX_HEADER_TYPE_COUNT
+} ioc_net_header_type;
+
+#endif /* __NET_EXT_H */
diff --git a/drivers/net/dpaa/meson.build b/drivers/net/dpaa/meson.build
index 271416f08..67803cd34 100644
--- a/drivers/net/dpaa/meson.build
+++ b/drivers/net/dpaa/meson.build
@@ -1,5 +1,5 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2018 NXP
+# Copyright 2018-2019 NXP
if not is_linux
build = false
@@ -8,6 +8,7 @@ endif
deps += ['mempool_dpaa']
sources = files('dpaa_ethdev.c',
+ 'fmlib/fm_lib.c',
'dpaa_rxtx.c')
if cc.has_argument('-Wno-pointer-arith')
--
2.17.1
* [dpdk-dev] [PATCH 16/37] net/dpaa: add VSP support in FMLIB
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (14 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 15/37] net/dpaa: add support for fmlib in dpdk Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 17/37] net/dpaa: add support for fmcless mode Hemant Agrawal
` (22 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
This patch adds support for VSP (Virtual Storage Profile)
FMLIB routines.
VSP allow a network interface to be divided into physical
and virtual instances.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa/Makefile | 1 +
drivers/net/dpaa/fmlib/fm_vsp.c | 143 ++++++++++++++++++++++++++++
drivers/net/dpaa/fmlib/fm_vsp_ext.h | 140 +++++++++++++++++++++++++++
drivers/net/dpaa/meson.build | 1 +
4 files changed, 285 insertions(+)
create mode 100644 drivers/net/dpaa/fmlib/fm_vsp.c
create mode 100644 drivers/net/dpaa/fmlib/fm_vsp_ext.h
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index 0d2f32ba1..8db4e457f 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -28,6 +28,7 @@ EXPORT_MAP := rte_pmd_dpaa_version.map
# Interfaces with DPDK
SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += fmlib/fm_lib.c
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += fmlib/fm_vsp.c
SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_rxtx.c
diff --git a/drivers/net/dpaa/fmlib/fm_vsp.c b/drivers/net/dpaa/fmlib/fm_vsp.c
new file mode 100644
index 000000000..b511b5159
--- /dev/null
+++ b/drivers/net/dpaa/fmlib/fm_vsp.c
@@ -0,0 +1,143 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2019-2020 NXP
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <fcntl.h>
+#include <errno.h>
+#include <unistd.h>
+#include <termios.h>
+#include <sys/ioctl.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include "fm_ext.h"
+#include "fm_pcd_ext.h"
+#include "fm_port_ext.h"
+#include "fm_vsp_ext.h"
+#include <dpaa_ethdev.h>
+
+uint32_t FM_PORT_VSPAlloc(t_Handle h_FmPort,
+ t_FmPortVSPAllocParams *p_Params)
+{
+ t_Device *p_Dev = (t_Device *)h_FmPort;
+ ioc_fm_port_vsp_alloc_params_t params;
+
+ _fml_dbg("Calling...\n");
+ memset(&params, 0, sizeof(ioc_fm_port_vsp_alloc_params_t));
+ memcpy(&params.params, p_Params, sizeof(t_FmPortVSPAllocParams));
+
+ if (ioctl(p_Dev->fd, FM_PORT_IOC_VSP_ALLOC, &params))
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+
+ _fml_dbg("Called.\n");
+
+ return E_OK;
+}
+
+t_Handle FM_VSP_Config(t_FmVspParams *p_FmVspParams)
+{
+ t_Device *p_Dev = NULL;
+ t_Device *p_VspDev = NULL;
+ ioc_fm_vsp_params_t param;
+
+ p_Dev = p_FmVspParams->h_Fm;
+
+ _fml_dbg("Performing VSP Configuration...\n");
+
+ memset(&param, 0, sizeof(ioc_fm_vsp_params_t));
+ memcpy(&param, p_FmVspParams, sizeof(t_FmVspParams));
+ param.vsp_params.h_Fm = UINT_TO_PTR(p_Dev->id);
+ param.id = NULL;
+
+ if (ioctl(p_Dev->fd, FM_IOC_VSP_CONFIG, &param)) {
+ DPAA_PMD_ERR("%s ioctl error\n", __func__);
+ return NULL;
+ }
+
+ p_VspDev = (t_Device *)malloc(sizeof(t_Device));
+ if (!p_VspDev) {
+ DPAA_PMD_ERR("FM VSP Params!\n");
+ return NULL;
+ }
+ memset(p_VspDev, 0, sizeof(t_Device));
+ p_VspDev->h_UserPriv = (t_Handle)p_Dev;
+ p_Dev->owners++;
+ p_VspDev->id = PTR_TO_UINT(param.id);
+
+ _fml_dbg("VSP Configuration completed\n");
+
+ return (t_Handle)p_VspDev;
+}
+
+uint32_t FM_VSP_Init(t_Handle h_FmVsp)
+{
+ t_Device *p_Dev = NULL;
+ t_Device *p_VspDev = (t_Device *)h_FmVsp;
+ ioc_fm_obj_t id;
+
+ _fml_dbg("Calling...\n");
+
+ p_Dev = (t_Device *)p_VspDev->h_UserPriv;
+ id.obj = UINT_TO_PTR(p_VspDev->id);
+
+ if (ioctl(p_Dev->fd, FM_IOC_VSP_INIT, &id)) {
+ DPAA_PMD_ERR("%s ioctl error\n", __func__);
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+ }
+
+ _fml_dbg("Called.\n");
+
+ return E_OK;
+}
+
+uint32_t FM_VSP_Free(t_Handle h_FmVsp)
+{
+ t_Device *p_Dev = NULL;
+ t_Device *p_VspDev = (t_Device *)h_FmVsp;
+ ioc_fm_obj_t id;
+
+ _fml_dbg("Calling...\n");
+
+ p_Dev = (t_Device *)p_VspDev->h_UserPriv;
+ id.obj = UINT_TO_PTR(p_VspDev->id);
+
+ if (ioctl(p_Dev->fd, FM_IOC_VSP_FREE, &id)) {
+ DPAA_PMD_ERR("%s ioctl error\n", __func__);
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+ }
+
+ p_Dev->owners--;
+ free(p_VspDev);
+
+ _fml_dbg("Called.\n");
+
+ return E_OK;
+}
+
+uint32_t FM_VSP_ConfigBufferPrefixContent(t_Handle h_FmVsp,
+ t_FmBufferPrefixContent *p_FmBufferPrefixContent)
+{
+ t_Device *p_Dev = NULL;
+ t_Device *p_VspDev = (t_Device *)h_FmVsp;
+ ioc_fm_buffer_prefix_content_params_t params;
+
+ _fml_dbg("Calling...\n");
+
+ p_Dev = (t_Device *)p_VspDev->h_UserPriv;
+ params.p_fm_vsp = UINT_TO_PTR(p_VspDev->id);
+ memcpy(&params.fm_buffer_prefix_content,
+ p_FmBufferPrefixContent, sizeof(*p_FmBufferPrefixContent));
+
+ if (ioctl(p_Dev->fd, FM_IOC_VSP_CONFIG_BUFFER_PREFIX_CONTENT,
+ &params)) {
+ DPAA_PMD_ERR("%s ioctl error\n", __func__);
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+ }
+
+ _fml_dbg("Called.\n");
+
+ return E_OK;
+}
diff --git a/drivers/net/dpaa/fmlib/fm_vsp_ext.h b/drivers/net/dpaa/fmlib/fm_vsp_ext.h
new file mode 100644
index 000000000..097d25d4e
--- /dev/null
+++ b/drivers/net/dpaa/fmlib/fm_vsp_ext.h
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ *
+ * Copyright 2008-2012 Freescale Semiconductor, Inc
+ * Copyright 2019-2020 NXP
+ *
+ */
+
+/**
+ @File fm_vsp_ext.h
+
+ @Description FM Virtual Storage-Profile
+*/
+#ifndef __FM_VSP_EXT_H
+#define __FM_VSP_EXT_H
+#include "ncsw_ext.h"
+#include "fm_ext.h"
+#include "net_ext.h"
+
+typedef struct t_FmVspParams {
+ t_Handle h_Fm;
+ /**< A handle to the FM object this VSP related to */
+ t_FmExtPools extBufPools;/**< Which external buffer pools are used
+ (up to FM_PORT_MAX_NUM_OF_EXT_POOLS), and their sizes.
+ parameter associated with Rx / OP port */
+ uint16_t liodnOffset; /**< VSP's LIODN offset */
+ struct {
+ e_FmPortType portType; /**< Port type */
+ uint8_t portId; /**< Port Id - relative to type */
+ } portParams;
+ uint8_t relativeProfileId; /**< VSP Id - relative to VSP's range
+ defined in relevant FM object */
+} t_FmVspParams;
+
+typedef struct ioc_fm_vsp_params_t {
+ struct t_FmVspParams vsp_params;
+ void *id; /**< return value */
+} ioc_fm_vsp_params_t;
+
+typedef struct t_FmPortVSPAllocParams {
+ uint8_t numOfProfiles;
+ /**< Number of Virtual Storage Profiles; must be a power of 2 */
+ uint8_t dfltRelativeId;
+ /**< The default Virtual-Storage-Profile-id dedicated to Rx/OP port
+ The same default Virtual-Storage-Profile-id will be for coupled Tx port
+ if relevant function called for Rx port */
+} t_FmPortVSPAllocParams;
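The comment on numOfProfiles requires a power of 2. A caller could validate that before invoking FM_PORT_VSPAlloc() with the standard bit trick; this check is an illustrative addition, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>

/* A power of 2 has exactly one bit set, so n & (n - 1) clears it to 0.
 * Zero is excluded explicitly. Suitable for validating numOfProfiles
 * before calling FM_PORT_VSPAlloc(). */
static int is_pow2(uint8_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}
```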
+
+typedef struct ioc_fm_port_vsp_alloc_params_t {
+ struct t_FmPortVSPAllocParams params;
+ void *p_fm_tx_port;
+ /**< Handle to coupled Tx Port; not relevant for OP port. */
+} ioc_fm_port_vsp_alloc_params_t;
+
+typedef struct ioc_fm_buffer_prefix_content_t {
+ uint16_t priv_data_size;
+ /**< Number of bytes to be left at the beginning
+ of the external buffer; Note that the private-area will
+ start from the base of the buffer address. */
+ bool pass_prs_result;
+ /**< TRUE to pass the parse result to/from the FM;
+ User may use FM_PORT_GetBufferPrsResult() in order to
+ get the parser-result from a buffer. */
+ bool pass_time_stamp;
+ /**< TRUE to pass the timeStamp to/from the FM
+ User may use FM_PORT_GetBufferTimeStamp() in order to
+ get the time-stamp from a buffer. */
+ bool pass_hash_result;
+ /**< TRUE to pass the KG hash result to/from the FM
+ User may use FM_PORT_GetBufferHashResult() in order to
+ get the hash-result from a buffer. */
+ bool pass_all_other_pcd_info;
+ /**< Add all other Internal-Context information:
+ AD, hash-result, key, etc. */
+ uint16_t data_align; /**< 0 to use driver's default alignment [64],
+ other value for selecting a data alignment
+ (must be a power of 2);
+ if write optimization is used, must be >= 16. */
+ uint8_t manip_extra_space;
+ /**< Maximum extra size needed
+ * (insertion-size minus removal-size);
+ * Note that this field impacts the size of the
+ * buffer-prefix (i.e. it pushes the data offset);
+ * This field is irrelevant if DPAA_VERSION==10
+ */
+} ioc_fm_buffer_prefix_content_t;
+
+typedef struct ioc_fm_buffer_prefix_content_params_t {
+ void *p_fm_vsp;
+ ioc_fm_buffer_prefix_content_t fm_buffer_prefix_content;
+} ioc_fm_buffer_prefix_content_params_t;
+
+uint32_t FM_PORT_VSPAlloc(
+ t_Handle h_FmPort,
+ t_FmPortVSPAllocParams *p_Params);
+
+t_Handle FM_VSP_Config(t_FmVspParams *p_FmVspParams);
+
+uint32_t FM_VSP_Init(t_Handle h_FmVsp);
+
+uint32_t FM_VSP_Free(t_Handle h_FmVsp);
+
+uint32_t FM_VSP_ConfigBufferPrefixContent(t_Handle h_FmVsp,
+ t_FmBufferPrefixContent *p_FmBufferPrefixContent);
+
+#if defined(CONFIG_COMPAT)
+#define FM_PORT_IOC_VSP_ALLOC_COMPAT \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(38), ioc_compat_fm_port_vsp_alloc_params_t)
+#endif
+#define FM_PORT_IOC_VSP_ALLOC \
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(38), ioc_fm_port_vsp_alloc_params_t)
+
+#if defined(CONFIG_COMPAT)
+#define FM_IOC_VSP_CONFIG_COMPAT \
+ _IOWR(FM_IOC_TYPE_BASE, FM_IOC_NUM(8), ioc_compat_fm_vsp_params_t)
+#endif
+#define FM_IOC_VSP_CONFIG \
+ _IOWR(FM_IOC_TYPE_BASE, FM_IOC_NUM(8), ioc_fm_vsp_params_t)
+
+#if defined(CONFIG_COMPAT)
+#define FM_IOC_VSP_INIT_COMPAT \
+ _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(9), ioc_compat_fm_obj_t)
+#endif
+#define FM_IOC_VSP_INIT \
+ _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(9), ioc_fm_obj_t)
+
+#if defined(CONFIG_COMPAT)
+#define FM_IOC_VSP_FREE_COMPAT \
+ _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(10), ioc_compat_fm_obj_t)
+#endif
+#define FM_IOC_VSP_FREE \
+ _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(10), ioc_fm_obj_t)
+
+#if defined(CONFIG_COMPAT)
+#define FM_IOC_VSP_CONFIG_BUFFER_PREFIX_CONTENT_COMPAT \
+ _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(12), ioc_compat_fm_buffer_prefix_content_params_t)
+#endif
+#define FM_IOC_VSP_CONFIG_BUFFER_PREFIX_CONTENT \
+ _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(12), ioc_fm_buffer_prefix_content_params_t)
+
+#endif /* __FM_VSP_EXT_H */
diff --git a/drivers/net/dpaa/meson.build b/drivers/net/dpaa/meson.build
index 67803cd34..94d509528 100644
--- a/drivers/net/dpaa/meson.build
+++ b/drivers/net/dpaa/meson.build
@@ -9,6 +9,7 @@ deps += ['mempool_dpaa']
sources = files('dpaa_ethdev.c',
'fmlib/fm_lib.c',
+ 'fmlib/fm_vsp.c',
'dpaa_rxtx.c')
if cc.has_argument('-Wno-pointer-arith')
--
2.17.1
* [dpdk-dev] [PATCH 17/37] net/dpaa: add support for fmcless mode
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (15 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 16/37] net/dpaa: add VSP support in FMLIB Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-06-30 17:01 ` Ferruh Yigit
2020-05-27 13:23 ` [dpdk-dev] [PATCH 18/37] bus/dpaa: add shared MAC support Hemant Agrawal
` (21 subsequent siblings)
38 siblings, 1 reply; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Sachin Saxena, Hemant Agrawal
From: Sachin Saxena <sachin.saxena@nxp.com>
This patch uses fmlib to configure the FMAN HW for flow
and distribution configuration, optionally avoiding the
need to run the static FMC tool.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/Makefile | 1 +
drivers/net/dpaa/dpaa_ethdev.c | 111 +++-
drivers/net/dpaa/dpaa_ethdev.h | 4 +
drivers/net/dpaa/dpaa_flow.c | 905 +++++++++++++++++++++++++++++++++
drivers/net/dpaa/dpaa_flow.h | 14 +
drivers/net/dpaa/meson.build | 1 +
6 files changed, 1014 insertions(+), 22 deletions(-)
create mode 100644 drivers/net/dpaa/dpaa_flow.c
create mode 100644 drivers/net/dpaa/dpaa_flow.h
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index 8db4e457f..d334b82a0 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -30,6 +30,7 @@ EXPORT_MAP := rte_pmd_dpaa_version.map
SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += fmlib/fm_lib.c
SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += fmlib/fm_vsp.c
SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_flow.c
SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_rxtx.c
LDLIBS += -lrte_bus_dpaa
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 7c4762002..1dbe2abf4 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -39,6 +39,7 @@
#include <dpaa_ethdev.h>
#include <dpaa_rxtx.h>
+#include <dpaa_flow.h>
#include <rte_pmd_dpaa.h>
#include <fsl_usd.h>
@@ -78,6 +79,7 @@ static uint64_t dev_tx_offloads_nodis =
/* Keep track of whether QMAN and BMAN have been globally initialized */
static int is_global_init;
+static int fmc_q = 1; /* Indicates the use of static fmc for distribution */
static int default_q; /* use default queue - FMC is not executed*/
/* At present we only allow up to 4 push mode queues as default - as each of
* this queue need dedicated portal and we are short of portals.
@@ -1294,16 +1296,15 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
}
};
- if (fqid) {
+ if (fmc_q || default_q) {
ret = qman_reserve_fqid(fqid);
if (ret) {
- DPAA_PMD_ERR("reserve rx fqid 0x%x failed with ret: %d",
+ DPAA_PMD_ERR("reserve rx fqid 0x%x failed, ret: %d",
fqid, ret);
return -EINVAL;
}
- } else {
- flags |= QMAN_FQ_FLAG_DYNAMIC_FQID;
}
+
DPAA_PMD_DEBUG("creating rx fq %p, fqid 0x%x", fq, fqid);
ret = qman_create_fq(fqid, flags, fq);
if (ret) {
@@ -1478,7 +1479,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
struct fman_if_bpool *bp, *tmp_bp;
uint32_t cgrid[DPAA_MAX_NUM_PCD_QUEUES];
uint32_t cgrid_tx[MAX_DPAA_CORES];
- char eth_buf[RTE_ETHER_ADDR_FMT_SIZE];
+ uint32_t dev_rx_fqids[DPAA_MAX_NUM_PCD_QUEUES];
PMD_INIT_FUNC_TRACE();
@@ -1495,30 +1496,36 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
dpaa_intf->ifid = dev_id;
dpaa_intf->cfg = cfg;
+ memset((char *)dev_rx_fqids, 0,
+ sizeof(uint32_t) * DPAA_MAX_NUM_PCD_QUEUES);
+
/* Initialize Rx FQ's */
if (default_q) {
num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
+ } else if (fmc_q) {
+ num_rx_fqs = 1;
} else {
- if (getenv("DPAA_NUM_RX_QUEUES"))
- num_rx_fqs = atoi(getenv("DPAA_NUM_RX_QUEUES"));
- else
- num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
+ /* FMCLESS mode, load balance to multiple cores.*/
+ num_rx_fqs = rte_lcore_count();
}
-
/* Each device can not have more than DPAA_MAX_NUM_PCD_QUEUES RX
* queues.
*/
- if (num_rx_fqs <= 0 || num_rx_fqs > DPAA_MAX_NUM_PCD_QUEUES) {
+ if (num_rx_fqs < 0 || num_rx_fqs > DPAA_MAX_NUM_PCD_QUEUES) {
DPAA_PMD_ERR("Invalid number of RX queues\n");
return -EINVAL;
}
- dpaa_intf->rx_queues = rte_zmalloc(NULL,
- sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
- if (!dpaa_intf->rx_queues) {
- DPAA_PMD_ERR("Failed to alloc mem for RX queues\n");
- return -ENOMEM;
+ if (num_rx_fqs > 0) {
+ dpaa_intf->rx_queues = rte_zmalloc(NULL,
+ sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
+ if (!dpaa_intf->rx_queues) {
+ DPAA_PMD_ERR("Failed to alloc mem for RX queues\n");
+ return -ENOMEM;
+ }
+ } else {
+ dpaa_intf->rx_queues = NULL;
}
memset(cgrid, 0, sizeof(cgrid));
@@ -1537,7 +1544,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
}
/* If congestion control is enabled globally*/
- if (td_threshold) {
+ if (num_rx_fqs > 0 && td_threshold) {
dpaa_intf->cgr_rx = rte_zmalloc(NULL,
sizeof(struct qman_cgr) * num_rx_fqs, MAX_CACHELINE);
if (!dpaa_intf->cgr_rx) {
@@ -1556,12 +1563,20 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
dpaa_intf->cgr_rx = NULL;
}
+ if (!fmc_q && !default_q) {
+ ret = qman_alloc_fqid_range(dev_rx_fqids, num_rx_fqs,
+ num_rx_fqs, 0);
+ if (ret < 0) {
+ DPAA_PMD_ERR("Failed to alloc rx fqid's\n");
+ goto free_rx;
+ }
+ }
+
for (loop = 0; loop < num_rx_fqs; loop++) {
if (default_q)
fqid = cfg->rx_def;
else
- fqid = DPAA_PCD_FQID_START + fman_intf->mac_idx *
- DPAA_PCD_FQID_MULTIPLIER + loop;
+ fqid = dev_rx_fqids[loop];
if (dpaa_intf->cgr_rx)
dpaa_intf->cgr_rx[loop].cgrid = cgrid[loop];
@@ -1658,9 +1673,16 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
/* copy the primary mac address */
rte_ether_addr_copy(&fman_intf->mac_addr, &eth_dev->data->mac_addrs[0]);
- rte_ether_format_addr(eth_buf, sizeof(eth_buf), &fman_intf->mac_addr);
- DPAA_PMD_INFO("net: dpaa: %s: %s", dpaa_device->name, eth_buf);
+ RTE_LOG(INFO, PMD, "net: dpaa: %s: %02x:%02x:%02x:%02x:%02x:%02x\n",
+ dpaa_device->name,
+ fman_intf->mac_addr.addr_bytes[0],
+ fman_intf->mac_addr.addr_bytes[1],
+ fman_intf->mac_addr.addr_bytes[2],
+ fman_intf->mac_addr.addr_bytes[3],
+ fman_intf->mac_addr.addr_bytes[4],
+ fman_intf->mac_addr.addr_bytes[5]);
+
/* Disable RX mode */
fman_if_discard_rx_errors(fman_intf);
@@ -1707,6 +1729,12 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
return -1;
}
+ /* DPAA FM deconfig */
+ if (!(default_q || fmc_q)) {
+ if (dpaa_fm_deconfig(dpaa_intf, dev->process_private))
+ DPAA_PMD_WARN("DPAA FM deconfig failed\n");
+ }
+
dpaa_eth_dev_close(dev);
/* release configuration memory */
@@ -1750,7 +1778,7 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
}
static int
-rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
+rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv,
struct rte_dpaa_device *dpaa_dev)
{
int diag;
@@ -1796,6 +1824,13 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
default_q = 1;
}
+ if (!(default_q || fmc_q)) {
+ if (dpaa_fm_init()) {
+ DPAA_PMD_ERR("FM init failed\n");
+ return -1;
+ }
+ }
+
/* disabling the default push mode for LS1043 */
if (dpaa_svr_family == SVR_LS1043A_FAMILY)
dpaa_push_mode_max_queue = 0;
@@ -1869,6 +1904,38 @@ rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
return 0;
}
+static void __attribute__((destructor(102))) dpaa_finish(void)
+{
+ /* For secondary, primary will do all the cleanup */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ if (!(default_q || fmc_q)) {
+ unsigned int i;
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ if (rte_eth_devices[i].dev_ops == &dpaa_devops) {
+ struct rte_eth_dev *dev = &rte_eth_devices[i];
+ struct dpaa_if *dpaa_intf =
+ dev->data->dev_private;
+ struct fman_if *fif =
+ dev->process_private;
+ if (dpaa_intf->port_handle)
+ if (dpaa_fm_deconfig(dpaa_intf, fif))
+ DPAA_PMD_WARN("DPAA FM "
+ "deconfig failed\n");
+ }
+ }
+ if (is_global_init)
+ if (dpaa_fm_term())
+ DPAA_PMD_WARN("DPAA FM term failed\n");
+
+ is_global_init = 0;
+
+ DPAA_PMD_INFO("DPAA fman cleaned up");
+ }
+}
+
static struct rte_dpaa_driver rte_dpaa_pmd = {
.drv_flags = RTE_DPAA_DRV_INTR_LSC,
.drv_type = FSL_DPAA_ETH,
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 4c40ff86a..b10c4a20b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -118,6 +118,10 @@ struct dpaa_if {
uint32_t ifid;
struct dpaa_bp_info *bp_info;
struct rte_eth_fc_conf *fc_conf;
+ void *port_handle;
+ void *netenv_handle;
+ void *scheme_handle[2];
+ uint32_t scheme_count;
};
struct dpaa_if_stats {
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
new file mode 100644
index 000000000..c7a4f87b1
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -0,0 +1,905 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2017-2019 NXP
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <sys/types.h>
+
+#include <dpaa_ethdev.h>
+#include <dpaa_flow.h>
+#include <rte_dpaa_logs.h>
+#include <fmlib/fm_port_ext.h>
+
+#define DPAA_MAX_NUM_ETH_DEV 8
+
+static inline
+ioc_fm_pcd_extract_entry_t *
+SCH_EXT_ARR(ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx)
+{
+return &scheme_params->param.key_extract_and_hash_params.extract_array[hdr_idx];
+}
+
+#define SCH_EXT_HDR(scheme_params, hdr_idx) \
+ SCH_EXT_ARR(scheme_params, hdr_idx)->extract_params.extract_by_hdr
+
+#define SCH_EXT_FULL_FLD(scheme_params, hdr_idx) \
+ SCH_EXT_HDR(scheme_params, hdr_idx).extract_by_hdr_type.full_field
+
+/* FM global info */
+struct dpaa_fm_info {
+ t_Handle fman_handle;
+ t_Handle pcd_handle;
+};
+
+/*FM model to read and write from file */
+struct dpaa_fm_model {
+ uint32_t dev_count;
+ uint8_t device_order[DPAA_MAX_NUM_ETH_DEV];
+ t_FmPortParams fm_port_params[DPAA_MAX_NUM_ETH_DEV];
+ t_Handle netenv_devid[DPAA_MAX_NUM_ETH_DEV];
+ t_Handle scheme_devid[DPAA_MAX_NUM_ETH_DEV][2];
+};
+
+static struct dpaa_fm_info fm_info;
+static struct dpaa_fm_model fm_model;
+static const char *fm_log = "/tmp/fmdpdk.bin";
+
+static void fm_prev_cleanup(void)
+{
+ uint32_t fman_id = 0, i = 0, devid;
+ struct dpaa_if dpaa_intf = {0};
+ t_FmPcdParams fmPcdParams = {0};
+ PMD_INIT_FUNC_TRACE();
+
+ fm_info.fman_handle = FM_Open(fman_id);
+ if (!fm_info.fman_handle) {
+ printf("\n%s- unable to open FMAN", __func__);
+ return;
+ }
+
+ fmPcdParams.h_Fm = fm_info.fman_handle;
+ fmPcdParams.prsSupport = true;
+ fmPcdParams.kgSupport = true;
+ /* FM PCD Open */
+ fm_info.pcd_handle = FM_PCD_Open(&fmPcdParams);
+ if (!fm_info.pcd_handle) {
+ printf("\n%s- unable to open PCD", __func__);
+ return;
+ }
+
+ while (i < fm_model.dev_count) {
+ devid = fm_model.device_order[i];
+ /* FM Port Open */
+ fm_model.fm_port_params[devid].h_Fm = fm_info.fman_handle;
+ dpaa_intf.port_handle =
+ FM_PORT_Open(&fm_model.fm_port_params[devid]);
+ dpaa_intf.scheme_handle[0] = CreateDevice(fm_info.pcd_handle,
+ fm_model.scheme_devid[devid][0]);
+ dpaa_intf.scheme_count = 1;
+ if (fm_model.scheme_devid[devid][1]) {
+ dpaa_intf.scheme_handle[1] =
+ CreateDevice(fm_info.pcd_handle,
+ fm_model.scheme_devid[devid][1]);
+ if (dpaa_intf.scheme_handle[1])
+ dpaa_intf.scheme_count++;
+ }
+
+ dpaa_intf.netenv_handle = CreateDevice(fm_info.pcd_handle,
+ fm_model.netenv_devid[devid]);
+ i++;
+ if (!dpaa_intf.netenv_handle ||
+ !dpaa_intf.scheme_handle[0] ||
+ !dpaa_intf.port_handle)
+ continue;
+
+ if (dpaa_fm_deconfig(&dpaa_intf, NULL))
+ printf("\nDPAA FM deconfig failed\n");
+ }
+
+ if (dpaa_fm_term())
+ printf("\nDPAA FM term failed\n");
+
+ memset(&fm_model, 0, sizeof(struct dpaa_fm_model));
+}
+
+void dpaa_write_fm_config_to_file(void)
+{
+ size_t bytes_write;
+ FILE *fp = fopen(fm_log, "wb");
+ PMD_INIT_FUNC_TRACE();
+
+ if (!fp) {
+ DPAA_PMD_ERR("File open failed");
+ return;
+ }
+ bytes_write = fwrite(&fm_model, sizeof(struct dpaa_fm_model), 1, fp);
+ if (!bytes_write) {
+ DPAA_PMD_WARN("No bytes write");
+ fclose(fp);
+ return;
+ }
+ fclose(fp);
+}
+
+static void dpaa_read_fm_config_from_file(void)
+{
+ size_t bytes_read;
+ FILE *fp = fopen(fm_log, "rb");
+ PMD_INIT_FUNC_TRACE();
+
+ if (!fp)
+ return;
+ DPAA_PMD_INFO("Previous DPDK-FM config instance present, cleaning up.");
+
+ bytes_read = fread(&fm_model, sizeof(struct dpaa_fm_model), 1, fp);
+ if (!bytes_read) {
+ DPAA_PMD_WARN("No bytes read");
+ fclose(fp);
+ return;
+ }
+ fclose(fp);
+
+ /*FM cleanup from previous configured app */
+ fm_prev_cleanup();
+}
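dpaa_write_fm_config_to_file() and dpaa_read_fm_config_from_file() persist the whole model struct as a single binary fwrite/fread record, so a restarted process can detect and clean up a previous instance. A minimal self-contained sketch of that round-trip, with an illustrative struct and path rather than the driver's real layout:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for struct dpaa_fm_model: the whole struct is
 * written and read back as one binary record, as in the patch. */
struct model {
	unsigned int dev_count;
	unsigned char order[8];
};

static int save_model(const char *path, const struct model *m)
{
	FILE *fp = fopen(path, "wb");
	size_t n;

	if (!fp)
		return -1;
	n = fwrite(m, sizeof(*m), 1, fp);	/* one record, like the patch */
	fclose(fp);
	return n == 1 ? 0 : -1;
}

static int load_model(const char *path, struct model *m)
{
	FILE *fp = fopen(path, "rb");
	size_t n;

	if (!fp)
		return -1;	/* no previous instance: nothing to clean */
	n = fread(m, sizeof(*m), 1, fp);
	fclose(fp);
	return n == 1 ? 0 : -1;
}
```

Note this scheme assumes both writer and reader were built with the same struct layout; the patch accepts that because the file lives in /tmp and only bridges restarts of the same binary.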
+
+static inline int set_hashParams_eth(
+ ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx)
+{
+ int k;
+
+ for (k = 0; k < 2; k++) {
+ SCH_EXT_ARR(scheme_params, hdr_idx)->type =
+ e_IOC_FM_PCD_EXTRACT_BY_HDR;
+ SCH_EXT_HDR(scheme_params, hdr_idx).hdr =
+ HEADER_TYPE_ETH;
+ SCH_EXT_HDR(scheme_params, hdr_idx).hdr_index =
+ e_IOC_FM_PCD_HDR_INDEX_NONE;
+ SCH_EXT_HDR(scheme_params, hdr_idx).type =
+ e_IOC_FM_PCD_EXTRACT_FULL_FIELD;
+ if (k == 0)
+ SCH_EXT_FULL_FLD(scheme_params, hdr_idx).eth =
+ IOC_NET_HEADER_FIELD_ETH_SA;
+ else
+ SCH_EXT_FULL_FLD(scheme_params, hdr_idx).eth =
+ IOC_NET_HEADER_FIELD_ETH_DA;
+ hdr_idx++;
+ }
+ return hdr_idx;
+}
+
+static inline int set_hashParams_ipv4(
+ ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx)
+{
+ int k;
+
+ for (k = 0; k < 2; k++) {
+ SCH_EXT_ARR(scheme_params, hdr_idx)->type =
+ e_IOC_FM_PCD_EXTRACT_BY_HDR;
+ SCH_EXT_HDR(scheme_params, hdr_idx).hdr =
+ HEADER_TYPE_IPv4;
+ SCH_EXT_HDR(scheme_params, hdr_idx).hdr_index =
+ e_IOC_FM_PCD_HDR_INDEX_NONE;
+ SCH_EXT_HDR(scheme_params, hdr_idx).type =
+ e_IOC_FM_PCD_EXTRACT_FULL_FIELD;
+ if (k == 0)
+ SCH_EXT_FULL_FLD(scheme_params, hdr_idx).ipv4 =
+ IOC_NET_HEADER_FIELD_IPv4_SRC_IP;
+ else
+ SCH_EXT_FULL_FLD(scheme_params, hdr_idx).ipv4 =
+ IOC_NET_HEADER_FIELD_IPv4_DST_IP;
+ hdr_idx++;
+ }
+ return hdr_idx;
+}
+
+static inline int set_hashParams_ipv6(
+ ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx)
+{
+ int k;
+
+ for (k = 0; k < 2; k++) {
+ SCH_EXT_ARR(scheme_params, hdr_idx)->type =
+ e_IOC_FM_PCD_EXTRACT_BY_HDR;
+ SCH_EXT_HDR(scheme_params, hdr_idx).hdr =
+ HEADER_TYPE_IPv6;
+ SCH_EXT_HDR(scheme_params, hdr_idx).hdr_index =
+ e_IOC_FM_PCD_HDR_INDEX_NONE;
+ SCH_EXT_HDR(scheme_params, hdr_idx).type =
+ e_IOC_FM_PCD_EXTRACT_FULL_FIELD;
+ if (k == 0)
+ SCH_EXT_FULL_FLD(scheme_params, hdr_idx).ipv6 =
+ IOC_NET_HEADER_FIELD_IPv6_SRC_IP;
+ else
+ SCH_EXT_FULL_FLD(scheme_params, hdr_idx).ipv6 =
+ IOC_NET_HEADER_FIELD_IPv6_DST_IP;
+ hdr_idx++;
+ }
+ return hdr_idx;
+}
+
+static inline int set_hashParams_udp(
+ ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx)
+{
+ int k;
+
+ for (k = 0; k < 2; k++) {
+ SCH_EXT_ARR(scheme_params, hdr_idx)->type =
+ e_IOC_FM_PCD_EXTRACT_BY_HDR;
+ SCH_EXT_HDR(scheme_params, hdr_idx).hdr =
+ HEADER_TYPE_UDP;
+ SCH_EXT_HDR(scheme_params, hdr_idx).hdr_index =
+ e_IOC_FM_PCD_HDR_INDEX_NONE;
+ SCH_EXT_HDR(scheme_params, hdr_idx).type =
+ e_IOC_FM_PCD_EXTRACT_FULL_FIELD;
+ if (k == 0)
+ SCH_EXT_FULL_FLD(scheme_params, hdr_idx).udp =
+ IOC_NET_HEADER_FIELD_UDP_PORT_SRC;
+ else
+ SCH_EXT_FULL_FLD(scheme_params, hdr_idx).udp =
+ IOC_NET_HEADER_FIELD_UDP_PORT_DST;
+ hdr_idx++;
+ }
+ return hdr_idx;
+}
+
+static inline int set_hashParams_tcp(
+ ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx)
+{
+ int k;
+
+ for (k = 0; k < 2; k++) {
+ SCH_EXT_ARR(scheme_params, hdr_idx)->type =
+ e_IOC_FM_PCD_EXTRACT_BY_HDR;
+ SCH_EXT_HDR(scheme_params, hdr_idx).hdr =
+ HEADER_TYPE_TCP;
+ SCH_EXT_HDR(scheme_params, hdr_idx).hdr_index =
+ e_IOC_FM_PCD_HDR_INDEX_NONE;
+ SCH_EXT_HDR(scheme_params, hdr_idx).type =
+ e_IOC_FM_PCD_EXTRACT_FULL_FIELD;
+ if (k == 0)
+ SCH_EXT_FULL_FLD(scheme_params, hdr_idx).tcp =
+ IOC_NET_HEADER_FIELD_TCP_PORT_SRC;
+ else
+ SCH_EXT_FULL_FLD(scheme_params, hdr_idx).tcp =
+ IOC_NET_HEADER_FIELD_TCP_PORT_DST;
+ hdr_idx++;
+ }
+ return hdr_idx;
+}
+
+static inline int set_hashParams_sctp(
+ ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx)
+{
+ int k;
+
+ for (k = 0; k < 2; k++) {
+ SCH_EXT_ARR(scheme_params, hdr_idx)->type =
+ e_IOC_FM_PCD_EXTRACT_BY_HDR;
+ SCH_EXT_HDR(scheme_params, hdr_idx).hdr =
+ HEADER_TYPE_SCTP;
+ SCH_EXT_HDR(scheme_params, hdr_idx).hdr_index =
+ e_IOC_FM_PCD_HDR_INDEX_NONE;
+ SCH_EXT_HDR(scheme_params, hdr_idx).type =
+ e_IOC_FM_PCD_EXTRACT_FULL_FIELD;
+ if (k == 0)
+ SCH_EXT_FULL_FLD(scheme_params, hdr_idx).sctp =
+ IOC_NET_HEADER_FIELD_SCTP_PORT_SRC;
+ else
+ SCH_EXT_FULL_FLD(scheme_params, hdr_idx).sctp =
+ IOC_NET_HEADER_FIELD_SCTP_PORT_DST;
+ hdr_idx++;
+ }
+ return hdr_idx;
+}
+
+/* Set scheme params for hash distribution */
+static int set_scheme_params(
+ ioc_fm_pcd_kg_scheme_params_t *scheme_params,
+ ioc_fm_pcd_net_env_params_t *dist_units,
+ struct dpaa_if *dpaa_intf,
+ struct fman_if *fif __rte_unused)
+{
+ int dist_idx, hdr_idx = 0;
+ PMD_INIT_FUNC_TRACE();
+
+ scheme_params->param.use_hash = 1;
+ scheme_params->param.modify = false;
+ scheme_params->param.always_direct = false;
+ scheme_params->param.scheme_counter.update = 1;
+ scheme_params->param.scheme_counter.value = 0;
+ scheme_params->param.next_engine = e_IOC_FM_PCD_DONE;
+ scheme_params->param.base_fqid = dpaa_intf->rx_queues[0].fqid;
+ scheme_params->param.net_env_params.net_env_id =
+ dpaa_intf->netenv_handle;
+ scheme_params->param.net_env_params.num_of_distinction_units =
+ dist_units->param.num_of_distinction_units;
+
+ scheme_params->param.key_extract_and_hash_params
+ .hash_distribution_num_of_fqids =
+ dpaa_intf->nb_rx_queues;
+ scheme_params->param.key_extract_and_hash_params
+ .num_of_used_extracts =
+ 2 * dist_units->param.num_of_distinction_units;
+
+ for (dist_idx = 0; dist_idx <
+ dist_units->param.num_of_distinction_units;
+ dist_idx++) {
+ switch (dist_units->param.units[dist_idx].hdrs[0].hdr) {
+ case HEADER_TYPE_ETH:
+ hdr_idx = set_hashParams_eth(scheme_params, hdr_idx);
+ break;
+
+ case HEADER_TYPE_IPv4:
+ hdr_idx = set_hashParams_ipv4(scheme_params, hdr_idx);
+ break;
+
+ case HEADER_TYPE_IPv6:
+ hdr_idx = set_hashParams_ipv6(scheme_params, hdr_idx);
+ break;
+
+ case HEADER_TYPE_UDP:
+ hdr_idx = set_hashParams_udp(scheme_params, hdr_idx);
+ break;
+
+ case HEADER_TYPE_TCP:
+ hdr_idx = set_hashParams_tcp(scheme_params, hdr_idx);
+ break;
+
+ case HEADER_TYPE_SCTP:
+ hdr_idx = set_hashParams_sctp(scheme_params, hdr_idx);
+ break;
+
+ default:
+ DPAA_PMD_ERR("Invalid Distinction Unit");
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
+ uint64_t req_dist_set)
+{
+ uint32_t loop = 0, dist_idx = 0, dist_field = 0;
+ int l2_configured = 0, ipv4_configured = 0, ipv6_configured = 0;
+ int udp_configured = 0, tcp_configured = 0, sctp_configured = 0;
+ PMD_INIT_FUNC_TRACE();
+
+ if (!req_dist_set)
+ dist_units->param.units[dist_idx++].hdrs[0].hdr =
+ HEADER_TYPE_ETH;
+
+ while (req_dist_set) {
+ if (req_dist_set % 2 != 0) {
+ dist_field = 1U << loop;
+ switch (dist_field) {
+ case ETH_RSS_L2_PAYLOAD:
+
+ if (l2_configured)
+ break;
+ l2_configured = 1;
+
+ dist_units->param.units[dist_idx++].hdrs[0].hdr =
+ HEADER_TYPE_ETH;
+ break;
+
+ case ETH_RSS_IPV4:
+ case ETH_RSS_FRAG_IPV4:
+ case ETH_RSS_NONFRAG_IPV4_OTHER:
+
+ if (ipv4_configured)
+ break;
+ ipv4_configured = 1;
+ dist_units->param.units[dist_idx++].hdrs[0].hdr =
+ HEADER_TYPE_IPv4;
+ break;
+
+ case ETH_RSS_IPV6:
+ case ETH_RSS_FRAG_IPV6:
+ case ETH_RSS_NONFRAG_IPV6_OTHER:
+ case ETH_RSS_IPV6_EX:
+
+ if (ipv6_configured)
+ break;
+ ipv6_configured = 1;
+ dist_units->param.units[dist_idx++].hdrs[0].hdr =
+ HEADER_TYPE_IPv6;
+ break;
+
+ case ETH_RSS_NONFRAG_IPV4_TCP:
+ case ETH_RSS_NONFRAG_IPV6_TCP:
+ case ETH_RSS_IPV6_TCP_EX:
+
+ if (tcp_configured)
+ break;
+ tcp_configured = 1;
+ dist_units->param.units[dist_idx++].hdrs[0].hdr =
+ HEADER_TYPE_TCP;
+ break;
+
+ case ETH_RSS_NONFRAG_IPV4_UDP:
+ case ETH_RSS_NONFRAG_IPV6_UDP:
+ case ETH_RSS_IPV6_UDP_EX:
+
+ if (udp_configured)
+ break;
+ udp_configured = 1;
+ dist_units->param.units[dist_idx++].hdrs[0].hdr =
+ HEADER_TYPE_UDP;
+ break;
+
+ case ETH_RSS_NONFRAG_IPV4_SCTP:
+ case ETH_RSS_NONFRAG_IPV6_SCTP:
+
+ if (sctp_configured)
+ break;
+ sctp_configured = 1;
+
+ dist_units->param.units[dist_idx++].hdrs[0].hdr =
+ HEADER_TYPE_SCTP;
+ break;
+
+ default:
+ DPAA_PMD_ERR("Bad flow distribution option");
+ }
+ }
+ req_dist_set = req_dist_set >> 1;
+ loop++;
+ }
+
+ /* Dist units is set to dist_idx */
+ dist_units->param.num_of_distinction_units = dist_idx;
+}
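set_dist_units() walks the requested RSS mask one bit at a time and maps each set bit to a header-type distinction unit, collapsing related flags (e.g. the several IPv4 variants) into a single unit, with Ethernet as the default when no bits are set. A condensed self-contained sketch of that bit-walk; the flag values and header codes are illustrative, not the real DPDK ETH_RSS_* or HEADER_TYPE_* values:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative RSS flag bits and header codes (not the DPDK values). */
#define RSS_IPV4	(1ULL << 2)
#define RSS_FRAG_IPV4	(1ULL << 3)
#define RSS_TCP		(1ULL << 4)

enum hdr { HDR_ETH, HDR_IPV4, HDR_TCP };

/* Walk the mask bit by bit, emitting one distinction unit per header
 * family; the *_done flags collapse related RSS bits into one unit.
 * Returns the number of units written to out[]. */
static int dist_units(uint64_t req, enum hdr *out)
{
	int idx = 0, loop = 0, ipv4_done = 0, tcp_done = 0;

	if (!req)
		out[idx++] = HDR_ETH;	/* default: hash on Ethernet only */

	while (req) {
		if (req & 1) {
			uint64_t field = 1ULL << loop;

			if ((field == RSS_IPV4 || field == RSS_FRAG_IPV4) &&
			    !ipv4_done) {
				ipv4_done = 1;
				out[idx++] = HDR_IPV4;
			} else if (field == RSS_TCP && !tcp_done) {
				tcp_done = 1;
				out[idx++] = HDR_TCP;
			}
		}
		req >>= 1;
		loop++;
	}
	return idx;
}
```

The unit count this produces feeds num_of_distinction_units, and set_scheme_params() later uses two key extracts (source and destination field) per unit.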
+
+/* Apply PCD configuration on interface */
+static inline int set_port_pcd(struct dpaa_if *dpaa_intf)
+{
+ int ret = 0;
+ unsigned int idx;
+ ioc_fm_port_pcd_params_t pcd_param;
+ ioc_fm_port_pcd_prs_params_t prs_param;
+ ioc_fm_port_pcd_kg_params_t kg_param;
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* PCD support for hash distribution */
+ uint8_t pcd_support = e_FM_PORT_PCD_SUPPORT_PRS_AND_KG;
+
+ memset(&pcd_param, 0, sizeof(pcd_param));
+ memset(&prs_param, 0, sizeof(prs_param));
+ memset(&kg_param, 0, sizeof(kg_param));
+
+ /* Set parse params */
+ prs_param.first_prs_hdr = HEADER_TYPE_ETH;
+
+ /* Set kg params */
+ for (idx = 0; idx < dpaa_intf->scheme_count; idx++)
+ kg_param.scheme_ids[idx] = dpaa_intf->scheme_handle[idx];
+ kg_param.num_of_schemes = dpaa_intf->scheme_count;
+
+ /* Set pcd params */
+ pcd_param.net_env_id = dpaa_intf->netenv_handle;
+ pcd_param.pcd_support = pcd_support;
+ pcd_param.p_kg_params = &kg_param;
+ pcd_param.p_prs_params = &prs_param;
+
+ /* FM PORT Disable */
+ ret = FM_PORT_Disable(dpaa_intf->port_handle);
+ if (ret != E_OK) {
+ DPAA_PMD_ERR("FM_PORT_Disable: Failed");
+ return ret;
+ }
+
+ /* FM PORT SetPCD */
+ ret = FM_PORT_SetPCD(dpaa_intf->port_handle, &pcd_param);
+ if (ret != E_OK) {
+ DPAA_PMD_ERR("FM_PORT_SetPCD: Failed");
+ return ret;
+ }
+
+ /* FM PORT Enable */
+ ret = FM_PORT_Enable(dpaa_intf->port_handle);
+ if (ret != E_OK) {
+ DPAA_PMD_ERR("FM_PORT_Enable: Failed");
+ goto fm_port_delete_pcd;
+ }
+
+ return 0;
+
+fm_port_delete_pcd:
+ /* FM PORT DeletePCD */
+ ret = FM_PORT_DeletePCD(dpaa_intf->port_handle);
+ if (ret != E_OK) {
+ DPAA_PMD_ERR("FM_PORT_DeletePCD: Failed\n");
+ return ret;
+ }
+ return -1;
+}
+
+/* Unset PCD NerEnv and scheme */
+static inline void unset_pcd_netenv_scheme(struct dpaa_if *dpaa_intf)
+{
+ int ret;
+ PMD_INIT_FUNC_TRACE();
+
+ /* reduce scheme count */
+ if (dpaa_intf->scheme_count)
+ dpaa_intf->scheme_count--;
+
+ DPAA_PMD_DEBUG("KG SCHEME DEL %d handle =%p",
+ dpaa_intf->scheme_count,
+ dpaa_intf->scheme_handle[dpaa_intf->scheme_count]);
+
+ ret = FM_PCD_KgSchemeDelete(
+ dpaa_intf->scheme_handle[dpaa_intf->scheme_count]);
+ if (ret != E_OK)
+ DPAA_PMD_ERR("FM_PCD_KgSchemeDelete: Failed");
+
+ dpaa_intf->scheme_handle[dpaa_intf->scheme_count] = NULL;
+}
+
+/* Set the default scheme */
+static inline int set_default_scheme(struct dpaa_if *dpaa_intf)
+{
+ ioc_fm_pcd_kg_scheme_params_t scheme_params;
+ int idx = dpaa_intf->scheme_count;
+ PMD_INIT_FUNC_TRACE();
+
+ /* Set PCD NetEnvCharacteristics */
+ memset(&scheme_params, 0, sizeof(scheme_params));
+
+ /* Add 10 to the default scheme id: the number of interfaces is
+ * less than 10, so this keeps the relative scheme ids unique
+ * across schemes.
+ */
+ scheme_params.param.scm_id.relative_scheme_id =
+ 10 + dpaa_intf->ifid;
+ scheme_params.param.use_hash = 0;
+ scheme_params.param.next_engine = e_IOC_FM_PCD_DONE;
+ scheme_params.param.net_env_params.num_of_distinction_units = 0;
+ scheme_params.param.net_env_params.net_env_id =
+ dpaa_intf->netenv_handle;
+ scheme_params.param.base_fqid = dpaa_intf->rx_queues[0].fqid;
+ scheme_params.param.key_extract_and_hash_params
+ .hash_distribution_num_of_fqids = 1;
+ scheme_params.param.key_extract_and_hash_params
+ .num_of_used_extracts = 0;
+ scheme_params.param.modify = false;
+ scheme_params.param.always_direct = false;
+ scheme_params.param.scheme_counter.update = 1;
+ scheme_params.param.scheme_counter.value = 0;
+
+ /* FM PCD KgSchemeSet */
+ dpaa_intf->scheme_handle[idx] =
+ FM_PCD_KgSchemeSet(fm_info.pcd_handle, &scheme_params);
+ DPAA_PMD_DEBUG("KG SCHEME SET %d handle =%p",
+ idx, dpaa_intf->scheme_handle[idx]);
+ if (!dpaa_intf->scheme_handle[idx]) {
+ DPAA_PMD_ERR("FM_PCD_KgSchemeSet: Failed");
+ return -1;
+ }
+
+ fm_model.scheme_devid[dpaa_intf->ifid][idx] =
+ GetDeviceId(dpaa_intf->scheme_handle[idx]);
+ dpaa_intf->scheme_count++;
+ return 0;
+}
+
+
+/* Set PCD NetEnv and distribution scheme */
+static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf,
+ uint64_t req_dist_set,
+ struct fman_if *fif)
+{
+ int ret = -1;
+ ioc_fm_pcd_net_env_params_t dist_units;
+ ioc_fm_pcd_kg_scheme_params_t scheme_params;
+ int idx = dpaa_intf->scheme_count;
+ PMD_INIT_FUNC_TRACE();
+
+ /* Set PCD NetEnvCharacteristics */
+ memset(&dist_units, 0, sizeof(dist_units));
+ memset(&scheme_params, 0, sizeof(scheme_params));
+
+ /* Set dist unit header type */
+ set_dist_units(&dist_units, req_dist_set);
+
+ scheme_params.param.scm_id.relative_scheme_id = dpaa_intf->ifid;
+
+ /* Set PCD Scheme params */
+ ret = set_scheme_params(&scheme_params, &dist_units, dpaa_intf, fif);
+ if (ret) {
+ DPAA_PMD_ERR("Set scheme params: Failed");
+ return -1;
+ }
+
+ /* FM PCD KgSchemeSet */
+ dpaa_intf->scheme_handle[idx] =
+ FM_PCD_KgSchemeSet(fm_info.pcd_handle, &scheme_params);
+ DPAA_PMD_DEBUG("KG SCHEME SET %d handle =%p",
+ idx, dpaa_intf->scheme_handle[idx]);
+ if (!dpaa_intf->scheme_handle[idx]) {
+ DPAA_PMD_ERR("FM_PCD_KgSchemeSet: Failed");
+ return -1;
+ }
+
+ fm_model.scheme_devid[dpaa_intf->ifid][idx] =
+ GetDeviceId(dpaa_intf->scheme_handle[idx]);
+ dpaa_intf->scheme_count++;
+ return 0;
+}
+
+
+static inline int get_port_type(struct fman_if *fif)
+{
+ if (fif->mac_type == fman_mac_1g)
+ return e_FM_PORT_TYPE_RX;
+ else if (fif->mac_type == fman_mac_2_5g)
+ return e_FM_PORT_TYPE_RX_2_5G;
+ else if (fif->mac_type == fman_mac_10g)
+ return e_FM_PORT_TYPE_RX_10G;
+
+ DPAA_PMD_ERR("MAC type unsupported");
+ return -1;
+}
+
+static inline int set_fm_port_handle(struct dpaa_if *dpaa_intf,
+ uint64_t req_dist_set,
+ struct fman_if *fif)
+{
+ t_FmPortParams fm_port_params;
+ ioc_fm_pcd_net_env_params_t dist_units;
+ PMD_INIT_FUNC_TRACE();
+
+ /* FMAN MAC index mappings (index 0 is unused;
+ * the first 8 entries are for 1G ports, the rest for 10G ports)
+ */
+ uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
+
+ /* Memset FM port params */
+ memset(&fm_port_params, 0, sizeof(fm_port_params));
+
+ /* Set FM port params */
+ fm_port_params.h_Fm = fm_info.fman_handle;
+ fm_port_params.portType = get_port_type(fif);
+ fm_port_params.portId = mac_idx[fif->mac_idx];
+
+ /* FM PORT Open */
+ dpaa_intf->port_handle = FM_PORT_Open(&fm_port_params);
+ if (!dpaa_intf->port_handle) {
+ DPAA_PMD_ERR("FM_PORT_Open: Failed\n");
+ return -1;
+ }
+
+ fm_model.fm_port_params[dpaa_intf->ifid] = fm_port_params;
+
+ /* Set PCD NetEnvCharacteristics */
+ memset(&dist_units, 0, sizeof(dist_units));
+
+ /* Set dist unit header type */
+ set_dist_units(&dist_units, req_dist_set);
+
+ /* FM PCD NetEnvCharacteristicsSet */
+ dpaa_intf->netenv_handle = FM_PCD_NetEnvCharacteristicsSet(
+ fm_info.pcd_handle, &dist_units);
+ if (!dpaa_intf->netenv_handle) {
+ DPAA_PMD_ERR("FM_PCD_NetEnvCharacteristicsSet: Failed");
+ return -1;
+ }
+
+ fm_model.netenv_devid[dpaa_intf->ifid] =
+ GetDeviceId(dpaa_intf->netenv_handle);
+
+ return 0;
+}
+
+/* De-Configure DPAA FM */
+int dpaa_fm_deconfig(struct dpaa_if *dpaa_intf,
+ struct fman_if *fif __rte_unused)
+{
+ int ret;
+ unsigned int idx;
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* FM PORT Disable */
+ ret = FM_PORT_Disable(dpaa_intf->port_handle);
+ if (ret != E_OK) {
+ DPAA_PMD_ERR("FM_PORT_Disable: Failed");
+ return ret;
+ }
+
+ /* FM PORT DeletePCD */
+ ret = FM_PORT_DeletePCD(dpaa_intf->port_handle);
+ if (ret != E_OK) {
+ DPAA_PMD_ERR("FM_PORT_DeletePCD: Failed");
+ return ret;
+ }
+
+ for (idx = 0; idx < dpaa_intf->scheme_count; idx++) {
+ DPAA_PMD_DEBUG("KG SCHEME DEL %d, handle =%p",
+ idx, dpaa_intf->scheme_handle[idx]);
+ /* FM PCD KgSchemeDelete */
+ ret = FM_PCD_KgSchemeDelete(dpaa_intf->scheme_handle[idx]);
+ if (ret != E_OK) {
+ DPAA_PMD_ERR("FM_PCD_KgSchemeDelete: Failed");
+ return ret;
+ }
+ dpaa_intf->scheme_handle[idx] = NULL;
+ }
+ /* FM PCD NetEnvCharacteristicsDelete */
+ ret = FM_PCD_NetEnvCharacteristicsDelete(dpaa_intf->netenv_handle);
+ if (ret != E_OK) {
+ DPAA_PMD_ERR("FM_PCD_NetEnvCharacteristicsDelete: Failed");
+ return ret;
+ }
+ dpaa_intf->netenv_handle = NULL;
+
+ /* FM PORT Close */
+ FM_PORT_Close(dpaa_intf->port_handle);
+ dpaa_intf->port_handle = NULL;
+
+ /* Set scheme count to 0 */
+ dpaa_intf->scheme_count = 0;
+
+ return 0;
+}
+
+int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct fman_if *fif = dev->process_private;
+ int ret;
+ unsigned int i = 0;
+ PMD_INIT_FUNC_TRACE();
+
+ if (dpaa_intf->port_handle) {
+ if (dpaa_fm_deconfig(dpaa_intf, fif))
+ DPAA_PMD_ERR("DPAA FM deconfig failed");
+ }
+
+ if (!dev->data->nb_rx_queues)
+ return 0;
+
+ if (dev->data->nb_rx_queues & (dev->data->nb_rx_queues - 1)) {
+ DPAA_PMD_ERR("No of queues should be power of 2");
+ return -1;
+ }
+
+ dpaa_intf->nb_rx_queues = dev->data->nb_rx_queues;
+
+ /* Open FM Port and set it in port info */
+ ret = set_fm_port_handle(dpaa_intf, req_dist_set, fif);
+ if (ret) {
+ DPAA_PMD_ERR("Set FM Port handle: Failed");
+ return -1;
+ }
+
+ /* Set PCD netenv and scheme */
+ if (req_dist_set) {
+ ret = set_pcd_netenv_scheme(dpaa_intf, req_dist_set, fif);
+ if (ret) {
+ DPAA_PMD_ERR("Set PCD NetEnv and Scheme dist: Failed");
+ goto unset_fm_port_handle;
+ }
+ }
+ /* Set default netenv and scheme */
+ ret = set_default_scheme(dpaa_intf);
+ if (ret) {
+ DPAA_PMD_ERR("Set PCD NetEnv and Scheme: Failed");
+ goto unset_pcd_netenv_scheme1;
+ }
+
+ /* Set Port PCD */
+ ret = set_port_pcd(dpaa_intf);
+ if (ret) {
+ DPAA_PMD_ERR("Set Port PCD: Failed");
+ goto unset_pcd_netenv_scheme;
+ }
+
+ for (; i < fm_model.dev_count; i++)
+ if (fm_model.device_order[i] == dpaa_intf->ifid)
+ return 0;
+
+ fm_model.device_order[fm_model.dev_count] = dpaa_intf->ifid;
+ fm_model.dev_count++;
+
+ return 0;
+
+unset_pcd_netenv_scheme:
+ unset_pcd_netenv_scheme(dpaa_intf);
+
+unset_pcd_netenv_scheme1:
+ unset_pcd_netenv_scheme(dpaa_intf);
+
+unset_fm_port_handle:
+ /* FM PORT Close */
+ FM_PORT_Close(dpaa_intf->port_handle);
+ dpaa_intf->port_handle = NULL;
+ return -1;
+}
+
+int dpaa_fm_init(void)
+{
+ t_Handle fman_handle;
+ t_Handle pcd_handle;
+ t_FmPcdParams fmPcdParams = {0};
+ /* Hard-coded: FMAN id 0, since only one FMAN is present in LS104x */
+ int fman_id = 0, ret;
+ PMD_INIT_FUNC_TRACE();
+
+ dpaa_read_fm_config_from_file();
+
+ /* FM Open */
+ fman_handle = FM_Open(fman_id);
+ if (!fman_handle) {
+ DPAA_PMD_ERR("FM_Open: Failed");
+ return -1;
+ }
+
+ /* FM PCD Open */
+ fmPcdParams.h_Fm = fman_handle;
+ fmPcdParams.prsSupport = true;
+ fmPcdParams.kgSupport = true;
+ pcd_handle = FM_PCD_Open(&fmPcdParams);
+ if (!pcd_handle) {
+ FM_Close(fman_handle);
+ DPAA_PMD_ERR("FM_PCD_Open: Failed");
+ return -1;
+ }
+
+ /* FM PCD Enable */
+ ret = FM_PCD_Enable(pcd_handle);
+ if (ret) {
+ FM_Close(fman_handle);
+ FM_PCD_Close(pcd_handle);
+ DPAA_PMD_ERR("FM_PCD_Enable: Failed");
+ return -1;
+ }
+
+ /* Set fman and pcd handle in fm info */
+ fm_info.fman_handle = fman_handle;
+ fm_info.pcd_handle = pcd_handle;
+
+ return 0;
+}
+
+
+/* De-initialization of FM */
+int dpaa_fm_term(void)
+{
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (fm_info.pcd_handle && fm_info.fman_handle) {
+ /* FM PCD Disable */
+ ret = FM_PCD_Disable(fm_info.pcd_handle);
+ if (ret) {
+ DPAA_PMD_ERR("FM_PCD_Disable: Failed");
+ return -1;
+ }
+
+ /* FM PCD Close */
+ FM_PCD_Close(fm_info.pcd_handle);
+ fm_info.pcd_handle = NULL;
+ }
+
+ if (fm_info.fman_handle) {
+ /* FM Close */
+ FM_Close(fm_info.fman_handle);
+ fm_info.fman_handle = NULL;
+ }
+
+ if (access(fm_log, F_OK) != -1) {
+ ret = remove(fm_log);
+ if (ret)
+ DPAA_PMD_ERR("File remove: Failed");
+ }
+ return 0;
+}
diff --git a/drivers/net/dpaa/dpaa_flow.h b/drivers/net/dpaa/dpaa_flow.h
new file mode 100644
index 000000000..d16bfec21
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_flow.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2017,2019 NXP
+ */
+
+#ifndef __DPAA_FLOW_H__
+#define __DPAA_FLOW_H__
+
+int dpaa_fm_init(void);
+int dpaa_fm_term(void);
+int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set);
+int dpaa_fm_deconfig(struct dpaa_if *dpaa_intf, struct fman_if *fif);
+void dpaa_write_fm_config_to_file(void);
+
+#endif
diff --git a/drivers/net/dpaa/meson.build b/drivers/net/dpaa/meson.build
index 94d509528..191500001 100644
--- a/drivers/net/dpaa/meson.build
+++ b/drivers/net/dpaa/meson.build
@@ -10,6 +10,7 @@ deps += ['mempool_dpaa']
sources = files('dpaa_ethdev.c',
'fmlib/fm_lib.c',
'fmlib/fm_vsp.c',
+ 'dpaa_flow.c',
'dpaa_rxtx.c')
if cc.has_argument('-Wno-pointer-arith')
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 18/37] bus/dpaa: add shared MAC support
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (16 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 17/37] net/dpaa: add support for fmcless mode Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 19/37] bus/dpaa: add Virtual Storage Profile port init Hemant Agrawal
` (20 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Radu Bulie, Jun Yang, Nipun Gupta
From: Radu Bulie <radu-andrei.bulie@nxp.com>
A shared MAC interface is an interface that can be used by both
the kernel and user space, based on the classification
configuration. It is defined in the DTS with the compatible
string "fsl,dpa-ethernet-shared". Its buffer pool is seeded by
the DPDK partition, while the interface itself is configured as a
netdev by the DPAA Linux Ethernet driver.
User-space buffers from the bpool are kmapped by the kernel.
Signed-off-by: Radu Bulie <radu-andrei.bulie@nxp.com>
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/bus/dpaa/base/fman/fman.c | 27 ++++++++++++++++++++++-----
drivers/bus/dpaa/include/fman.h | 2 ++
drivers/net/dpaa/dpaa_ethdev.c | 31 +++++++++++++++++--------------
drivers/net/dpaa/dpaa_flow.c | 18 ++++++++++++++----
4 files changed, 55 insertions(+), 23 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 33be9e5d7..3ae29bf06 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -167,13 +167,21 @@ fman_if_init(const struct device_node *dpa_node)
const char *mname, *fname;
const char *dname = dpa_node->full_name;
size_t lenp;
- int _errno;
+ int _errno, is_shared = 0;
const char *char_prop;
uint32_t na;
if (of_device_is_available(dpa_node) == false)
return 0;
+ if (!of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-init") &&
+ !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-shared")) {
+ return 0;
+ }
+
+ if (of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-shared"))
+ is_shared = 1;
+
rprop = "fsl,qman-frame-queues-rx";
mprop = "fsl,fman-mac";
@@ -387,7 +395,7 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- assert(lenp == (4 * sizeof(phandle)));
+ assert(lenp >= (4 * sizeof(phandle)));
na = of_n_addr_cells(mac_node);
/* Get rid of endianness (issues). Convert to host byte order */
@@ -408,7 +416,7 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- assert(lenp == (4 * sizeof(phandle)));
+ assert(lenp >= (4 * sizeof(phandle)));
/*TODO: Fix for other cases also */
na = of_n_addr_cells(mac_node);
/* Get rid of endianness (issues). Convert to host byte order */
@@ -508,6 +516,9 @@ fman_if_init(const struct device_node *dpa_node)
pools_phandle++;
}
+ if (is_shared)
+ __if->__if.is_shared_mac = 1;
+
/* Parsing of the network interface is complete, add it to the list */
DPAA_BUS_LOG(DEBUG, "Found %s, Tx Channel = %x, FMAN = %x,"
"Port ID = %x",
@@ -524,7 +535,7 @@ fman_if_init(const struct device_node *dpa_node)
int
fman_init(void)
{
- const struct device_node *dpa_node;
+ const struct device_node *dpa_node, *parent_node;
int _errno;
/* If multiple dependencies try to initialise the Fman driver, don't
@@ -539,7 +550,13 @@ fman_init(void)
return fman_ccsr_map_fd;
}
- for_each_compatible_node(dpa_node, NULL, "fsl,dpa-ethernet-init") {
+ parent_node = of_find_compatible_node(NULL, NULL, "fsl,dpaa");
+ if (!parent_node) {
+ DPAA_BUS_LOG(ERR, "Unable to find fsl,dpaa node");
+ return -ENODEV;
+ }
+
+ for_each_child_node(parent_node, dpa_node) {
_errno = fman_if_init(dpa_node);
if (_errno) {
FMAN_ERR(_errno, "if_init(%s)\n", dpa_node->full_name);
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 7a0a7d405..cb7f18ca2 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -320,6 +320,8 @@ struct fman_if {
struct rte_ether_addr mac_addr;
/* The Qman channel to schedule Tx FQs to */
u16 tx_channel_id;
+
+ uint8_t is_shared_mac;
/* The hard-coded FQIDs for this interface. Note: this doesn't cover
* the PCD nor the "Rx default" FQIDs, which are configured via FMC
* and its XML-based configuration.
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 1dbe2abf4..b004b7060 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -353,7 +353,8 @@ static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- fman_if_disable_rx(fif);
+ if (!fif->is_shared_mac)
+ fman_if_disable_rx(fif);
dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
}
@@ -1683,19 +1684,21 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
fman_intf->mac_addr.addr_bytes[4],
fman_intf->mac_addr.addr_bytes[5]);
-
- /* Disable RX mode */
- fman_if_discard_rx_errors(fman_intf);
- fman_if_disable_rx(fman_intf);
- /* Disable promiscuous mode */
- fman_if_promiscuous_disable(fman_intf);
- /* Disable multicast */
- fman_if_reset_mcast_filter_table(fman_intf);
- /* Reset interface statistics */
- fman_if_stats_reset(fman_intf);
- /* Disable SG by default */
- fman_if_set_sg(fman_intf, 0);
- fman_if_set_maxfrm(fman_intf, RTE_ETHER_MAX_LEN + VLAN_TAG_SIZE);
+ if (!fman_intf->is_shared_mac) {
+ /* Disable RX mode */
+ fman_if_discard_rx_errors(fman_intf);
+ fman_if_disable_rx(fman_intf);
+ /* Disable promiscuous mode */
+ fman_if_promiscuous_disable(fman_intf);
+ /* Disable multicast */
+ fman_if_reset_mcast_filter_table(fman_intf);
+ /* Reset interface statistics */
+ fman_if_stats_reset(fman_intf);
+ /* Disable SG by default */
+ fman_if_set_sg(fman_intf, 0);
+ fman_if_set_maxfrm(fman_intf,
+ RTE_ETHER_MAX_LEN + VLAN_TAG_SIZE);
+ }
return 0;
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index c7a4f87b1..42970a788 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -740,6 +740,14 @@ int dpaa_fm_deconfig(struct dpaa_if *dpaa_intf,
}
dpaa_intf->netenv_handle = NULL;
+ if (fif && fif->is_shared_mac) {
+ ret = FM_PORT_Enable(dpaa_intf->port_handle);
+ if (ret != E_OK) {
+ DPAA_PMD_ERR("shared mac re-enable failed");
+ return ret;
+ }
+ }
+
/* FM PORT Close */
FM_PORT_Close(dpaa_intf->port_handle);
dpaa_intf->port_handle = NULL;
@@ -789,10 +797,12 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set)
}
}
/* Set default netenv and scheme */
- ret = set_default_scheme(dpaa_intf);
- if (ret) {
- DPAA_PMD_ERR("Set PCD NetEnv and Scheme: Failed");
- goto unset_pcd_netenv_scheme1;
+ if (!fif->is_shared_mac) {
+ ret = set_default_scheme(dpaa_intf);
+ if (ret) {
+ DPAA_PMD_ERR("Set PCD NetEnv and Scheme: Failed");
+ goto unset_pcd_netenv_scheme1;
+ }
}
/* Set Port PCD */
--
2.17.1
* [dpdk-dev] [PATCH 19/37] bus/dpaa: add Virtual Storage Profile port init
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (17 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 18/37] bus/dpaa: add shared MAC support Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 20/37] net/dpaa: add support for Virtual Storage Profile Hemant Agrawal
` (19 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Hemant Agrawal
This patch adds support for initializing the VSP ports
in the FMAN library.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman.c | 57 +++++++++++++++++++++++++++++++
drivers/bus/dpaa/include/fman.h | 3 ++
2 files changed, 60 insertions(+)
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 3ae29bf06..39102bc1f 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -145,6 +145,61 @@ fman_get_mac_index(uint64_t regs_addr_host, uint8_t *mac_idx)
return ret;
}
+static void fman_if_vsp_init(struct __fman_if *__if)
+{
+ const phandle *prop;
+ int cell_index;
+ const struct device_node *dev;
+ size_t lenp;
+ const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
+
+ if (__if->__if.mac_type == fman_mac_1g) {
+ for_each_compatible_node(dev, NULL,
+ "fsl,fman-port-1g-rx-extended-args") {
+ prop = of_get_property(dev, "cell-index", &lenp);
+ if (prop) {
+ cell_index = of_read_number(
+ &prop[0],
+ lenp / sizeof(phandle));
+ if (cell_index == mac_idx[__if->__if.mac_idx]) {
+ prop = of_get_property(
+ dev,
+ "vsp-window", &lenp);
+ if (prop) {
+ __if->__if.num_profiles =
+ of_read_number(
+ &prop[0], 1);
+ __if->__if.base_profile_id =
+ of_read_number(
+ &prop[1], 1);
+ }
+ }
+ }
+ }
+ } else if (__if->__if.mac_type == fman_mac_10g) {
+ for_each_compatible_node(dev, NULL,
+ "fsl,fman-port-10g-rx-extended-args") {
+ prop = of_get_property(dev, "cell-index", &lenp);
+ if (prop) {
+ cell_index = of_read_number(
+ &prop[0], lenp / sizeof(phandle));
+ if (cell_index == mac_idx[__if->__if.mac_idx]) {
+ prop = of_get_property(
+ dev, "vsp-window", &lenp);
+ if (prop) {
+ __if->__if.num_profiles =
+ of_read_number(
+ &prop[0], 1);
+ __if->__if.base_profile_id =
+ of_read_number(
+ &prop[1], 1);
+ }
+ }
+ }
+ }
+ }
+}
+
static int
fman_if_init(const struct device_node *dpa_node)
{
@@ -519,6 +574,8 @@ fman_if_init(const struct device_node *dpa_node)
if (is_shared)
__if->__if.is_shared_mac = 1;
+ fman_if_vsp_init(__if);
+
/* Parsing of the network interface is complete, add it to the list */
DPAA_BUS_LOG(DEBUG, "Found %s, Tx Channel = %x, FMAN = %x,"
"Port ID = %x",
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index cb7f18ca2..dcf408372 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -321,6 +321,9 @@ struct fman_if {
/* The Qman channel to schedule Tx FQs to */
u16 tx_channel_id;
+ uint8_t base_profile_id;
+ uint8_t num_profiles;
+
uint8_t is_shared_mac;
/* The hard-coded FQIDs for this interface. Note: this doesn't cover
* the PCD nor the "Rx default" FQIDs, which are configured via FMC
--
2.17.1
* [dpdk-dev] [PATCH 20/37] net/dpaa: add support for Virtual Storage Profile
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (18 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 19/37] bus/dpaa: add Virtual Storage Profile port init Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 21/37] net/dpaa: add fmc parser support for VSP Hemant Agrawal
` (18 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
This patch adds support for the Virtual Storage Profile (VSP) feature.
With VSP support, the bpid is not allocated when the memory pool is
created; it is instead identified by the DPAA flow create API. The
memory pool of an Rx queue is attached to the specific BMan pool
indicated by the VSP ID when the Rx queue is set up.
For FMCLESS hash queues, the VSP base ID is assigned to each queue.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/bus/dpaa/include/fsl_qman.h | 1 +
drivers/net/dpaa/dpaa_ethdev.c | 135 +++++++++++++++++-----
drivers/net/dpaa/dpaa_ethdev.h | 7 ++
drivers/net/dpaa/dpaa_flow.c | 166 +++++++++++++++++++++++++++-
drivers/net/dpaa/dpaa_flow.h | 5 +
5 files changed, 286 insertions(+), 28 deletions(-)
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 0d9cfc339..4fcac0806 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1240,6 +1240,7 @@ struct qman_fq {
/* affined portal in case of static queue */
struct qman_portal *qp;
struct dpaa_bp_info *bp_array;
+ int8_t vsp_id;
volatile unsigned long flags;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index b004b7060..9cb3d213c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -657,6 +657,56 @@ static int dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
return 0;
}
+static void dpaa_fman_if_pool_setup(struct rte_eth_dev *dev)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct fman_if_ic_params icp;
+ uint32_t fd_offset;
+ uint32_t bp_size;
+
+ memset(&icp, 0, sizeof(icp));
+ /* set ICEOF to the default value, which is 0 */
+ icp.iciof = DEFAULT_ICIOF;
+ icp.iceof = DEFAULT_RX_ICEOF;
+ icp.icsz = DEFAULT_ICSZ;
+ fman_if_set_ic_params(dev->process_private, &icp);
+
+ fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE;
+ fman_if_set_fdoff(dev->process_private, fd_offset);
+
+ /* Buffer pool size should be equal to the dataroom size */
+ bp_size = rte_pktmbuf_data_room_size(dpaa_intf->bp_info->mp);
+
+ fman_if_set_bp(dev->process_private,
+ dpaa_intf->bp_info->mp->size,
+ dpaa_intf->bp_info->bpid, bp_size);
+}
+
+static inline int dpaa_eth_rx_queue_bp_check(
+ struct rte_eth_dev *dev, int8_t vsp_id, uint32_t bpid)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct fman_if *fif = dev->process_private;
+
+ if (fif->num_profiles) {
+ if (vsp_id < 0)
+ vsp_id = fif->base_profile_id;
+ } else {
+ if (vsp_id < 0)
+ vsp_id = 0;
+ }
+
+ if (dpaa_intf->vsp_bpid[vsp_id] &&
+ bpid != dpaa_intf->vsp_bpid[vsp_id]) {
+ DPAA_PMD_ERR(
+ "Multiple mempools are assigned to RXQs with the same VSP");
+
+ return -1;
+ }
+
+ return 0;
+}
+
static
int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc,
@@ -684,6 +734,20 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_PMD_INFO("Rx queue setup for queue index: %d fq_id (0x%x)",
queue_idx, rxq->fqid);
+ if (!fif->num_profiles) {
+ if (dpaa_intf->bp_info && dpaa_intf->bp_info->bp &&
+ dpaa_intf->bp_info->mp != mp) {
+ DPAA_PMD_WARN(
+ "Multiple pools on same interface not supported");
+ return -EINVAL;
+ }
+ } else {
+ if (dpaa_eth_rx_queue_bp_check(dev, rxq->vsp_id,
+ DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid)) {
+ return -EINVAL;
+ }
+ }
+
/* Max packet can fit in single buffer */
if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
;
@@ -706,36 +770,41 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
buffsz - RTE_PKTMBUF_HEADROOM);
}
- if (!dpaa_intf->bp_info || dpaa_intf->bp_info->mp != mp) {
- struct fman_if_ic_params icp;
- uint32_t fd_offset;
- uint32_t bp_size;
+ dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
- if (!mp->pool_data) {
- DPAA_PMD_ERR("Not an offloaded buffer pool!");
- return -1;
+ /* For shared interface, it's done in kernel, skip.*/
+ if (!fif->is_shared_mac)
+ dpaa_fman_if_pool_setup(dev);
+
+ if (fif->num_profiles) {
+ int8_t vsp_id = rxq->vsp_id;
+
+ if (vsp_id >= 0) {
+ ret = dpaa_port_vsp_update(dpaa_intf, fmc_q, vsp_id,
+ DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid,
+ fif);
+ if (ret) {
+ DPAA_PMD_ERR("dpaa_port_vsp_update failed");
+ return ret;
+ }
+ } else {
+ DPAA_PMD_INFO("Base profile is associated to"
+ " RXQ fqid:%d\r\n", rxq->fqid);
+ if (fif->is_shared_mac) {
+ DPAA_PMD_ERR(
+ "Fatal: Base profile is associated to"
+ " shared interface on DPDK.");
+ return -EINVAL;
+ }
+ dpaa_intf->vsp_bpid[fif->base_profile_id] =
+ DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid;
}
- dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
-
- memset(&icp, 0, sizeof(icp));
- /* set ICEOF for to the default value , which is 0*/
- icp.iciof = DEFAULT_ICIOF;
- icp.iceof = DEFAULT_RX_ICEOF;
- icp.icsz = DEFAULT_ICSZ;
- fman_if_set_ic_params(fif, &icp);
-
- fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE;
- fman_if_set_fdoff(fif, fd_offset);
-
- /* Buffer pool size should be equal to Dataroom Size*/
- bp_size = rte_pktmbuf_data_room_size(mp);
- fman_if_set_bp(fif, mp->size,
- dpaa_intf->bp_info->bpid, bp_size);
- dpaa_intf->valid = 1;
- DPAA_PMD_DEBUG("if:%s fd_offset = %d offset = %d",
- dpaa_intf->name, fd_offset,
- fman_if_get_fdoff(fif));
+ } else {
+ dpaa_intf->vsp_bpid[0] =
+ DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid;
}
+
+ dpaa_intf->valid = 1;
DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
fman_if_get_sg_enable(fif),
dev->data->dev_conf.rxmode.max_rx_pkt_len);
@@ -1481,6 +1550,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
uint32_t cgrid[DPAA_MAX_NUM_PCD_QUEUES];
uint32_t cgrid_tx[MAX_DPAA_CORES];
uint32_t dev_rx_fqids[DPAA_MAX_NUM_PCD_QUEUES];
+ int8_t dev_vspids[DPAA_MAX_NUM_PCD_QUEUES];
+ int8_t vsp_id = -1;
PMD_INIT_FUNC_TRACE();
@@ -1500,6 +1571,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
memset((char *)dev_rx_fqids, 0,
sizeof(uint32_t) * DPAA_MAX_NUM_PCD_QUEUES);
+ memset(dev_vspids, -1, DPAA_MAX_NUM_PCD_QUEUES);
+
/* Initialize Rx FQ's */
if (default_q) {
num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
@@ -1579,6 +1652,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
else
fqid = dev_rx_fqids[loop];
+ vsp_id = dev_vspids[loop];
+
if (dpaa_intf->cgr_rx)
dpaa_intf->cgr_rx[loop].cgrid = cgrid[loop];
@@ -1587,6 +1662,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
fqid);
if (ret)
goto free_rx;
+ dpaa_intf->rx_queues[loop].vsp_id = vsp_id;
dpaa_intf->rx_queues[loop].dpaa_intf = dpaa_intf;
}
dpaa_intf->nb_rx_queues = num_rx_fqs;
@@ -1927,6 +2003,11 @@ static void __attribute__((destructor(102))) dpaa_finish(void)
if (dpaa_fm_deconfig(dpaa_intf, fif))
DPAA_PMD_WARN("DPAA FM "
"deconfig failed\n");
+ if (fif->num_profiles) {
+ if (dpaa_port_vsp_cleanup(dpaa_intf,
+ fif))
+ DPAA_PMD_WARN("DPAA FM vsp cleanup failed\n");
+ }
}
}
if (is_global_init)
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b10c4a20b..dd182c4d5 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -103,6 +103,10 @@
#define DPAA_FD_CMD_CFQ 0x00ffffff
/**< Confirmation Frame Queue */
+#define DPAA_VSP_PROFILE_MAX_NUM 8
+
+#define DPAA_DEFAULT_RXQ_VSP_ID 1
+
/* Each network interface is represented by one of these */
struct dpaa_if {
int valid;
@@ -122,6 +126,9 @@ struct dpaa_if {
void *netenv_handle;
void *scheme_handle[2];
uint32_t scheme_count;
+
+ void *vsp_handle[DPAA_VSP_PROFILE_MAX_NUM];
+ uint32_t vsp_bpid[DPAA_VSP_PROFILE_MAX_NUM];
};
struct dpaa_if_stats {
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 42970a788..11348b3e0 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -12,6 +12,7 @@
#include <dpaa_flow.h>
#include <rte_dpaa_logs.h>
#include <fmlib/fm_port_ext.h>
+#include <fmlib/fm_vsp_ext.h>
#define DPAA_MAX_NUM_ETH_DEV 8
@@ -47,6 +48,17 @@ static struct dpaa_fm_info fm_info;
static struct dpaa_fm_model fm_model;
static const char *fm_log = "/tmp/fmdpdk.bin";
+static inline uint8_t fm_default_vsp_id(struct fman_if *fif)
+{
+ /* Avoid using the same id as the base profile, which may be
+ * used by the kernel interface of a shared MAC.
+ */
+ if (fif->base_profile_id)
+ return 0;
+ else
+ return DPAA_DEFAULT_RXQ_VSP_ID;
+}
+
static void fm_prev_cleanup(void)
{
uint32_t fman_id = 0, i = 0, devid;
@@ -301,11 +313,18 @@ static int set_scheme_params(
ioc_fm_pcd_kg_scheme_params_t *scheme_params,
ioc_fm_pcd_net_env_params_t *dist_units,
struct dpaa_if *dpaa_intf,
- struct fman_if *fif __rte_unused)
+ struct fman_if *fif)
{
int dist_idx, hdr_idx = 0;
PMD_INIT_FUNC_TRACE();
+ if (fif->num_profiles) {
+ scheme_params->param.override_storage_profile = true;
+ scheme_params->param.storage_profile.direct = true;
+ scheme_params->param.storage_profile.profile_select
+ .direct_relative_profileId = fm_default_vsp_id(fif);
+ }
+
scheme_params->param.use_hash = 1;
scheme_params->param.modify = false;
scheme_params->param.always_direct = false;
@@ -788,6 +807,14 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set)
return -1;
}
+ if (fif->num_profiles) {
+ for (i = 0; i < dpaa_intf->nb_rx_queues; i++)
+ dpaa_intf->rx_queues[i].vsp_id =
+ fm_default_vsp_id(fif);
+
+ i = 0;
+ }
+
/* Set PCD netenv and scheme */
if (req_dist_set) {
ret = set_pcd_netenv_scheme(dpaa_intf, req_dist_set, fif);
@@ -913,3 +940,140 @@ int dpaa_fm_term(void)
}
return 0;
}
+
+static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
+ uint8_t vsp_id, t_Handle fman_handle,
+ struct fman_if *fif)
+{
+ t_FmVspParams vsp_params;
+ t_FmBufferPrefixContent buf_prefix_cont;
+ uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
+ uint8_t idx = mac_idx[fif->mac_idx];
+ int ret;
+
+ if (vsp_id == fif->base_profile_id && fif->is_shared_mac) {
+ /* For shared interface, VSP of base
+ * profile is default pool located in kernel.
+ */
+ dpaa_intf->vsp_bpid[vsp_id] = 0;
+ return 0;
+ }
+
+ if (vsp_id >= DPAA_VSP_PROFILE_MAX_NUM) {
+ DPAA_PMD_ERR("VSP ID %d exceeds MAX number %d",
+ vsp_id, DPAA_VSP_PROFILE_MAX_NUM);
+ return -1;
+ }
+
+ memset(&vsp_params, 0, sizeof(vsp_params));
+ vsp_params.h_Fm = fman_handle;
+ vsp_params.relativeProfileId = vsp_id;
+ vsp_params.portParams.portId = idx;
+ if (fif->mac_type == fman_mac_1g) {
+ vsp_params.portParams.portType = e_FM_PORT_TYPE_RX;
+ } else if (fif->mac_type == fman_mac_2_5g) {
+ vsp_params.portParams.portType = e_FM_PORT_TYPE_RX_2_5G;
+ } else if (fif->mac_type == fman_mac_10g) {
+ vsp_params.portParams.portType = e_FM_PORT_TYPE_RX_10G;
+ } else {
+ DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
+ return -1;
+ }
+ vsp_params.extBufPools.numOfPoolsUsed = 1;
+ vsp_params.extBufPools.extBufPool[0].id =
+ dpaa_intf->vsp_bpid[vsp_id];
+ vsp_params.extBufPools.extBufPool[0].size =
+ RTE_MBUF_DEFAULT_BUF_SIZE;
+
+ dpaa_intf->vsp_handle[vsp_id] = FM_VSP_Config(&vsp_params);
+ if (!dpaa_intf->vsp_handle[vsp_id]) {
+ DPAA_PMD_ERR("FM_VSP_Config error for profile %d", vsp_id);
+ return -EINVAL;
+ }
+
+ /* configure the application buffer (structure, size and
+ * content)
+ */
+
+ memset(&buf_prefix_cont, 0, sizeof(buf_prefix_cont));
+
+ buf_prefix_cont.privDataSize = 16;
+ buf_prefix_cont.dataAlign = 64;
+ buf_prefix_cont.passPrsResult = true;
+ buf_prefix_cont.passTimeStamp = true;
+ buf_prefix_cont.passHashResult = false;
+ buf_prefix_cont.passAllOtherPCDInfo = false;
+ ret = FM_VSP_ConfigBufferPrefixContent(dpaa_intf->vsp_handle[vsp_id],
+ &buf_prefix_cont);
+ if (ret != E_OK) {
+ DPAA_PMD_ERR("FM_VSP_ConfigBufferPrefixContent error for profile %d err: %d",
+ vsp_id, ret);
+ return ret;
+ }
+
+ /* initialize the FM VSP module */
+ ret = FM_VSP_Init(dpaa_intf->vsp_handle[vsp_id]);
+ if (ret != E_OK) {
+ DPAA_PMD_ERR("FM_VSP_Init error for profile %d err:%d",
+ vsp_id, ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+int dpaa_port_vsp_update(struct dpaa_if *dpaa_intf,
+ bool fmc_mode, uint8_t vsp_id, uint32_t bpid,
+ struct fman_if *fif)
+{
+ int ret = 0;
+ t_Handle fman_handle;
+
+ if (!fif->num_profiles)
+ return 0;
+
+ if (vsp_id >= fif->num_profiles)
+ return 0;
+
+ if (dpaa_intf->vsp_bpid[vsp_id] == bpid)
+ return 0;
+
+ if (dpaa_intf->vsp_handle[vsp_id]) {
+ ret = FM_VSP_Free(dpaa_intf->vsp_handle[vsp_id]);
+ if (ret != E_OK) {
+ DPAA_PMD_ERR(
+ "Error FM_VSP_Free: "
+ "err %d vsp_handle[%d]",
+ ret, vsp_id);
+ return ret;
+ }
+ dpaa_intf->vsp_handle[vsp_id] = 0;
+ }
+
+ if (fmc_mode)
+ fman_handle = FM_Open(0);
+ else
+ fman_handle = fm_info.fman_handle;
+
+ dpaa_intf->vsp_bpid[vsp_id] = bpid;
+
+ return dpaa_port_vsp_configure(dpaa_intf, vsp_id, fman_handle, fif);
+}
+
+int dpaa_port_vsp_cleanup(struct dpaa_if *dpaa_intf, struct fman_if *fif)
+{
+ int idx, ret;
+
+ for (idx = 0; idx < (uint8_t)fif->num_profiles; idx++) {
+ if (dpaa_intf->vsp_handle[idx]) {
+ ret = FM_VSP_Free(dpaa_intf->vsp_handle[idx]);
+ if (ret != E_OK) {
+ DPAA_PMD_ERR("Error FM_VSP_Free: err %d vsp_handle[%d]",
+ ret, idx);
+ return ret;
+ }
+ }
+ }
+
+ return E_OK;
+}
diff --git a/drivers/net/dpaa/dpaa_flow.h b/drivers/net/dpaa/dpaa_flow.h
index d16bfec21..f5e131acf 100644
--- a/drivers/net/dpaa/dpaa_flow.h
+++ b/drivers/net/dpaa/dpaa_flow.h
@@ -10,5 +10,10 @@ int dpaa_fm_term(void);
int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set);
int dpaa_fm_deconfig(struct dpaa_if *dpaa_intf, struct fman_if *fif);
void dpaa_write_fm_config_to_file(void);
+int dpaa_port_vsp_update(struct dpaa_if *dpaa_intf,
+ bool fmc_mode, uint8_t vsp_id, uint32_t bpid, struct fman_if *fif);
+int dpaa_port_vsp_cleanup(struct dpaa_if *dpaa_intf, struct fman_if *fif);
+int dpaa_port_fmc_init(struct fman_if *fif,
+ uint32_t *fqids, int8_t *vspids, int max_nb_rxq);
#endif
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 21/37] net/dpaa: add fmc parser support for VSP
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (19 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 20/37] net/dpaa: add support for Virtual Storage Profile Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 22/37] net/dpaa: add RSS update func with FMCless Hemant Agrawal
` (17 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Parse the fmc.bin file generated by the fmc tool to set up
the Rx queues for each port in FMC mode.
The parser retrieves the fqids and vspids from fmc.bin.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa/Makefile | 1 +
drivers/net/dpaa/dpaa_ethdev.c | 26 +-
drivers/net/dpaa/dpaa_ethdev.h | 10 +-
drivers/net/dpaa/dpaa_fmc.c | 488 +++++++++++++++++++++++++++++++++
drivers/net/dpaa/meson.build | 3 +-
5 files changed, 521 insertions(+), 7 deletions(-)
create mode 100644 drivers/net/dpaa/dpaa_fmc.c
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index d334b82a0..f4a1c0ec5 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -32,6 +32,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += fmlib/fm_vsp.c
SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_flow.c
SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_fmc.c
LDLIBS += -lrte_bus_dpaa
LDLIBS += -lrte_mempool_dpaa
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 9cb3d213c..a508b10c3 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -261,6 +261,16 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
dev->data->scattered_rx = 1;
}
+ if (!(default_q || fmc_q)) {
+ if (dpaa_fm_config(dev,
+ eth_conf->rx_adv_conf.rss_conf.rss_hf)) {
+ dpaa_write_fm_config_to_file();
+ DPAA_PMD_ERR("FM port configuration: Failed\n");
+ return -1;
+ }
+ dpaa_write_fm_config_to_file();
+ }
+
/* if the interrupts were configured on this devices*/
if (intr_handle && intr_handle->fd) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
@@ -336,6 +346,9 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
+ if (!(default_q || fmc_q))
+ dpaa_write_fm_config_to_file();
+
/* Change tx callback to the real one */
if (dpaa_intf->cgr_tx)
dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
@@ -1577,7 +1590,18 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
if (default_q) {
num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
} else if (fmc_q) {
- num_rx_fqs = 1;
+ num_rx_fqs = dpaa_port_fmc_init(fman_intf, dev_rx_fqids,
+ dev_vspids,
+ DPAA_MAX_NUM_PCD_QUEUES);
+ if (num_rx_fqs < 0) {
+ DPAA_PMD_ERR("%s FMC initialization failed!",
+ dpaa_intf->name);
+ goto free_rx;
+ }
+ if (!num_rx_fqs) {
+ DPAA_PMD_WARN("%s is not configured by FMC.",
+ dpaa_intf->name);
+ }
} else {
/* FMCLESS mode, load balance to multiple cores.*/
num_rx_fqs = rte_lcore_count();
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index dd182c4d5..1b8e120e8 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -59,10 +59,10 @@
#endif
/* PCD frame queues */
-#define DPAA_PCD_FQID_START 0x400
-#define DPAA_PCD_FQID_MULTIPLIER 0x100
#define DPAA_DEFAULT_NUM_PCD_QUEUES 1
-#define DPAA_MAX_NUM_PCD_QUEUES 4
+#define DPAA_VSP_PROFILE_MAX_NUM 8
+#define DPAA_MAX_NUM_PCD_QUEUES DPAA_VSP_PROFILE_MAX_NUM
+/* Same as the number of VSP profiles */
#define DPAA_IF_TX_PRIORITY 3
#define DPAA_IF_RX_PRIORITY 0
@@ -103,10 +103,10 @@
#define DPAA_FD_CMD_CFQ 0x00ffffff
/**< Confirmation Frame Queue */
-#define DPAA_VSP_PROFILE_MAX_NUM 8
-
#define DPAA_DEFAULT_RXQ_VSP_ID 1
+#define FMC_FILE "/tmp/fmc.bin"
+
/* Each network interface is represented by one of these */
struct dpaa_if {
int valid;
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
new file mode 100644
index 000000000..b3b9a7e43
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -0,0 +1,488 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2017-2020 NXP
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <sys/types.h>
+
+#include <dpaa_ethdev.h>
+#include <dpaa_flow.h>
+#include <rte_dpaa_logs.h>
+#include <fmlib/fm_port_ext.h>
+#include <fmlib/fm_vsp_ext.h>
+
+#define FMC_OUTPUT_FORMAT_VER 0x106
+
+#define FMC_NAME_LEN 64
+#define FMC_FMAN_NUM 2
+#define FMC_PORTS_PER_FMAN 16
+#define FMC_SCHEMES_NUM 32
+#define FMC_SCHEME_PROTOCOLS_NUM 16
+#define FMC_CC_NODES_NUM 512
+#define FMC_REPLICATORS_NUM 16
+#define FMC_PLC_NUM 64
+#define MAX_SP_CODE_SIZE 0x7C0
+#define FMC_MANIP_MAX 64
+#define FMC_HMANIP_MAX 512
+#define FMC_INSERT_MAX 56
+#define FM_PCD_MAX_NUM_OF_REPS 64
+
+typedef struct fmc_port_t {
+ e_FmPortType type;
+ unsigned int number;
+ struct fm_pcd_net_env_params_t distinctionUnits;
+ struct ioc_fm_port_pcd_params_t pcdParam;
+ struct ioc_fm_port_pcd_prs_params_t prsParam;
+ struct ioc_fm_port_pcd_kg_params_t kgParam;
+ struct ioc_fm_port_pcd_cc_params_t ccParam;
+ char name[FMC_NAME_LEN];
+ char cctree_name[FMC_NAME_LEN];
+ t_Handle handle;
+ t_Handle env_id_handle;
+ t_Handle env_id_devId;
+ t_Handle cctree_handle;
+ t_Handle cctree_devId;
+
+ unsigned int schemes_count;
+ unsigned int schemes[FMC_SCHEMES_NUM];
+ unsigned int ccnodes_count;
+ unsigned int ccnodes[FMC_CC_NODES_NUM];
+ unsigned int htnodes_count;
+ unsigned int htnodes[FMC_CC_NODES_NUM];
+
+ unsigned int replicators_count;
+ unsigned int replicators[FMC_REPLICATORS_NUM];
+ ioc_fm_port_vsp_alloc_params_t vspParam;
+
+ unsigned int ccroot_count;
+ unsigned int ccroot[FMC_CC_NODES_NUM];
+ enum ioc_fm_pcd_engine ccroot_type[FMC_CC_NODES_NUM];
+ unsigned int ccroot_manip[FMC_CC_NODES_NUM];
+
+ unsigned int reasm_index;
+} fmc_port;
+
+typedef struct fmc_fman_t {
+ unsigned int number;
+ unsigned int port_count;
+ unsigned int ports[FMC_PORTS_PER_FMAN];
+ char name[FMC_NAME_LEN];
+ t_Handle handle;
+ char pcd_name[FMC_NAME_LEN];
+ t_Handle pcd_handle;
+ unsigned int kg_payload_offset;
+
+ unsigned int offload_support;
+
+ unsigned int reasm_count;
+ struct fm_pcd_manip_params_t reasm[FMC_MANIP_MAX];
+ char reasm_name[FMC_MANIP_MAX][FMC_NAME_LEN];
+ t_Handle reasm_handle[FMC_MANIP_MAX];
+ t_Handle reasm_devId[FMC_MANIP_MAX];
+
+ unsigned int frag_count;
+ struct fm_pcd_manip_params_t frag[FMC_MANIP_MAX];
+ char frag_name[FMC_MANIP_MAX][FMC_NAME_LEN];
+ t_Handle frag_handle[FMC_MANIP_MAX];
+ t_Handle frag_devId[FMC_MANIP_MAX];
+
+ unsigned int hdr_count;
+ struct fm_pcd_manip_params_t hdr[FMC_HMANIP_MAX];
+ uint8_t insertData[FMC_HMANIP_MAX][FMC_INSERT_MAX];
+ char hdr_name[FMC_HMANIP_MAX][FMC_NAME_LEN];
+ t_Handle hdr_handle[FMC_HMANIP_MAX];
+ t_Handle hdr_devId[FMC_HMANIP_MAX];
+ unsigned int hdr_hasNext[FMC_HMANIP_MAX];
+ unsigned int hdr_next[FMC_HMANIP_MAX];
+} fmc_fman;
+
+typedef enum fmc_apply_order_e {
+ FMCEngineStart,
+ FMCEngineEnd,
+ FMCPortStart,
+ FMCPortEnd,
+ FMCScheme,
+ FMCCCNode,
+ FMCHTNode,
+ FMCCCTree,
+ FMCPolicer,
+ FMCReplicator,
+ FMCManipulation
+} fmc_apply_order_e;
+
+typedef struct fmc_apply_order_t {
+ fmc_apply_order_e type;
+ unsigned int index;
+} fmc_apply_order;
+
+struct fmc_model_t {
+ unsigned int format_version;
+ unsigned int sp_enable;
+ t_FmPcdPrsSwParams sp;
+ uint8_t spcode[MAX_SP_CODE_SIZE];
+
+ unsigned int fman_count;
+ fmc_fman fman[FMC_FMAN_NUM];
+
+ unsigned int port_count;
+ fmc_port port[FMC_FMAN_NUM * FMC_PORTS_PER_FMAN];
+
+ unsigned int scheme_count;
+ char scheme_name[FMC_SCHEMES_NUM][FMC_NAME_LEN];
+ t_Handle scheme_handle[FMC_SCHEMES_NUM];
+ t_Handle scheme_devId[FMC_SCHEMES_NUM];
+ struct fm_pcd_kg_scheme_params_t scheme[FMC_SCHEMES_NUM];
+
+ unsigned int ccnode_count;
+ char ccnode_name[FMC_CC_NODES_NUM][FMC_NAME_LEN];
+ t_Handle ccnode_handle[FMC_CC_NODES_NUM];
+ t_Handle ccnode_devId[FMC_CC_NODES_NUM];
+ struct fm_pcd_cc_node_params_t ccnode[FMC_CC_NODES_NUM];
+ uint8_t cckeydata[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS]
+ [FM_PCD_MAX_SIZE_OF_KEY];
+ unsigned char ccmask[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS]
+ [FM_PCD_MAX_SIZE_OF_KEY];
+ unsigned int
+ ccentry_action_index[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS];
+ enum ioc_fm_pcd_engine
+ ccentry_action_type[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS];
+ unsigned char ccentry_frag[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS];
+ unsigned int ccentry_manip[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS];
+ unsigned int ccmiss_action_index[FMC_CC_NODES_NUM];
+ enum ioc_fm_pcd_engine ccmiss_action_type[FMC_CC_NODES_NUM];
+ unsigned char ccmiss_frag[FMC_CC_NODES_NUM];
+ unsigned int ccmiss_manip[FMC_CC_NODES_NUM];
+
+ unsigned int htnode_count;
+ char htnode_name[FMC_CC_NODES_NUM][FMC_NAME_LEN];
+ t_Handle htnode_handle[FMC_CC_NODES_NUM];
+ t_Handle htnode_devId[FMC_CC_NODES_NUM];
+ struct fm_pcd_hash_table_params_t htnode[FMC_CC_NODES_NUM];
+
+ unsigned int htentry_count[FMC_CC_NODES_NUM];
+ struct ioc_fm_pcd_cc_key_params_t
+ htentry[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS];
+ uint8_t htkeydata[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS]
+ [FM_PCD_MAX_SIZE_OF_KEY];
+ unsigned int
+ htentry_action_index[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS];
+ enum ioc_fm_pcd_engine
+ htentry_action_type[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS];
+ unsigned char htentry_frag[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS];
+ unsigned int htentry_manip[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS];
+
+ unsigned int htmiss_action_index[FMC_CC_NODES_NUM];
+ enum ioc_fm_pcd_engine htmiss_action_type[FMC_CC_NODES_NUM];
+ unsigned char htmiss_frag[FMC_CC_NODES_NUM];
+ unsigned int htmiss_manip[FMC_CC_NODES_NUM];
+
+ unsigned int replicator_count;
+ char replicator_name[FMC_REPLICATORS_NUM][FMC_NAME_LEN];
+ t_Handle replicator_handle[FMC_REPLICATORS_NUM];
+ t_Handle replicator_devId[FMC_REPLICATORS_NUM];
+ struct fm_pcd_frm_replic_group_params_t replicator[FMC_REPLICATORS_NUM];
+ unsigned int
+ repentry_action_index[FMC_REPLICATORS_NUM][FM_PCD_MAX_NUM_OF_REPS];
+ unsigned char repentry_frag[FMC_REPLICATORS_NUM][FM_PCD_MAX_NUM_OF_REPS];
+ unsigned int repentry_manip[FMC_REPLICATORS_NUM][FM_PCD_MAX_NUM_OF_REPS];
+
+ unsigned int policer_count;
+ char policer_name[FMC_PLC_NUM][FMC_NAME_LEN];
+ struct fm_pcd_plcr_profile_params_t policer[FMC_PLC_NUM];
+ t_Handle policer_handle[FMC_PLC_NUM];
+ t_Handle policer_devId[FMC_PLC_NUM];
+ unsigned int policer_action_index[FMC_PLC_NUM][3];
+
+ unsigned int apply_order_count;
+ fmc_apply_order apply_order[FMC_FMAN_NUM *
+ FMC_PORTS_PER_FMAN *
+ (FMC_SCHEMES_NUM + FMC_CC_NODES_NUM)];
+};
+
+struct fmc_model_t *g_fmc_model;
+
+static int dpaa_port_fmc_port_parse(
+ struct fman_if *fif,
+ const struct fmc_model_t *fmc_model,
+ int apply_idx)
+{
+ int current_port = fmc_model->apply_order[apply_idx].index;
+ const fmc_port *pport = &fmc_model->port[current_port];
+ const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
+ const uint8_t mac_type[] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2};
+
+ if (mac_idx[fif->mac_idx] != pport->number ||
+ mac_type[fif->mac_idx] != pport->type)
+ return -1;
+
+ return current_port;
+}
+
+static int dpaa_port_fmc_scheme_parse(
+ struct fman_if *fif,
+ const struct fmc_model_t *fmc_model,
+ int apply_idx,
+ uint16_t *rxq_idx, int max_nb_rxq,
+ uint32_t *fqids, int8_t *vspids)
+{
+ int scheme_idx = fmc_model->apply_order[apply_idx].index;
+ uint32_t i;
+
+ if (!fmc_model->scheme[scheme_idx].override_storage_profile &&
+ fif->is_shared_mac) {
+ DPAA_PMD_WARN("No VSP is assigned to scheme %d for sharemac %d!",
+ scheme_idx, fif->mac_idx);
+ DPAA_PMD_WARN("Risk of crash: pkts may be received from the kernel skb pool!");
+ }
+
+ if (e_IOC_FM_PCD_DONE ==
+ fmc_model->scheme[scheme_idx].next_engine) {
+ for (i = 0; i < fmc_model->scheme[scheme_idx]
+ .key_extract_and_hash_params.hash_distribution_num_of_fqids; i++) {
+ uint32_t fqid = fmc_model->scheme[scheme_idx].base_fqid + i;
+ int k, found = 0;
+
+ if (fqid == fif->fqid_rx_def) {
+ if (fif->is_shared_mac &&
+ fmc_model->scheme[scheme_idx].override_storage_profile &&
+ fmc_model->scheme[scheme_idx].storage_profile.direct &&
+ fmc_model->scheme[scheme_idx].storage_profile
+ .profile_select.direct_relative_profileId !=
+ fif->base_profile_id) {
+ DPAA_PMD_ERR(
+ "Default RXQ must be associated"
+ " with default VSP on sharemac!");
+
+ return -1;
+ }
+ continue;
+ }
+
+ if (fif->is_shared_mac &&
+ !fmc_model->scheme[scheme_idx].override_storage_profile) {
+ DPAA_PMD_ERR(
+ "RXQ to DPDK must be associated"
+ " with VSP on sharemac!");
+ return -1;
+ }
+
+ if (fif->is_shared_mac &&
+ fmc_model->scheme[scheme_idx].override_storage_profile &&
+ fmc_model->scheme[scheme_idx].storage_profile.direct &&
+ fmc_model->scheme[scheme_idx].storage_profile
+ .profile_select.direct_relative_profileId ==
+ fif->base_profile_id) {
+ DPAA_PMD_ERR(
+ "RXQ can't be associated"
+ " with default VSP on sharemac!");
+
+ return -1;
+ }
+
+ if ((*rxq_idx) >= max_nb_rxq) {
+ DPAA_PMD_DEBUG(
+ "Too many queues in FMC policy"
+ "%d overflow %d",
+ (*rxq_idx), max_nb_rxq);
+
+ continue;
+ }
+
+ for (k = 0; k < (*rxq_idx); k++) {
+ if (fqids[k] == fqid) {
+ found = 1;
+ break;
+ }
+ }
+
+ if (found)
+ continue;
+ fqids[(*rxq_idx)] = fqid;
+ if (fmc_model->scheme[scheme_idx].override_storage_profile) {
+ if (fmc_model->scheme[scheme_idx].storage_profile.direct) {
+ vspids[(*rxq_idx)] =
+ fmc_model->scheme[scheme_idx].storage_profile
+ .profile_select.direct_relative_profileId;
+ } else {
+ vspids[(*rxq_idx)] = -1;
+ }
+ } else {
+ vspids[(*rxq_idx)] = -1;
+ }
+ (*rxq_idx)++;
+ }
+ }
+
+ return 0;
+}
+
+static int dpaa_port_fmc_ccnode_parse(
+ struct fman_if *fif,
+ const struct fmc_model_t *fmc_model,
+ int apply_idx,
+ uint16_t *rxq_idx, int max_nb_rxq,
+ uint32_t *fqids, int8_t *vspids)
+{
+ uint16_t j, k, found = 0;
+ const struct ioc_keys_params_t *keys_params;
+ uint32_t fqid, cc_idx = fmc_model->apply_order[apply_idx].index;
+
+ keys_params = &fmc_model->ccnode[cc_idx].keys_params;
+
+ if ((*rxq_idx) >= max_nb_rxq) {
+ DPAA_PMD_WARN(
+ "Too many queues in FMC policy"
+ "%d overflow %d",
+ (*rxq_idx), max_nb_rxq);
+
+ return 0;
+ }
+
+ for (j = 0; j < keys_params->num_of_keys; ++j) {
+ found = 0;
+ fqid = keys_params->key_params[j].cc_next_engine_params
+ .params.enqueue_params.new_fqid;
+
+ if (keys_params->key_params[j].cc_next_engine_params
+ .next_engine != e_IOC_FM_PCD_DONE) {
+ DPAA_PMD_WARN("FMC CC next engine not supported");
+ continue;
+ }
+ if (keys_params->key_params[j].cc_next_engine_params
+ .params.enqueue_params.action !=
+ e_IOC_FM_PCD_ENQ_FRAME)
+ continue;
+ for (k = 0; k < (*rxq_idx); k++) {
+ if (fqids[k] == fqid) {
+ found = 1;
+ break;
+ }
+ }
+ if (found)
+ continue;
+
+ if ((*rxq_idx) >= max_nb_rxq) {
+ DPAA_PMD_WARN(
+ "Too many queues in FMC policy"
+ "%d overflow %d",
+ (*rxq_idx), max_nb_rxq);
+
+ return 0;
+ }
+
+ fqids[(*rxq_idx)] = fqid;
+ vspids[(*rxq_idx)] =
+ keys_params->key_params[j].cc_next_engine_params
+ .params.enqueue_params
+ .new_relative_storage_profile_id;
+
+ if (vspids[(*rxq_idx)] == fif->base_profile_id &&
+ fif->is_shared_mac) {
+ DPAA_PMD_ERR(
+ "VSP %d can NOT be used on DPDK.",
+ vspids[(*rxq_idx)]);
+ DPAA_PMD_ERR(
+ "It is associated to skb pool of shared interface.");
+
+ return -1;
+ }
+ (*rxq_idx)++;
+ }
+
+ return 0;
+}
+
+int dpaa_port_fmc_init(struct fman_if *fif,
+ uint32_t *fqids, int8_t *vspids, int max_nb_rxq)
+{
+ int current_port = -1, ret;
+ uint16_t rxq_idx = 0;
+ const struct fmc_model_t *fmc_model;
+ uint32_t i;
+
+ if (!g_fmc_model) {
+ size_t bytes_read;
+ FILE *fp = fopen(FMC_FILE, "rb");
+
+ if (!fp) {
+ DPAA_PMD_ERR("%s does not exist", FMC_FILE);
+ return -1;
+ }
+
+ g_fmc_model = rte_malloc(NULL, sizeof(struct fmc_model_t), 64);
+ if (!g_fmc_model) {
+ DPAA_PMD_ERR("FMC memory alloc failed");
+ fclose(fp);
+ return -1;
+ }
+
+ bytes_read = fread(g_fmc_model,
+ sizeof(struct fmc_model_t), 1, fp);
+ if (!bytes_read) {
+ DPAA_PMD_ERR("No bytes read");
+ fclose(fp);
+ rte_free(g_fmc_model);
+ g_fmc_model = NULL;
+ return -1;
+ }
+ fclose(fp);
+ }
+
+ fmc_model = g_fmc_model;
+
+ if (fmc_model->format_version != FMC_OUTPUT_FORMAT_VER)
+ return -1;
+
+ for (i = 0; i < fmc_model->apply_order_count; i++) {
+ switch (fmc_model->apply_order[i].type) {
+ case FMCEngineStart:
+ break;
+ case FMCEngineEnd:
+ break;
+ case FMCPortStart:
+ current_port = dpaa_port_fmc_port_parse(
+ fif, fmc_model, i);
+ break;
+ case FMCPortEnd:
+ break;
+ case FMCScheme:
+ if (current_port < 0)
+ break;
+
+ ret = dpaa_port_fmc_scheme_parse(
+ fif, fmc_model,
+ i, &rxq_idx, max_nb_rxq, fqids, vspids);
+ if (ret)
+ return ret;
+
+ break;
+ case FMCCCNode:
+ if (current_port < 0)
+ break;
+
+ ret = dpaa_port_fmc_ccnode_parse(fif, fmc_model,
+ i, &rxq_idx, max_nb_rxq, fqids, vspids);
+ if (ret)
+ return ret;
+
+ break;
+ case FMCHTNode:
+ break;
+ case FMCReplicator:
+ break;
+ case FMCCCTree:
+ break;
+ case FMCPolicer:
+ break;
+ case FMCManipulation:
+ break;
+ default:
+ break;
+ }
+ }
+
+ return rxq_idx;
+}
diff --git a/drivers/net/dpaa/meson.build b/drivers/net/dpaa/meson.build
index 191500001..451c68823 100644
--- a/drivers/net/dpaa/meson.build
+++ b/drivers/net/dpaa/meson.build
@@ -11,7 +11,8 @@ sources = files('dpaa_ethdev.c',
'fmlib/fm_lib.c',
'fmlib/fm_vsp.c',
'dpaa_flow.c',
- 'dpaa_rxtx.c')
+ 'dpaa_rxtx.c',
+ 'dpaa_fmc.c')
if cc.has_argument('-Wno-pointer-arith')
cflags += '-Wno-pointer-arith'
--
2.17.1
* [dpdk-dev] [PATCH 22/37] net/dpaa: add RSS update func with FMCless
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (20 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 21/37] net/dpaa: add fmc parser support for VSP Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 23/37] net/dpaa2: dynamic flow control support Hemant Agrawal
` (16 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Hemant Agrawal, Sachin Saxena
From: Sachin Saxena <sachin.saxena@nxp.com>
In FMCLESS mode, RSS can now be modified at runtime.
This patch adds support for the RSS update functions.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 37 ++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index a508b10c3..478153cfe 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1222,6 +1222,41 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
return ret;
}
+static int
+dpaa_dev_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ struct rte_eth_conf *eth_conf = &data->dev_conf;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (!(default_q || fmc_q)) {
+ if (dpaa_fm_config(dev, rss_conf->rss_hf)) {
+ DPAA_PMD_ERR("FM port configuration: Failed\n");
+ return -1;
+ }
+ eth_conf->rx_adv_conf.rss_conf.rss_hf = rss_conf->rss_hf;
+ } else {
+ DPAA_PMD_ERR("Function not supported\n");
+ return -ENOTSUP;
+ }
+ return 0;
+}
+
+static int
+dpaa_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ struct rte_eth_conf *eth_conf = &data->dev_conf;
+
+ /* dpaa does not support rss_key, so length should be 0 */
+ rss_conf->rss_key_len = 0;
+ rss_conf->rss_hf = eth_conf->rx_adv_conf.rss_conf.rss_hf;
+ return 0;
+}
+
static int dpaa_dev_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
@@ -1296,6 +1331,8 @@ static struct eth_dev_ops dpaa_devops = {
.rx_queue_intr_enable = dpaa_dev_queue_intr_enable,
.rx_queue_intr_disable = dpaa_dev_queue_intr_disable,
+ .rss_hash_update = dpaa_dev_rss_hash_update,
+ .rss_hash_conf_get = dpaa_dev_rss_hash_conf_get,
};
static bool
--
2.17.1
* [dpdk-dev] [PATCH 23/37] net/dpaa2: dynamic flow control support
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (21 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 22/37] net/dpaa: add RSS update func with FMCless Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 24/37] net/dpaa2: key extracts of flow API Hemant Agrawal
` (15 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
A dynamic key layout is used instead of a predefined one.
The actual key/mask size depends on the protocols and/or
fields of the patterns specified.
Also, the key and mask now start at the beginning of the IOVA.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 146 ++++++++-------------------------
1 file changed, 34 insertions(+), 112 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 8aa65db30..05d115c78 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -33,29 +33,6 @@ struct rte_flow {
uint16_t flow_id;
};
-/* Layout for rule compositions for supported patterns */
-/* TODO: Current design only supports Ethernet + IPv4 based classification. */
-/* So corresponding offset macros are valid only. Rest are placeholder for */
-/* now. Once support for other netwrok headers will be added then */
-/* corresponding macros will be updated with correct values*/
-#define DPAA2_CLS_RULE_OFFSET_ETH 0 /*Start of buffer*/
-#define DPAA2_CLS_RULE_OFFSET_VLAN 14 /* DPAA2_CLS_RULE_OFFSET_ETH */
- /* + Sizeof Eth fields */
-#define DPAA2_CLS_RULE_OFFSET_IPV4 14 /* DPAA2_CLS_RULE_OFFSET_VLAN */
- /* + Sizeof VLAN fields */
-#define DPAA2_CLS_RULE_OFFSET_IPV6 25 /* DPAA2_CLS_RULE_OFFSET_IPV4 */
- /* + Sizeof IPV4 fields */
-#define DPAA2_CLS_RULE_OFFSET_ICMP 58 /* DPAA2_CLS_RULE_OFFSET_IPV6 */
- /* + Sizeof IPV6 fields */
-#define DPAA2_CLS_RULE_OFFSET_UDP 60 /* DPAA2_CLS_RULE_OFFSET_ICMP */
- /* + Sizeof ICMP fields */
-#define DPAA2_CLS_RULE_OFFSET_TCP 64 /* DPAA2_CLS_RULE_OFFSET_UDP */
- /* + Sizeof UDP fields */
-#define DPAA2_CLS_RULE_OFFSET_SCTP 68 /* DPAA2_CLS_RULE_OFFSET_TCP */
- /* + Sizeof TCP fields */
-#define DPAA2_CLS_RULE_OFFSET_GRE 72 /* DPAA2_CLS_RULE_OFFSET_SCTP */
- /* + Sizeof SCTP fields */
-
static const
enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
RTE_FLOW_ITEM_TYPE_END,
@@ -212,7 +189,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
(pattern->mask ? pattern->mask : default_mask);
/* Key rule */
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_ETH;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(spec->src.addr_bytes),
sizeof(struct rte_ether_addr));
key_iova += sizeof(struct rte_ether_addr);
@@ -223,7 +200,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
sizeof(rte_be16_t));
/* Key mask */
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_ETH;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(mask->src.addr_bytes),
sizeof(struct rte_ether_addr));
mask_iova += sizeof(struct rte_ether_addr);
@@ -233,9 +210,9 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
memcpy((void *)mask_iova, (const void *)(&mask->type),
sizeof(rte_be16_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_ETH +
- ((2 * sizeof(struct rte_ether_addr)) +
- sizeof(rte_be16_t)));
+ flow->key_size += ((2 * sizeof(struct rte_ether_addr)) +
+ sizeof(rte_be16_t));
+
return device_configured;
}
@@ -335,15 +312,15 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
mask = (const struct rte_flow_item_vlan *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_VLAN;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(&spec->tci),
sizeof(rte_be16_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_VLAN;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(&mask->tci),
sizeof(rte_be16_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_VLAN + sizeof(rte_be16_t));
+ flow->key_size += sizeof(rte_be16_t);
return device_configured;
}
@@ -474,7 +451,7 @@ dpaa2_configure_flow_ipv4(struct rte_flow *flow,
mask = (const struct rte_flow_item_ipv4 *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)&spec->hdr.src_addr,
sizeof(uint32_t));
key_iova += sizeof(uint32_t);
@@ -484,7 +461,7 @@ dpaa2_configure_flow_ipv4(struct rte_flow *flow,
memcpy((void *)key_iova, (const void *)&spec->hdr.next_proto_id,
sizeof(uint8_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_IPV4;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)&mask->hdr.src_addr,
sizeof(uint32_t));
mask_iova += sizeof(uint32_t);
@@ -494,9 +471,7 @@ dpaa2_configure_flow_ipv4(struct rte_flow *flow,
memcpy((void *)mask_iova, (const void *)&mask->hdr.next_proto_id,
sizeof(uint8_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_IPV4 +
- (2 * sizeof(uint32_t)) + sizeof(uint8_t));
-
+ flow->key_size += (2 * sizeof(uint32_t)) + sizeof(uint8_t);
return device_configured;
}
@@ -613,23 +588,22 @@ dpaa2_configure_flow_ipv6(struct rte_flow *flow,
mask = (const struct rte_flow_item_ipv6 *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV6;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(spec->hdr.src_addr),
sizeof(spec->hdr.src_addr));
key_iova += sizeof(spec->hdr.src_addr);
memcpy((void *)key_iova, (const void *)(spec->hdr.dst_addr),
sizeof(spec->hdr.dst_addr));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_IPV6;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(mask->hdr.src_addr),
sizeof(mask->hdr.src_addr));
mask_iova += sizeof(mask->hdr.src_addr);
memcpy((void *)mask_iova, (const void *)(mask->hdr.dst_addr),
sizeof(mask->hdr.dst_addr));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_IPV6 +
- sizeof(spec->hdr.src_addr) +
- sizeof(mask->hdr.dst_addr));
+ flow->key_size += sizeof(spec->hdr.src_addr) +
+ sizeof(mask->hdr.dst_addr);
return device_configured;
}
@@ -746,22 +720,21 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
mask = (const struct rte_flow_item_icmp *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_ICMP;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_type,
sizeof(uint8_t));
key_iova += sizeof(uint8_t);
memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_code,
sizeof(uint8_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_ICMP;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_type,
sizeof(uint8_t));
key_iova += sizeof(uint8_t);
memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_code,
sizeof(uint8_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_ICMP +
- (2 * sizeof(uint8_t)));
+ flow->key_size += 2 * sizeof(uint8_t);
return device_configured;
}
@@ -837,13 +810,6 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
priv->extract.qos_key_cfg.extracts[index].type =
DPKG_EXTRACT_FROM_HDR;
priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -862,13 +828,6 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
priv->extract.fs_key_cfg[group].extracts[index].type =
DPKG_EXTRACT_FROM_HDR;
priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -892,25 +851,21 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
mask = (const struct rte_flow_item_udp *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4 +
- (2 * sizeof(uint32_t));
- memset((void *)key_iova, 0x11, sizeof(uint8_t));
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_UDP;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
sizeof(uint16_t));
key_iova += sizeof(uint16_t);
memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
sizeof(uint16_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_UDP;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
sizeof(uint16_t));
mask_iova += sizeof(uint16_t);
memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
sizeof(uint16_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_UDP +
- (2 * sizeof(uint16_t)));
+ flow->key_size += (2 * sizeof(uint16_t));
return device_configured;
}
@@ -986,13 +941,6 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
priv->extract.qos_key_cfg.extracts[index].type =
DPKG_EXTRACT_FROM_HDR;
priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -1012,13 +960,6 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
priv->extract.fs_key_cfg[group].extracts[index].type =
DPKG_EXTRACT_FROM_HDR;
priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -1042,25 +983,21 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
mask = (const struct rte_flow_item_tcp *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4 +
- (2 * sizeof(uint32_t));
- memset((void *)key_iova, 0x06, sizeof(uint8_t));
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_TCP;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
sizeof(uint16_t));
key_iova += sizeof(uint16_t);
memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
sizeof(uint16_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_TCP;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
sizeof(uint16_t));
mask_iova += sizeof(uint16_t);
memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
sizeof(uint16_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_TCP +
- (2 * sizeof(uint16_t)));
+ flow->key_size += 2 * sizeof(uint16_t);
return device_configured;
}
@@ -1136,13 +1073,6 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
priv->extract.qos_key_cfg.extracts[index].type =
DPKG_EXTRACT_FROM_HDR;
priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -1162,13 +1092,6 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
priv->extract.fs_key_cfg[group].extracts[index].type =
DPKG_EXTRACT_FROM_HDR;
priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -1192,25 +1115,22 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
mask = (const struct rte_flow_item_sctp *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4 +
- (2 * sizeof(uint32_t));
- memset((void *)key_iova, 0x84, sizeof(uint8_t));
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_SCTP;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
sizeof(uint16_t));
key_iova += sizeof(uint16_t);
memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
sizeof(uint16_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_SCTP;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
sizeof(uint16_t));
mask_iova += sizeof(uint16_t);
memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
sizeof(uint16_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_SCTP +
- (2 * sizeof(uint16_t)));
+ flow->key_size += 2 * sizeof(uint16_t);
+
return device_configured;
}
@@ -1313,15 +1233,15 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
mask = (const struct rte_flow_item_gre *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_GRE;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(&spec->protocol),
sizeof(rte_be16_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_GRE;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(&mask->protocol),
sizeof(rte_be16_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_GRE + sizeof(rte_be16_t));
+ flow->key_size += sizeof(rte_be16_t);
return device_configured;
}
@@ -1503,6 +1423,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
action.flow_id = action.flow_id % nic_attr.num_rx_tcs;
index = flow->index + (flow->tc_id * nic_attr.fs_entries);
+ flow->rule.key_size = flow->key_size;
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &flow->rule,
flow->tc_id, index,
@@ -1606,6 +1527,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
/* Add Rule into QoS table */
index = flow->index + (flow->tc_id * nic_attr.fs_entries);
+ flow->rule.key_size = flow->key_size;
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
&flow->rule, flow->tc_id,
index, 0, 0);
@@ -1862,7 +1784,7 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
flow->rule.key_iova = key_iova;
flow->rule.mask_iova = mask_iova;
- flow->rule.key_size = 0;
+ flow->key_size = 0;
switch (dpaa2_filter_type) {
case RTE_ETH_FILTER_GENERIC:
--
2.17.1
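The hunks above replace per-protocol fixed offsets (the `DPAA2_CLS_RULE_OFFSET_*` constants) with a running `flow->key_size`, so each matched item appends its fields at the current end of the key/mask buffers and the final size is copied into the rule just before `dpni_add_qos_entry()`. A minimal sketch of that accumulation idea, using hypothetical simplified types (not the driver's actual `dpni_rule_cfg` layout):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified rule: key and mask grow as pattern items
 * are parsed, tracked by one running offset instead of fixed offsets.
 */
struct toy_rule {
	uint8_t key[64];
	uint8_t mask[64];
	uint8_t key_size;	/* next field is placed at this offset */
};

/* Append one field to both key and mask at the current offset. */
static void toy_rule_append(struct toy_rule *r,
			    const void *key, const void *mask, uint8_t len)
{
	memcpy(r->key + r->key_size, key, len);
	memcpy(r->mask + r->key_size, mask, len);
	r->key_size += len;	/* cumulative, no per-protocol constants */
}
```

For example, a UDP item appends the source and destination ports (2 x 2 bytes) after whatever L2/L3 fields were already placed, matching the `flow->key_size += 2 * sizeof(uint16_t)` lines in the hunks above.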
* [dpdk-dev] [PATCH 24/37] net/dpaa2: key extracts of flow API
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (22 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 23/37] net/dpaa2: dynamic flow control support Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 25/37] net/dpaa2: sanity check for flow extracts Hemant Agrawal
` (14 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
1) Support QoS extracts and TC extracts for multiple TCs.
2) The protocol type of the L2 extract is used to parse L3;
the next-protocol field of the L3 extract is used to parse L4.
3) Use generic IP key extracts instead of separate IPv4 and IPv6 extracts.
4) Special handling for IP address extracts:
put the IP(v4/v6) address extract(s)/rule(s) at the end of the extracts
array so that the remaining fields stay at fixed positions.
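The fixed-position property from point 4 can be sketched as follows: each fixed-size field is assigned the next free offset cumulatively, and the patch below additionally keeps variable-length IP address extracts at the tail so earlier offsets never shift. A hypothetical simplified version of that bookkeeping (not the driver's actual `dpaa2_key_info`):

```c
#include <stdint.h>

#define MAX_EXTRACTS 10

/* Simplified offset table: fixed-size fields pack from offset 0. */
struct toy_key_info {
	uint8_t key_offset[MAX_EXTRACTS];
	uint8_t key_size[MAX_EXTRACTS];
	uint8_t key_total_size;
	int n;
};

/* Append a field at the next free offset; offsets of fields added
 * earlier never change. Returns the extract index, or -1 on overflow.
 */
static int toy_extract_add(struct toy_key_info *ki, uint8_t size)
{
	if (ki->n >= MAX_EXTRACTS)
		return -1;
	ki->key_offset[ki->n] = ki->n ?
		(uint8_t)(ki->key_offset[ki->n - 1] +
			  ki->key_size[ki->n - 1]) : 0;
	ki->key_size[ki->n] = size;
	ki->key_total_size += size;
	return ki->n++;
}
```

In the real driver the IP src/dst extracts are tracked separately (the `ipv4/ipv6_*_offset` fields added by this patch) because their size depends on the IP version matched at rule time.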
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 35 +-
drivers/net/dpaa2/dpaa2_ethdev.h | 43 +-
drivers/net/dpaa2/dpaa2_flow.c | 3628 +++++++++++++++++++++---------
3 files changed, 2665 insertions(+), 1041 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index d3eb10459..60c2ded40 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1,7 +1,7 @@
/* * SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016 NXP
+ * Copyright 2016-2020 NXP
*
*/
@@ -2503,23 +2503,41 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->tx_pkt_burst = dpaa2_dev_tx;
/*Init fields w.r.t. classficaition*/
- memset(&priv->extract.qos_key_cfg, 0, sizeof(struct dpkg_profile_cfg));
+ memset(&priv->extract.qos_key_extract, 0,
+ sizeof(struct dpaa2_key_extract));
priv->extract.qos_extract_param = (size_t)rte_malloc(NULL, 256, 64);
if (!priv->extract.qos_extract_param) {
DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow "
" classificaiton ", ret);
goto init_err;
}
+ priv->extract.qos_key_extract.key_info.ipv4_src_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ priv->extract.qos_key_extract.key_info.ipv4_dst_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ priv->extract.qos_key_extract.key_info.ipv6_src_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ priv->extract.qos_key_extract.key_info.ipv6_dst_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+
for (i = 0; i < MAX_TCS; i++) {
- memset(&priv->extract.fs_key_cfg[i], 0,
- sizeof(struct dpkg_profile_cfg));
- priv->extract.fs_extract_param[i] =
+ memset(&priv->extract.tc_key_extract[i], 0,
+ sizeof(struct dpaa2_key_extract));
+ priv->extract.tc_extract_param[i] =
(size_t)rte_malloc(NULL, 256, 64);
- if (!priv->extract.fs_extract_param[i]) {
+ if (!priv->extract.tc_extract_param[i]) {
DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow classificaiton",
ret);
goto init_err;
}
+ priv->extract.tc_key_extract[i].key_info.ipv4_src_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ priv->extract.tc_key_extract[i].key_info.ipv4_dst_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ priv->extract.tc_key_extract[i].key_info.ipv6_src_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ priv->extract.tc_key_extract[i].key_info.ipv6_dst_offset =
+ IP_ADDRESS_OFFSET_INVALID;
}
ret = dpni_set_max_frame_length(dpni_dev, CMD_PRI_LOW, priv->token,
@@ -2595,8 +2613,9 @@ dpaa2_dev_uninit(struct rte_eth_dev *eth_dev)
rte_free(dpni);
for (i = 0; i < MAX_TCS; i++) {
- if (priv->extract.fs_extract_param[i])
- rte_free((void *)(size_t)priv->extract.fs_extract_param[i]);
+ if (priv->extract.tc_extract_param[i])
+ rte_free((void *)
+ (size_t)priv->extract.tc_extract_param[i]);
}
if (priv->extract.qos_extract_param)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index c7fb6539f..030c625e3 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -96,10 +96,39 @@ extern enum pmd_dpaa2_ts dpaa2_enable_ts;
#define DPAA2_QOS_TABLE_RECONFIGURE 1
#define DPAA2_FS_TABLE_RECONFIGURE 2
+#define DPAA2_QOS_TABLE_IPADDR_EXTRACT 4
+#define DPAA2_FS_TABLE_IPADDR_EXTRACT 8
+
+
/*Externaly defined*/
extern const struct rte_flow_ops dpaa2_flow_ops;
extern enum rte_filter_type dpaa2_filter_type;
+#define IP_ADDRESS_OFFSET_INVALID (-1)
+
+struct dpaa2_key_info {
+ uint8_t key_offset[DPKG_MAX_NUM_OF_EXTRACTS];
+ uint8_t key_size[DPKG_MAX_NUM_OF_EXTRACTS];
+ /* Special for IP address. */
+ int ipv4_src_offset;
+ int ipv4_dst_offset;
+ int ipv6_src_offset;
+ int ipv6_dst_offset;
+ uint8_t key_total_size;
+};
+
+struct dpaa2_key_extract {
+ struct dpkg_profile_cfg dpkg;
+ struct dpaa2_key_info key_info;
+};
+
+struct extract_s {
+ struct dpaa2_key_extract qos_key_extract;
+ struct dpaa2_key_extract tc_key_extract[MAX_TCS];
+ uint64_t qos_extract_param;
+ uint64_t tc_extract_param[MAX_TCS];
+};
+
struct dpaa2_dev_priv {
void *hw;
int32_t hw_id;
@@ -122,17 +151,9 @@ struct dpaa2_dev_priv {
uint8_t max_cgs;
uint8_t cgid_in_use[MAX_RX_QUEUES];
- struct pattern_s {
- uint8_t item_count;
- uint8_t pattern_type[DPKG_MAX_NUM_OF_EXTRACTS];
- } pattern[MAX_TCS + 1];
-
- struct extract_s {
- struct dpkg_profile_cfg qos_key_cfg;
- struct dpkg_profile_cfg fs_key_cfg[MAX_TCS];
- uint64_t qos_extract_param;
- uint64_t fs_extract_param[MAX_TCS];
- } extract;
+ struct extract_s extract;
+ uint8_t *qos_index;
+ uint8_t *fs_index;
uint16_t ss_offset;
uint64_t ss_iova;
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 05d115c78..779cb64ab 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1,5 +1,5 @@
-/* * SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2018-2020 NXP
*/
#include <sys/queue.h>
@@ -22,15 +22,44 @@
#include <dpaa2_ethdev.h>
#include <dpaa2_pmd_logs.h>
+/* Workaround to discriminate the UDP/TCP/SCTP
+ * with next protocol of l3.
+ * MC/WRIOP are not able to identify
+ * the l4 protocol with l4 ports.
+ */
+int mc_l4_port_identification;
+
+enum flow_rule_ipaddr_type {
+ FLOW_NONE_IPADDR,
+ FLOW_IPV4_ADDR,
+ FLOW_IPV6_ADDR
+};
+
+struct flow_rule_ipaddr {
+ enum flow_rule_ipaddr_type ipaddr_type;
+ int qos_ipsrc_offset;
+ int qos_ipdst_offset;
+ int fs_ipsrc_offset;
+ int fs_ipdst_offset;
+};
+
struct rte_flow {
LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
- struct dpni_rule_cfg rule;
+ struct dpni_rule_cfg qos_rule;
+ struct dpni_rule_cfg fs_rule;
+ uint16_t qos_index;
+ uint16_t fs_index;
uint8_t key_size;
- uint8_t tc_id;
+ uint8_t tc_id; /** Traffic Class ID. */
uint8_t flow_type;
- uint8_t index;
+ uint8_t tc_index; /** index within this Traffic Class. */
enum rte_flow_action_type action;
uint16_t flow_id;
+ /* Special for IP address to specify the offset
+ * in key/mask.
+ */
+ struct flow_rule_ipaddr ipaddr_rule;
+ struct dpni_fs_action_cfg action_cfg;
};
static const
@@ -54,166 +83,717 @@ enum rte_flow_action_type dpaa2_supported_action_type[] = {
RTE_FLOW_ACTION_TYPE_RSS
};
+/* Max of enum rte_flow_item_type + 1, for both IPv4 and IPv6*/
+#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1)
+
enum rte_filter_type dpaa2_filter_type = RTE_ETH_FILTER_NONE;
static const void *default_mask;
+static inline void dpaa2_flow_extract_key_set(
+ struct dpaa2_key_info *key_info, int index, uint8_t size)
+{
+ key_info->key_size[index] = size;
+ if (index > 0) {
+ key_info->key_offset[index] =
+ key_info->key_offset[index - 1] +
+ key_info->key_size[index - 1];
+ } else {
+ key_info->key_offset[index] = 0;
+ }
+ key_info->key_total_size += size;
+}
+
+static int dpaa2_flow_extract_add(
+ struct dpaa2_key_extract *key_extract,
+ enum net_prot prot,
+ uint32_t field, uint8_t field_size)
+{
+ int index, ip_src = -1, ip_dst = -1;
+ struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
+ struct dpaa2_key_info *key_info = &key_extract->key_info;
+
+ if (dpkg->num_extracts >=
+ DPKG_MAX_NUM_OF_EXTRACTS) {
+ DPAA2_PMD_WARN("Number of extracts overflows");
+ return -1;
+ }
+ /* Before reorder, the IP SRC and IP DST are already last
+ * extract(s).
+ */
+ for (index = 0; index < dpkg->num_extracts; index++) {
+ if (dpkg->extracts[index].extract.from_hdr.prot ==
+ NET_PROT_IP) {
+ if (dpkg->extracts[index].extract.from_hdr.field ==
+ NH_FLD_IP_SRC) {
+ ip_src = index;
+ }
+ if (dpkg->extracts[index].extract.from_hdr.field ==
+ NH_FLD_IP_DST) {
+ ip_dst = index;
+ }
+ }
+ }
+
+ if (ip_src >= 0)
+ RTE_ASSERT((ip_src + 2) >= dpkg->num_extracts);
+
+ if (ip_dst >= 0)
+ RTE_ASSERT((ip_dst + 2) >= dpkg->num_extracts);
+
+ if (prot == NET_PROT_IP &&
+ (field == NH_FLD_IP_SRC ||
+ field == NH_FLD_IP_DST)) {
+ index = dpkg->num_extracts;
+ } else {
+ if (ip_src >= 0 && ip_dst >= 0)
+ index = dpkg->num_extracts - 2;
+ else if (ip_src >= 0 || ip_dst >= 0)
+ index = dpkg->num_extracts - 1;
+ else
+ index = dpkg->num_extracts;
+ }
+
+ dpkg->extracts[index].type = DPKG_EXTRACT_FROM_HDR;
+ dpkg->extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
+ dpkg->extracts[index].extract.from_hdr.prot = prot;
+ dpkg->extracts[index].extract.from_hdr.field = field;
+ if (prot == NET_PROT_IP &&
+ (field == NH_FLD_IP_SRC ||
+ field == NH_FLD_IP_DST)) {
+ dpaa2_flow_extract_key_set(key_info, index, 0);
+ } else {
+ dpaa2_flow_extract_key_set(key_info, index, field_size);
+ }
+
+ if (prot == NET_PROT_IP) {
+ if (field == NH_FLD_IP_SRC) {
+ if (key_info->ipv4_dst_offset >= 0) {
+ key_info->ipv4_src_offset =
+ key_info->ipv4_dst_offset +
+ NH_FLD_IPV4_ADDR_SIZE;
+ } else {
+ key_info->ipv4_src_offset =
+ key_info->key_offset[index - 1] +
+ key_info->key_size[index - 1];
+ }
+ if (key_info->ipv6_dst_offset >= 0) {
+ key_info->ipv6_src_offset =
+ key_info->ipv6_dst_offset +
+ NH_FLD_IPV6_ADDR_SIZE;
+ } else {
+ key_info->ipv6_src_offset =
+ key_info->key_offset[index - 1] +
+ key_info->key_size[index - 1];
+ }
+ } else if (field == NH_FLD_IP_DST) {
+ if (key_info->ipv4_src_offset >= 0) {
+ key_info->ipv4_dst_offset =
+ key_info->ipv4_src_offset +
+ NH_FLD_IPV4_ADDR_SIZE;
+ } else {
+ key_info->ipv4_dst_offset =
+ key_info->key_offset[index - 1] +
+ key_info->key_size[index - 1];
+ }
+ if (key_info->ipv6_src_offset >= 0) {
+ key_info->ipv6_dst_offset =
+ key_info->ipv6_src_offset +
+ NH_FLD_IPV6_ADDR_SIZE;
+ } else {
+ key_info->ipv6_dst_offset =
+ key_info->key_offset[index - 1] +
+ key_info->key_size[index - 1];
+ }
+ }
+ }
+
+ if (index == dpkg->num_extracts) {
+ dpkg->num_extracts++;
+ return 0;
+ }
+
+ if (ip_src >= 0) {
+ ip_src++;
+ dpkg->extracts[ip_src].type =
+ DPKG_EXTRACT_FROM_HDR;
+ dpkg->extracts[ip_src].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ dpkg->extracts[ip_src].extract.from_hdr.prot =
+ NET_PROT_IP;
+ dpkg->extracts[ip_src].extract.from_hdr.field =
+ NH_FLD_IP_SRC;
+ dpaa2_flow_extract_key_set(key_info, ip_src, 0);
+ key_info->ipv4_src_offset += field_size;
+ key_info->ipv6_src_offset += field_size;
+ }
+ if (ip_dst >= 0) {
+ ip_dst++;
+ dpkg->extracts[ip_dst].type =
+ DPKG_EXTRACT_FROM_HDR;
+ dpkg->extracts[ip_dst].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ dpkg->extracts[ip_dst].extract.from_hdr.prot =
+ NET_PROT_IP;
+ dpkg->extracts[ip_dst].extract.from_hdr.field =
+ NH_FLD_IP_DST;
+ dpaa2_flow_extract_key_set(key_info, ip_dst, 0);
+ key_info->ipv4_dst_offset += field_size;
+ key_info->ipv6_dst_offset += field_size;
+ }
+
+ dpkg->num_extracts++;
+
+ return 0;
+}
+
+/* Protocol discrimination.
+ * Discriminate IPv4/IPv6/vLan by Eth type.
+ * Discriminate UDP/TCP/ICMP by next proto of IP.
+ */
+static inline int
+dpaa2_flow_proto_discrimination_extract(
+ struct dpaa2_key_extract *key_extract,
+ enum rte_flow_item_type type)
+{
+ if (type == RTE_FLOW_ITEM_TYPE_ETH) {
+ return dpaa2_flow_extract_add(
+ key_extract, NET_PROT_ETH,
+ NH_FLD_ETH_TYPE,
+ sizeof(rte_be16_t));
+ } else if (type == (enum rte_flow_item_type)
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
+ return dpaa2_flow_extract_add(
+ key_extract, NET_PROT_IP,
+ NH_FLD_IP_PROTO,
+ NH_FLD_IP_PROTO_SIZE);
+ }
+
+ return -1;
+}
+
+static inline int dpaa2_flow_extract_search(
+ struct dpkg_profile_cfg *dpkg,
+ enum net_prot prot, uint32_t field)
+{
+ int i;
+
+ for (i = 0; i < dpkg->num_extracts; i++) {
+ if (dpkg->extracts[i].extract.from_hdr.prot == prot &&
+ dpkg->extracts[i].extract.from_hdr.field == field) {
+ return i;
+ }
+ }
+
+ return -1;
+}
+
+static inline int dpaa2_flow_extract_key_offset(
+ struct dpaa2_key_extract *key_extract,
+ enum net_prot prot, uint32_t field)
+{
+ int i;
+ struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
+ struct dpaa2_key_info *key_info = &key_extract->key_info;
+
+ if (prot == NET_PROT_IPV4 ||
+ prot == NET_PROT_IPV6)
+ i = dpaa2_flow_extract_search(dpkg, NET_PROT_IP, field);
+ else
+ i = dpaa2_flow_extract_search(dpkg, prot, field);
+
+ if (i >= 0) {
+ if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_SRC)
+ return key_info->ipv4_src_offset;
+ else if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_DST)
+ return key_info->ipv4_dst_offset;
+ else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_SRC)
+ return key_info->ipv6_src_offset;
+ else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_DST)
+ return key_info->ipv6_dst_offset;
+ else
+ return key_info->key_offset[i];
+ } else {
+ return -1;
+ }
+}
+
+struct proto_discrimination {
+ enum rte_flow_item_type type;
+ union {
+ rte_be16_t eth_type;
+ uint8_t ip_proto;
+ };
+};
+
+static int
+dpaa2_flow_proto_discrimination_rule(
+ struct dpaa2_dev_priv *priv, struct rte_flow *flow,
+ struct proto_discrimination proto, int group)
+{
+ enum net_prot prot;
+ uint32_t field;
+ int offset;
+ size_t key_iova;
+ size_t mask_iova;
+ rte_be16_t eth_type;
+ uint8_t ip_proto;
+
+ if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
+ prot = NET_PROT_ETH;
+ field = NH_FLD_ETH_TYPE;
+ } else if (proto.type == DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
+ prot = NET_PROT_IP;
+ field = NH_FLD_IP_PROTO;
+ } else {
+ DPAA2_PMD_ERR(
+ "Only Eth and IP support to discriminate next proto.");
+ return -1;
+ }
+
+ offset = dpaa2_flow_extract_key_offset(&priv->extract.qos_key_extract,
+ prot, field);
+ if (offset < 0) {
+ DPAA2_PMD_ERR("QoS prot %d field %d extract failed",
+ prot, field);
+ return -1;
+ }
+ key_iova = flow->qos_rule.key_iova + offset;
+ mask_iova = flow->qos_rule.mask_iova + offset;
+ if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
+ eth_type = proto.eth_type;
+ memcpy((void *)key_iova, (const void *)(ð_type),
+ sizeof(rte_be16_t));
+ eth_type = 0xffff;
+ memcpy((void *)mask_iova, (const void *)(ð_type),
+ sizeof(rte_be16_t));
+ } else {
+ ip_proto = proto.ip_proto;
+ memcpy((void *)key_iova, (const void *)(&ip_proto),
+ sizeof(uint8_t));
+ ip_proto = 0xff;
+ memcpy((void *)mask_iova, (const void *)(&ip_proto),
+ sizeof(uint8_t));
+ }
+
+ offset = dpaa2_flow_extract_key_offset(
+ &priv->extract.tc_key_extract[group],
+ prot, field);
+ if (offset < 0) {
+ DPAA2_PMD_ERR("FS prot %d field %d extract failed",
+ prot, field);
+ return -1;
+ }
+ key_iova = flow->fs_rule.key_iova + offset;
+ mask_iova = flow->fs_rule.mask_iova + offset;
+
+ if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
+ eth_type = proto.eth_type;
+ memcpy((void *)key_iova, (const void *)(ð_type),
+ sizeof(rte_be16_t));
+ eth_type = 0xffff;
+ memcpy((void *)mask_iova, (const void *)(ð_type),
+ sizeof(rte_be16_t));
+ } else {
+ ip_proto = proto.ip_proto;
+ memcpy((void *)key_iova, (const void *)(&ip_proto),
+ sizeof(uint8_t));
+ ip_proto = 0xff;
+ memcpy((void *)mask_iova, (const void *)(&ip_proto),
+ sizeof(uint8_t));
+ }
+
+ return 0;
+}
+
+static inline int
+dpaa2_flow_rule_data_set(
+ struct dpaa2_key_extract *key_extract,
+ struct dpni_rule_cfg *rule,
+ enum net_prot prot, uint32_t field,
+ const void *key, const void *mask, int size)
+{
+ int offset = dpaa2_flow_extract_key_offset(key_extract,
+ prot, field);
+
+ if (offset < 0) {
+ DPAA2_PMD_ERR("prot %d, field %d extract failed",
+ prot, field);
+ return -1;
+ }
+ memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
+ memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+
+ return 0;
+}
+
+static inline int
+_dpaa2_flow_rule_move_ipaddr_tail(
+ struct dpaa2_key_extract *key_extract,
+ struct dpni_rule_cfg *rule, int src_offset,
+ uint32_t field, bool ipv4)
+{
+ size_t key_src;
+ size_t mask_src;
+ size_t key_dst;
+ size_t mask_dst;
+ int dst_offset, len;
+ enum net_prot prot;
+ char tmp[NH_FLD_IPV6_ADDR_SIZE];
+
+ if (field != NH_FLD_IP_SRC &&
+ field != NH_FLD_IP_DST) {
+ DPAA2_PMD_ERR("Field of IP addr reorder must be IP SRC/DST");
+ return -1;
+ }
+ if (ipv4)
+ prot = NET_PROT_IPV4;
+ else
+ prot = NET_PROT_IPV6;
+ dst_offset = dpaa2_flow_extract_key_offset(key_extract,
+ prot, field);
+ if (dst_offset < 0) {
+ DPAA2_PMD_ERR("Field %d reorder extract failed", field);
+ return -1;
+ }
+ key_src = rule->key_iova + src_offset;
+ mask_src = rule->mask_iova + src_offset;
+ key_dst = rule->key_iova + dst_offset;
+ mask_dst = rule->mask_iova + dst_offset;
+ if (ipv4)
+ len = sizeof(rte_be32_t);
+ else
+ len = NH_FLD_IPV6_ADDR_SIZE;
+
+ memcpy(tmp, (char *)key_src, len);
+ memcpy((char *)key_dst, tmp, len);
+
+ memcpy(tmp, (char *)mask_src, len);
+ memcpy((char *)mask_dst, tmp, len);
+
+ return 0;
+}
+
+static inline int
+dpaa2_flow_rule_move_ipaddr_tail(
+ struct rte_flow *flow, struct dpaa2_dev_priv *priv,
+ int fs_group)
+{
+ int ret;
+ enum net_prot prot;
+
+ if (flow->ipaddr_rule.ipaddr_type == FLOW_NONE_IPADDR)
+ return 0;
+
+ if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR)
+ prot = NET_PROT_IPV4;
+ else
+ prot = NET_PROT_IPV6;
+
+ if (flow->ipaddr_rule.qos_ipsrc_offset >= 0) {
+ ret = _dpaa2_flow_rule_move_ipaddr_tail(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ flow->ipaddr_rule.qos_ipsrc_offset,
+ NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS src address reorder failed");
+ return -1;
+ }
+ flow->ipaddr_rule.qos_ipsrc_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.qos_key_extract,
+ prot, NH_FLD_IP_SRC);
+ }
+
+ if (flow->ipaddr_rule.qos_ipdst_offset >= 0) {
+ ret = _dpaa2_flow_rule_move_ipaddr_tail(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ flow->ipaddr_rule.qos_ipdst_offset,
+ NH_FLD_IP_DST, prot == NET_PROT_IPV4);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS dst address reorder failed");
+ return -1;
+ }
+ flow->ipaddr_rule.qos_ipdst_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.qos_key_extract,
+ prot, NH_FLD_IP_DST);
+ }
+
+ if (flow->ipaddr_rule.fs_ipsrc_offset >= 0) {
+ ret = _dpaa2_flow_rule_move_ipaddr_tail(
+ &priv->extract.tc_key_extract[fs_group],
+ &flow->fs_rule,
+ flow->ipaddr_rule.fs_ipsrc_offset,
+ NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
+ if (ret) {
+ DPAA2_PMD_ERR("FS src address reorder failed");
+ return -1;
+ }
+ flow->ipaddr_rule.fs_ipsrc_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.tc_key_extract[fs_group],
+ prot, NH_FLD_IP_SRC);
+ }
+ if (flow->ipaddr_rule.fs_ipdst_offset >= 0) {
+ ret = _dpaa2_flow_rule_move_ipaddr_tail(
+ &priv->extract.tc_key_extract[fs_group],
+ &flow->fs_rule,
+ flow->ipaddr_rule.fs_ipdst_offset,
+ NH_FLD_IP_DST, prot == NET_PROT_IPV4);
+ if (ret) {
+ DPAA2_PMD_ERR("FS dst address reorder failed");
+ return -1;
+ }
+ flow->ipaddr_rule.fs_ipdst_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.tc_key_extract[fs_group],
+ prot, NH_FLD_IP_DST);
+ }
+
+ return 0;
+}
+
static int
dpaa2_configure_flow_eth(struct rte_flow *flow,
struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_eth *spec, *mask;
/* TODO: Currently upper bound of range parameter is not implemented */
const struct rte_flow_item_eth *last __rte_unused;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
+ const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- /* TODO: pattern is an array of 9 elements where 9th pattern element */
- /* is for QoS table and 1-8th pattern element is for FS tables. */
- /* It can be changed to macro. */
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_eth *)pattern->spec;
+ last = (const struct rte_flow_item_eth *)pattern->last;
+ mask = (const struct rte_flow_item_eth *)
+ (pattern->mask ? pattern->mask : default_mask);
+ if (!spec) {
+ /* Don't care any field of eth header,
+ * only care eth protocol.
+ */
+ DPAA2_PMD_WARN("No pattern spec for Eth flow, just skip");
+ return 0;
}
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_SA);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_ETH, NH_FLD_ETH_SA,
+ RTE_ETHER_ADDR_LEN);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add ETH_SA failed.");
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_SA);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_ETH, NH_FLD_ETH_SA,
+ RTE_ETHER_ADDR_LEN);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add ETH_SA failed.");
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before ETH_SA rule set failed");
+ return -1;
+ }
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_ETH,
+ NH_FLD_ETH_SA,
+ &spec->src.addr_bytes,
+ &mask->src.addr_bytes,
+ sizeof(struct rte_ether_addr));
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
+ return -1;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_ETH,
+ NH_FLD_ETH_SA,
+ &spec->src.addr_bytes,
+ &mask->src.addr_bytes,
+ sizeof(struct rte_ether_addr));
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
+ return -1;
+ }
}
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ETH;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ETH_SA;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ETH;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ETH_DA;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ETH;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ETH_TYPE;
- index++;
-
- priv->extract.qos_key_cfg.num_extracts = index;
- }
-
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ETH;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ETH_SA;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ETH;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ETH_DA;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ETH;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ETH_TYPE;
- index++;
-
- priv->extract.fs_key_cfg[group].num_extracts = index;
+ if (memcmp((const char *)&mask->dst, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_DA);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_ETH, NH_FLD_ETH_DA,
+ RTE_ETHER_ADDR_LEN);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add ETH_DA failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_DA);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_ETH, NH_FLD_ETH_DA,
+ RTE_ETHER_ADDR_LEN);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add ETH_DA failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before ETH DA rule set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_ETH,
+ NH_FLD_ETH_DA,
+ &spec->dst.addr_bytes,
+ &mask->dst.addr_bytes,
+ sizeof(struct rte_ether_addr));
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_ETH,
+ NH_FLD_ETH_DA,
+ &spec->dst.addr_bytes,
+ &mask->dst.addr_bytes,
+ sizeof(struct rte_ether_addr));
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
+ return -1;
+ }
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_eth *)pattern->spec;
- last = (const struct rte_flow_item_eth *)pattern->last;
- mask = (const struct rte_flow_item_eth *)
- (pattern->mask ? pattern->mask : default_mask);
+ if (memcmp((const char *)&mask->type, zero_cmp, sizeof(rte_be16_t))) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE,
+ RTE_ETHER_TYPE_LEN);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add ETH_TYPE failed.");
- /* Key rule */
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(spec->src.addr_bytes),
- sizeof(struct rte_ether_addr));
- key_iova += sizeof(struct rte_ether_addr);
- memcpy((void *)key_iova, (const void *)(spec->dst.addr_bytes),
- sizeof(struct rte_ether_addr));
- key_iova += sizeof(struct rte_ether_addr);
- memcpy((void *)key_iova, (const void *)(&spec->type),
- sizeof(rte_be16_t));
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_ETH, NH_FLD_ETH_TYPE,
+ RTE_ETHER_TYPE_LEN);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add ETH_TYPE failed.");
- /* Key mask */
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(mask->src.addr_bytes),
- sizeof(struct rte_ether_addr));
- mask_iova += sizeof(struct rte_ether_addr);
- memcpy((void *)mask_iova, (const void *)(mask->dst.addr_bytes),
- sizeof(struct rte_ether_addr));
- mask_iova += sizeof(struct rte_ether_addr);
- memcpy((void *)mask_iova, (const void *)(&mask->type),
- sizeof(rte_be16_t));
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before ETH TYPE rule set failed");
+ return -1;
+ }
- flow->key_size += ((2 * sizeof(struct rte_ether_addr)) +
- sizeof(rte_be16_t));
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_ETH,
+ NH_FLD_ETH_TYPE,
+ &spec->type,
+ &mask->type,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
+ return -1;
+ }
- return device_configured;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_ETH,
+ NH_FLD_ETH_TYPE,
+ &spec->type,
+ &mask->type,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
+ return -1;
+ }
+ }
+
+ (*device_configured) |= local_cfg;
+
+ return 0;
}
static int
@@ -222,12 +802,11 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_vlan *spec, *mask;
@@ -236,375 +815,524 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_vlan *)pattern->spec;
+ last = (const struct rte_flow_item_vlan *)pattern->last;
+ mask = (const struct rte_flow_item_vlan *)
+ (pattern->mask ? pattern->mask : default_mask);
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec) {
+ /* No specific field of the VLAN header is matched;
+ * only the VLAN protocol itself matters.
+ */
+ /* The Ethernet type is actually used for VLAN classification.
+ */
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Ext ETH_TYPE to discriminate VLAN failed");
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Ext ETH_TYPE to discriminate VLAN failed.");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before VLAN discrimination set failed");
+ return -1;
+ }
+
+ proto.type = RTE_FLOW_ITEM_TYPE_ETH;
+ proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("VLAN discrimination rule set failed");
+ return -1;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
+ (*device_configured) |= local_cfg;
+
+ return 0;
}
+ if (!mask->tci)
+ return 0;
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_VLAN, NH_FLD_VLAN_TCI);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_VLAN,
+ NH_FLD_VLAN_TCI,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add VLAN_TCI failed.");
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_VLAN;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_VLAN_TCI;
- priv->extract.qos_key_cfg.num_extracts++;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_VLAN, NH_FLD_VLAN_TCI);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_VLAN,
+ NH_FLD_VLAN_TCI,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add VLAN_TCI failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_VLAN;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_VLAN_TCI;
- priv->extract.fs_key_cfg[group].num_extracts++;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before VLAN TCI rule set failed");
+ return -1;
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_vlan *)pattern->spec;
- last = (const struct rte_flow_item_vlan *)pattern->last;
- mask = (const struct rte_flow_item_vlan *)
- (pattern->mask ? pattern->mask : default_mask);
+ ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_VLAN,
+ NH_FLD_VLAN_TCI,
+ &spec->tci,
+ &mask->tci,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
+ return -1;
+ }
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(&spec->tci),
- sizeof(rte_be16_t));
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_VLAN,
+ NH_FLD_VLAN_TCI,
+ &spec->tci,
+ &mask->tci,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
+ return -1;
+ }
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(&mask->tci),
- sizeof(rte_be16_t));
+ (*device_configured) |= local_cfg;
- flow->key_size += sizeof(rte_be16_t);
- return device_configured;
+ return 0;
}
static int
-dpaa2_configure_flow_ipv4(struct rte_flow *flow,
- struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item *pattern,
- const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+dpaa2_configure_flow_generic_ip(
+ struct rte_flow *flow,
+ struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item *pattern,
+ const struct rte_flow_action actions[] __rte_unused,
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
- const struct rte_flow_item_ipv4 *spec, *mask;
+ const struct rte_flow_item_ipv4 *spec_ipv4 = 0,
+ *mask_ipv4 = 0;
+ const struct rte_flow_item_ipv6 *spec_ipv6 = 0,
+ *mask_ipv6 = 0;
+ const void *key, *mask;
+ enum net_prot prot;
- const struct rte_flow_item_ipv4 *last __rte_unused;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
+ const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
+ int size;
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
+ /* Parse pattern list to get the matching parameters */
+ if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+ spec_ipv4 = (const struct rte_flow_item_ipv4 *)pattern->spec;
+ mask_ipv4 = (const struct rte_flow_item_ipv4 *)
+ (pattern->mask ? pattern->mask : default_mask);
+ } else {
+ spec_ipv6 = (const struct rte_flow_item_ipv6 *)pattern->spec;
+ mask_ipv6 = (const struct rte_flow_item_ipv6 *)
+ (pattern->mask ? pattern->mask : default_mask);
}
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec_ipv4 && !spec_ipv6) {
+ /* No specific field of the IP header is matched;
+ * only the IP protocol itself matters.
+ * Example: flow create 0 ingress pattern ipv6 /
+ */
+ /* The Ethernet type is actually used for IP identification.
+ */
+ /* TODO: The current design only supports Eth + IP;
+ * support for Eth + VLAN + IP needs to be added.
+ */
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Ext ETH_TYPE to discriminate IP failed.");
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Ext ETH_TYPE to discriminate IP failed");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
- }
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before IP discrimination set failed");
+ return -1;
+ }
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_SRC;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_DST;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
- priv->extract.qos_key_cfg.num_extracts = index;
- }
-
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_SRC;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_DST;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
- priv->extract.fs_key_cfg[group].num_extracts = index;
- }
+ proto.type = RTE_FLOW_ITEM_TYPE_ETH;
+ if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4)
+ proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+ else
+ proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("IP discrimination rule set failed");
+ return -1;
+ }
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_ipv4 *)pattern->spec;
- last = (const struct rte_flow_item_ipv4 *)pattern->last;
- mask = (const struct rte_flow_item_ipv4 *)
- (pattern->mask ? pattern->mask : default_mask);
+ (*device_configured) |= local_cfg;
+
+ return 0;
+ }
+
+ if (mask_ipv4 && (mask_ipv4->hdr.src_addr ||
+ mask_ipv4->hdr.dst_addr)) {
+ flow->ipaddr_rule.ipaddr_type = FLOW_IPV4_ADDR;
+ } else if (mask_ipv6 &&
+ (memcmp((const char *)mask_ipv6->hdr.src_addr,
+ zero_cmp, NH_FLD_IPV6_ADDR_SIZE) ||
+ memcmp((const char *)mask_ipv6->hdr.dst_addr,
+ zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
+ flow->ipaddr_rule.ipaddr_type = FLOW_IPV6_ADDR;
+ }
+
+ if ((mask_ipv4 && mask_ipv4->hdr.src_addr) ||
+ (mask_ipv6 &&
+ memcmp((const char *)mask_ipv6->hdr.src_addr,
+ zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_IP,
+ NH_FLD_IP_SRC,
+ 0);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add IP_SRC failed.");
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)&spec->hdr.src_addr,
- sizeof(uint32_t));
- key_iova += sizeof(uint32_t);
- memcpy((void *)key_iova, (const void *)&spec->hdr.dst_addr,
- sizeof(uint32_t));
- key_iova += sizeof(uint32_t);
- memcpy((void *)key_iova, (const void *)&spec->hdr.next_proto_id,
- sizeof(uint8_t));
-
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)&mask->hdr.src_addr,
- sizeof(uint32_t));
- mask_iova += sizeof(uint32_t);
- memcpy((void *)mask_iova, (const void *)&mask->hdr.dst_addr,
- sizeof(uint32_t));
- mask_iova += sizeof(uint32_t);
- memcpy((void *)mask_iova, (const void *)&mask->hdr.next_proto_id,
- sizeof(uint8_t));
-
- flow->key_size += (2 * sizeof(uint32_t)) + sizeof(uint8_t);
- return device_configured;
-}
+ return -1;
+ }
+ local_cfg |= (DPAA2_QOS_TABLE_RECONFIGURE |
+ DPAA2_QOS_TABLE_IPADDR_EXTRACT);
+ }
-static int
-dpaa2_configure_flow_ipv6(struct rte_flow *flow,
- struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item *pattern,
- const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
-{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
- uint32_t group;
- const struct rte_flow_item_ipv6 *spec, *mask;
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_IP,
+ NH_FLD_IP_SRC,
+ 0);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add IP_SRC failed.");
- const struct rte_flow_item_ipv6 *last __rte_unused;
- struct dpaa2_dev_priv *priv = dev->data->dev_private;
+ return -1;
+ }
+ local_cfg |= (DPAA2_FS_TABLE_RECONFIGURE |
+ DPAA2_FS_TABLE_IPADDR_EXTRACT);
+ }
- group = attr->group;
+ if (spec_ipv4)
+ key = &spec_ipv4->hdr.src_addr;
+ else
+ key = &spec_ipv6->hdr.src_addr[0];
+ if (mask_ipv4) {
+ mask = &mask_ipv4->hdr.src_addr;
+ size = NH_FLD_IPV4_ADDR_SIZE;
+ prot = NET_PROT_IPV4;
+ } else {
+ mask = &mask_ipv6->hdr.src_addr[0];
+ size = NH_FLD_IPV6_ADDR_SIZE;
+ prot = NET_PROT_IPV6;
+ }
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ prot, NH_FLD_IP_SRC,
+ key, mask, size);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_IP_SRC rule data set failed");
+ return -1;
+ }
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ prot, NH_FLD_IP_SRC,
+ key, mask, size);
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_IP_SRC rule data set failed");
+ return -1;
+ }
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ flow->ipaddr_rule.qos_ipsrc_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.qos_key_extract,
+ prot, NH_FLD_IP_SRC);
+ flow->ipaddr_rule.fs_ipsrc_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.tc_key_extract[group],
+ prot, NH_FLD_IP_SRC);
+ }
+
+ if ((mask_ipv4 && mask_ipv4->hdr.dst_addr) ||
+ (mask_ipv6 &&
+ memcmp((const char *)mask_ipv6->hdr.dst_addr,
+ zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_DST);
+ if (index < 0) {
+ if (mask_ipv4)
+ size = NH_FLD_IPV4_ADDR_SIZE;
+ else
+ size = NH_FLD_IPV6_ADDR_SIZE;
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_IP,
+ NH_FLD_IP_DST,
+ size);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
+
+ return -1;
+ }
+ local_cfg |= (DPAA2_QOS_TABLE_RECONFIGURE |
+ DPAA2_QOS_TABLE_IPADDR_EXTRACT);
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_DST);
+ if (index < 0) {
+ if (mask_ipv4)
+ size = NH_FLD_IPV4_ADDR_SIZE;
+ else
+ size = NH_FLD_IPV6_ADDR_SIZE;
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_IP,
+ NH_FLD_IP_DST,
+ size);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add IP_DST failed.");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
+ return -1;
+ }
+ local_cfg |= (DPAA2_FS_TABLE_RECONFIGURE |
+ DPAA2_FS_TABLE_IPADDR_EXTRACT);
+ }
+
+ if (spec_ipv4)
+ key = &spec_ipv4->hdr.dst_addr;
+ else
+ key = spec_ipv6->hdr.dst_addr;
+ if (mask_ipv4) {
+ mask = &mask_ipv4->hdr.dst_addr;
+ size = NH_FLD_IPV4_ADDR_SIZE;
+ prot = NET_PROT_IPV4;
} else {
- entry_found = 1;
- break;
+ mask = &mask_ipv6->hdr.dst_addr[0];
+ size = NH_FLD_IPV6_ADDR_SIZE;
+ prot = NET_PROT_IPV6;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
- }
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ prot, NH_FLD_IP_DST,
+ key, mask, size);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_IP_DST rule data set failed");
+ return -1;
+ }
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_SRC;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_DST;
- index++;
-
- priv->extract.qos_key_cfg.num_extracts = index;
- }
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_SRC;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_DST;
- index++;
-
- priv->extract.fs_key_cfg[group].num_extracts = index;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ prot, NH_FLD_IP_DST,
+ key, mask, size);
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_IP_DST rule data set failed");
+ return -1;
+ }
+ flow->ipaddr_rule.qos_ipdst_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.qos_key_extract,
+ prot, NH_FLD_IP_DST);
+ flow->ipaddr_rule.fs_ipdst_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.tc_key_extract[group],
+ prot, NH_FLD_IP_DST);
+ }
+
+ if ((mask_ipv4 && mask_ipv4->hdr.next_proto_id) ||
+ (mask_ipv6 && mask_ipv6->hdr.proto)) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_IP,
+ NH_FLD_IP_PROTO,
+ NH_FLD_IP_PROTO_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add IP_PROTO failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_IP,
+ NH_FLD_IP_PROTO,
+ NH_FLD_IP_PROTO_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add IP_PROTO failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before NH_FLD_IP_PROTO rule set failed");
+ return -1;
+ }
+
+ if (spec_ipv4)
+ key = &spec_ipv4->hdr.next_proto_id;
+ else
+ key = &spec_ipv6->hdr.proto;
+ if (mask_ipv4)
+ mask = &mask_ipv4->hdr.next_proto_id;
+ else
+ mask = &mask_ipv6->hdr.proto;
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_IP,
+ NH_FLD_IP_PROTO,
+ key, mask, NH_FLD_IP_PROTO_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_IP_PROTO rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_IP,
+ NH_FLD_IP_PROTO,
+ key, mask, NH_FLD_IP_PROTO_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_IP_PROTO rule data set failed");
+ return -1;
+ }
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_ipv6 *)pattern->spec;
- last = (const struct rte_flow_item_ipv6 *)pattern->last;
- mask = (const struct rte_flow_item_ipv6 *)
- (pattern->mask ? pattern->mask : default_mask);
+ (*device_configured) |= local_cfg;
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(spec->hdr.src_addr),
- sizeof(spec->hdr.src_addr));
- key_iova += sizeof(spec->hdr.src_addr);
- memcpy((void *)key_iova, (const void *)(spec->hdr.dst_addr),
- sizeof(spec->hdr.dst_addr));
-
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(mask->hdr.src_addr),
- sizeof(mask->hdr.src_addr));
- mask_iova += sizeof(mask->hdr.src_addr);
- memcpy((void *)mask_iova, (const void *)(mask->hdr.dst_addr),
- sizeof(mask->hdr.dst_addr));
-
- flow->key_size += sizeof(spec->hdr.src_addr) +
- sizeof(mask->hdr.dst_addr);
- return device_configured;
+ return 0;
}
static int
@@ -613,12 +1341,11 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_icmp *spec, *mask;
@@ -627,116 +1354,220 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_icmp *)pattern->spec;
+ last = (const struct rte_flow_item_icmp *)pattern->last;
+ mask = (const struct rte_flow_item_icmp *)
+ (pattern->mask ? pattern->mask : default_mask);
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec) {
+ /* No field of the ICMP header is specified,
+ * so match on the ICMP protocol alone.
+ * Example: flow create 0 ingress pattern icmp /
+ */
+ /* The next-proto field of the generic IP header
+ * is used for ICMP identification.
+ */
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Extract IP protocol to discriminate ICMP failed.");
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Extract IP protocol to discriminate ICMP failed.");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move IP addr before ICMP discrimination set failed");
+ return -1;
+ }
+
+ proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
+ proto.ip_proto = IPPROTO_ICMP;
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("ICMP discrimination rule set failed");
+ return -1;
+ }
+
+ (*device_configured) |= local_cfg;
+
+ return 0;
}
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ICMP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ICMP_TYPE;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ICMP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ICMP_CODE;
- index++;
-
- priv->extract.qos_key_cfg.num_extracts = index;
- }
-
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ICMP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ICMP_TYPE;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ICMP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ICMP_CODE;
- index++;
-
- priv->extract.fs_key_cfg[group].num_extracts = index;
+ if (mask->hdr.icmp_type) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_TYPE,
+ NH_FLD_ICMP_TYPE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add ICMP_TYPE failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_TYPE,
+ NH_FLD_ICMP_TYPE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add ICMP_TYPE failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before ICMP TYPE set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_TYPE,
+ &spec->hdr.icmp_type,
+ &mask->hdr.icmp_type,
+ NH_FLD_ICMP_TYPE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_ICMP_TYPE rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_TYPE,
+ &spec->hdr.icmp_type,
+ &mask->hdr.icmp_type,
+ NH_FLD_ICMP_TYPE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_ICMP_TYPE rule data set failed");
+ return -1;
+ }
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_icmp *)pattern->spec;
- last = (const struct rte_flow_item_icmp *)pattern->last;
- mask = (const struct rte_flow_item_icmp *)
- (pattern->mask ? pattern->mask : default_mask);
+ if (mask->hdr.icmp_code) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ICMP, NH_FLD_ICMP_CODE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_CODE,
+ NH_FLD_ICMP_CODE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add ICMP_CODE failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_type,
- sizeof(uint8_t));
- key_iova += sizeof(uint8_t);
- memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_code,
- sizeof(uint8_t));
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ICMP, NH_FLD_ICMP_CODE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_CODE,
+ NH_FLD_ICMP_CODE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add ICMP_CODE failed.");
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_type,
- sizeof(uint8_t));
- key_iova += sizeof(uint8_t);
- memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_code,
- sizeof(uint8_t));
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
- flow->key_size += 2 * sizeof(uint8_t);
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before ICMP CODE set failed");
+ return -1;
+ }
- return device_configured;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_CODE,
+ &spec->hdr.icmp_code,
+ &mask->hdr.icmp_code,
+ NH_FLD_ICMP_CODE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_ICMP_CODE rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_CODE,
+ &spec->hdr.icmp_code,
+ &mask->hdr.icmp_code,
+ NH_FLD_ICMP_CODE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_ICMP_CODE rule data set failed");
+ return -1;
+ }
+ }
+
+ (*device_configured) |= local_cfg;
+
+ return 0;
}
static int
@@ -745,12 +1576,11 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_udp *spec, *mask;
@@ -759,115 +1589,217 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_udp *)pattern->spec;
+ last = (const struct rte_flow_item_udp *)pattern->last;
+ mask = (const struct rte_flow_item_udp *)
+ (pattern->mask ? pattern->mask : default_mask);
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec || !mc_l4_port_identification) {
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Extract IP protocol to discriminate UDP failed.");
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Extract IP protocol to discriminate UDP failed.");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move IP addr before UDP discrimination set failed");
+ return -1;
+ }
+
+ proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
+ proto.ip_proto = IPPROTO_UDP;
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("UDP discrimination rule set failed");
+ return -1;
+ }
+
+ (*device_configured) |= local_cfg;
+
+ if (!spec)
+ return 0;
}
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
+ if (mask->hdr.src_port) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_SRC,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add UDP_SRC failed.");
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_UDP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_UDP_PORT_SRC;
- index++;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
- priv->extract.qos_key_cfg.extracts[index].type = DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_UDP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
- index++;
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_SRC,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add UDP_SRC failed.");
- priv->extract.qos_key_cfg.num_extracts = index;
- }
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_UDP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_UDP_PORT_SRC;
- index++;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before UDP_PORT_SRC set failed");
+ return -1;
+ }
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_UDP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
- index++;
+ ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_SRC,
+ &spec->hdr.src_port,
+ &mask->hdr.src_port,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_UDP_PORT_SRC rule data set failed");
+ return -1;
+ }
- priv->extract.fs_key_cfg[group].num_extracts = index;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_SRC,
+ &spec->hdr.src_port,
+ &mask->hdr.src_port,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_UDP_PORT_SRC rule data set failed");
+ return -1;
+ }
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_udp *)pattern->spec;
- last = (const struct rte_flow_item_udp *)pattern->last;
- mask = (const struct rte_flow_item_udp *)
- (pattern->mask ? pattern->mask : default_mask);
+ if (mask->hdr.dst_port) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_DST,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add UDP_DST failed.");
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
- sizeof(uint16_t));
- key_iova += sizeof(uint16_t);
- memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
- sizeof(uint16_t));
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
- sizeof(uint16_t));
- mask_iova += sizeof(uint16_t);
- memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
- sizeof(uint16_t));
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_DST,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add UDP_DST failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before UDP_PORT_DST set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_DST,
+ &spec->hdr.dst_port,
+ &mask->hdr.dst_port,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_UDP_PORT_DST rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_DST,
+ &spec->hdr.dst_port,
+ &mask->hdr.dst_port,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_UDP_PORT_DST rule data set failed");
+ return -1;
+ }
+ }
- flow->key_size += (2 * sizeof(uint16_t));
+ (*device_configured) |= local_cfg;
- return device_configured;
+ return 0;
}
static int
@@ -876,130 +1808,231 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_tcp *spec, *mask;
- const struct rte_flow_item_tcp *last __rte_unused;
- struct dpaa2_dev_priv *priv = dev->data->dev_private;
+ const struct rte_flow_item_tcp *last __rte_unused;
+ struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+ group = attr->group;
+
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_tcp *)pattern->spec;
+ last = (const struct rte_flow_item_tcp *)pattern->last;
+ mask = (const struct rte_flow_item_tcp *)
+ (pattern->mask ? pattern->mask : default_mask);
+
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec || !mc_l4_port_identification) {
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Extract IP protocol to discriminate TCP failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Extract IP protocol to discriminate TCP failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move IP addr before TCP discrimination set failed");
+ return -1;
+ }
+
+ proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
+ proto.ip_proto = IPPROTO_TCP;
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("TCP discrimination rule set failed");
+ return -1;
+ }
- group = attr->group;
+ (*device_configured) |= local_cfg;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
+ if (!spec)
+ return 0;
}
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ if (mask->hdr.src_port) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_SRC,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add TCP_SRC failed.");
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_SRC,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add TCP_SRC failed.");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
- }
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before TCP_PORT_SRC set failed");
+ return -1;
+ }
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_TCP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_TCP_PORT_SRC;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_TCP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_TCP_PORT_DST;
- index++;
-
- priv->extract.qos_key_cfg.num_extracts = index;
- }
-
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_TCP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_TCP_PORT_SRC;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_TCP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_TCP_PORT_DST;
- index++;
-
- priv->extract.fs_key_cfg[group].num_extracts = index;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_SRC,
+ &spec->hdr.src_port,
+ &mask->hdr.src_port,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_TCP_PORT_SRC rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_SRC,
+ &spec->hdr.src_port,
+ &mask->hdr.src_port,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_TCP_PORT_SRC rule data set failed");
+ return -1;
+ }
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_tcp *)pattern->spec;
- last = (const struct rte_flow_item_tcp *)pattern->last;
- mask = (const struct rte_flow_item_tcp *)
- (pattern->mask ? pattern->mask : default_mask);
+ if (mask->hdr.dst_port) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_DST,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add TCP_DST failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_DST,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add TCP_DST failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
- sizeof(uint16_t));
- key_iova += sizeof(uint16_t);
- memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
- sizeof(uint16_t));
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before TCP_PORT_DST set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_DST,
+ &spec->hdr.dst_port,
+ &mask->hdr.dst_port,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_TCP_PORT_DST rule data set failed");
+ return -1;
+ }
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
- sizeof(uint16_t));
- mask_iova += sizeof(uint16_t);
- memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
- sizeof(uint16_t));
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_DST,
+ &spec->hdr.dst_port,
+ &mask->hdr.dst_port,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_TCP_PORT_DST rule data set failed");
+ return -1;
+ }
+ }
- flow->key_size += 2 * sizeof(uint16_t);
+ (*device_configured) |= local_cfg;
- return device_configured;
+ return 0;
}
static int
@@ -1008,12 +2041,11 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_sctp *spec, *mask;
@@ -1022,116 +2054,218 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too. */
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_sctp *)pattern->spec;
+ last = (const struct rte_flow_item_sctp *)pattern->last;
+ mask = (const struct rte_flow_item_sctp *)
+ (pattern->mask ? pattern->mask : default_mask);
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec || !mc_l4_port_identification) {
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Extract IP protocol to discriminate SCTP failed.");
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Extract IP protocol to discriminate SCTP failed.");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before SCTP discrimination set failed");
+ return -1;
+ }
+
+ proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
+ proto.ip_proto = IPPROTO_SCTP;
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("SCTP discrimination rule set failed");
+ return -1;
+ }
+
+ (*device_configured) |= local_cfg;
+
+ if (!spec)
+ return 0;
}
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_SCTP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_SCTP_PORT_SRC;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_SCTP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_SCTP_PORT_DST;
- index++;
-
- priv->extract.qos_key_cfg.num_extracts = index;
- }
-
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_SCTP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_SCTP_PORT_SRC;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_SCTP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_SCTP_PORT_DST;
- index++;
-
- priv->extract.fs_key_cfg[group].num_extracts = index;
+ if (mask->hdr.src_port) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_SRC,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add SCTP_SRC failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_SRC,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add SCTP_SRC failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before SCTP_PORT_SRC set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_SRC,
+ &spec->hdr.src_port,
+ &mask->hdr.src_port,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_SCTP_PORT_SRC rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_SRC,
+ &spec->hdr.src_port,
+ &mask->hdr.src_port,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_SCTP_PORT_SRC rule data set failed");
+ return -1;
+ }
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_sctp *)pattern->spec;
- last = (const struct rte_flow_item_sctp *)pattern->last;
- mask = (const struct rte_flow_item_sctp *)
- (pattern->mask ? pattern->mask : default_mask);
+ if (mask->hdr.dst_port) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_DST,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add SCTP_DST failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_DST,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add SCTP_DST failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before SCTP_PORT_DST set failed");
+ return -1;
+ }
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
- sizeof(uint16_t));
- key_iova += sizeof(uint16_t);
- memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
- sizeof(uint16_t));
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_DST,
+ &spec->hdr.dst_port,
+ &mask->hdr.dst_port,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_SCTP_PORT_DST rule data set failed");
+ return -1;
+ }
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
- sizeof(uint16_t));
- mask_iova += sizeof(uint16_t);
- memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
- sizeof(uint16_t));
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_DST,
+ &spec->hdr.dst_port,
+ &mask->hdr.dst_port,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_SCTP_PORT_DST rule data set failed");
+ return -1;
+ }
+ }
- flow->key_size += 2 * sizeof(uint16_t);
+ (*device_configured) |= local_cfg;
- return device_configured;
+ return 0;
}
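The SCTP handler above repeats one pattern for every field: search the key-generation profile for a (protocol, field) extract, add it only if absent, and flag the table for reconfiguration when the profile changed. A minimal standalone sketch of that pattern follows; the names, the integer protocol/field encoding, and the 10-entry limit are illustrative stand-ins, not the driver's actual `dpaa2_flow_extract_*` API.

```c
/* Simplified model of the search-then-add extract pattern: an extract
 * for a (protocol, field) pair is appended only once, and the caller
 * learns whether the table must be reconfigured. */
#include <assert.h>

#define MAX_EXTRACTS 10 /* stand-in for DPKG_MAX_NUM_OF_EXTRACTS */

struct extract_list {
	int prot[MAX_EXTRACTS];
	int field[MAX_EXTRACTS];
	int count;
};

/* Return the index of (prot, field), or -1 if absent. */
static int extract_search(const struct extract_list *l, int prot, int field)
{
	for (int i = 0; i < l->count; i++)
		if (l->prot[i] == prot && l->field[i] == field)
			return i;
	return -1;
}

/* Append (prot, field); return 0 on success, -1 when the list is full. */
static int extract_add(struct extract_list *l, int prot, int field)
{
	if (l->count >= MAX_EXTRACTS)
		return -1;
	l->prot[l->count] = prot;
	l->field[l->count] = field;
	l->count++;
	return 0;
}

/* Ensure the extract exists; set *reconfigure when the list changed. */
static int extract_ensure(struct extract_list *l, int prot, int field,
			  int *reconfigure)
{
	if (extract_search(l, prot, field) >= 0)
		return 0;
	if (extract_add(l, prot, field))
		return -1;
	*reconfigure = 1;
	return 0;
}
```

Calling `extract_ensure()` twice with the same pair changes the list only on the first call, which is why the driver can safely run the same code for both the QoS and the per-TC (FS) profiles.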
static int
@@ -1140,12 +2274,11 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_gre *spec, *mask;
@@ -1154,96 +2287,413 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too. */
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_gre *)pattern->spec;
+ last = (const struct rte_flow_item_gre *)pattern->last;
+ mask = (const struct rte_flow_item_gre *)
+ (pattern->mask ? pattern->mask : default_mask);
+
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec) {
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Extract IP protocol to discriminate GRE failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Extract IP protocol to discriminate GRE failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before GRE discrimination set failed");
+ return -1;
+ }
+
+ proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
+ proto.ip_proto = IPPROTO_GRE;
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("GRE discrimination rule set failed");
+ return -1;
+ }
+
+ (*device_configured) |= local_cfg;
+
+ return 0;
}
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ if (!mask->protocol)
+ return 0;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_GRE, NH_FLD_GRE_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_GRE,
+ NH_FLD_GRE_TYPE,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add GRE_TYPE failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_GRE, NH_FLD_GRE_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_GRE,
+ NH_FLD_GRE_TYPE,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add GRE_TYPE failed.");
+
+ return -1;
}
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before GRE_TYPE set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_GRE,
+ NH_FLD_GRE_TYPE,
+ &spec->protocol,
+ &mask->protocol,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_GRE_TYPE rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_GRE,
+ NH_FLD_GRE_TYPE,
+ &spec->protocol,
+ &mask->protocol,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_GRE_TYPE rule data set failed");
+ return -1;
}
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
+ (*device_configured) |= local_cfg;
+
+ return 0;
+}
+
+/* An existing QoS/FS entry that matches on IP address(es) needs an
+ * update after new extract(s) are inserted before the IP address
+ * extract(s), because the address offsets within the key shift.
+ */
+static int
+dpaa2_flow_entry_update(
+ struct dpaa2_dev_priv *priv, uint8_t tc_id)
+{
+ struct rte_flow *curr = LIST_FIRST(&priv->flows);
+ struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
+ int ret;
+ int qos_ipsrc_offset = -1, qos_ipdst_offset = -1;
+ int fs_ipsrc_offset = -1, fs_ipdst_offset = -1;
+ struct dpaa2_key_extract *qos_key_extract =
+ &priv->extract.qos_key_extract;
+ struct dpaa2_key_extract *tc_key_extract =
+ &priv->extract.tc_key_extract[tc_id];
+ char ipsrc_key[NH_FLD_IPV6_ADDR_SIZE];
+ char ipdst_key[NH_FLD_IPV6_ADDR_SIZE];
+ char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE];
+ char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE];
+ int extend = -1, extend1, size;
+
+ while (curr) {
+ if (curr->ipaddr_rule.ipaddr_type ==
+ FLOW_NONE_IPADDR) {
+ curr = LIST_NEXT(curr, next);
continue;
+ }
+
+ if (curr->ipaddr_rule.ipaddr_type ==
+ FLOW_IPV4_ADDR) {
+ qos_ipsrc_offset =
+ qos_key_extract->key_info.ipv4_src_offset;
+ qos_ipdst_offset =
+ qos_key_extract->key_info.ipv4_dst_offset;
+ fs_ipsrc_offset =
+ tc_key_extract->key_info.ipv4_src_offset;
+ fs_ipdst_offset =
+ tc_key_extract->key_info.ipv4_dst_offset;
+ size = NH_FLD_IPV4_ADDR_SIZE;
} else {
- entry_found = 1;
- break;
+ qos_ipsrc_offset =
+ qos_key_extract->key_info.ipv6_src_offset;
+ qos_ipdst_offset =
+ qos_key_extract->key_info.ipv6_dst_offset;
+ fs_ipsrc_offset =
+ tc_key_extract->key_info.ipv6_src_offset;
+ fs_ipdst_offset =
+ tc_key_extract->key_info.ipv6_dst_offset;
+ size = NH_FLD_IPV6_ADDR_SIZE;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
- }
+ ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+ priv->token, &curr->qos_rule);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS entry remove failed.");
+ return -1;
+ }
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
+ extend = -1;
+
+ if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
+ RTE_ASSERT(qos_ipsrc_offset >=
+ curr->ipaddr_rule.qos_ipsrc_offset);
+ extend1 = qos_ipsrc_offset -
+ curr->ipaddr_rule.qos_ipsrc_offset;
+ if (extend >= 0)
+ RTE_ASSERT(extend == extend1);
+ else
+ extend = extend1;
+
+ memcpy(ipsrc_key,
+ (char *)(size_t)curr->qos_rule.key_iova +
+ curr->ipaddr_rule.qos_ipsrc_offset,
+ size);
+ memset((char *)(size_t)curr->qos_rule.key_iova +
+ curr->ipaddr_rule.qos_ipsrc_offset,
+ 0, size);
+
+ memcpy(ipsrc_mask,
+ (char *)(size_t)curr->qos_rule.mask_iova +
+ curr->ipaddr_rule.qos_ipsrc_offset,
+ size);
+ memset((char *)(size_t)curr->qos_rule.mask_iova +
+ curr->ipaddr_rule.qos_ipsrc_offset,
+ 0, size);
+
+ curr->ipaddr_rule.qos_ipsrc_offset = qos_ipsrc_offset;
+ }
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_GRE;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_GRE_TYPE;
- index++;
+ if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
+ RTE_ASSERT(qos_ipdst_offset >=
+ curr->ipaddr_rule.qos_ipdst_offset);
+ extend1 = qos_ipdst_offset -
+ curr->ipaddr_rule.qos_ipdst_offset;
+ if (extend >= 0)
+ RTE_ASSERT(extend == extend1);
+ else
+ extend = extend1;
+
+ memcpy(ipdst_key,
+ (char *)(size_t)curr->qos_rule.key_iova +
+ curr->ipaddr_rule.qos_ipdst_offset,
+ size);
+ memset((char *)(size_t)curr->qos_rule.key_iova +
+ curr->ipaddr_rule.qos_ipdst_offset,
+ 0, size);
+
+ memcpy(ipdst_mask,
+ (char *)(size_t)curr->qos_rule.mask_iova +
+ curr->ipaddr_rule.qos_ipdst_offset,
+ size);
+ memset((char *)(size_t)curr->qos_rule.mask_iova +
+ curr->ipaddr_rule.qos_ipdst_offset,
+ 0, size);
+
+ curr->ipaddr_rule.qos_ipdst_offset = qos_ipdst_offset;
+ }
- priv->extract.qos_key_cfg.num_extracts = index;
- }
+ if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
+ memcpy((char *)(size_t)curr->qos_rule.key_iova +
+ curr->ipaddr_rule.qos_ipsrc_offset,
+ ipsrc_key,
+ size);
+ memcpy((char *)(size_t)curr->qos_rule.mask_iova +
+ curr->ipaddr_rule.qos_ipsrc_offset,
+ ipsrc_mask,
+ size);
+ }
+ if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
+ memcpy((char *)(size_t)curr->qos_rule.key_iova +
+ curr->ipaddr_rule.qos_ipdst_offset,
+ ipdst_key,
+ size);
+ memcpy((char *)(size_t)curr->qos_rule.mask_iova +
+ curr->ipaddr_rule.qos_ipdst_offset,
+ ipdst_mask,
+ size);
+ }
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_GRE;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_GRE_TYPE;
- index++;
+ if (extend >= 0)
+ curr->qos_rule.key_size += extend;
- priv->extract.fs_key_cfg[group].num_extracts = index;
- }
+ ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
+ priv->token, &curr->qos_rule,
+ curr->tc_id, curr->qos_index,
+ 0, 0);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS entry update failed.");
+ return -1;
+ }
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_gre *)pattern->spec;
- last = (const struct rte_flow_item_gre *)pattern->last;
- mask = (const struct rte_flow_item_gre *)
- (pattern->mask ? pattern->mask : default_mask);
+ if (curr->action != RTE_FLOW_ACTION_TYPE_QUEUE) {
+ curr = LIST_NEXT(curr, next);
+ continue;
+ }
+
+ extend = -1;
+
+ ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW,
+ priv->token, curr->tc_id, &curr->fs_rule);
+ if (ret) {
+ DPAA2_PMD_ERR("FS entry remove failed.");
+ return -1;
+ }
+
+ if (curr->ipaddr_rule.fs_ipsrc_offset >= 0 &&
+ tc_id == curr->tc_id) {
+ RTE_ASSERT(fs_ipsrc_offset >=
+ curr->ipaddr_rule.fs_ipsrc_offset);
+ extend1 = fs_ipsrc_offset -
+ curr->ipaddr_rule.fs_ipsrc_offset;
+ if (extend >= 0)
+ RTE_ASSERT(extend == extend1);
+ else
+ extend = extend1;
+
+ memcpy(ipsrc_key,
+ (char *)(size_t)curr->fs_rule.key_iova +
+ curr->ipaddr_rule.fs_ipsrc_offset,
+ size);
+ memset((char *)(size_t)curr->fs_rule.key_iova +
+ curr->ipaddr_rule.fs_ipsrc_offset,
+ 0, size);
+
+ memcpy(ipsrc_mask,
+ (char *)(size_t)curr->fs_rule.mask_iova +
+ curr->ipaddr_rule.fs_ipsrc_offset,
+ size);
+ memset((char *)(size_t)curr->fs_rule.mask_iova +
+ curr->ipaddr_rule.fs_ipsrc_offset,
+ 0, size);
+
+ curr->ipaddr_rule.fs_ipsrc_offset = fs_ipsrc_offset;
+ }
+
+ if (curr->ipaddr_rule.fs_ipdst_offset >= 0 &&
+ tc_id == curr->tc_id) {
+ RTE_ASSERT(fs_ipdst_offset >=
+ curr->ipaddr_rule.fs_ipdst_offset);
+ extend1 = fs_ipdst_offset -
+ curr->ipaddr_rule.fs_ipdst_offset;
+ if (extend >= 0)
+ RTE_ASSERT(extend == extend1);
+ else
+ extend = extend1;
+
+ memcpy(ipdst_key,
+ (char *)(size_t)curr->fs_rule.key_iova +
+ curr->ipaddr_rule.fs_ipdst_offset,
+ size);
+ memset((char *)(size_t)curr->fs_rule.key_iova +
+ curr->ipaddr_rule.fs_ipdst_offset,
+ 0, size);
+
+ memcpy(ipdst_mask,
+ (char *)(size_t)curr->fs_rule.mask_iova +
+ curr->ipaddr_rule.fs_ipdst_offset,
+ size);
+ memset((char *)(size_t)curr->fs_rule.mask_iova +
+ curr->ipaddr_rule.fs_ipdst_offset,
+ 0, size);
+
+ curr->ipaddr_rule.fs_ipdst_offset = fs_ipdst_offset;
+ }
+
+ if (curr->ipaddr_rule.fs_ipsrc_offset >= 0) {
+ memcpy((char *)(size_t)curr->fs_rule.key_iova +
+ curr->ipaddr_rule.fs_ipsrc_offset,
+ ipsrc_key,
+ size);
+ memcpy((char *)(size_t)curr->fs_rule.mask_iova +
+ curr->ipaddr_rule.fs_ipsrc_offset,
+ ipsrc_mask,
+ size);
+ }
+ if (curr->ipaddr_rule.fs_ipdst_offset >= 0) {
+ memcpy((char *)(size_t)curr->fs_rule.key_iova +
+ curr->ipaddr_rule.fs_ipdst_offset,
+ ipdst_key,
+ size);
+ memcpy((char *)(size_t)curr->fs_rule.mask_iova +
+ curr->ipaddr_rule.fs_ipdst_offset,
+ ipdst_mask,
+ size);
+ }
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(&spec->protocol),
- sizeof(rte_be16_t));
+ if (extend >= 0)
+ curr->fs_rule.key_size += extend;
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(&mask->protocol),
- sizeof(rte_be16_t));
+ ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
+ priv->token, curr->tc_id, curr->fs_index,
+ &curr->fs_rule, &curr->action_cfg);
+ if (ret) {
+ DPAA2_PMD_ERR("FS entry update failed.");
+ return -1;
+ }
- flow->key_size += sizeof(rte_be16_t);
+ curr = LIST_NEXT(curr, next);
+ }
- return device_configured;
+ return 0;
}
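The core mechanic of `dpaa2_flow_entry_update()` above is moving already-written IP address bytes of a rule key from their old offset to the new, larger offset that results from inserting extracts in front of them, then growing the key size by the difference. A hedged, self-contained sketch of that single step, with an invented helper name and a plain byte buffer in place of the rule's DMA'ble key memory:

```c
/* Sketch of the key-update step: shift `size` bytes of key material
 * from old_off to new_off (new_off >= old_off), zeroing the vacated
 * bytes, and return the amount by which the key grew. */
#include <assert.h>
#include <string.h>

static int key_move_field(unsigned char *key, int old_off, int new_off,
			  int size)
{
	unsigned char tmp[16]; /* large enough for an IPv6 address */
	int extend = new_off - old_off;

	assert(extend >= 0 && size <= (int)sizeof(tmp));
	/* Save, clear, then rewrite at the new offset, exactly as the
	 * driver does with memcpy/memset on key_iova and mask_iova. */
	memcpy(tmp, key + old_off, size);
	memset(key + old_off, 0, size);
	memcpy(key + new_off, tmp, size);
	return extend;
}
```

In the driver the same shift is applied to both the key and the mask, and the returned growth corresponds to the `extend` added to `qos_rule.key_size` / `fs_rule.key_size` before the entry is re-added to hardware.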
static int
@@ -1262,7 +2712,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
struct dpni_attr nic_attr;
struct dpni_rx_tc_dist_cfg tc_cfg;
struct dpni_qos_tbl_cfg qos_cfg;
- struct dpkg_profile_cfg key_cfg;
struct dpni_fs_action_cfg action;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
@@ -1273,75 +2722,77 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
while (!end_of_list) {
switch (pattern[i].type) {
case RTE_FLOW_ITEM_TYPE_ETH:
- is_keycfg_configured = dpaa2_configure_flow_eth(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_eth(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("ETH flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
- is_keycfg_configured = dpaa2_configure_flow_vlan(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_vlan(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("VLAN flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
- is_keycfg_configured = dpaa2_configure_flow_ipv4(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
- break;
case RTE_FLOW_ITEM_TYPE_IPV6:
- is_keycfg_configured = dpaa2_configure_flow_ipv6(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_generic_ip(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("IP flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_ICMP:
- is_keycfg_configured = dpaa2_configure_flow_icmp(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_icmp(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("ICMP flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_UDP:
- is_keycfg_configured = dpaa2_configure_flow_udp(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_udp(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("UDP flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_TCP:
- is_keycfg_configured = dpaa2_configure_flow_tcp(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_tcp(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("TCP flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_SCTP:
- is_keycfg_configured = dpaa2_configure_flow_sctp(flow,
- dev, attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_sctp(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("SCTP flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_GRE:
- is_keycfg_configured = dpaa2_configure_flow_gre(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_gre(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("GRE flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_END:
end_of_list = 1;
@@ -1365,8 +2816,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
action.flow_id = flow->flow_id;
if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- if (dpkg_prepare_key_cfg(&priv->extract.qos_key_cfg,
- (uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
+ if (dpkg_prepare_key_cfg(&priv->extract.qos_key_extract.dpkg,
+ (uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
DPAA2_PMD_ERR(
"Unable to prepare extract parameters");
return -1;
@@ -1377,7 +2828,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
qos_cfg.keep_entries = true;
qos_cfg.key_cfg_iova = (size_t)priv->extract.qos_extract_param;
ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
- priv->token, &qos_cfg);
+ priv->token, &qos_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
"Distribution cannot be configured.(%d)"
@@ -1386,8 +2837,10 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
}
if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- if (dpkg_prepare_key_cfg(&priv->extract.fs_key_cfg[flow->tc_id],
- (uint8_t *)(size_t)priv->extract.fs_extract_param[flow->tc_id]) < 0) {
+ if (dpkg_prepare_key_cfg(
+ &priv->extract.tc_key_extract[flow->tc_id].dpkg,
+ (uint8_t *)(size_t)priv->extract
+ .tc_extract_param[flow->tc_id]) < 0) {
DPAA2_PMD_ERR(
"Unable to prepare extract parameters");
return -1;
@@ -1397,7 +2850,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
tc_cfg.dist_size = priv->nb_rx_queues / priv->num_rx_tc;
tc_cfg.dist_mode = DPNI_DIST_MODE_FS;
tc_cfg.key_cfg_iova =
- (uint64_t)priv->extract.fs_extract_param[flow->tc_id];
+ (uint64_t)priv->extract.tc_extract_param[flow->tc_id];
tc_cfg.fs_cfg.miss_action = DPNI_FS_MISS_DROP;
tc_cfg.fs_cfg.keep_entries = true;
ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW,
@@ -1422,27 +2875,114 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
action.flow_id = action.flow_id % nic_attr.num_rx_tcs;
- index = flow->index + (flow->tc_id * nic_attr.fs_entries);
- flow->rule.key_size = flow->key_size;
+
+ if (!priv->qos_index) {
+ priv->qos_index = rte_zmalloc(0,
+ nic_attr.qos_entries, 64);
+ if (!priv->qos_index)
+ return -1;
+ }
+ for (index = 0; index < nic_attr.qos_entries; index++) {
+ if (!priv->qos_index[index]) {
+ priv->qos_index[index] = 1;
+ break;
+ }
+ }
+ if (index >= nic_attr.qos_entries) {
+ DPAA2_PMD_ERR("QoS table with %d entries full",
+ nic_attr.qos_entries);
+ return -1;
+ }
+ flow->qos_rule.key_size = priv->extract
+ .qos_key_extract.key_info.key_total_size;
+ if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) {
+ if (flow->ipaddr_rule.qos_ipdst_offset >=
+ flow->ipaddr_rule.qos_ipsrc_offset) {
+ flow->qos_rule.key_size =
+ flow->ipaddr_rule.qos_ipdst_offset +
+ NH_FLD_IPV4_ADDR_SIZE;
+ } else {
+ flow->qos_rule.key_size =
+ flow->ipaddr_rule.qos_ipsrc_offset +
+ NH_FLD_IPV4_ADDR_SIZE;
+ }
+ } else if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV6_ADDR) {
+ if (flow->ipaddr_rule.qos_ipdst_offset >=
+ flow->ipaddr_rule.qos_ipsrc_offset) {
+ flow->qos_rule.key_size =
+ flow->ipaddr_rule.qos_ipdst_offset +
+ NH_FLD_IPV6_ADDR_SIZE;
+ } else {
+ flow->qos_rule.key_size =
+ flow->ipaddr_rule.qos_ipsrc_offset +
+ NH_FLD_IPV6_ADDR_SIZE;
+ }
+ }
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
- priv->token, &flow->rule,
+ priv->token, &flow->qos_rule,
flow->tc_id, index,
0, 0);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in adding entry to QoS table(%d)", ret);
+ priv->qos_index[index] = 0;
return ret;
}
+ flow->qos_index = index;
/* Then Configure FS table */
+ if (!priv->fs_index) {
+ priv->fs_index = rte_zmalloc(0,
+ nic_attr.fs_entries, 64);
+ if (!priv->fs_index)
+ return -1;
+ }
+ for (index = 0; index < nic_attr.fs_entries; index++) {
+ if (!priv->fs_index[index]) {
+ priv->fs_index[index] = 1;
+ break;
+ }
+ }
+ if (index >= nic_attr.fs_entries) {
+ DPAA2_PMD_ERR("FS table with %d entries full",
+ nic_attr.fs_entries);
+ return -1;
+ }
+ flow->fs_rule.key_size = priv->extract
+ .tc_key_extract[attr->group].key_info.key_total_size;
+ if (flow->ipaddr_rule.ipaddr_type ==
+ FLOW_IPV4_ADDR) {
+ if (flow->ipaddr_rule.fs_ipdst_offset >=
+ flow->ipaddr_rule.fs_ipsrc_offset) {
+ flow->fs_rule.key_size =
+ flow->ipaddr_rule.fs_ipdst_offset +
+ NH_FLD_IPV4_ADDR_SIZE;
+ } else {
+ flow->fs_rule.key_size =
+ flow->ipaddr_rule.fs_ipsrc_offset +
+ NH_FLD_IPV4_ADDR_SIZE;
+ }
+ } else if (flow->ipaddr_rule.ipaddr_type ==
+ FLOW_IPV6_ADDR) {
+ if (flow->ipaddr_rule.fs_ipdst_offset >=
+ flow->ipaddr_rule.fs_ipsrc_offset) {
+ flow->fs_rule.key_size =
+ flow->ipaddr_rule.fs_ipdst_offset +
+ NH_FLD_IPV6_ADDR_SIZE;
+ } else {
+ flow->fs_rule.key_size =
+ flow->ipaddr_rule.fs_ipsrc_offset +
+ NH_FLD_IPV6_ADDR_SIZE;
+ }
+ }
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token,
- flow->tc_id, flow->index,
- &flow->rule, &action);
+ flow->tc_id, index,
+ &flow->fs_rule, &action);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in adding entry to FS table(%d)", ret);
+ priv->fs_index[index] = 0;
return ret;
}
+ flow->fs_index = index;
+ memcpy(&flow->action_cfg, &action,
+ sizeof(struct dpni_fs_action_cfg));
break;
case RTE_FLOW_ACTION_TYPE_RSS:
ret = dpni_get_attributes(dpni, CMD_PRI_LOW,
@@ -1465,7 +3005,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
flow->action = RTE_FLOW_ACTION_TYPE_RSS;
ret = dpaa2_distset_to_dpkg_profile_cfg(rss_conf->types,
- &key_cfg);
+ &priv->extract.tc_key_extract[flow->tc_id].dpkg);
if (ret < 0) {
DPAA2_PMD_ERR(
"unable to set flow distribution.please check queue config\n");
@@ -1479,7 +3019,9 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return -1;
}
- if (dpkg_prepare_key_cfg(&key_cfg, (uint8_t *)param) < 0) {
+ if (dpkg_prepare_key_cfg(
+ &priv->extract.tc_key_extract[flow->tc_id].dpkg,
+ (uint8_t *)param) < 0) {
DPAA2_PMD_ERR(
"Unable to prepare extract parameters");
rte_free((void *)param);
@@ -1503,8 +3045,9 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
rte_free((void *)param);
- if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- if (dpkg_prepare_key_cfg(&priv->extract.qos_key_cfg,
+ if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
+ if (dpkg_prepare_key_cfg(
+ &priv->extract.qos_key_extract.dpkg,
(uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
DPAA2_PMD_ERR(
"Unable to prepare extract parameters");
@@ -1514,29 +3057,47 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
sizeof(struct dpni_qos_tbl_cfg));
qos_cfg.discard_on_miss = true;
qos_cfg.keep_entries = true;
- qos_cfg.key_cfg_iova = (size_t)priv->extract.qos_extract_param;
+ qos_cfg.key_cfg_iova =
+ (size_t)priv->extract.qos_extract_param;
ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
priv->token, &qos_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Distribution can not be configured(%d)\n",
+ "Distribution can't be configured %d\n",
ret);
return -1;
}
}
/* Add Rule into QoS table */
- index = flow->index + (flow->tc_id * nic_attr.fs_entries);
- flow->rule.key_size = flow->key_size;
+ if (!priv->qos_index) {
+ priv->qos_index = rte_zmalloc(0,
+ nic_attr.qos_entries, 64);
+ if (!priv->qos_index)
+ return -1;
+ }
+ for (index = 0; index < nic_attr.qos_entries; index++) {
+ if (!priv->qos_index[index]) {
+ priv->qos_index[index] = 1;
+ break;
+ }
+ }
+ if (index >= nic_attr.qos_entries) {
+ DPAA2_PMD_ERR("QoS table with %d entries full",
+ nic_attr.qos_entries);
+ return -1;
+ }
+ flow->qos_rule.key_size =
+ priv->extract.qos_key_extract.key_info.key_total_size;
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
- &flow->rule, flow->tc_id,
+ &flow->qos_rule, flow->tc_id,
index, 0, 0);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in entry addition in QoS table(%d)",
ret);
+ priv->qos_index[index] = 0;
return ret;
}
+ flow->qos_index = index;
break;
case RTE_FLOW_ACTION_TYPE_END:
end_of_list = 1;
@@ -1550,6 +3111,12 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
if (!ret) {
+ ret = dpaa2_flow_entry_update(priv, flow->tc_id);
+ if (ret) {
+ DPAA2_PMD_ERR("Flow entry update failed.");
+
+ return -1;
+ }
/* New rules are inserted. */
if (!curr) {
LIST_INSERT_HEAD(&priv->flows, flow, next);
@@ -1625,15 +3192,15 @@ dpaa2_dev_update_default_mask(const struct rte_flow_item *pattern)
}
static inline int
-dpaa2_dev_verify_patterns(struct dpaa2_dev_priv *dev_priv,
- const struct rte_flow_item pattern[])
+dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[])
{
- unsigned int i, j, k, is_found = 0;
+ unsigned int i, j, is_found = 0;
int ret = 0;
for (j = 0; pattern[j].type != RTE_FLOW_ITEM_TYPE_END; j++) {
for (i = 0; i < RTE_DIM(dpaa2_supported_pattern_type); i++) {
- if (dpaa2_supported_pattern_type[i] == pattern[j].type) {
+ if (dpaa2_supported_pattern_type[i]
+ == pattern[j].type) {
is_found = 1;
break;
}
@@ -1653,18 +3220,6 @@ dpaa2_dev_verify_patterns(struct dpaa2_dev_priv *dev_priv,
dpaa2_dev_update_default_mask(&pattern[j]);
}
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too. */
- for (i = 0; pattern[i].type != RTE_FLOW_ITEM_TYPE_END; i++) {
- for (j = 0; j < MAX_TCS + 1; j++) {
- for (k = 0; k < DPKG_MAX_NUM_OF_EXTRACTS; k++) {
- if (dev_priv->pattern[j].pattern_type[k] == pattern[i].type)
- break;
- }
- if (dev_priv->pattern[j].item_count >= DPKG_MAX_NUM_OF_EXTRACTS)
- ret = -ENOTSUP;
- }
- }
return ret;
}
@@ -1687,7 +3242,8 @@ dpaa2_dev_verify_actions(const struct rte_flow_action actions[])
}
}
for (j = 0; actions[j].type != RTE_FLOW_ACTION_TYPE_END; j++) {
- if ((actions[j].type != RTE_FLOW_ACTION_TYPE_DROP) && (!actions[j].conf))
+ if ((actions[j].type
+ != RTE_FLOW_ACTION_TYPE_DROP) && (!actions[j].conf))
ret = -EINVAL;
}
return ret;
@@ -1729,7 +3285,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
goto not_valid_params;
}
/* Verify input pattern list */
- ret = dpaa2_dev_verify_patterns(priv, pattern);
+ ret = dpaa2_dev_verify_patterns(pattern);
if (ret < 0) {
DPAA2_PMD_ERR(
"Invalid pattern list is given\n");
@@ -1763,28 +3319,54 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
size_t key_iova = 0, mask_iova = 0;
int ret;
- flow = rte_malloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
+ flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
if (!flow) {
DPAA2_PMD_ERR("Failure to allocate memory for flow");
goto mem_failure;
}
/* Allocate DMA'ble memory to write the rules */
- key_iova = (size_t)rte_malloc(NULL, 256, 64);
+ key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
+ if (!key_iova) {
+ DPAA2_PMD_ERR(
+ "Memory allocation failure for rule configuration\n");
+ goto mem_failure;
+ }
+ mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
+ if (!mask_iova) {
+ DPAA2_PMD_ERR(
+ "Memory allocation failure for rule configuration\n");
+ goto mem_failure;
+ }
+
+ flow->qos_rule.key_iova = key_iova;
+ flow->qos_rule.mask_iova = mask_iova;
+
+ /* Allocate DMA'ble memory to write the rules */
+ key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
if (!key_iova) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configration\n");
goto mem_failure;
}
- mask_iova = (size_t)rte_malloc(NULL, 256, 64);
+ mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
if (!mask_iova) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configration\n");
goto mem_failure;
}
- flow->rule.key_iova = key_iova;
- flow->rule.mask_iova = mask_iova;
- flow->key_size = 0;
+ flow->fs_rule.key_iova = key_iova;
+ flow->fs_rule.mask_iova = mask_iova;
+
+ flow->ipaddr_rule.ipaddr_type = FLOW_NONE_IPADDR;
+ flow->ipaddr_rule.qos_ipsrc_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ flow->ipaddr_rule.qos_ipdst_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ flow->ipaddr_rule.fs_ipsrc_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ flow->ipaddr_rule.fs_ipdst_offset =
+ IP_ADDRESS_OFFSET_INVALID;
switch (dpaa2_filter_type) {
case RTE_ETH_FILTER_GENERIC:
@@ -1832,25 +3414,27 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
case RTE_FLOW_ACTION_TYPE_QUEUE:
/* Remove entry from QoS table first */
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
- &flow->rule);
+ &flow->qos_rule);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in adding entry to QoS table(%d)", ret);
goto error;
}
+ priv->qos_index[flow->qos_index] = 0;
/* Then remove entry from FS table */
ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW, priv->token,
- flow->tc_id, &flow->rule);
+ flow->tc_id, &flow->fs_rule);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in entry addition in FS table(%d)", ret);
goto error;
}
+ priv->fs_index[flow->fs_index] = 0;
break;
case RTE_FLOW_ACTION_TYPE_RSS:
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
- &flow->rule);
+ &flow->qos_rule);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in entry addition in QoS table(%d)", ret);
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 25/37] net/dpaa2: sanity check for flow extracts
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (23 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 24/37] net/dpaa2: key extracts of flow API Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 26/37] net/dpaa2: free flow rule memory Hemant Agrawal
` (13 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Define the supported extract fields for each protocol and check the fields
of each pattern against them before building the extracts of the QoS/FS table.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 7 +-
drivers/net/dpaa2/dpaa2_flow.c | 250 +++++++++++++++++++++++++------
2 files changed, 204 insertions(+), 53 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 60c2ded40..cd8555246 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2612,11 +2612,8 @@ dpaa2_dev_uninit(struct rte_eth_dev *eth_dev)
eth_dev->process_private = NULL;
rte_free(dpni);
- for (i = 0; i < MAX_TCS; i++) {
- if (priv->extract.tc_extract_param[i])
- rte_free((void *)
- (size_t)priv->extract.tc_extract_param[i]);
- }
+ for (i = 0; i < MAX_TCS; i++)
+ rte_free((void *)(size_t)priv->extract.tc_extract_param[i]);
if (priv->extract.qos_extract_param)
rte_free((void *)(size_t)priv->extract.qos_extract_param);
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 779cb64ab..507a5d0e3 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -87,7 +87,68 @@ enum rte_flow_action_type dpaa2_supported_action_type[] = {
#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1)
enum rte_filter_type dpaa2_filter_type = RTE_ETH_FILTER_NONE;
-static const void *default_mask;
+
+#ifndef __cplusplus
+static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
+ .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+ .src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+ .type = RTE_BE16(0xffff),
+};
+
+static const struct rte_flow_item_vlan dpaa2_flow_item_vlan_mask = {
+ .tci = RTE_BE16(0xffff),
+};
+
+static const struct rte_flow_item_ipv4 dpaa2_flow_item_ipv4_mask = {
+ .hdr.src_addr = RTE_BE32(0xffffffff),
+ .hdr.dst_addr = RTE_BE32(0xffffffff),
+ .hdr.next_proto_id = 0xff,
+};
+
+static const struct rte_flow_item_ipv6 dpaa2_flow_item_ipv6_mask = {
+ .hdr = {
+ .src_addr =
+ "\xff\xff\xff\xff\xff\xff\xff\xff"
+ "\xff\xff\xff\xff\xff\xff\xff\xff",
+ .dst_addr =
+ "\xff\xff\xff\xff\xff\xff\xff\xff"
+ "\xff\xff\xff\xff\xff\xff\xff\xff",
+ .proto = 0xff
+ },
+};
+
+static const struct rte_flow_item_icmp dpaa2_flow_item_icmp_mask = {
+ .hdr.icmp_type = 0xff,
+ .hdr.icmp_code = 0xff,
+};
+
+static const struct rte_flow_item_udp dpaa2_flow_item_udp_mask = {
+ .hdr = {
+ .src_port = RTE_BE16(0xffff),
+ .dst_port = RTE_BE16(0xffff),
+ },
+};
+
+static const struct rte_flow_item_tcp dpaa2_flow_item_tcp_mask = {
+ .hdr = {
+ .src_port = RTE_BE16(0xffff),
+ .dst_port = RTE_BE16(0xffff),
+ },
+};
+
+static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
+ .hdr = {
+ .src_port = RTE_BE16(0xffff),
+ .dst_port = RTE_BE16(0xffff),
+ },
+};
+
+static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
+ .protocol = RTE_BE16(0xffff),
+};
+
+#endif
+
static inline void dpaa2_flow_extract_key_set(
struct dpaa2_key_info *key_info, int index, uint8_t size)
@@ -555,6 +616,67 @@ dpaa2_flow_rule_move_ipaddr_tail(
return 0;
}
+static int
+dpaa2_flow_extract_support(
+ const uint8_t *mask_src,
+ enum rte_flow_item_type type)
+{
+ char mask[64];
+ int i, size = 0;
+ const char *mask_support = 0;
+
+ switch (type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ mask_support = (const char *)&dpaa2_flow_item_eth_mask;
+ size = sizeof(struct rte_flow_item_eth);
+ break;
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ mask_support = (const char *)&dpaa2_flow_item_vlan_mask;
+ size = sizeof(struct rte_flow_item_vlan);
+ break;
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ mask_support = (const char *)&dpaa2_flow_item_ipv4_mask;
+ size = sizeof(struct rte_flow_item_ipv4);
+ break;
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ mask_support = (const char *)&dpaa2_flow_item_ipv6_mask;
+ size = sizeof(struct rte_flow_item_ipv6);
+ break;
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ mask_support = (const char *)&dpaa2_flow_item_icmp_mask;
+ size = sizeof(struct rte_flow_item_icmp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ mask_support = (const char *)&dpaa2_flow_item_udp_mask;
+ size = sizeof(struct rte_flow_item_udp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ mask_support = (const char *)&dpaa2_flow_item_tcp_mask;
+ size = sizeof(struct rte_flow_item_tcp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ mask_support = (const char *)&dpaa2_flow_item_sctp_mask;
+ size = sizeof(struct rte_flow_item_sctp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_GRE:
+ mask_support = (const char *)&dpaa2_flow_item_gre_mask;
+ size = sizeof(struct rte_flow_item_gre);
+ break;
+ default:
+ return -1;
+ }
+
+ memcpy(mask, mask_support, size);
+
+ for (i = 0; i < size; i++)
+ mask[i] = (mask[i] | mask_src[i]);
+
+ if (memcmp(mask, mask_support, size))
+ return -1;
+
+ return 0;
+}
+
static int
dpaa2_configure_flow_eth(struct rte_flow *flow,
struct rte_eth_dev *dev,
@@ -580,7 +702,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
spec = (const struct rte_flow_item_eth *)pattern->spec;
last = (const struct rte_flow_item_eth *)pattern->last;
mask = (const struct rte_flow_item_eth *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask : &dpaa2_flow_item_eth_mask);
if (!spec) {
/* Don't care any field of eth header,
* only care eth protocol.
@@ -593,6 +715,13 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
flow->tc_id = group;
flow->tc_index = attr->priority;
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_ETH)) {
+ DPAA2_PMD_WARN("Extract field(s) of ethernet not support.");
+
+ return -1;
+ }
+
if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) {
index = dpaa2_flow_extract_search(
&priv->extract.qos_key_extract.dpkg,
@@ -819,7 +948,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
spec = (const struct rte_flow_item_vlan *)pattern->spec;
last = (const struct rte_flow_item_vlan *)pattern->last;
mask = (const struct rte_flow_item_vlan *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask : &dpaa2_flow_item_vlan_mask);
/* Get traffic class index and flow id to be configured */
flow->tc_id = group;
@@ -886,6 +1015,13 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
return 0;
}
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_VLAN)) {
+ DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
+
+ return -1;
+ }
+
if (!mask->tci)
return 0;
@@ -990,11 +1126,13 @@ dpaa2_configure_flow_generic_ip(
if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
spec_ipv4 = (const struct rte_flow_item_ipv4 *)pattern->spec;
mask_ipv4 = (const struct rte_flow_item_ipv4 *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask :
+ &dpaa2_flow_item_ipv4_mask);
} else {
spec_ipv6 = (const struct rte_flow_item_ipv6 *)pattern->spec;
mask_ipv6 = (const struct rte_flow_item_ipv6 *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask :
+ &dpaa2_flow_item_ipv6_mask);
}
/* Get traffic class index and flow id to be configured */
@@ -1069,6 +1207,24 @@ dpaa2_configure_flow_generic_ip(
return 0;
}
+ if (mask_ipv4) {
+ if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
+ RTE_FLOW_ITEM_TYPE_IPV4)) {
+ DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
+
+ return -1;
+ }
+ }
+
+ if (mask_ipv6) {
+ if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
+ RTE_FLOW_ITEM_TYPE_IPV6)) {
+ DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
+
+ return -1;
+ }
+ }
+
if (mask_ipv4 && (mask_ipv4->hdr.src_addr ||
mask_ipv4->hdr.dst_addr)) {
flow->ipaddr_rule.ipaddr_type = FLOW_IPV4_ADDR;
@@ -1358,7 +1514,7 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
spec = (const struct rte_flow_item_icmp *)pattern->spec;
last = (const struct rte_flow_item_icmp *)pattern->last;
mask = (const struct rte_flow_item_icmp *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask : &dpaa2_flow_item_icmp_mask);
/* Get traffic class index and flow id to be configured */
flow->tc_id = group;
@@ -1427,6 +1583,13 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
return 0;
}
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_ICMP)) {
+ DPAA2_PMD_WARN("Extract field(s) of ICMP not support.");
+
+ return -1;
+ }
+
if (mask->hdr.icmp_type) {
index = dpaa2_flow_extract_search(
&priv->extract.qos_key_extract.dpkg,
@@ -1593,7 +1756,7 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
spec = (const struct rte_flow_item_udp *)pattern->spec;
last = (const struct rte_flow_item_udp *)pattern->last;
mask = (const struct rte_flow_item_udp *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask : &dpaa2_flow_item_udp_mask);
/* Get traffic class index and flow id to be configured */
flow->tc_id = group;
@@ -1656,6 +1819,13 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
return 0;
}
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_UDP)) {
+ DPAA2_PMD_WARN("Extract field(s) of UDP not support.");
+
+ return -1;
+ }
+
if (mask->hdr.src_port) {
index = dpaa2_flow_extract_search(
&priv->extract.qos_key_extract.dpkg,
@@ -1825,7 +1995,7 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
spec = (const struct rte_flow_item_tcp *)pattern->spec;
last = (const struct rte_flow_item_tcp *)pattern->last;
mask = (const struct rte_flow_item_tcp *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask : &dpaa2_flow_item_tcp_mask);
/* Get traffic class index and flow id to be configured */
flow->tc_id = group;
@@ -1888,6 +2058,13 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
return 0;
}
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_TCP)) {
+ DPAA2_PMD_WARN("Extract field(s) of TCP not support.");
+
+ return -1;
+ }
+
if (mask->hdr.src_port) {
index = dpaa2_flow_extract_search(
&priv->extract.qos_key_extract.dpkg,
@@ -2058,7 +2235,8 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
spec = (const struct rte_flow_item_sctp *)pattern->spec;
last = (const struct rte_flow_item_sctp *)pattern->last;
mask = (const struct rte_flow_item_sctp *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask :
+ &dpaa2_flow_item_sctp_mask);
/* Get traffic class index and flow id to be configured */
flow->tc_id = group;
@@ -2121,6 +2299,13 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
return 0;
}
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_SCTP)) {
+ DPAA2_PMD_WARN("Extract field(s) of SCTP not support.");
+
+ return -1;
+ }
+
if (mask->hdr.src_port) {
index = dpaa2_flow_extract_search(
&priv->extract.qos_key_extract.dpkg,
@@ -2291,7 +2476,7 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
spec = (const struct rte_flow_item_gre *)pattern->spec;
last = (const struct rte_flow_item_gre *)pattern->last;
mask = (const struct rte_flow_item_gre *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask : &dpaa2_flow_item_gre_mask);
/* Get traffic class index and flow id to be configured */
flow->tc_id = group;
@@ -2353,6 +2538,13 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
return 0;
}
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_GRE)) {
+ DPAA2_PMD_WARN("Extract field(s) of GRE not support.");
+
+ return -1;
+ }
+
if (!mask->protocol)
return 0;
@@ -3155,42 +3347,6 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
return ret;
}
-static inline void
-dpaa2_dev_update_default_mask(const struct rte_flow_item *pattern)
-{
- switch (pattern->type) {
- case RTE_FLOW_ITEM_TYPE_ETH:
- default_mask = (const void *)&rte_flow_item_eth_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_VLAN:
- default_mask = (const void *)&rte_flow_item_vlan_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_IPV4:
- default_mask = (const void *)&rte_flow_item_ipv4_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- default_mask = (const void *)&rte_flow_item_ipv6_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_ICMP:
- default_mask = (const void *)&rte_flow_item_icmp_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- default_mask = (const void *)&rte_flow_item_udp_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- default_mask = (const void *)&rte_flow_item_tcp_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_SCTP:
- default_mask = (const void *)&rte_flow_item_sctp_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_GRE:
- default_mask = (const void *)&rte_flow_item_gre_mask;
- break;
- default:
- DPAA2_PMD_ERR("Invalid pattern type");
- }
-}
-
static inline int
dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[])
{
@@ -3216,8 +3372,6 @@ dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[])
ret = -EINVAL;
break;
}
- if ((pattern[j].last) && (!pattern[j].mask))
- dpaa2_dev_update_default_mask(&pattern[j]);
}
return ret;
--
2.17.1
* [dpdk-dev] [PATCH 26/37] net/dpaa2: free flow rule memory
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (24 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 25/37] net/dpaa2: sanity check for flow extracts Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 27/37] net/dpaa2: flow QoS or FS table entry indexing Hemant Agrawal
` (12 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Free rule memory when the flow is destroyed.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 507a5d0e3..941d62b80 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -3594,6 +3594,7 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
"Error in entry addition in QoS table(%d)", ret);
goto error;
}
+ priv->qos_index[flow->qos_index] = 0;
break;
default:
DPAA2_PMD_ERR(
@@ -3603,6 +3604,10 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
}
LIST_REMOVE(flow, next);
+ rte_free((void *)(size_t)flow->qos_rule.key_iova);
+ rte_free((void *)(size_t)flow->qos_rule.mask_iova);
+ rte_free((void *)(size_t)flow->fs_rule.key_iova);
+ rte_free((void *)(size_t)flow->fs_rule.mask_iova);
/* Now free the flow */
rte_free(flow);
--
2.17.1
* [dpdk-dev] [PATCH 27/37] net/dpaa2: flow QoS or FS table entry indexing
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (25 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 26/37] net/dpaa2: free flow rule memory Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 28/37] net/dpaa2: define the size of table entry Hemant Agrawal
` (11 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Calculate the QoS/FS entry index from the group and priority of the flow:
1) The lower the entry index, the higher the priority of the flow.
2) Before creating a flow, verify that no flow with the same group and
priority has already been added.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 4 +
drivers/net/dpaa2/dpaa2_ethdev.h | 5 +-
drivers/net/dpaa2/dpaa2_flow.c | 127 +++++++++++++------------------
3 files changed, 59 insertions(+), 77 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index cd8555246..401a75cca 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2394,6 +2394,10 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
}
priv->num_rx_tc = attr.num_rx_tcs;
+ priv->qos_entries = attr.qos_entries;
+ priv->fs_entries = attr.fs_entries;
+ priv->dist_queues = attr.num_queues;
+
/* only if the custom CG is enabled */
if (attr.options & DPNI_OPT_CUSTOM_CG)
priv->max_cgs = attr.num_cgs;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 030c625e3..b49b88a2d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -145,6 +145,9 @@ struct dpaa2_dev_priv {
uint8_t max_mac_filters;
uint8_t max_vlan_filters;
uint8_t num_rx_tc;
+ uint16_t qos_entries;
+ uint16_t fs_entries;
+ uint8_t dist_queues;
uint8_t flags; /*dpaa2 config flags */
uint8_t en_ordered;
uint8_t en_loose_ordered;
@@ -152,8 +155,6 @@ struct dpaa2_dev_priv {
uint8_t cgid_in_use[MAX_RX_QUEUES];
struct extract_s extract;
- uint8_t *qos_index;
- uint8_t *fs_index;
uint16_t ss_offset;
uint64_t ss_iova;
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 941d62b80..760a8a793 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -47,11 +47,8 @@ struct rte_flow {
LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
struct dpni_rule_cfg qos_rule;
struct dpni_rule_cfg fs_rule;
- uint16_t qos_index;
- uint16_t fs_index;
uint8_t key_size;
uint8_t tc_id; /** Traffic Class ID. */
- uint8_t flow_type;
uint8_t tc_index; /** index within this Traffic Class. */
enum rte_flow_action_type action;
uint16_t flow_id;
@@ -2645,6 +2642,7 @@ dpaa2_flow_entry_update(
char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE];
char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE];
int extend = -1, extend1, size;
+ uint16_t qos_index;
while (curr) {
if (curr->ipaddr_rule.ipaddr_type ==
@@ -2676,6 +2674,9 @@ dpaa2_flow_entry_update(
size = NH_FLD_IPV6_ADDR_SIZE;
}
+ qos_index = curr->tc_id * priv->fs_entries +
+ curr->tc_index;
+
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &curr->qos_rule);
if (ret) {
@@ -2769,7 +2770,7 @@ dpaa2_flow_entry_update(
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &curr->qos_rule,
- curr->tc_id, curr->qos_index,
+ curr->tc_id, qos_index,
0, 0);
if (ret) {
DPAA2_PMD_ERR("Qos entry update failed.");
@@ -2875,7 +2876,7 @@ dpaa2_flow_entry_update(
curr->fs_rule.key_size += extend;
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
- priv->token, curr->tc_id, curr->fs_index,
+ priv->token, curr->tc_id, curr->tc_index,
&curr->fs_rule, &curr->action_cfg);
if (ret) {
DPAA2_PMD_ERR("FS entry update failed.");
@@ -2888,6 +2889,28 @@ dpaa2_flow_entry_update(
return 0;
}
+static inline int
+dpaa2_flow_verify_attr(
+ struct dpaa2_dev_priv *priv,
+ const struct rte_flow_attr *attr)
+{
+ struct rte_flow *curr = LIST_FIRST(&priv->flows);
+
+ while (curr) {
+ if (curr->tc_id == attr->group &&
+ curr->tc_index == attr->priority) {
+ DPAA2_PMD_ERR(
+ "Flow with group %d and priority %d already exists.",
+ attr->group, attr->priority);
+
+ return -1;
+ }
+ curr = LIST_NEXT(curr, next);
+ }
+
+ return 0;
+}
+
static int
dpaa2_generic_flow_set(struct rte_flow *flow,
struct rte_eth_dev *dev,
@@ -2898,10 +2921,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
{
const struct rte_flow_action_queue *dest_queue;
const struct rte_flow_action_rss *rss_conf;
- uint16_t index;
int is_keycfg_configured = 0, end_of_list = 0;
int ret = 0, i = 0, j = 0;
- struct dpni_attr nic_attr;
struct dpni_rx_tc_dist_cfg tc_cfg;
struct dpni_qos_tbl_cfg qos_cfg;
struct dpni_fs_action_cfg action;
@@ -2909,6 +2930,11 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
size_t param;
struct rte_flow *curr = LIST_FIRST(&priv->flows);
+ uint16_t qos_index;
+
+ ret = dpaa2_flow_verify_attr(priv, attr);
+ if (ret)
+ return ret;
/* Parse pattern list to get the matching parameters */
while (!end_of_list) {
@@ -3056,31 +3082,15 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
}
/* Configure QoS table first */
- memset(&nic_attr, 0, sizeof(struct dpni_attr));
- ret = dpni_get_attributes(dpni, CMD_PRI_LOW,
- priv->token, &nic_attr);
- if (ret < 0) {
- DPAA2_PMD_ERR(
- "Failure to get attribute. dpni@%p err code(%d)\n",
- dpni, ret);
- return ret;
- }
- action.flow_id = action.flow_id % nic_attr.num_rx_tcs;
+ action.flow_id = action.flow_id % priv->num_rx_tc;
- if (!priv->qos_index) {
- priv->qos_index = rte_zmalloc(0,
- nic_attr.qos_entries, 64);
- }
- for (index = 0; index < nic_attr.qos_entries; index++) {
- if (!priv->qos_index[index]) {
- priv->qos_index[index] = 1;
- break;
- }
- }
- if (index >= nic_attr.qos_entries) {
+ qos_index = flow->tc_id * priv->fs_entries +
+ flow->tc_index;
+
+ if (qos_index >= priv->qos_entries) {
DPAA2_PMD_ERR("QoS table with %d entries full",
- nic_attr.qos_entries);
+ priv->qos_entries);
return -1;
}
flow->qos_rule.key_size = priv->extract
@@ -3110,30 +3120,18 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &flow->qos_rule,
- flow->tc_id, index,
+ flow->tc_id, qos_index,
0, 0);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in adding entry to QoS table(%d)", ret);
- priv->qos_index[index] = 0;
return ret;
}
- flow->qos_index = index;
/* Then Configure FS table */
- if (!priv->fs_index) {
- priv->fs_index = rte_zmalloc(0,
- nic_attr.fs_entries, 64);
- }
- for (index = 0; index < nic_attr.fs_entries; index++) {
- if (!priv->fs_index[index]) {
- priv->fs_index[index] = 1;
- break;
- }
- }
- if (index >= nic_attr.fs_entries) {
+ if (flow->tc_index >= priv->fs_entries) {
DPAA2_PMD_ERR("FS table with %d entries full",
- nic_attr.fs_entries);
+ priv->fs_entries);
return -1;
}
flow->fs_rule.key_size = priv->extract
@@ -3164,31 +3162,23 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
}
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token,
- flow->tc_id, index,
+ flow->tc_id, flow->tc_index,
&flow->fs_rule, &action);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in adding entry to FS table(%d)", ret);
- priv->fs_index[index] = 0;
return ret;
}
- flow->fs_index = index;
memcpy(&flow->action_cfg, &action,
sizeof(struct dpni_fs_action_cfg));
break;
case RTE_FLOW_ACTION_TYPE_RSS:
- ret = dpni_get_attributes(dpni, CMD_PRI_LOW,
- priv->token, &nic_attr);
- if (ret < 0) {
- DPAA2_PMD_ERR(
- "Failure to get attribute. dpni@%p err code(%d)\n",
- dpni, ret);
- return ret;
- }
rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf);
for (i = 0; i < (int)rss_conf->queue_num; i++) {
- if (rss_conf->queue[i] < (attr->group * nic_attr.num_queues) ||
- rss_conf->queue[i] >= ((attr->group + 1) * nic_attr.num_queues)) {
+ if (rss_conf->queue[i] <
+ (attr->group * priv->dist_queues) ||
+ rss_conf->queue[i] >=
+ ((attr->group + 1) * priv->dist_queues)) {
DPAA2_PMD_ERR(
"Queue/Group combination are not supported\n");
return -ENOTSUP;
@@ -3262,34 +3252,24 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
/* Add Rule into QoS table */
- if (!priv->qos_index) {
- priv->qos_index = rte_zmalloc(0,
- nic_attr.qos_entries, 64);
- }
- for (index = 0; index < nic_attr.qos_entries; index++) {
- if (!priv->qos_index[index]) {
- priv->qos_index[index] = 1;
- break;
- }
- }
- if (index >= nic_attr.qos_entries) {
+ qos_index = flow->tc_id * priv->fs_entries +
+ flow->tc_index;
+ if (qos_index >= priv->qos_entries) {
DPAA2_PMD_ERR("QoS table with %d entries full",
- nic_attr.qos_entries);
+ priv->qos_entries);
return -1;
}
flow->qos_rule.key_size =
priv->extract.qos_key_extract.key_info.key_total_size;
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
&flow->qos_rule, flow->tc_id,
- index, 0, 0);
+ qos_index, 0, 0);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in entry addition in QoS table(%d)",
ret);
- priv->qos_index[index] = 0;
return ret;
}
- flow->qos_index = index;
break;
case RTE_FLOW_ACTION_TYPE_END:
end_of_list = 1;
@@ -3574,7 +3554,6 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
"Error in adding entry to QoS table(%d)", ret);
goto error;
}
- priv->qos_index[flow->qos_index] = 0;
/* Then remove entry from FS table */
ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW, priv->token,
@@ -3584,7 +3563,6 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
"Error in entry addition in FS table(%d)", ret);
goto error;
}
- priv->fs_index[flow->fs_index] = 0;
break;
case RTE_FLOW_ACTION_TYPE_RSS:
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
@@ -3594,7 +3572,6 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
"Error in entry addition in QoS table(%d)", ret);
goto error;
}
- priv->qos_index[flow->qos_index] = 0;
break;
default:
DPAA2_PMD_ERR(
--
2.17.1
* [dpdk-dev] [PATCH 28/37] net/dpaa2: define the size of table entry
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (26 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 27/37] net/dpaa2: flow QoS or FS table entry indexing Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 29/37] net/dpaa2: log of flow extracts and rules Hemant Agrawal
` (10 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
If the entry size is not larger than 27 bytes, the MC allocates one TCAM
entry; otherwise it allocates two TCAM entries.
The size extracted by hardware must not exceed the TCAM entry size
(27 or 54 bytes), so define the flow entry size as 54.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 90 ++++++++++++++++++++++------------
1 file changed, 60 insertions(+), 30 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 760a8a793..bcbd5977a 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -29,6 +29,8 @@
*/
int mc_l4_port_identification;
+#define FIXED_ENTRY_SIZE 54
+
enum flow_rule_ipaddr_type {
FLOW_NONE_IPADDR,
FLOW_IPV4_ADDR,
@@ -47,7 +49,8 @@ struct rte_flow {
LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
struct dpni_rule_cfg qos_rule;
struct dpni_rule_cfg fs_rule;
- uint8_t key_size;
+ uint8_t qos_real_key_size;
+ uint8_t fs_real_key_size;
uint8_t tc_id; /** Traffic Class ID. */
uint8_t tc_index; /** index within this Traffic Class. */
enum rte_flow_action_type action;
@@ -478,6 +481,7 @@ dpaa2_flow_rule_data_set(
prot, field);
return -1;
}
+
memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
@@ -523,9 +527,11 @@ _dpaa2_flow_rule_move_ipaddr_tail(
len = NH_FLD_IPV6_ADDR_SIZE;
memcpy(tmp, (char *)key_src, len);
+ memset((char *)key_src, 0, len);
memcpy((char *)key_dst, tmp, len);
memcpy(tmp, (char *)mask_src, len);
+ memset((char *)mask_src, 0, len);
memcpy((char *)mask_dst, tmp, len);
return 0;
@@ -1251,8 +1257,7 @@ dpaa2_configure_flow_generic_ip(
return -1;
}
- local_cfg |= (DPAA2_QOS_TABLE_RECONFIGURE |
- DPAA2_QOS_TABLE_IPADDR_EXTRACT);
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
index = dpaa2_flow_extract_search(
@@ -1269,8 +1274,7 @@ dpaa2_configure_flow_generic_ip(
return -1;
}
- local_cfg |= (DPAA2_FS_TABLE_RECONFIGURE |
- DPAA2_FS_TABLE_IPADDR_EXTRACT);
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
if (spec_ipv4)
@@ -1339,8 +1343,7 @@ dpaa2_configure_flow_generic_ip(
return -1;
}
- local_cfg |= (DPAA2_QOS_TABLE_RECONFIGURE |
- DPAA2_QOS_TABLE_IPADDR_EXTRACT);
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
index = dpaa2_flow_extract_search(
@@ -1361,8 +1364,7 @@ dpaa2_configure_flow_generic_ip(
return -1;
}
- local_cfg |= (DPAA2_FS_TABLE_RECONFIGURE |
- DPAA2_FS_TABLE_IPADDR_EXTRACT);
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
if (spec_ipv4)
@@ -2641,7 +2643,7 @@ dpaa2_flow_entry_update(
char ipdst_key[NH_FLD_IPV6_ADDR_SIZE];
char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE];
char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE];
- int extend = -1, extend1, size;
+ int extend = -1, extend1, size = -1;
uint16_t qos_index;
while (curr) {
@@ -2696,6 +2698,9 @@ dpaa2_flow_entry_update(
else
extend = extend1;
+ RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
+ (size == NH_FLD_IPV6_ADDR_SIZE));
+
memcpy(ipsrc_key,
(char *)(size_t)curr->qos_rule.key_iova +
curr->ipaddr_rule.qos_ipsrc_offset,
@@ -2725,6 +2730,9 @@ dpaa2_flow_entry_update(
else
extend = extend1;
+ RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
+ (size == NH_FLD_IPV6_ADDR_SIZE));
+
memcpy(ipdst_key,
(char *)(size_t)curr->qos_rule.key_iova +
curr->ipaddr_rule.qos_ipdst_offset,
@@ -2745,6 +2753,8 @@ dpaa2_flow_entry_update(
}
if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
+ RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
+ (size == NH_FLD_IPV6_ADDR_SIZE));
memcpy((char *)(size_t)curr->qos_rule.key_iova +
curr->ipaddr_rule.qos_ipsrc_offset,
ipsrc_key,
@@ -2755,6 +2765,8 @@ dpaa2_flow_entry_update(
size);
}
if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
+ RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
+ (size == NH_FLD_IPV6_ADDR_SIZE));
memcpy((char *)(size_t)curr->qos_rule.key_iova +
curr->ipaddr_rule.qos_ipdst_offset,
ipdst_key,
@@ -2766,7 +2778,9 @@ dpaa2_flow_entry_update(
}
if (extend >= 0)
- curr->qos_rule.key_size += extend;
+ curr->qos_real_key_size += extend;
+
+ curr->qos_rule.key_size = FIXED_ENTRY_SIZE;
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &curr->qos_rule,
@@ -2873,7 +2887,8 @@ dpaa2_flow_entry_update(
}
if (extend >= 0)
- curr->fs_rule.key_size += extend;
+ curr->fs_real_key_size += extend;
+ curr->fs_rule.key_size = FIXED_ENTRY_SIZE;
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
priv->token, curr->tc_id, curr->tc_index,
@@ -3093,31 +3108,34 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->qos_entries);
return -1;
}
- flow->qos_rule.key_size = priv->extract
- .qos_key_extract.key_info.key_total_size;
+ flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) {
if (flow->ipaddr_rule.qos_ipdst_offset >=
flow->ipaddr_rule.qos_ipsrc_offset) {
- flow->qos_rule.key_size =
+ flow->qos_real_key_size =
flow->ipaddr_rule.qos_ipdst_offset +
NH_FLD_IPV4_ADDR_SIZE;
} else {
- flow->qos_rule.key_size =
+ flow->qos_real_key_size =
flow->ipaddr_rule.qos_ipsrc_offset +
NH_FLD_IPV4_ADDR_SIZE;
}
- } else if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV6_ADDR) {
+ } else if (flow->ipaddr_rule.ipaddr_type ==
+ FLOW_IPV6_ADDR) {
if (flow->ipaddr_rule.qos_ipdst_offset >=
flow->ipaddr_rule.qos_ipsrc_offset) {
- flow->qos_rule.key_size =
+ flow->qos_real_key_size =
flow->ipaddr_rule.qos_ipdst_offset +
NH_FLD_IPV6_ADDR_SIZE;
} else {
- flow->qos_rule.key_size =
+ flow->qos_real_key_size =
flow->ipaddr_rule.qos_ipsrc_offset +
NH_FLD_IPV6_ADDR_SIZE;
}
}
+
+ flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
+
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &flow->qos_rule,
flow->tc_id, qos_index,
@@ -3134,17 +3152,20 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->fs_entries);
return -1;
}
- flow->fs_rule.key_size = priv->extract
- .tc_key_extract[attr->group].key_info.key_total_size;
+
+ flow->fs_real_key_size =
+ priv->extract.tc_key_extract[flow->tc_id]
+ .key_info.key_total_size;
+
if (flow->ipaddr_rule.ipaddr_type ==
FLOW_IPV4_ADDR) {
if (flow->ipaddr_rule.fs_ipdst_offset >=
flow->ipaddr_rule.fs_ipsrc_offset) {
- flow->fs_rule.key_size =
+ flow->fs_real_key_size =
flow->ipaddr_rule.fs_ipdst_offset +
NH_FLD_IPV4_ADDR_SIZE;
} else {
- flow->fs_rule.key_size =
+ flow->fs_real_key_size =
flow->ipaddr_rule.fs_ipsrc_offset +
NH_FLD_IPV4_ADDR_SIZE;
}
@@ -3152,15 +3173,18 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
FLOW_IPV6_ADDR) {
if (flow->ipaddr_rule.fs_ipdst_offset >=
flow->ipaddr_rule.fs_ipsrc_offset) {
- flow->fs_rule.key_size =
+ flow->fs_real_key_size =
flow->ipaddr_rule.fs_ipdst_offset +
NH_FLD_IPV6_ADDR_SIZE;
} else {
- flow->fs_rule.key_size =
+ flow->fs_real_key_size =
flow->ipaddr_rule.fs_ipsrc_offset +
NH_FLD_IPV6_ADDR_SIZE;
}
}
+
+ flow->fs_rule.key_size = FIXED_ENTRY_SIZE;
+
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token,
flow->tc_id, flow->tc_index,
&flow->fs_rule, &action);
@@ -3259,8 +3283,10 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->qos_entries);
return -1;
}
- flow->qos_rule.key_size =
+
+ flow->qos_real_key_size =
priv->extract.qos_key_extract.key_info.key_total_size;
+ flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
&flow->qos_rule, flow->tc_id,
qos_index, 0, 0);
@@ -3283,11 +3309,15 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
if (!ret) {
- ret = dpaa2_flow_entry_update(priv, flow->tc_id);
- if (ret) {
- DPAA2_PMD_ERR("Flow entry update failed.");
+ if (is_keycfg_configured &
+ (DPAA2_QOS_TABLE_RECONFIGURE |
+ DPAA2_FS_TABLE_RECONFIGURE)) {
+ ret = dpaa2_flow_entry_update(priv, flow->tc_id);
+ if (ret) {
+ DPAA2_PMD_ERR("Flow entry update failed.");
- return -1;
+ return -1;
+ }
}
/* New rules are inserted. */
if (!curr) {
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 29/37] net/dpaa2: log of flow extracts and rules
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (27 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 28/37] net/dpaa2: define the size of table entry Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 30/37] net/dpaa2: discrimination between IPv4 and IPv6 Hemant Agrawal
` (9 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
This patch adds support for logging the flow extracts and rules.
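The logging is gated at runtime by an environment variable rather than a build option, as the patch's `getenv("DPAA2_FLOW_CONTROL_LOG")` call shows. A minimal sketch of that toggle (helper name is illustrative):

```c
#include <stdlib.h>

/* Mirrors the toggle in this patch: flow logging is enabled only when
 * the DPAA2_FLOW_CONTROL_LOG environment variable is set (any value). */
static char *flow_control_log;

static int flow_log_enabled(void)
{
	flow_control_log = getenv("DPAA2_FLOW_CONTROL_LOG");
	return flow_control_log != NULL;
}
```

So `DPAA2_FLOW_CONTROL_LOG=1 ./testpmd ...` would print the extracts, keys, and masks on flow creation, with no rebuild needed.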
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 213 ++++++++++++++++++++++++++++++++-
1 file changed, 209 insertions(+), 4 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index bcbd5977a..95756bf7b 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -29,6 +29,8 @@
*/
int mc_l4_port_identification;
+static char *dpaa2_flow_control_log;
+
#define FIXED_ENTRY_SIZE 54
enum flow_rule_ipaddr_type {
@@ -149,6 +151,189 @@ static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
#endif
+static inline void dpaa2_prot_field_string(
+ enum net_prot prot, uint32_t field,
+ char *string)
+{
+ if (!dpaa2_flow_control_log)
+ return;
+
+ if (prot == NET_PROT_ETH) {
+ strcpy(string, "eth");
+ if (field == NH_FLD_ETH_DA)
+ strcat(string, ".dst");
+ else if (field == NH_FLD_ETH_SA)
+ strcat(string, ".src");
+ else if (field == NH_FLD_ETH_TYPE)
+ strcat(string, ".type");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_VLAN) {
+ strcpy(string, "vlan");
+ if (field == NH_FLD_VLAN_TCI)
+ strcat(string, ".tci");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_IP) {
+ strcpy(string, "ip");
+ if (field == NH_FLD_IP_SRC)
+ strcat(string, ".src");
+ else if (field == NH_FLD_IP_DST)
+ strcat(string, ".dst");
+ else if (field == NH_FLD_IP_PROTO)
+ strcat(string, ".proto");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_TCP) {
+ strcpy(string, "tcp");
+ if (field == NH_FLD_TCP_PORT_SRC)
+ strcat(string, ".src");
+ else if (field == NH_FLD_TCP_PORT_DST)
+ strcat(string, ".dst");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_UDP) {
+ strcpy(string, "udp");
+ if (field == NH_FLD_UDP_PORT_SRC)
+ strcat(string, ".src");
+ else if (field == NH_FLD_UDP_PORT_DST)
+ strcat(string, ".dst");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_ICMP) {
+ strcpy(string, "icmp");
+ if (field == NH_FLD_ICMP_TYPE)
+ strcat(string, ".type");
+ else if (field == NH_FLD_ICMP_CODE)
+ strcat(string, ".code");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_SCTP) {
+ strcpy(string, "sctp");
+ if (field == NH_FLD_SCTP_PORT_SRC)
+ strcat(string, ".src");
+ else if (field == NH_FLD_SCTP_PORT_DST)
+ strcat(string, ".dst");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_GRE) {
+ strcpy(string, "gre");
+ if (field == NH_FLD_GRE_TYPE)
+ strcat(string, ".type");
+ else
+ strcat(string, ".unknown field");
+ } else {
+ strcpy(string, "unknown protocol");
+ }
+}
+
+static inline void dpaa2_flow_qos_table_extracts_log(
+ const struct dpaa2_dev_priv *priv)
+{
+ int idx;
+ char string[32];
+
+ if (!dpaa2_flow_control_log)
+ return;
+
+ printf("Setup QoS table: number of extracts: %d\r\n",
+ priv->extract.qos_key_extract.dpkg.num_extracts);
+ for (idx = 0; idx < priv->extract.qos_key_extract.dpkg.num_extracts;
+ idx++) {
+ dpaa2_prot_field_string(priv->extract.qos_key_extract.dpkg
+ .extracts[idx].extract.from_hdr.prot,
+ priv->extract.qos_key_extract.dpkg.extracts[idx]
+ .extract.from_hdr.field,
+ string);
+ printf("%s", string);
+ if ((idx + 1) < priv->extract.qos_key_extract.dpkg.num_extracts)
+ printf(" / ");
+ }
+ printf("\r\n");
+}
+
+static inline void dpaa2_flow_fs_table_extracts_log(
+ const struct dpaa2_dev_priv *priv, int tc_id)
+{
+ int idx;
+ char string[32];
+
+ if (!dpaa2_flow_control_log)
+ return;
+
+ printf("Setup FS table: number of extracts of TC[%d]: %d\r\n",
+ tc_id, priv->extract.tc_key_extract[tc_id]
+ .dpkg.num_extracts);
+ for (idx = 0; idx < priv->extract.tc_key_extract[tc_id]
+ .dpkg.num_extracts; idx++) {
+ dpaa2_prot_field_string(priv->extract.tc_key_extract[tc_id]
+ .dpkg.extracts[idx].extract.from_hdr.prot,
+ priv->extract.tc_key_extract[tc_id].dpkg.extracts[idx]
+ .extract.from_hdr.field,
+ string);
+ printf("%s", string);
+ if ((idx + 1) < priv->extract.tc_key_extract[tc_id]
+ .dpkg.num_extracts)
+ printf(" / ");
+ }
+ printf("\r\n");
+}
+
+static inline void dpaa2_flow_qos_entry_log(
+ const char *log_info, const struct rte_flow *flow, int qos_index)
+{
+ int idx;
+ uint8_t *key, *mask;
+
+ if (!dpaa2_flow_control_log)
+ return;
+
+ printf("\r\n%s QoS entry[%d] for TC[%d], extracts size is %d\r\n",
+ log_info, qos_index, flow->tc_id, flow->qos_real_key_size);
+
+ key = (uint8_t *)(size_t)flow->qos_rule.key_iova;
+ mask = (uint8_t *)(size_t)flow->qos_rule.mask_iova;
+
+ printf("key:\r\n");
+ for (idx = 0; idx < flow->qos_real_key_size; idx++)
+ printf("%02x ", key[idx]);
+
+ printf("\r\nmask:\r\n");
+ for (idx = 0; idx < flow->qos_real_key_size; idx++)
+ printf("%02x ", mask[idx]);
+
+ printf("\r\n%s QoS ipsrc: %d, ipdst: %d\r\n", log_info,
+ flow->ipaddr_rule.qos_ipsrc_offset,
+ flow->ipaddr_rule.qos_ipdst_offset);
+}
+
+static inline void dpaa2_flow_fs_entry_log(
+ const char *log_info, const struct rte_flow *flow)
+{
+ int idx;
+ uint8_t *key, *mask;
+
+ if (!dpaa2_flow_control_log)
+ return;
+
+ printf("\r\n%s FS/TC entry[%d] of TC[%d], extracts size is %d\r\n",
+ log_info, flow->tc_index, flow->tc_id, flow->fs_real_key_size);
+
+ key = (uint8_t *)(size_t)flow->fs_rule.key_iova;
+ mask = (uint8_t *)(size_t)flow->fs_rule.mask_iova;
+
+ printf("key:\r\n");
+ for (idx = 0; idx < flow->fs_real_key_size; idx++)
+ printf("%02x ", key[idx]);
+
+ printf("\r\nmask:\r\n");
+ for (idx = 0; idx < flow->fs_real_key_size; idx++)
+ printf("%02x ", mask[idx]);
+
+ printf("\r\n%s FS ipsrc: %d, ipdst: %d\r\n", log_info,
+ flow->ipaddr_rule.fs_ipsrc_offset,
+ flow->ipaddr_rule.fs_ipdst_offset);
+}
static inline void dpaa2_flow_extract_key_set(
struct dpaa2_key_info *key_info, int index, uint8_t size)
@@ -2679,6 +2864,8 @@ dpaa2_flow_entry_update(
qos_index = curr->tc_id * priv->fs_entries +
curr->tc_index;
+ dpaa2_flow_qos_entry_log("Before update", curr, qos_index);
+
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &curr->qos_rule);
if (ret) {
@@ -2782,6 +2969,8 @@ dpaa2_flow_entry_update(
curr->qos_rule.key_size = FIXED_ENTRY_SIZE;
+ dpaa2_flow_qos_entry_log("Start update", curr, qos_index);
+
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &curr->qos_rule,
curr->tc_id, qos_index,
@@ -2796,6 +2985,7 @@ dpaa2_flow_entry_update(
continue;
}
+ dpaa2_flow_fs_entry_log("Before update", curr);
extend = -1;
ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW,
@@ -2890,6 +3080,8 @@ dpaa2_flow_entry_update(
curr->fs_real_key_size += extend;
curr->fs_rule.key_size = FIXED_ENTRY_SIZE;
+ dpaa2_flow_fs_entry_log("Start update", curr);
+
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
priv->token, curr->tc_id, curr->tc_index,
&curr->fs_rule, &curr->action_cfg);
@@ -3043,14 +3235,18 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
while (!end_of_list) {
switch (actions[j].type) {
case RTE_FLOW_ACTION_TYPE_QUEUE:
- dest_queue = (const struct rte_flow_action_queue *)(actions[j].conf);
+ dest_queue =
+ (const struct rte_flow_action_queue *)(actions[j].conf);
flow->flow_id = dest_queue->index;
flow->action = RTE_FLOW_ACTION_TYPE_QUEUE;
memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
action.flow_id = flow->flow_id;
if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- if (dpkg_prepare_key_cfg(&priv->extract.qos_key_extract.dpkg,
- (uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
+ dpaa2_flow_qos_table_extracts_log(priv);
+ if (dpkg_prepare_key_cfg(
+ &priv->extract.qos_key_extract.dpkg,
+ (uint8_t *)(size_t)priv->extract.qos_extract_param)
+ < 0) {
DPAA2_PMD_ERR(
"Unable to prepare extract parameters");
return -1;
@@ -3059,7 +3255,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
qos_cfg.discard_on_miss = true;
qos_cfg.keep_entries = true;
- qos_cfg.key_cfg_iova = (size_t)priv->extract.qos_extract_param;
+ qos_cfg.key_cfg_iova =
+ (size_t)priv->extract.qos_extract_param;
ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
priv->token, &qos_cfg);
if (ret < 0) {
@@ -3070,6 +3267,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
}
if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
+ dpaa2_flow_fs_table_extracts_log(priv, flow->tc_id);
if (dpkg_prepare_key_cfg(
&priv->extract.tc_key_extract[flow->tc_id].dpkg,
(uint8_t *)(size_t)priv->extract
@@ -3136,6 +3334,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
+ dpaa2_flow_qos_entry_log("Start add", flow, qos_index);
+
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &flow->qos_rule,
flow->tc_id, qos_index,
@@ -3185,6 +3385,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
flow->fs_rule.key_size = FIXED_ENTRY_SIZE;
+ dpaa2_flow_fs_entry_log("Start add", flow);
+
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token,
flow->tc_id, flow->tc_index,
&flow->fs_rule, &action);
@@ -3483,6 +3685,9 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
size_t key_iova = 0, mask_iova = 0;
int ret;
+ dpaa2_flow_control_log =
+ getenv("DPAA2_FLOW_CONTROL_LOG");
+
flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
if (!flow) {
DPAA2_PMD_ERR("Failure to allocate memory for flow");
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 30/37] net/dpaa2: discrimination between IPv4 and IPv6
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (28 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 29/37] net/dpaa2: log of flow extracts and rules Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 31/37] net/dpaa2: distribution size set on multiple TCs Hemant Agrawal
` (8 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Discriminate between IPv4 and IPv6 in generic IP flow setup.
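The core idea, factored into `dpaa2_configure_flow_ip_discrimation()` below, is that generic IP flows are discriminated by the Ethernet type field rather than by IP header contents. A simplified sketch of that mapping (standard EtherType values, helper name illustrative):

```c
#include <stdint.h>
#include <arpa/inet.h> /* htons */

#define DEMO_ETHER_TYPE_IPV4 0x0800
#define DEMO_ETHER_TYPE_IPV6 0x86DD

/* Returns 4, 6, or 0 for a big-endian Ethernet type field. */
static int ip_version_from_eth_type(uint16_t eth_type_be)
{
	if (eth_type_be == htons(DEMO_ETHER_TYPE_IPV4))
		return 4;
	if (eth_type_be == htons(DEMO_ETHER_TYPE_IPV6))
		return 6;
	return 0;
}
```

The patch installs an ETH_TYPE extract in both the QoS and FS tables and then adds a rule matching the IPv4 or IPv6 EtherType, depending on the pattern item.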
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 153 +++++++++++++++++----------------
1 file changed, 80 insertions(+), 73 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 95756bf7b..6f3139f86 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1284,6 +1284,70 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
return 0;
}
+static int
+dpaa2_configure_flow_ip_discrimation(
+ struct dpaa2_dev_priv *priv, struct rte_flow *flow,
+ const struct rte_flow_item *pattern,
+ int *local_cfg, int *device_configured,
+ uint32_t group)
+{
+ int index, ret;
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Extract ETH_TYPE to discriminate IP failed.");
+ return -1;
+ }
+ (*local_cfg) |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Extract ETH_TYPE to discriminate IP failed.");
+ return -1;
+ }
+ (*local_cfg) |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before IP discrimination set failed");
+ return -1;
+ }
+
+ proto.type = RTE_FLOW_ITEM_TYPE_ETH;
+ if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4)
+ proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+ else
+ proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow, proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("IP discrimination rule set failed");
+ return -1;
+ }
+
+ (*device_configured) |= (*local_cfg);
+
+ return 0;
+}
+
+
static int
dpaa2_configure_flow_generic_ip(
struct rte_flow *flow,
@@ -1327,73 +1391,16 @@ dpaa2_configure_flow_generic_ip(
flow->tc_id = group;
flow->tc_index = attr->priority;
- if (!spec_ipv4 && !spec_ipv6) {
- /* Don't care any field of IP header,
- * only care IP protocol.
- * Example: flow create 0 ingress pattern ipv6 /
- */
- /* Eth type is actually used for IP identification.
- */
- /* TODO: Current design only supports Eth + IP,
- * Eth + vLan + IP needs to add.
- */
- struct proto_discrimination proto;
-
- index = dpaa2_flow_extract_search(
- &priv->extract.qos_key_extract.dpkg,
- NET_PROT_ETH, NH_FLD_ETH_TYPE);
- if (index < 0) {
- ret = dpaa2_flow_proto_discrimination_extract(
- &priv->extract.qos_key_extract,
- RTE_FLOW_ITEM_TYPE_ETH);
- if (ret) {
- DPAA2_PMD_ERR(
- "QoS Ext ETH_TYPE to discriminate IP failed.");
-
- return -1;
- }
- local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
-
- index = dpaa2_flow_extract_search(
- &priv->extract.tc_key_extract[group].dpkg,
- NET_PROT_ETH, NH_FLD_ETH_TYPE);
- if (index < 0) {
- ret = dpaa2_flow_proto_discrimination_extract(
- &priv->extract.tc_key_extract[group],
- RTE_FLOW_ITEM_TYPE_ETH);
- if (ret) {
- DPAA2_PMD_ERR(
- "FS Ext ETH_TYPE to discriminate IP failed");
-
- return -1;
- }
- local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
- }
-
- ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
- if (ret) {
- DPAA2_PMD_ERR(
- "Move ipaddr before IP discrimination set failed");
- return -1;
- }
-
- proto.type = RTE_FLOW_ITEM_TYPE_ETH;
- if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4)
- proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
- else
- proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
- ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
- proto, group);
- if (ret) {
- DPAA2_PMD_ERR("IP discrimination rule set failed");
- return -1;
- }
-
- (*device_configured) |= local_cfg;
+ ret = dpaa2_configure_flow_ip_discrimation(priv,
+ flow, pattern, &local_cfg,
+ device_configured, group);
+ if (ret) {
+ DPAA2_PMD_ERR("IP discrimation failed!");
+ return -1;
+ }
+ if (!spec_ipv4 && !spec_ipv6)
return 0;
- }
if (mask_ipv4) {
if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
@@ -1433,10 +1440,10 @@ dpaa2_configure_flow_generic_ip(
NET_PROT_IP, NH_FLD_IP_SRC);
if (index < 0) {
ret = dpaa2_flow_extract_add(
- &priv->extract.qos_key_extract,
- NET_PROT_IP,
- NH_FLD_IP_SRC,
- 0);
+ &priv->extract.qos_key_extract,
+ NET_PROT_IP,
+ NH_FLD_IP_SRC,
+ 0);
if (ret) {
DPAA2_PMD_ERR("QoS Extract add IP_SRC failed.");
@@ -1519,10 +1526,10 @@ dpaa2_configure_flow_generic_ip(
else
size = NH_FLD_IPV6_ADDR_SIZE;
ret = dpaa2_flow_extract_add(
- &priv->extract.qos_key_extract,
- NET_PROT_IP,
- NH_FLD_IP_DST,
- size);
+ &priv->extract.qos_key_extract,
+ NET_PROT_IP,
+ NH_FLD_IP_DST,
+ size);
if (ret) {
DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 31/37] net/dpaa2: distribution size set on multiple TCs
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (29 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 30/37] net/dpaa2: discrimination between IPv4 and IPv6 Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 32/37] net/dpaa2: index of queue action for flow Hemant Agrawal
` (7 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
The default distribution size of a TC is 1, a limit imposed by the MC. We have to
set the distribution size for each TC to support multiple RXQs per TC.
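The shape of the change can be sketched as follows (hypothetical names): distribution is now configured once per traffic class, with the size taken from the fixed `priv->dist_queues` value instead of the runtime RX queue count.

```c
/* Counts how many TCs were configured, for illustration only. */
static int calls;

static int demo_setup_one_tc(int tc_index, int dist_size)
{
	(void)tc_index;
	(void)dist_size;
	calls++;
	return 0; /* 0 == success, in the MC API style */
}

/* Configure flow distribution for every TC, aborting on first failure
 * as dpaa2_eth_dev_configure() does in this patch. */
static int demo_setup_all_tcs(int num_rx_tc, int dist_queues)
{
	int tc, ret;

	for (tc = 0; tc < num_rx_tc; tc++) {
		ret = demo_setup_one_tc(tc, dist_queues);
		if (ret)
			return ret;
	}
	return 0;
}
```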
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 6 +--
drivers/net/dpaa2/dpaa2_ethdev.c | 51 ++++++++++++++++----------
drivers/net/dpaa2/dpaa2_ethdev.h | 2 +-
3 files changed, 36 insertions(+), 23 deletions(-)
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 34de0d1f7..9f0dad6e7 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -81,14 +81,14 @@ rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
int
dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
- uint64_t req_dist_set)
+ uint64_t req_dist_set, int tc_index)
{
struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
struct fsl_mc_io *dpni = priv->hw;
struct dpni_rx_tc_dist_cfg tc_cfg;
struct dpkg_profile_cfg kg_cfg;
void *p_params;
- int ret, tc_index = 0;
+ int ret;
p_params = rte_malloc(
NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
@@ -107,7 +107,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
return ret;
}
tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
- tc_cfg.dist_size = eth_dev->data->nb_rx_queues;
+ tc_cfg.dist_size = priv->dist_queues;
tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
ret = dpkg_prepare_key_cfg(&kg_cfg, p_params);
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 401a75cca..0e22a8579 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -455,7 +455,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
int rx_l4_csum_offload = false;
int tx_l3_csum_offload = false;
int tx_l4_csum_offload = false;
- int ret;
+ int ret, tc_index;
PMD_INIT_FUNC_TRACE();
@@ -495,12 +495,16 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
- ret = dpaa2_setup_flow_dist(dev,
- eth_conf->rx_adv_conf.rss_conf.rss_hf);
- if (ret) {
- DPAA2_PMD_ERR("Unable to set flow distribution."
- "Check queue config");
- return ret;
+ for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
+ ret = dpaa2_setup_flow_dist(dev,
+ eth_conf->rx_adv_conf.rss_conf.rss_hf,
+ tc_index);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Unable to set flow distribution on tc%d."
+ "Check queue config", tc_index);
+ return ret;
+ }
}
}
@@ -757,11 +761,11 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
flow_id = 0;
ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
- tc_id, flow_id, options, &tx_flow_cfg);
+ tc_id, flow_id, options, &tx_flow_cfg);
if (ret) {
DPAA2_PMD_ERR("Error in setting the tx flow: "
- "tc_id=%d, flow=%d err=%d",
- tc_id, flow_id, ret);
+ "tc_id=%d, flow=%d err=%d",
+ tc_id, flow_id, ret);
return -1;
}
@@ -1986,22 +1990,31 @@ dpaa2_dev_rss_hash_update(struct rte_eth_dev *dev,
struct rte_eth_rss_conf *rss_conf)
{
struct rte_eth_dev_data *data = dev->data;
+ struct dpaa2_dev_priv *priv = data->dev_private;
struct rte_eth_conf *eth_conf = &data->dev_conf;
- int ret;
+ int ret, tc_index;
PMD_INIT_FUNC_TRACE();
if (rss_conf->rss_hf) {
- ret = dpaa2_setup_flow_dist(dev, rss_conf->rss_hf);
- if (ret) {
- DPAA2_PMD_ERR("Unable to set flow dist");
- return ret;
+ for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
+ ret = dpaa2_setup_flow_dist(dev, rss_conf->rss_hf,
+ tc_index);
+ if (ret) {
+ DPAA2_PMD_ERR("Unable to set flow dist on tc%d",
+ tc_index);
+ return ret;
+ }
}
} else {
- ret = dpaa2_remove_flow_dist(dev, 0);
- if (ret) {
- DPAA2_PMD_ERR("Unable to remove flow dist");
- return ret;
+ for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
+ ret = dpaa2_remove_flow_dist(dev, tc_index);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Unable to remove flow dist on tc%d",
+ tc_index);
+ return ret;
+ }
}
}
eth_conf->rx_adv_conf.rss_conf.rss_hf = rss_conf->rss_hf;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index b49b88a2d..52faeeefe 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -179,7 +179,7 @@ int dpaa2_distset_to_dpkg_profile_cfg(uint64_t req_dist_set,
struct dpkg_profile_cfg *kg_cfg);
int dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
- uint64_t req_dist_set);
+ uint64_t req_dist_set, int tc_index);
int dpaa2_remove_flow_dist(struct rte_eth_dev *eth_dev,
uint8_t tc_index);
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 32/37] net/dpaa2: index of queue action for flow
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (30 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 31/37] net/dpaa2: distribution size set on multiple TCs Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 33/37] net/dpaa2: flow data sanity check Hemant Agrawal
` (6 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
It makes more sense to use the RXQ index for queue distribution
instead of the flow ID.
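A hypothetical sketch of the change: the QUEUE action now looks up the RX queue object by its index and uses that queue's hardware flow id, instead of treating the rte_flow queue index itself as the flow id.

```c
#include <stdint.h>

/* Illustrative stand-in for the driver's per-queue state. */
struct demo_rxq {
	uint16_t flow_id; /* HW flow id within the queue's traffic class */
	int tc_index;     /* traffic class the queue belongs to */
};

/* Resolve the flow id for a QUEUE action from the RXQ table. */
static uint16_t demo_action_flow_id(struct demo_rxq *const rx_vq[],
				    uint16_t queue_index)
{
	return rx_vq[queue_index]->flow_id;
}
```

This is why the `flow_id` member and the `action.flow_id % priv->num_rx_tc` adjustment can be dropped from `struct rte_flow`.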
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 27 +++++++++++++++++----------
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 6f3139f86..76f68b903 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -56,7 +56,6 @@ struct rte_flow {
uint8_t tc_id; /** Traffic Class ID. */
uint8_t tc_index; /** index within this Traffic Class. */
enum rte_flow_action_type action;
- uint16_t flow_id;
/* Special for IP address to specify the offset
* in key/mask.
*/
@@ -3141,6 +3140,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
struct dpni_qos_tbl_cfg qos_cfg;
struct dpni_fs_action_cfg action;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
+ struct dpaa2_queue *rxq;
struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
size_t param;
struct rte_flow *curr = LIST_FIRST(&priv->flows);
@@ -3244,10 +3244,10 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
case RTE_FLOW_ACTION_TYPE_QUEUE:
dest_queue =
(const struct rte_flow_action_queue *)(actions[j].conf);
- flow->flow_id = dest_queue->index;
+ rxq = priv->rx_vq[dest_queue->index];
flow->action = RTE_FLOW_ACTION_TYPE_QUEUE;
memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
- action.flow_id = flow->flow_id;
+ action.flow_id = rxq->flow_id;
if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
dpaa2_flow_qos_table_extracts_log(priv);
if (dpkg_prepare_key_cfg(
@@ -3303,8 +3303,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
/* Configure QoS table first */
- action.flow_id = action.flow_id % priv->num_rx_tc;
-
qos_index = flow->tc_id * priv->fs_entries +
flow->tc_index;
@@ -3407,13 +3405,22 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
break;
case RTE_FLOW_ACTION_TYPE_RSS:
rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf);
+ if (rss_conf->queue_num > priv->dist_queues) {
+ DPAA2_PMD_ERR(
+ "RSS number exceeds the distrbution size");
+ return -ENOTSUP;
+ }
+
for (i = 0; i < (int)rss_conf->queue_num; i++) {
- if (rss_conf->queue[i] <
- (attr->group * priv->dist_queues) ||
- rss_conf->queue[i] >=
- ((attr->group + 1) * priv->dist_queues)) {
+ if (rss_conf->queue[i] >= priv->nb_rx_queues) {
+ DPAA2_PMD_ERR(
+ "RSS RXQ number exceeds the total number");
+ return -ENOTSUP;
+ }
+ rxq = priv->rx_vq[rss_conf->queue[i]];
+ if (rxq->tc_index != attr->group) {
DPAA2_PMD_ERR(
- "Queue/Group combination are not supported\n");
+ "RSS RXQ distributed is not in current group");
return -ENOTSUP;
}
}
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 33/37] net/dpaa2: flow data sanity check
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (31 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 32/37] net/dpaa2: index of queue action for flow Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 34/37] net/dpaa2: flow API QoS setup follows FS setup Hemant Agrawal
` (5 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Check the flow attributes and actions before creating the flow.
Otherwise, the QoS table and FS table would need to be rebuilt
when the check fails.
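The checks moved up front by this patch can be sketched as one validation pass (names are illustrative, not the driver's API): every referenced RX queue must exist, the RSS queue count must fit the distribution size, and each queue's traffic class must match the flow attribute's group.

```c
#include <stdint.h>

struct demo_rxq {
	int tc_index; /* traffic class the RX queue belongs to */
};

/* Validate an RSS action before touching any QoS/FS table. */
static int demo_rss_action_valid(const struct demo_rxq *rxqs,
				 int nb_rx_queues, int dist_queues,
				 const uint16_t *queues, int queue_num,
				 uint32_t group)
{
	int i;

	if (queue_num > dist_queues)
		return 0; /* more queues than the distribution size */
	for (i = 0; i < queue_num; i++) {
		if (queues[i] >= nb_rx_queues)
			return 0; /* queue index out of range */
		if (rxqs[queues[i]].tc_index != (int)group)
			return 0; /* queue not in the flow's group/TC */
	}
	return 1;
}
```

Running this before any table reconfiguration means a bad action list fails cleanly, with the existing tables left intact.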
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 84 ++++++++++++++++++++++++++--------
1 file changed, 65 insertions(+), 19 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 76f68b903..3601829c9 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -3124,6 +3124,67 @@ dpaa2_flow_verify_attr(
return 0;
}
+static inline int
+dpaa2_flow_verify_action(
+ struct dpaa2_dev_priv *priv,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action actions[])
+{
+ int end_of_list = 0, i, j = 0;
+ const struct rte_flow_action_queue *dest_queue;
+ const struct rte_flow_action_rss *rss_conf;
+ struct dpaa2_queue *rxq;
+
+ while (!end_of_list) {
+ switch (actions[j].type) {
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ dest_queue = (const struct rte_flow_action_queue *)
+ (actions[j].conf);
+ rxq = priv->rx_vq[dest_queue->index];
+ if (attr->group != rxq->tc_index) {
+ DPAA2_PMD_ERR(
+ "RXQ[%d] does not belong to the group %d",
+ dest_queue->index, attr->group);
+
+ return -1;
+ }
+ break;
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ rss_conf = (const struct rte_flow_action_rss *)
+ (actions[j].conf);
+ if (rss_conf->queue_num > priv->dist_queues) {
+ DPAA2_PMD_ERR(
+ "RSS number exceeds the distrbution size");
+ return -ENOTSUP;
+ }
+ for (i = 0; i < (int)rss_conf->queue_num; i++) {
+ if (rss_conf->queue[i] >= priv->nb_rx_queues) {
+ DPAA2_PMD_ERR(
+ "RSS queue index exceeds the number of RXQs");
+ return -ENOTSUP;
+ }
+ rxq = priv->rx_vq[rss_conf->queue[i]];
+ if (rxq->tc_index != attr->group) {
+ DPAA2_PMD_ERR(
+ "Queue/Group combination are not supported\n");
+ return -ENOTSUP;
+ }
+ }
+
+ break;
+ case RTE_FLOW_ACTION_TYPE_END:
+ end_of_list = 1;
+ break;
+ default:
+ DPAA2_PMD_ERR("Invalid action type");
+ return -ENOTSUP;
+ }
+ j++;
+ }
+
+ return 0;
+}
+
static int
dpaa2_generic_flow_set(struct rte_flow *flow,
struct rte_eth_dev *dev,
@@ -3150,6 +3211,10 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
if (ret)
return ret;
+ ret = dpaa2_flow_verify_action(priv, attr, actions);
+ if (ret)
+ return ret;
+
/* Parse pattern list to get the matching parameters */
while (!end_of_list) {
switch (pattern[i].type) {
@@ -3405,25 +3470,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
break;
case RTE_FLOW_ACTION_TYPE_RSS:
rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf);
- if (rss_conf->queue_num > priv->dist_queues) {
- DPAA2_PMD_ERR(
- "RSS number exceeds the distrbution size");
- return -ENOTSUP;
- }
-
- for (i = 0; i < (int)rss_conf->queue_num; i++) {
- if (rss_conf->queue[i] >= priv->nb_rx_queues) {
- DPAA2_PMD_ERR(
- "RSS RXQ number exceeds the total number");
- return -ENOTSUP;
- }
- rxq = priv->rx_vq[rss_conf->queue[i]];
- if (rxq->tc_index != attr->group) {
- DPAA2_PMD_ERR(
- "RSS RXQ distributed is not in current group");
- return -ENOTSUP;
- }
- }
flow->action = RTE_FLOW_ACTION_TYPE_RSS;
ret = dpaa2_distset_to_dpkg_profile_cfg(rss_conf->types,
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 34/37] net/dpaa2: flow API QoS setup follows FS setup
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (32 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 33/37] net/dpaa2: flow data sanity check Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 35/37] net/dpaa2: flow API FS miss action configuration Hemant Agrawal
` (4 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
In the HW/MC logic, QoS setup must follow FS setup.
In addition, skip QoS setup if the maximum TC number of the DPNI is set to 1.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 151 ++++++++++++++++++---------------
1 file changed, 84 insertions(+), 67 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 3601829c9..9239fa459 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -2872,11 +2872,13 @@ dpaa2_flow_entry_update(
dpaa2_flow_qos_entry_log("Before update", curr, qos_index);
- ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
- priv->token, &curr->qos_rule);
- if (ret) {
- DPAA2_PMD_ERR("Qos entry remove failed.");
- return -1;
+ if (priv->num_rx_tc > 1) {
+ ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+ priv->token, &curr->qos_rule);
+ if (ret) {
+ DPAA2_PMD_ERR("Qos entry remove failed.");
+ return -1;
+ }
}
extend = -1;
@@ -2977,13 +2979,15 @@ dpaa2_flow_entry_update(
dpaa2_flow_qos_entry_log("Start update", curr, qos_index);
- ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
- priv->token, &curr->qos_rule,
- curr->tc_id, qos_index,
- 0, 0);
- if (ret) {
- DPAA2_PMD_ERR("Qos entry update failed.");
- return -1;
+ if (priv->num_rx_tc > 1) {
+ ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
+ priv->token, &curr->qos_rule,
+ curr->tc_id, qos_index,
+ 0, 0);
+ if (ret) {
+ DPAA2_PMD_ERR("Qos entry update failed.");
+ return -1;
+ }
}
if (curr->action != RTE_FLOW_ACTION_TYPE_QUEUE) {
@@ -3313,31 +3317,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
flow->action = RTE_FLOW_ACTION_TYPE_QUEUE;
memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
action.flow_id = rxq->flow_id;
- if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- dpaa2_flow_qos_table_extracts_log(priv);
- if (dpkg_prepare_key_cfg(
- &priv->extract.qos_key_extract.dpkg,
- (uint8_t *)(size_t)priv->extract.qos_extract_param)
- < 0) {
- DPAA2_PMD_ERR(
- "Unable to prepare extract parameters");
- return -1;
- }
- memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
- qos_cfg.discard_on_miss = true;
- qos_cfg.keep_entries = true;
- qos_cfg.key_cfg_iova =
- (size_t)priv->extract.qos_extract_param;
- ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
- priv->token, &qos_cfg);
- if (ret < 0) {
- DPAA2_PMD_ERR(
- "Distribution cannot be configured.(%d)"
- , ret);
- return -1;
- }
- }
+ /* Configure FS table first */
if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
dpaa2_flow_fs_table_extracts_log(priv, flow->tc_id);
if (dpkg_prepare_key_cfg(
@@ -3366,17 +3347,39 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return -1;
}
}
- /* Configure QoS table first */
- qos_index = flow->tc_id * priv->fs_entries +
- flow->tc_index;
+ /* Then configure the QoS table. */
+ if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
+ dpaa2_flow_qos_table_extracts_log(priv);
+ if (dpkg_prepare_key_cfg(
+ &priv->extract.qos_key_extract.dpkg,
+ (uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
+ DPAA2_PMD_ERR(
+ "Unable to prepare extract parameters");
+ return -1;
+ }
- if (qos_index >= priv->qos_entries) {
- DPAA2_PMD_ERR("QoS table with %d entries full",
- priv->qos_entries);
- return -1;
+ memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
+ qos_cfg.discard_on_miss = false;
+ qos_cfg.default_tc = 0;
+ qos_cfg.keep_entries = true;
+ qos_cfg.key_cfg_iova =
+ (size_t)priv->extract.qos_extract_param;
+ /* QoS table is effective for multiple TCs. */
+ if (priv->num_rx_tc > 1) {
+ ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
+ priv->token, &qos_cfg);
+ if (ret < 0) {
+ DPAA2_PMD_ERR(
+ "RSS QoS table can not be configured(%d)\n",
+ ret);
+ return -1;
+ }
+ }
}
- flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
+
+ flow->qos_real_key_size = priv->extract
+ .qos_key_extract.key_info.key_total_size;
if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) {
if (flow->ipaddr_rule.qos_ipdst_offset >=
flow->ipaddr_rule.qos_ipsrc_offset) {
@@ -3402,21 +3405,30 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
}
- flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
+ /* Adding a QoS entry is only effective with multiple TCs. */
+ if (priv->num_rx_tc > 1) {
+ qos_index = flow->tc_id * priv->fs_entries +
+ flow->tc_index;
+ if (qos_index >= priv->qos_entries) {
+ DPAA2_PMD_ERR("QoS table with %d entries full",
+ priv->qos_entries);
+ return -1;
+ }
+ flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
- dpaa2_flow_qos_entry_log("Start add", flow, qos_index);
+ dpaa2_flow_qos_entry_log("Start add", flow, qos_index);
- ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
+ ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &flow->qos_rule,
flow->tc_id, qos_index,
0, 0);
- if (ret < 0) {
- DPAA2_PMD_ERR(
- "Error in addnig entry to QoS table(%d)", ret);
- return ret;
+ if (ret < 0) {
+ DPAA2_PMD_ERR(
+ "Error in adding entry to QoS table(%d)", ret);
+ return ret;
+ }
}
- /* Then Configure FS table */
if (flow->tc_index >= priv->fs_entries) {
DPAA2_PMD_ERR("FS table with %d entries full",
priv->fs_entries);
@@ -3507,7 +3519,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
&tc_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Distribution cannot be configured: %d\n", ret);
+ "RSS FS table cannot be configured: %d\n",
+ ret);
rte_free((void *)param);
return -1;
}
@@ -3841,13 +3854,15 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
switch (flow->action) {
case RTE_FLOW_ACTION_TYPE_QUEUE:
- /* Remove entry from QoS table first */
- ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
- &flow->qos_rule);
- if (ret < 0) {
- DPAA2_PMD_ERR(
- "Error in adding entry to QoS table(%d)", ret);
- goto error;
+ if (priv->num_rx_tc > 1) {
+ /* Remove entry from QoS table first */
+ ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
+ &flow->qos_rule);
+ if (ret < 0) {
+ DPAA2_PMD_ERR(
+ "Error in removing entry from QoS table(%d)", ret);
+ goto error;
+ }
}
/* Then remove entry from FS table */
@@ -3855,17 +3870,19 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
flow->tc_id, &flow->fs_rule);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Error in entry addition in FS table(%d)", ret);
+ "Error in removing entry from FS table(%d)", ret);
goto error;
}
break;
case RTE_FLOW_ACTION_TYPE_RSS:
- ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
- &flow->qos_rule);
- if (ret < 0) {
- DPAA2_PMD_ERR(
- "Error in entry addition in QoS table(%d)", ret);
- goto error;
+ if (priv->num_rx_tc > 1) {
+ ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
+ &flow->qos_rule);
+ if (ret < 0) {
+ DPAA2_PMD_ERR(
+ "Error in removing entry from QoS table(%d)", ret);
+ goto error;
+ }
}
break;
default:
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 35/37] net/dpaa2: flow API FS miss action configuration
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (33 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 34/37] net/dpaa2: flow API QoS setup follows FS setup Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 36/37] net/dpaa2: configure per class distribution size Hemant Agrawal
` (3 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
1) Use dpni_set_rx_hash_dist and dpni_set_rx_fs_dist for TC configuration
instead of dpni_set_rx_tc_dist. Otherwise, re-configuration of the
default QoS TC fails.
2) The default miss action is to drop. Setting
"export DPAA2_FLOW_CONTROL_MISS_FLOW=flow_id" instead delivers
missed packets to the flow with the specified flow ID.
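A minimal standalone sketch of that environment-variable handling (illustrative only: `resolve_miss_flow_id` is a made-up helper name, not the driver's code, which reads the variable with getenv() at flow-create time):

```c
#include <stdlib.h>

/* Illustrative helper: map the DPAA2_FLOW_CONTROL_MISS_FLOW value to a
 * miss behaviour. A NULL value keeps the default (drop on miss); an
 * in-range value selects the flow ID that receives missed packets;
 * anything else is rejected, since valid flow IDs are 0..dist_queues-1.
 */
static int resolve_miss_flow_id(const char *env_val, int dist_queues)
{
	int id;

	if (env_val == NULL)
		return -1;	/* default miss action: drop */
	id = atoi(env_val);
	if (id < 0 || id >= dist_queues)
		return -2;	/* exceeds the max flow ID */
	return id;
}
```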
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 30 +++++++------
drivers/net/dpaa2/dpaa2_flow.c | 62 ++++++++++++++++++--------
2 files changed, 60 insertions(+), 32 deletions(-)
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 9f0dad6e7..d69156bcc 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -85,7 +85,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
{
struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
struct fsl_mc_io *dpni = priv->hw;
- struct dpni_rx_tc_dist_cfg tc_cfg;
+ struct dpni_rx_dist_cfg tc_cfg;
struct dpkg_profile_cfg kg_cfg;
void *p_params;
int ret;
@@ -96,8 +96,9 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
return -ENOMEM;
}
+
memset(p_params, 0, DIST_PARAM_IOVA_SIZE);
- memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
+ memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
ret = dpaa2_distset_to_dpkg_profile_cfg(req_dist_set, &kg_cfg);
if (ret) {
@@ -106,9 +107,11 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
rte_free(p_params);
return ret;
}
+
tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
tc_cfg.dist_size = priv->dist_queues;
- tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
+ tc_cfg.enable = true;
+ tc_cfg.tc = tc_index;
ret = dpkg_prepare_key_cfg(&kg_cfg, p_params);
if (ret) {
@@ -117,8 +120,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
return ret;
}
- ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, priv->token, tc_index,
- &tc_cfg);
+ ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW, priv->token, &tc_cfg);
rte_free(p_params);
if (ret) {
DPAA2_PMD_ERR(
@@ -136,7 +138,7 @@ int dpaa2_remove_flow_dist(
{
struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
struct fsl_mc_io *dpni = priv->hw;
- struct dpni_rx_tc_dist_cfg tc_cfg;
+ struct dpni_rx_dist_cfg tc_cfg;
struct dpkg_profile_cfg kg_cfg;
void *p_params;
int ret;
@@ -147,13 +149,15 @@ int dpaa2_remove_flow_dist(
DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
return -ENOMEM;
}
- memset(p_params, 0, DIST_PARAM_IOVA_SIZE);
- memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
- kg_cfg.num_extracts = 0;
- tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+
+ memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
tc_cfg.dist_size = 0;
- tc_cfg.dist_mode = DPNI_DIST_MODE_NONE;
+ tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+ tc_cfg.enable = true;
+ tc_cfg.tc = tc_index;
+ memset(p_params, 0, DIST_PARAM_IOVA_SIZE);
+ kg_cfg.num_extracts = 0;
ret = dpkg_prepare_key_cfg(&kg_cfg, p_params);
if (ret) {
DPAA2_PMD_ERR("Unable to prepare extract parameters");
@@ -161,8 +165,8 @@ int dpaa2_remove_flow_dist(
return ret;
}
- ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, priv->token, tc_index,
- &tc_cfg);
+ ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW, priv->token,
+ &tc_cfg);
rte_free(p_params);
if (ret)
DPAA2_PMD_ERR(
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 9239fa459..cc789346a 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -30,6 +30,8 @@
int mc_l4_port_identification;
static char *dpaa2_flow_control_log;
+static int dpaa2_flow_miss_flow_id =
+ DPNI_FS_MISS_DROP;
#define FIXED_ENTRY_SIZE 54
@@ -3201,7 +3203,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
const struct rte_flow_action_rss *rss_conf;
int is_keycfg_configured = 0, end_of_list = 0;
int ret = 0, i = 0, j = 0;
- struct dpni_rx_tc_dist_cfg tc_cfg;
+ struct dpni_rx_dist_cfg tc_cfg;
struct dpni_qos_tbl_cfg qos_cfg;
struct dpni_fs_action_cfg action;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
@@ -3330,20 +3332,30 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return -1;
}
- memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
+ memset(&tc_cfg, 0,
+ sizeof(struct dpni_rx_dist_cfg));
tc_cfg.dist_size = priv->nb_rx_queues / priv->num_rx_tc;
- tc_cfg.dist_mode = DPNI_DIST_MODE_FS;
tc_cfg.key_cfg_iova =
(uint64_t)priv->extract.tc_extract_param[flow->tc_id];
- tc_cfg.fs_cfg.miss_action = DPNI_FS_MISS_DROP;
- tc_cfg.fs_cfg.keep_entries = true;
- ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW,
- priv->token,
- flow->tc_id, &tc_cfg);
+ tc_cfg.tc = flow->tc_id;
+ tc_cfg.enable = false;
+ ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
+ priv->token, &tc_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Distribution cannot be configured.(%d)"
- , ret);
+ "TC hash cannot be disabled.(%d)",
+ ret);
+ return -1;
+ }
+ tc_cfg.enable = true;
+ tc_cfg.fs_miss_flow_id =
+ dpaa2_flow_miss_flow_id;
+ ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
+ priv->token, &tc_cfg);
+ if (ret < 0) {
+ DPAA2_PMD_ERR(
+ "TC distribution cannot be configured.(%d)",
+ ret);
return -1;
}
}
@@ -3508,18 +3520,16 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return -1;
}
- memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
+ memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
tc_cfg.dist_size = rss_conf->queue_num;
- tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
tc_cfg.key_cfg_iova = (size_t)param;
- tc_cfg.fs_cfg.miss_action = DPNI_FS_MISS_DROP;
-
- ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW,
- priv->token, flow->tc_id,
- &tc_cfg);
+ tc_cfg.enable = true;
+ tc_cfg.tc = flow->tc_id;
+ ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
+ priv->token, &tc_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "RSS FS table cannot be configured: %d\n",
+ "RSS TC table cannot be configured: %d\n",
ret);
rte_free((void *)param);
return -1;
@@ -3544,7 +3554,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->token, &qos_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Distribution can't be configured %d\n",
+ "RSS QoS dist can't be configured-%d\n",
ret);
return -1;
}
@@ -3761,6 +3771,20 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
dpaa2_flow_control_log =
getenv("DPAA2_FLOW_CONTROL_LOG");
+ if (getenv("DPAA2_FLOW_CONTROL_MISS_FLOW")) {
+ struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+ dpaa2_flow_miss_flow_id =
+ atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
+ if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
+ DPAA2_PMD_ERR(
+ "The missed flow ID %d exceeds the max flow ID %d",
+ dpaa2_flow_miss_flow_id,
+ priv->dist_queues - 1);
+ return NULL;
+ }
+ }
+
flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
if (!flow) {
DPAA2_PMD_ERR("Failure to allocate memory for flow");
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 36/37] net/dpaa2: configure per class distribution size
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (34 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 35/37] net/dpaa2: flow API FS miss action configuration Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 37/37] net/dpaa2: support raw flow classification Hemant Agrawal
` (2 subsequent siblings)
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Jun Yang
From: Jun Yang <jun.yang@nxp.com>
The TC distribution size is set to dist_queues, or to the remainder
(nb_rx_queues % dist_queues) for the last non-empty TC, in order of
TC priority.
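The sizing rule can be sketched as a small standalone helper (illustrative only; the name is not the driver's):

```c
/* TCs are filled in priority order: each TC takes up to dist_queues Rx
 * queues, the last non-empty TC takes the remainder, and later TCs get
 * no distribution at all.
 */
static int tc_dist_size(int nb_rx_queues, int dist_queues, int tc_index)
{
	int remaining = nb_rx_queues - tc_index * dist_queues;

	if (remaining <= 0)
		return 0;	/* no distribution on this TC */
	return remaining > dist_queues ? dist_queues : remaining;
}
```

For example, with 10 Rx queues and dist_queues = 4, TC0 and TC1 each get 4 queues, TC2 gets the remaining 2, and TC3 gets none.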
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index d69156bcc..25b1d2bb6 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -88,7 +88,21 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
struct dpni_rx_dist_cfg tc_cfg;
struct dpkg_profile_cfg kg_cfg;
void *p_params;
- int ret;
+ int ret, tc_dist_queues;
+
+ /* TC distribution size is set with dist_queues or
+ * nb_rx_queues % dist_queues in order of TC priority index.
+ * Calculating dist size for this tc_index:
+ */
+ tc_dist_queues = eth_dev->data->nb_rx_queues -
+ tc_index * priv->dist_queues;
+ if (tc_dist_queues <= 0) {
+ DPAA2_PMD_INFO("No distribution on TC%d", tc_index);
+ return 0;
+ }
+
+ if (tc_dist_queues > priv->dist_queues)
+ tc_dist_queues = priv->dist_queues;
p_params = rte_malloc(
NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
@@ -109,7 +123,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
}
tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
- tc_cfg.dist_size = priv->dist_queues;
+ tc_cfg.dist_size = tc_dist_queues;
tc_cfg.enable = true;
tc_cfg.tc = tc_index;
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH 37/37] net/dpaa2: support raw flow classification
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (35 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 36/37] net/dpaa2: configure per class distribution size Hemant Agrawal
@ 2020-05-27 13:23 ` Hemant Agrawal
2020-06-30 17:01 ` [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Ferruh Yigit
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
38 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-05-27 13:23 UTC (permalink / raw)
To: dev, ferruh.yigit; +Cc: Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Add support for raw flows, which can be used for rules
on any protocol.
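The raw match key is split into hardware extracts of at most DPAA2_FLOW_MAX_KEY_SIZE (16) bytes each; the splitting arithmetic can be sketched standalone (illustrative helper, mirroring what dpaa2_flow_extract_add_raw in the diff below does):

```c
#define FLOW_MAX_KEY_SIZE 16	/* mirrors DPAA2_FLOW_MAX_KEY_SIZE */

/* Split a raw match key of `size` bytes into extracts of at most
 * FLOW_MAX_KEY_SIZE bytes. Returns the number of extracts needed and
 * stores the size of the final (possibly partial) extract in *last_size.
 */
static int raw_key_num_extracts(int size, int *last_size)
{
	int num = size / FLOW_MAX_KEY_SIZE;
	int last = size % FLOW_MAX_KEY_SIZE;

	if (last)
		num++;		/* partial trailing extract */
	else
		last = FLOW_MAX_KEY_SIZE;
	*last_size = last;
	return num;
}
```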
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.h | 3 +-
drivers/net/dpaa2/dpaa2_flow.c | 135 +++++++++++++++++++++++++++++++
2 files changed, 137 insertions(+), 1 deletion(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 52faeeefe..2bc0f3f5a 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016-2019 NXP
+ * Copyright 2016-2020 NXP
*
*/
@@ -99,6 +99,7 @@ extern enum pmd_dpaa2_ts dpaa2_enable_ts;
#define DPAA2_QOS_TABLE_IPADDR_EXTRACT 4
#define DPAA2_FS_TABLE_IPADDR_EXTRACT 8
+#define DPAA2_FLOW_MAX_KEY_SIZE 16
/*Externaly defined*/
extern const struct rte_flow_ops dpaa2_flow_ops;
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index cc789346a..136bdd5fa 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -493,6 +493,42 @@ static int dpaa2_flow_extract_add(
return 0;
}
+static int dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
+ int size)
+{
+ struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
+ struct dpaa2_key_info *key_info = &key_extract->key_info;
+ int last_extract_size, index;
+
+ if (dpkg->num_extracts != 0 && dpkg->extracts[0].type !=
+ DPKG_EXTRACT_FROM_DATA) {
+ DPAA2_PMD_WARN("RAW extract cannot be combined with others");
+ return -1;
+ }
+
+ last_extract_size = (size % DPAA2_FLOW_MAX_KEY_SIZE);
+ dpkg->num_extracts = (size / DPAA2_FLOW_MAX_KEY_SIZE);
+ if (last_extract_size)
+ dpkg->num_extracts++;
+ else
+ last_extract_size = DPAA2_FLOW_MAX_KEY_SIZE;
+
+ for (index = 0; index < dpkg->num_extracts; index++) {
+ dpkg->extracts[index].type = DPKG_EXTRACT_FROM_DATA;
+ if (index == dpkg->num_extracts - 1)
+ dpkg->extracts[index].extract.from_data.size =
+ last_extract_size;
+ else
+ dpkg->extracts[index].extract.from_data.size =
+ DPAA2_FLOW_MAX_KEY_SIZE;
+ dpkg->extracts[index].extract.from_data.offset =
+ DPAA2_FLOW_MAX_KEY_SIZE * index;
+ }
+
+ key_info->key_total_size = size;
+ return 0;
+}
+
/* Protocol discrimination.
* Discriminate IPv4/IPv6/vLan by Eth type.
* Discriminate UDP/TCP/ICMP by next proto of IP.
@@ -674,6 +710,18 @@ dpaa2_flow_rule_data_set(
return 0;
}
+static inline int
+dpaa2_flow_rule_data_set_raw(struct dpni_rule_cfg *rule,
+ const void *key, const void *mask, int size)
+{
+ int offset = 0;
+
+ memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
+ memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+
+ return 0;
+}
+
static inline int
_dpaa2_flow_rule_move_ipaddr_tail(
struct dpaa2_key_extract *key_extract,
@@ -2814,6 +2862,83 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
return 0;
}
+static int
+dpaa2_configure_flow_raw(struct rte_flow *flow,
+ struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item *pattern,
+ const struct rte_flow_action actions[] __rte_unused,
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
+{
+ struct dpaa2_dev_priv *priv = dev->data->dev_private;
+ const struct rte_flow_item_raw *spec = pattern->spec;
+ const struct rte_flow_item_raw *mask = pattern->mask;
+ int prev_key_size =
+ priv->extract.qos_key_extract.key_info.key_total_size;
+ int local_cfg = 0, ret;
+ uint32_t group;
+
+ /* Need both spec and mask */
+ if (!spec || !mask) {
+ DPAA2_PMD_ERR("spec or mask not present.");
+ return -EINVAL;
+ }
+ /* Only supports non-relative with offset 0 */
+ if (spec->relative || spec->offset != 0 ||
+ spec->search || spec->limit) {
+ DPAA2_PMD_ERR("relative and non zero offset not supported.");
+ return -EINVAL;
+ }
+ /* Spec len and mask len should be same */
+ if (spec->length != mask->length) {
+ DPAA2_PMD_ERR("Spec len and mask len mismatch.");
+ return -EINVAL;
+ }
+
+ /* Get traffic class index and flow id to be configured */
+ group = attr->group;
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (prev_key_size < spec->length) {
+ ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
+ spec->length);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract RAW add failed.");
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+
+ ret = dpaa2_flow_extract_add_raw(
+ &priv->extract.tc_key_extract[group],
+ spec->length);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract RAW add failed.");
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
+ mask->pattern, spec->length);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS RAW rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
+ mask->pattern, spec->length);
+ if (ret) {
+ DPAA2_PMD_ERR("FS RAW rule data set failed");
+ return -1;
+ }
+
+ (*device_configured) |= local_cfg;
+
+ return 0;
+}
+
/* The existing QoS/FS entry with IP address(es)
* needs update after
* new extract(s) are inserted before IP
@@ -3297,6 +3422,16 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return ret;
}
break;
+ case RTE_FLOW_ITEM_TYPE_RAW:
+ ret = dpaa2_configure_flow_raw(flow,
+ dev, attr, &pattern[i],
+ actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("RAW flow configuration failed!");
+ return ret;
+ }
+ break;
case RTE_FLOW_ITEM_TYPE_END:
end_of_list = 1;
break; /*End of List*/
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [dpdk-dev] [PATCH 01/37] bus/fslmc: fix getting the FD error
2020-05-27 13:22 ` [dpdk-dev] [PATCH 01/37] bus/fslmc: fix getting the FD error Hemant Agrawal
@ 2020-05-27 18:07 ` Akhil Goyal
0 siblings, 0 replies; 83+ messages in thread
From: Akhil Goyal @ 2020-05-27 18:07 UTC (permalink / raw)
To: Hemant Agrawal, dev, ferruh.yigit; +Cc: stable, Nipun Gupta
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Hemant Agrawal
> Sent: Wednesday, May 27, 2020 6:53 PM
> To: dev@dpdk.org; ferruh.yigit@intel.com
> Cc: stable@dpdk.org; Nipun Gupta <nipun.gupta@nxp.com>
> Subject: [dpdk-dev] [PATCH 01/37] bus/fslmc: fix getting the FD error
>
> From: Nipun Gupta <nipun.gupta@nxp.com>
>
> Fix the incorrect register for getting error
>
> Fixes: 03e36408b9fb ("bus/fslmc: add macros required by QDMA for FLE and FD")
> Cc: stable@dpdk.org
>
> Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
> ---
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [dpdk-dev] [PATCH 02/37] net/dpaa: fix fd offset data type
2020-05-27 13:22 ` [dpdk-dev] [PATCH 02/37] net/dpaa: fix fd offset data type Hemant Agrawal
@ 2020-05-27 18:08 ` Akhil Goyal
0 siblings, 0 replies; 83+ messages in thread
From: Akhil Goyal @ 2020-05-27 18:08 UTC (permalink / raw)
To: Hemant Agrawal, dev, ferruh.yigit; +Cc: stable, Nipun Gupta
> From: Nipun Gupta <nipun.gupta@nxp.com>
>
> On DPAA fd offset is 9 bits, but we are using uint8_t in the
> SG case. This patch fixes the same.
> Fixes: 8cffdcbe85aa ("net/dpaa: support scattered Rx")
> Cc: stable@dpdk.org
>
> Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
> ---
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [dpdk-dev] [PATCH 12/37] drivers: optimize thread local storage for dpaa
2020-05-27 13:23 ` [dpdk-dev] [PATCH 12/37] drivers: optimize thread local storage for dpaa Hemant Agrawal
@ 2020-05-27 18:13 ` Akhil Goyal
0 siblings, 0 replies; 83+ messages in thread
From: Akhil Goyal @ 2020-05-27 18:13 UTC (permalink / raw)
To: Hemant Agrawal, dev, ferruh.yigit; +Cc: Rohit Raj
> From: Rohit Raj <rohit.raj@nxp.com>
>
> Minimize the number of different thread variables
>
> Add all the thread specific variables in dpaa_portal
> structure to optimize TLS Usage.
>
> Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
> ---
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [dpdk-dev] [PATCH 15/37] net/dpaa: add support for fmlib in dpdk
2020-05-27 13:23 ` [dpdk-dev] [PATCH 15/37] net/dpaa: add support for fmlib in dpdk Hemant Agrawal
@ 2020-06-30 17:00 ` Ferruh Yigit
2020-07-01 4:18 ` Hemant Agrawal
0 siblings, 1 reply; 83+ messages in thread
From: Ferruh Yigit @ 2020-06-30 17:00 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: Sachin Saxena
On 5/27/2020 2:23 PM, Hemant Agrawal wrote:
> This library is required for configuring FMAN for
> various flow configurations.
This is a big patch with new files, looks like a new base code drop.
Can you please give more explanation on the patch and what 'fmlib' is?
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
<...>
> +#if defined(FM_LIB_DBG)
> + #define _fml_dbg(format, arg...) \
> + printf("fmlib [%s:%u] - " format, \
> + __func__, __LINE__, ##arg)
> +#else
> + #define _fml_dbg(arg...)
> +#endif
Shouldn't use 'printf' directly, this prevents using dynamic logging and our log
APIs. Please use a registered logtype instead.
> +
> +/*#define FM_IOCTL_DBG*/
> +
> +#if defined(FM_IOCTL_DBG)
> + #define _fm_ioctl_dbg(format, arg...) \
> + printk("fm ioctl [%s:%u](cpu:%u) - " format, \
> + __func__, __LINE__, smp_processor_id(), ##arg)
printk? :)
> +#else
> +# define _fm_ioctl_dbg(arg...)
> +#endif
> +
> +/**
> + @Group lnx_ioctl_ncsw_grp NetCommSw Linux User-Space (IOCTL) API
> + @{
> +*/
> +
> +#define NCSW_IOC_TYPE_BASE 0xe0
> + /**< defines the IOCTL type for all the NCSW Linux module commands */
> +
> +/**
> + @Group lnx_usr_FM_grp Frame Manager API
> +
> + @Description FM API functions, definitions and enums.
> +
> + @{
> +*/
There are lots of checkpatch warning in the block comment syntax, about missing
" * " on each line.
Other dpaa/dpaa2 base code seems to have it in the block comments too; if this
won't create a maintenance problem, what do you think of fixing the comment syntax?
<...>
> + e_IOC_FM_PCD_PRS_COUNTERS_SHIM_PARSE_RESULT_RETURNED_WITH_ERR,
> + /**< Parser counter - counts the number of times SHIM parse result is returned with errors. */
> + e_IOC_FM_PCD_PRS_COUNTERS_SOFT_PRS_CYCLES,
> + /**< Parser counter - counts the number of cycles spent executing soft parser instruction (including stall cycles). */
> + e_IOC_FM_PCD_PRS_COUNTERS_SOFT_PRS_STALL_CYCLES,
> + /**< Parser counter - counts the number of cycles stalled waiting for parser internal memory reads while executing soft parser instruction. */
Can you please break long lines?
<...>
> +#if 0
> +TODO: unused IOCTL
> +/**
> + @Function FM_PCD_ModifyCounter
> +
> + @Description Writes a value to an enabled counter. Use "0" to reset the counter.
> +
> + @Param[in] ioc_fm_pcd_counters_params_t - The requested counter parameters.
> +
> + @Return 0 on success; Error code otherwise.
> +*/
> +#define FM_PCD_IOC_MODIFY_COUNTER _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(10), ioc_fm_pcd_counters_params_t)
> +#define FM_PCD_IOC_SET_COUNTER FM_PCD_IOC_MODIFY_COUNTER
> +#endif
Can you please remove dead code?
<...>
> +/**
> + @Description Enumeration type for selecting the policer profile packet frame length selector
> +*/
> +typedef enum ioc_fm_pcd_plcr_frame_length_select {
> + e_IOC_FM_PCD_PLCR_L2_FRM_LEN, /**< L2 frame length */
> + e_IOC_FM_PCD_PLCR_L3_FRM_LEN, /**< L3 frame length */
> + e_IOC_FM_PCD_PLCR_L4_FRM_LEN, /**< L4 frame length */
> + e_IOC_FM_PCD_PLCR_FULL_FRM_LEN /**< Full frame length */
> +} ioc_fm_pcd_plcr_frame_length_select;
> +
> +/**
> + @Description Enumeration type for selecting roll-back frame
> +*/
> +typedef enum ioc_fm_pcd_plcr_roll_back_frame_select {
> + e_IOC_FM_PCD_PLCR_ROLLBACK_L2_FRM_LEN, /**< Rollback L2 frame length */
> + e_IOC_FM_PCD_PLCR_ROLLBACK_FULL_FRM_LEN /**< Rollback Full frame length */
> +} ioc_fm_pcd_plcr_roll_back_frame_select;
Please fix the leading whitespace for above two enums.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [dpdk-dev] [PATCH 17/37] net/dpaa: add support for fmcless mode
2020-05-27 13:23 ` [dpdk-dev] [PATCH 17/37] net/dpaa: add support for fmcless mode Hemant Agrawal
@ 2020-06-30 17:01 ` Ferruh Yigit
2020-07-01 4:04 ` Hemant Agrawal
0 siblings, 1 reply; 83+ messages in thread
From: Ferruh Yigit @ 2020-06-30 17:01 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: Sachin Saxena
On 5/27/2020 2:23 PM, Hemant Agrawal wrote:
> From: Sachin Saxena <sachin.saxena@nxp.com>
>
> This patch uses fmlib to configure the FMAN HW for flow
> and distribution configuration, thus avoiding the need
> for static FMC tool execution optionally.
What is FMC tool? Can you please put more details in the commit log.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
<...>
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (36 preceding siblings ...)
2020-05-27 13:23 ` [dpdk-dev] [PATCH 37/37] net/dpaa2: support raw flow classification Hemant Agrawal
@ 2020-06-30 17:01 ` Ferruh Yigit
2020-07-01 4:08 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
38 siblings, 1 reply; 83+ messages in thread
From: Ferruh Yigit @ 2020-06-30 17:01 UTC (permalink / raw)
To: Hemant Agrawal, dev
On 5/27/2020 2:22 PM, Hemant Agrawal wrote:
> This patch-set mainly addresses the following enhancements:
>
> 1. Supporting the non-EAL thread based I/O processing
> 2. Reducing the thread local storage
> 3. Adding support for HW FM library in DPAA, so that
> additional queue, flow configuration can be done.
> 4. Adding Shared MAC or Virtual storage profile support
> 5. DPAA2 flow support
>
> Gagandeep Singh (3):
> net/dpaa2: enable timestamp for Rx offload case as well
> bus/fslmc: combine thread specific variables
> net/dpaa: enable Tx queue taildrop
>
> Hemant Agrawal (3):
> bus/fslmc: support handle portal alloc failure
> net/dpaa: add support for fmlib in dpdk
> bus/dpaa: add Virtual Storage Profile port init
>
> Jun Yang (17):
> net/dpaa: add VSP support in FMLIB
> net/dpaa: add support for Virtual Storage Profile
> net/dpaa: add fmc parser support for VSP
> net/dpaa2: dynamic flow control support
> net/dpaa2: key extracts of flow API
> net/dpaa2: sanity check for flow extracts
> net/dpaa2: free flow rule memory
> net/dpaa2: flow QoS or FS table entry indexing
> net/dpaa2: define the size of table entry
> net/dpaa2: log of flow extracts and rules
> net/dpaa2: discrimination between IPv4 and IPv6
> net/dpaa2: distribution size set on multiple TCs
> net/dpaa2: index of queue action for flow
Can you please follow DPDK convention in patch titles which starts with a verb
and describes the motivation of the patch?
> net/dpaa2: flow data sanity check
> net/dpaa2: flow API QoS setup follows FS setup
> net/dpaa2: flow API FS miss action configuration
> net/dpaa2: configure per class distribution size
>
> Nipun Gupta (7):
> bus/fslmc: fix getting the FD error
> net/dpaa: fix fd offset data type
> bus/fslmc: rework portal allocation to a per thread basis
> bus/fslmc: support portal migration
> bus/fslmc: rename the cinh read functions used for ls1088
> net/dpaa: update process specific device info
> net/dpaa2: support raw flow classification
>
> Radu Bulie (1):
> bus/dpaa: add shared MAC support
>
> Rohit Raj (3):
> drivers: optimize thread local storage for dpaa
> bus/dpaa: enable link state interrupt
> bus/dpaa: enable set link status
>
> Sachin Saxena (3):
> net/dpaa: add 2.5G support
> net/dpaa: add support for fmcless mode
> net/dpaa: add RSS update func with FMCless
Can you please document the changes in the release notes?
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [dpdk-dev] [PATCH 17/37] net/dpaa: add support for fmcless mode
2020-06-30 17:01 ` Ferruh Yigit
@ 2020-07-01 4:04 ` Hemant Agrawal
2020-07-01 7:37 ` Ferruh Yigit
0 siblings, 1 reply; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-01 4:04 UTC (permalink / raw)
To: Ferruh Yigit, dev; +Cc: Sachin Saxena
Hi Ferruh,
-----Original Message-----
On 5/27/2020 2:23 PM, Hemant Agrawal wrote:
> From: Sachin Saxena <sachin.saxena@nxp.com>
>
> This patch uses fmlib to configure the FMAN HW for flow and
> distribution configuration, thus optionally avoiding the need
> for static FMC tool execution.
What is FMC tool? Can you please put more details in the commit log.
[Hemant] We will add details in the next rev.
The MAC on the DPAA platform is called FMAN (Frame Manager). FMC is the FMAN Management and Configuration tool, which is used
to statically configure the number of queues and the classification rules before running the applications.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
<...>
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements
2020-06-30 17:01 ` [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Ferruh Yigit
@ 2020-07-01 4:08 ` Hemant Agrawal
0 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-01 4:08 UTC (permalink / raw)
To: Ferruh Yigit, dev
Hi Ferruh,
-----Original Message-----
From: Ferruh Yigit <ferruh.yigit@intel.com>
Sent: Tuesday, June 30, 2020 10:31 PM
To: Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org
Subject: Re: [PATCH 00/37] NXP DPAAx enhancements
On 5/27/2020 2:22 PM, Hemant Agrawal wrote:
> This patch-set mainly addresses the following enhancements:
>
> 1. Supporting the non-EAL thread based I/O processing
> 2. Reducing the thread local storage
> 3. Adding support for HW FM library in DPAA, so that
>    additional queue, flow configuration can be done.
> 4. Adding Shared MAC or Virtual storage profile support
> 5. DPAA2 flow support
>
> Gagandeep Singh (3):
> net/dpaa2: enable timestamp for Rx offload case as well
> bus/fslmc: combine thread specific variables
> net/dpaa: enable Tx queue taildrop
>
> Hemant Agrawal (3):
> bus/fslmc: support handle portal alloc failure
> net/dpaa: add support for fmlib in dpdk
> bus/dpaa: add Virtual Storage Profile port init
>
> Jun Yang (17):
> net/dpaa: add VSP support in FMLIB
> net/dpaa: add support for Virtual Storage Profile
> net/dpaa: add fmc parser support for VSP
> net/dpaa2: dynamic flow control support
> net/dpaa2: key extracts of flow API
> net/dpaa2: sanity check for flow extracts
> net/dpaa2: free flow rule memory
> net/dpaa2: flow QoS or FS table entry indexing
> net/dpaa2: define the size of table entry
> net/dpaa2: log of flow extracts and rules
> net/dpaa2: discrimination between IPv4 and IPv6
> net/dpaa2: distribution size set on multiple TCs
> net/dpaa2: index of queue action for flow
Can you please follow DPDK convention in patch titles which starts with a verb and describes the motivation of the patch?
[Hemant] ok
> net/dpaa2: flow data sanity check
> net/dpaa2: flow API QoS setup follows FS setup
> net/dpaa2: flow API FS miss action configuration
> net/dpaa2: configure per class distribution size
>
> Nipun Gupta (7):
> bus/fslmc: fix getting the FD error
> net/dpaa: fix fd offset data type
> bus/fslmc: rework portal allocation to a per thread basis
> bus/fslmc: support portal migration
> bus/fslmc: rename the cinh read functions used for ls1088
> net/dpaa: update process specific device info
> net/dpaa2: support raw flow classification
>
> Radu Bulie (1):
> bus/dpaa: add shared MAC support
>
> Rohit Raj (3):
> drivers: optimize thread local storage for dpaa
> bus/dpaa: enable link state interrupt
> bus/dpaa: enable set link status
>
> Sachin Saxena (3):
> net/dpaa: add 2.5G support
> net/dpaa: add support for fmcless mode
> net/dpaa: add RSS update func with FMCless
Can you please document the changes in the release notes?
[Hemant] ok
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [dpdk-dev] [PATCH 15/37] net/dpaa: add support for fmlib in dpdk
2020-06-30 17:00 ` Ferruh Yigit
@ 2020-07-01 4:18 ` Hemant Agrawal
2020-07-01 7:35 ` Ferruh Yigit
0 siblings, 1 reply; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-01 4:18 UTC (permalink / raw)
To: Ferruh Yigit, dev; +Cc: Sachin Saxena
On 30-Jun-20 10:30 PM, Ferruh Yigit wrote:
> On 5/27/2020 2:23 PM, Hemant Agrawal wrote:
>> This library is required for configuring FMAN for
>> various flow configurations.
>
> This is a big patch with new files, looks like a new base code drop.
> Can you please give more explanation on the patch and what 'fmlib' is?
Yes, fmlib is the FMAN configuration library. It is base code used by many projects; we have integrated it into DPDK.
>
>
>>
>> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>
> <...>
>
>> +#if defined(FM_LIB_DBG)
>> + #define _fml_dbg(format, arg...) \
>> + printf("fmlib [%s:%u] - " format, \
>> + __func__, __LINE__, ##arg)
>> +#else
>> + #define _fml_dbg(arg...)
>> +#endif
>
> Shouldn't use 'printf' directly, this prevents using dynamic logging and our log
> APIs. Please use a registered logtype instead.
ok
>
>
>> +
>> +/*#define FM_IOCTL_DBG*/
>> +
>> +#if defined(FM_IOCTL_DBG)
>> + #define _fm_ioctl_dbg(format, arg...) \
>> + printk("fm ioctl [%s:%u](cpu:%u) - " format, \
>> + __func__, __LINE__, smp_processor_id(), ##arg)
>
> printk? :)
The same code goes into the kernel as well, so the kernel portions use printk.
>
>
>> +#else
>> +# define _fm_ioctl_dbg(arg...)
>> +#endif
>> +
>> +/**
>> + @Group lnx_ioctl_ncsw_grp NetCommSw Linux User-Space (IOCTL) API
>> + @{
>> +*/
>> +
>> +#define NCSW_IOC_TYPE_BASE 0xe0
>> + /**< defines the IOCTL type for all the NCSW Linux module commands */
>> +
>> +/**
>> + @Group lnx_usr_FM_grp Frame Manager API
>> +
>> + @Description FM API functions, definitions and enums.
>> +
>> + @{
>> +*/
>
> There are lots of checkpatch warnings in the block comment syntax, about the missing
> " * " on each line.
> Other dpaa/dpaa2 base code seems to have it in the block comments; if this
> won't create a maintenance problem, what do you think about fixing the comment syntax?
>
> <...>
We have tried to correct a few of these, but this code is an independent base library used by multiple projects.
If we aligned it with the DPDK style, it would become a maintenance issue for us to pick up any future upgrades.
What do you suggest?
>
>
>> + e_IOC_FM_PCD_PRS_COUNTERS_SHIM_PARSE_RESULT_RETURNED_WITH_ERR,
>> + /**< Parser counter - counts the number of times SHIM parse result is returned with errors. */
>> + e_IOC_FM_PCD_PRS_COUNTERS_SOFT_PRS_CYCLES,
>> + /**< Parser counter - counts the number of cycles spent executing soft parser instruction (including stall cycles). */
>> + e_IOC_FM_PCD_PRS_COUNTERS_SOFT_PRS_STALL_CYCLES,
>> + /**< Parser counter - counts the number of cycles stalled waiting for parser internal memory reads while executing soft parser instruction. */
>
> Can you please break long lines?
Same as explained above. If we make manual changes in this code, its future upgrades from base library releases will become a maintenance issue for us.
>
>
> <...>
>
>> +#if 0
>> +TODO: unused IOCTL
>> +/**
>> + @Function FM_PCD_ModifyCounter
>> +
>> + @Description Writes a value to an enabled counter. Use "0" to reset the counter.
>> +
>> + @Param[in] ioc_fm_pcd_counters_params_t - The requested counter parameters.
>> +
>> + @Return 0 on success; Error code otherwise.
>> +*/
>> +#define FM_PCD_IOC_MODIFY_COUNTER _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(10), ioc_fm_pcd_counters_params_t)
>> +#define FM_PCD_IOC_SET_COUNTER FM_PCD_IOC_MODIFY_COUNTER
>> +#endif
>
> Can you please remove dead code?
ok
>
>
> <...>
>
>> +/**
>> + @Description Enumeration type for selecting the policer profile packet frame length selector
>> +*/
>> +typedef enum ioc_fm_pcd_plcr_frame_length_select {
>> + e_IOC_FM_PCD_PLCR_L2_FRM_LEN, /**< L2 frame length */
>> + e_IOC_FM_PCD_PLCR_L3_FRM_LEN, /**< L3 frame length */
>> + e_IOC_FM_PCD_PLCR_L4_FRM_LEN, /**< L4 frame length */
>> + e_IOC_FM_PCD_PLCR_FULL_FRM_LEN /**< Full frame length */
>> +} ioc_fm_pcd_plcr_frame_length_select;
>> +
>> +/**
>> + @Description Enumeration type for selecting roll-back frame
>> +*/
>> +typedef enum ioc_fm_pcd_plcr_roll_back_frame_select {
>> + e_IOC_FM_PCD_PLCR_ROLLBACK_L2_FRM_LEN, /**< Rollback L2 frame length */
>> + e_IOC_FM_PCD_PLCR_ROLLBACK_FULL_FRM_LEN /**< Rollback Full frame length */
>> +} ioc_fm_pcd_plcr_roll_back_frame_select;
>
> Please fix the leading whitespace for above two enums.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [dpdk-dev] [PATCH 05/37] bus/fslmc: rework portal allocation to a per thread basis
2020-05-27 13:22 ` [dpdk-dev] [PATCH 05/37] bus/fslmc: rework portal allocation to a per thread basis Hemant Agrawal
@ 2020-07-01 7:23 ` Ferruh Yigit
0 siblings, 0 replies; 83+ messages in thread
From: Ferruh Yigit @ 2020-07-01 7:23 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: Nipun Gupta
On 5/27/2020 2:22 PM, Hemant Agrawal wrote:
> From: Nipun Gupta <nipun.gupta@nxp.com>
>
> The patch reworks the portal allocation, which was previously
> done on a per-lcore basis, to a per-thread basis.
> Now users can also create their own threads and use DPAA2 portals
> for packet I/O.
>
> Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
<...>
> @@ -229,7 +264,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
> return 0;
> }
>
> -static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
> +static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
> {
> struct dpaa2_dpio_dev *dpio_dev = NULL;
> int ret;
> @@ -245,108 +280,83 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
> DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
> dpio_dev, dpio_dev->index, syscall(SYS_gettid));
>
> - ret = dpaa2_configure_stashing(dpio_dev, lcoreid);
> - if (ret)
> + ret = dpaa2_configure_stashing(dpio_dev);
> + if (ret) {
> DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
> + return NULL;
> + }
> +
> + ret = pthread_setspecific(dpaa2_portal_key, (void *)dpio_dev);
> + if (ret) {
> + DPAA2_BUS_ERR("pthread_setspecific failed with ret: %d", ret);
> + dpaa2_put_qbman_swp(dpio_dev);
> + return NULL;
> + }
>
> return dpio_dev;
> }
>
> +static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
> +{
> +#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
> + dpaa2_dpio_intr_deinit(dpio_dev);
> +#endif
> + if (dpio_dev)
> + rte_atomic16_clear(&dpio_dev->ref_count);
> +}
There is a build error in the patch-by-patch build [1]; just moving the
'dpaa2_put_qbman_swp()' static function above 'dpaa2_get_qbman_swp()' (where
it is used) solves it, and indeed the next patch does so.
If you make a new version, can you please fix it? If there will be no new
version, I can do it while merging.
[1]
.../drivers/bus/fslmc/portal/dpaa2_hw_dpio.c:292:3: error: implicit declaration
of function 'dpaa2_put_qbman_swp' is invalid in C99
[-Werror,-Wimplicit-function-declaration]
dpaa2_put_qbman_swp(dpio_dev);
^
.../drivers/bus/fslmc/portal/dpaa2_hw_dpio.c:292:3: note: did you mean
'dpaa2_get_qbman_swp'?
.../drivers/bus/fslmc/portal/dpaa2_hw_dpio.c:267:31: note: 'dpaa2_get_qbman_swp'
declared here
static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
^
.../drivers/bus/fslmc/portal/dpaa2_hw_dpio.c:299:13: error: static declaration
of 'dpaa2_put_qbman_swp' follows non-static declaration
static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
^
.../drivers/bus/fslmc/portal/dpaa2_hw_dpio.c:292:3: note: previous implicit
declaration is here
dpaa2_put_qbman_swp(dpio_dev);
<...>
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [dpdk-dev] [PATCH 15/37] net/dpaa: add support for fmlib in dpdk
2020-07-01 4:18 ` Hemant Agrawal
@ 2020-07-01 7:35 ` Ferruh Yigit
0 siblings, 0 replies; 83+ messages in thread
From: Ferruh Yigit @ 2020-07-01 7:35 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: Sachin Saxena
On 7/1/2020 5:18 AM, Hemant Agrawal wrote:
>
> On 30-Jun-20 10:30 PM, Ferruh Yigit wrote:
>> On 5/27/2020 2:23 PM, Hemant Agrawal wrote:
>>> This library is required for configuring FMAN for
>>> various flow configurations.
>>
>> This is a big patch with new files, looks like a new base code drop.
>> Can you please give more explanation on the patch and what 'fmlib' is?
>
> Yes, fmlib is the FMAN configuration library. It is base code used by many projects; we have integrated it into DPDK.
Thanks, can you please put this into the commit log in the next version? Even FMAN
is not familiar to me, so a little more detail would help.
>>
>>
>>>
>>> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>
>> <...>
>>
>>> +#if defined(FM_LIB_DBG)
>>> + #define _fml_dbg(format, arg...) \
>>> + printf("fmlib [%s:%u] - " format, \
>>> + __func__, __LINE__, ##arg)
>>> +#else
>>> + #define _fml_dbg(arg...)
>>> +#endif
>>
>> Shouldn't use 'printf' directly, this prevents using dynamic logging and our log
>> APIs. Please use a registered logtype instead.
>
> ok
>>
>>
>>> +
>>> +/*#define FM_IOCTL_DBG*/
>>> +
>>> +#if defined(FM_IOCTL_DBG)
>>> + #define _fm_ioctl_dbg(format, arg...) \
>>> + printk("fm ioctl [%s:%u](cpu:%u) - " format, \
>>> + __func__, __LINE__, smp_processor_id(), ##arg)
>>
>> printk? :)
>
> The same code goes into the kernel as well, so the kernel portions use printk.
For DPDK this is dead code; would it be possible to strip the kernel-related
portions from the DPDK code? With some kind of OS layer, perhaps (that is what Intel does).
>>
>>
>>> +#else
>>> +# define _fm_ioctl_dbg(arg...)
>>> +#endif
>>> +
>>> +/**
>>> + @Group lnx_ioctl_ncsw_grp NetCommSw Linux User-Space (IOCTL) API
>>> + @{
>>> +*/
>>> +
>>> +#define NCSW_IOC_TYPE_BASE 0xe0
>>> + /**< defines the IOCTL type for all the NCSW Linux module commands */
>>> +
>>> +/**
>>> + @Group lnx_usr_FM_grp Frame Manager API
>>> +
>>> + @Description FM API functions, definitions and enums.
>>> +
>>> + @{
>>> +*/
>>
>> There are lots of checkpatch warnings in the block comment syntax, about the missing
>> " * " on each line.
>> Other dpaa/dpaa2 base code seems to have it in the block comments; if this
>> won't create a maintenance problem, what do you think about fixing the comment syntax?
>>
>> <...>
>
> We have tried to correct a few of these, but this code is an independent base library used by multiple projects.
>
> If we aligned it with the DPDK style, it would become a maintenance issue for us to pick up any future upgrades.
>
> What do you suggest?
I agree that it is more practical to avoid the maintenance cost of style fixes in
this kind of shared code.
But please at least ensure each file/module stays consistent with its
initial style.
>>
>>
>>> + e_IOC_FM_PCD_PRS_COUNTERS_SHIM_PARSE_RESULT_RETURNED_WITH_ERR,
>>> + /**< Parser counter - counts the number of times SHIM parse result is returned with errors. */
>>> + e_IOC_FM_PCD_PRS_COUNTERS_SOFT_PRS_CYCLES,
>>> + /**< Parser counter - counts the number of cycles spent executing soft parser instruction (including stall cycles). */
>>> + e_IOC_FM_PCD_PRS_COUNTERS_SOFT_PRS_STALL_CYCLES,
>>> + /**< Parser counter - counts the number of cycles stalled waiting for parser internal memory reads while executing soft parser instruction. */
>>
>> Can you please break long lines?
>
> Same as explained above. If we make manual changes in this code, its future upgrades from base library releases will become a maintenance issue for us.
>>
>>
>> <...>
>>
>>> +#if 0
>>> +TODO: unused IOCTL
>>> +/**
>>> + @Function FM_PCD_ModifyCounter
>>> +
>>> + @Description Writes a value to an enabled counter. Use "0" to reset the counter.
>>> +
>>> + @Param[in] ioc_fm_pcd_counters_params_t - The requested counter parameters.
>>> +
>>> + @Return 0 on success; Error code otherwise.
>>> +*/
>>> +#define FM_PCD_IOC_MODIFY_COUNTER _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(10), ioc_fm_pcd_counters_params_t)
>>> +#define FM_PCD_IOC_SET_COUNTER FM_PCD_IOC_MODIFY_COUNTER
>>> +#endif
>>
>> Can you please remove dead code?
>
> ok
>>
>>
>> <...>
>>
>>> +/**
>>> + @Description Enumeration type for selecting the policer profile packet frame length selector
>>> +*/
>>> +typedef enum ioc_fm_pcd_plcr_frame_length_select {
>>> + e_IOC_FM_PCD_PLCR_L2_FRM_LEN, /**< L2 frame length */
>>> + e_IOC_FM_PCD_PLCR_L3_FRM_LEN, /**< L3 frame length */
>>> + e_IOC_FM_PCD_PLCR_L4_FRM_LEN, /**< L4 frame length */
>>> + e_IOC_FM_PCD_PLCR_FULL_FRM_LEN /**< Full frame length */
>>> +} ioc_fm_pcd_plcr_frame_length_select;
>>> +
>>> +/**
>>> + @Description Enumeration type for selecting roll-back frame
>>> +*/
>>> +typedef enum ioc_fm_pcd_plcr_roll_back_frame_select {
>>> + e_IOC_FM_PCD_PLCR_ROLLBACK_L2_FRM_LEN, /**< Rollback L2 frame length */
>>> + e_IOC_FM_PCD_PLCR_ROLLBACK_FULL_FRM_LEN /**< Rollback Full frame length */
>>> +} ioc_fm_pcd_plcr_roll_back_frame_select;
>>
>> Please fix the leading whitespace for above two enums.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [dpdk-dev] [PATCH 17/37] net/dpaa: add support for fmcless mode
2020-07-01 4:04 ` Hemant Agrawal
@ 2020-07-01 7:37 ` Ferruh Yigit
0 siblings, 0 replies; 83+ messages in thread
From: Ferruh Yigit @ 2020-07-01 7:37 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: Sachin Saxena
On 7/1/2020 5:04 AM, Hemant Agrawal wrote:
> Hi Ferruh,
>
> -----Original Message-----
> On 5/27/2020 2:23 PM, Hemant Agrawal wrote:
>> From: Sachin Saxena <sachin.saxena@nxp.com>
>>
>> This patch uses fmlib to configure the FMAN HW for flow and
>> distribution configuration, thus optionally avoiding the need
>> for static FMC tool execution.
>
> What is FMC tool? Can you please put more details in the commit log.
> [Hemant] We will add details in the next rev.
> The MAC on the DPAA platform is called FMAN (Frame Manager). FMC is the FMAN Management and Configuration tool, which is used
> to statically configure the number of queues and the classification rules before running the applications.
Thanks.
Can we say it is a proprietary version of 'ethtool'? Is this a hard dependency
for the Linux drivers, and should we document it in DPDK?
Was this a dependency for DPDK before this patch? Again, should we document this?
>
>>
>> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>
> <...>
>
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH v2 00/29] NXP DPAAx enhancements
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
` (37 preceding siblings ...)
2020-06-30 17:01 ` [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Ferruh Yigit
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 01/29] bus/fslmc: fix getting the FD error Hemant Agrawal
` (29 more replies)
38 siblings, 30 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
v2: dropping the fmlib changes - we will send them separately
This patch-set mainly addresses the following enhancements:
1. Supporting the non-EAL thread based I/O processing
2. Reducing the thread local storage
3. DPAA2 flow support
4. other minor fixes and enhancements
Gagandeep Singh (3):
net/dpaa2: enable timestamp for Rx offload case as well
bus/fslmc: combine thread specific variables
net/dpaa: enable Tx queue taildrop
Hemant Agrawal (1):
bus/fslmc: support handle portal alloc failure
Jun Yang (14):
net/dpaa2: support dynamic flow control
net/dpaa2: support key extracts of flow API
net/dpaa2: add sanity check for flow extracts
net/dpaa2: free flow rule memory
net/dpaa2: support QoS or FS table entry indexing
net/dpaa2: define the size of table entry
net/dpaa2: add logging of flow extracts and rules
net/dpaa2: support discrimination between IPv4 and IPv6
net/dpaa2: support distribution size set on multiple TCs
net/dpaa2: support index of queue action for flow
net/dpaa2: add flow data sanity check
net/dpaa2: modify flow API QoS setup to follow FS setup
net/dpaa2: support flow API FS miss action configuration
net/dpaa2: configure per class distribution size
Nipun Gupta (7):
bus/fslmc: fix getting the FD error
net/dpaa: fix fd offset data type
bus/fslmc: rework portal allocation to a per thread basis
bus/fslmc: support portal migration
bus/fslmc: rename the cinh read functions used for ls1088
net/dpaa: update process specific device info
net/dpaa2: support raw flow classification
Rohit Raj (3):
drivers: optimize thread local storage for dpaa
bus/dpaa: enable link state interrupt
bus/dpaa: enable set link status
Sachin Saxena (1):
net/dpaa: add 2.5G support
doc/guides/nics/features/dpaa.ini | 3 +-
doc/guides/nics/features/dpaa2.ini | 1 +
doc/guides/rel_notes/release_20_08.rst | 13 +
drivers/bus/dpaa/base/fman/fman.c | 10 +-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 3 +-
drivers/bus/dpaa/base/qbman/process.c | 99 +-
drivers/bus/dpaa/base/qbman/qman.c | 43 +
drivers/bus/dpaa/dpaa_bus.c | 52 +-
drivers/bus/dpaa/include/fman.h | 3 +
drivers/bus/dpaa/include/fsl_qman.h | 17 +
drivers/bus/dpaa/include/process.h | 31 +
drivers/bus/dpaa/rte_bus_dpaa_version.map | 7 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 48 +-
drivers/bus/fslmc/Makefile | 1 +
drivers/bus/fslmc/fslmc_bus.c | 2 -
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 284 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.h | 10 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 10 +-
.../bus/fslmc/qbman/include/fsl_qbman_debug.h | 1 +
.../fslmc/qbman/include/fsl_qbman_portal.h | 8 +-
drivers/bus/fslmc/qbman/qbman_portal.c | 580 +-
drivers/bus/fslmc/qbman/qbman_portal.h | 19 +-
drivers/bus/fslmc/qbman/qbman_sys.h | 135 +-
drivers/bus/fslmc/rte_bus_fslmc_version.map | 1 -
drivers/bus/fslmc/rte_fslmc.h | 18 -
drivers/common/dpaax/compat.h | 5 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 11 +-
drivers/event/dpaa/dpaa_eventdev.c | 4 +-
drivers/mempool/dpaa/dpaa_mempool.c | 6 +-
drivers/net/dpaa/dpaa_ethdev.c | 431 +-
drivers/net/dpaa/dpaa_ethdev.h | 2 +-
drivers/net/dpaa/dpaa_rxtx.c | 77 +-
drivers/net/dpaa/dpaa_rxtx.h | 3 +
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 50 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 95 +-
drivers/net/dpaa2/dpaa2_ethdev.h | 49 +-
drivers/net/dpaa2/dpaa2_flow.c | 4767 ++++++++++++-----
37 files changed, 5141 insertions(+), 1758 deletions(-)
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH v2 01/29] bus/fslmc: fix getting the FD error
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 02/29] net/dpaa: fix fd offset data type Hemant Agrawal
` (28 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, stable, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Fix the incorrect register used for getting the error.
Fixes: 03e36408b9fb ("bus/fslmc: add macros required by QDMA for FLE and FD")
Cc: stable@dpdk.org
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 4682a5299..f1c70251a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -286,7 +286,7 @@ enum qbman_fd_format {
#define DPAA2_GET_FD_FRC(fd) ((fd)->simple.frc)
#define DPAA2_GET_FD_FLC(fd) \
(((uint64_t)((fd)->simple.flc_hi) << 32) + (fd)->simple.flc_lo)
-#define DPAA2_GET_FD_ERR(fd) ((fd)->simple.bpid_offset & 0x000000FF)
+#define DPAA2_GET_FD_ERR(fd) ((fd)->simple.ctrl & 0x000000FF)
#define DPAA2_GET_FLE_OFFSET(fle) (((fle)->fin_bpid_offset & 0x0FFF0000) >> 16)
#define DPAA2_SET_FLE_SG_EXT(fle) ((fle)->fin_bpid_offset |= (uint64_t)1 << 29)
#define DPAA2_IS_SET_FLE_SG_EXT(fle) \
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH v2 02/29] net/dpaa: fix fd offset data type
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 01/29] bus/fslmc: fix getting the FD error Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 03/29] net/dpaa2: enable timestamp for Rx offload case as well Hemant Agrawal
` (27 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, stable, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
On DPAA the FD offset field is 9 bits wide, but we were using uint8_t
in the SG case. This patch fixes that.
Fixes: 8cffdcbe85aa ("net/dpaa: support scattered Rx")
Cc: stable@dpdk.org
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 5dba1db8b..3aeecb7d2 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -305,7 +305,7 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
struct qm_sg_entry *sgt, *sg_temp;
void *vaddr, *sg_vaddr;
int i = 0;
- uint8_t fd_offset = fd->offset;
+ uint16_t fd_offset = fd->offset;
vaddr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd));
if (!vaddr) {
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH v2 03/29] net/dpaa2: enable timestamp for Rx offload case as well
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 01/29] bus/fslmc: fix getting the FD error Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 02/29] net/dpaa: fix fd offset data type Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-11 13:46 ` Thomas Monjalon
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 04/29] bus/fslmc: combine thread specific variables Hemant Agrawal
` (26 subsequent siblings)
29 siblings, 1 reply; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
This patch enables packet timestamping conditionally,
when the Rx timestamp offload is enabled.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index a1f19194d..8edd4b3cd 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -524,8 +524,10 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
return ret;
}
+#if !defined(RTE_LIBRTE_IEEE1588)
if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
- dpaa2_enable_ts = true;
+#endif
+ dpaa2_enable_ts = true;
if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
tx_l3_csum_offload = true;
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH v2 04/29] bus/fslmc: combine thread specific variables
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (2 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 03/29] net/dpaa2: enable timestamp for Rx offload case as well Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 05/29] bus/fslmc: rework portal allocation to a per thread basis Hemant Agrawal
` (25 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
This is to reduce the thread-local storage footprint.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/bus/fslmc/fslmc_bus.c | 2 --
drivers/bus/fslmc/portal/dpaa2_hw_dpio.h | 7 +++++++
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 8 ++++++++
drivers/bus/fslmc/rte_bus_fslmc_version.map | 1 -
drivers/bus/fslmc/rte_fslmc.h | 18 ------------------
5 files changed, 15 insertions(+), 21 deletions(-)
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 25d364e81..beb3dd008 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -35,8 +35,6 @@ rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type)
return rte_fslmc_bus.device_count[device_type];
}
-RTE_DEFINE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
-
static void
cleanup_fslmc_device_list(void)
{
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 7c5966241..f6436f2e5 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -28,6 +28,13 @@ RTE_DECLARE_PER_LCORE(struct dpaa2_io_portal_t, _dpaa2_io);
#define DPAA2_PER_LCORE_ETHRX_DPIO RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
#define DPAA2_PER_LCORE_ETHRX_PORTAL DPAA2_PER_LCORE_ETHRX_DPIO->sw_portal
+#define DPAA2_PER_LCORE_DQRR_SIZE \
+ RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.dqrr_size
+#define DPAA2_PER_LCORE_DQRR_HELD \
+ RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.dqrr_held
+#define DPAA2_PER_LCORE_DQRR_MBUF(i) \
+ RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.mbuf[i]
+
/* Variable to store DPAA2 DQRR size */
extern uint8_t dpaa2_dqrr_size;
/* Variable to store DPAA2 EQCR size */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index f1c70251a..be48462dd 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -87,6 +87,13 @@ struct eqresp_metadata {
struct rte_mempool *mp;
};
+#define DPAA2_PORTAL_DEQUEUE_DEPTH 32
+struct dpaa2_portal_dqrr {
+ struct rte_mbuf *mbuf[DPAA2_PORTAL_DEQUEUE_DEPTH];
+ uint64_t dqrr_held;
+ uint8_t dqrr_size;
+};
+
struct dpaa2_dpio_dev {
TAILQ_ENTRY(dpaa2_dpio_dev) next;
/**< Pointer to Next device instance */
@@ -112,6 +119,7 @@ struct dpaa2_dpio_dev {
struct rte_intr_handle intr_handle; /* Interrupt related info */
int32_t epoll_fd; /**< File descriptor created for interrupt polling */
int32_t hw_id; /**< An unique ID of this DPIO device instance */
+ struct dpaa2_portal_dqrr dpaa2_held_bufs;
};
struct dpaa2_dpbp_dev {
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index 69e7dc6ad..2a79f4518 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -57,7 +57,6 @@ INTERNAL {
mc_get_version;
mc_send_command;
per_lcore__dpaa2_io;
- per_lcore_dpaa2_held_bufs;
qbman_check_command_complete;
qbman_check_new_result;
qbman_eq_desc_clear;
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 5078b48ee..80873fffc 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -137,24 +137,6 @@ struct rte_fslmc_bus {
/**< Count of all devices scanned */
};
-#define DPAA2_PORTAL_DEQUEUE_DEPTH 32
-
-/* Create storage for dqrr entries per lcore */
-struct dpaa2_portal_dqrr {
- struct rte_mbuf *mbuf[DPAA2_PORTAL_DEQUEUE_DEPTH];
- uint64_t dqrr_held;
- uint8_t dqrr_size;
-};
-
-RTE_DECLARE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
-
-#define DPAA2_PER_LCORE_DQRR_SIZE \
- RTE_PER_LCORE(dpaa2_held_bufs).dqrr_size
-#define DPAA2_PER_LCORE_DQRR_HELD \
- RTE_PER_LCORE(dpaa2_held_bufs).dqrr_held
-#define DPAA2_PER_LCORE_DQRR_MBUF(i) \
- RTE_PER_LCORE(dpaa2_held_bufs).mbuf[i]
-
/**
* Register a DPAA2 driver.
*
--
2.17.1
* [dpdk-dev] [PATCH v2 05/29] bus/fslmc: rework portal allocation to a per thread basis
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (3 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 04/29] bus/fslmc: combine thread specific variables Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 06/29] bus/fslmc: support handle portal alloc failure Hemant Agrawal
` (24 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
The patch reworks the portal allocation, which was previously
done on a per-lcore basis, to a per-thread basis.
Users can now also create their own threads and use DPAA2 portals
for packet I/O.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/bus/fslmc/Makefile | 1 +
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 220 +++++++++++++----------
drivers/bus/fslmc/portal/dpaa2_hw_dpio.h | 3 -
3 files changed, 124 insertions(+), 100 deletions(-)
diff --git a/drivers/bus/fslmc/Makefile b/drivers/bus/fslmc/Makefile
index c70e359c8..b98d758ee 100644
--- a/drivers/bus/fslmc/Makefile
+++ b/drivers/bus/fslmc/Makefile
@@ -17,6 +17,7 @@ CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
CFLAGS += -I$(RTE_SDK)/drivers/common/dpaax
CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common
+LDLIBS += -lpthread
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
LDLIBS += -lrte_ethdev
LDLIBS += -lrte_common_dpaax
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 21c535f2f..47ae72749 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -62,6 +62,9 @@ uint8_t dpaa2_dqrr_size;
/* Variable to store DPAA2 EQCR size */
uint8_t dpaa2_eqcr_size;
+/* Variable to hold the portal_key, once created.*/
+static pthread_key_t dpaa2_portal_key;
+
/*Stashing Macros default for LS208x*/
static int dpaa2_core_cluster_base = 0x04;
static int dpaa2_cluster_sz = 2;
@@ -87,6 +90,32 @@ static int dpaa2_cluster_sz = 2;
* Cluster 4 (ID = x07) : CPU14, CPU15;
*/
+static int
+dpaa2_get_core_id(void)
+{
+ rte_cpuset_t cpuset;
+ int i, ret, cpu_id = -1;
+
+ ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+ &cpuset);
+ if (ret) {
+ DPAA2_BUS_ERR("pthread_getaffinity_np() failed");
+ return ret;
+ }
+
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ if (CPU_ISSET(i, &cpuset)) {
+ if (cpu_id == -1)
+ cpu_id = i;
+ else
+ /* Multiple cpus are affined */
+ return -1;
+ }
+ }
+
+ return cpu_id;
+}
+
static int
dpaa2_core_cluster_sdest(int cpu_id)
{
@@ -97,7 +126,7 @@ dpaa2_core_cluster_sdest(int cpu_id)
#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
static void
-dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
+dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int cpu_id)
{
#define STRING_LEN 28
#define COMMAND_LEN 50
@@ -130,7 +159,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
return;
}
- cpu_mask = cpu_mask << dpaa2_cpu[lcoreid];
+ cpu_mask = cpu_mask << dpaa2_cpu[cpu_id];
snprintf(command, COMMAND_LEN, "echo %X > /proc/irq/%s/smp_affinity",
cpu_mask, token);
ret = system(command);
@@ -144,7 +173,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
fclose(file);
}
-static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
+static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
{
struct epoll_event epoll_ev;
int eventfd, dpio_epoll_fd, ret;
@@ -181,36 +210,42 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
}
dpio_dev->epoll_fd = dpio_epoll_fd;
- dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id, lcoreid);
+ dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id, cpu_id);
return 0;
}
+
+static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
+{
+ int ret;
+
+ ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+ if (ret)
+ DPAA2_BUS_ERR("DPIO interrupt disable failed");
+
+ close(dpio_dev->epoll_fd);
+}
#endif
static int
-dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
+dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
{
int sdest, ret;
int cpu_id;
/* Set the Stashing Destination */
- if (lcoreid < 0) {
- lcoreid = rte_get_master_lcore();
- if (lcoreid < 0) {
- DPAA2_BUS_ERR("Getting CPU Index failed");
- return -1;
- }
+ cpu_id = dpaa2_get_core_id();
+ if (cpu_id < 0) {
+ DPAA2_BUS_ERR("Thread not affined to a single core");
+ return -1;
}
- cpu_id = dpaa2_cpu[lcoreid];
-
/* Set the STASH Destination depending on Current CPU ID.
* Valid values of SDEST are 4,5,6,7. Where,
*/
-
sdest = dpaa2_core_cluster_sdest(cpu_id);
- DPAA2_BUS_DEBUG("Portal= %d CPU= %u lcore id =%u SDEST= %d",
- dpio_dev->index, cpu_id, lcoreid, sdest);
+ DPAA2_BUS_DEBUG("Portal= %d CPU= %u SDEST= %d",
+ dpio_dev->index, cpu_id, sdest);
ret = dpio_set_stashing_destination(dpio_dev->dpio, CMD_PRI_LOW,
dpio_dev->token, sdest);
@@ -220,7 +255,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
}
#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
- if (dpaa2_dpio_intr_init(dpio_dev, lcoreid)) {
+ if (dpaa2_dpio_intr_init(dpio_dev, cpu_id)) {
DPAA2_BUS_ERR("Interrupt registration failed for dpio");
return -1;
}
@@ -229,7 +264,17 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
return 0;
}
-static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
+static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
+{
+ if (dpio_dev) {
+#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
+ dpaa2_dpio_intr_deinit(dpio_dev);
+#endif
+ rte_atomic16_clear(&dpio_dev->ref_count);
+ }
+}
+
+static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
{
struct dpaa2_dpio_dev *dpio_dev = NULL;
int ret;
@@ -245,9 +290,18 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
dpio_dev, dpio_dev->index, syscall(SYS_gettid));
- ret = dpaa2_configure_stashing(dpio_dev, lcoreid);
- if (ret)
+ ret = dpaa2_configure_stashing(dpio_dev);
+ if (ret) {
DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+ return NULL;
+ }
+
+ ret = pthread_setspecific(dpaa2_portal_key, (void *)dpio_dev);
+ if (ret) {
+ DPAA2_BUS_ERR("pthread_setspecific failed with ret: %d", ret);
+ dpaa2_put_qbman_swp(dpio_dev);
+ return NULL;
+ }
return dpio_dev;
}
@@ -255,98 +309,55 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
int
dpaa2_affine_qbman_swp(void)
{
- unsigned int lcore_id = rte_lcore_id();
+ struct dpaa2_dpio_dev *dpio_dev;
uint64_t tid = syscall(SYS_gettid);
- if (lcore_id == LCORE_ID_ANY)
- lcore_id = rte_get_master_lcore();
- /* if the core id is not supported */
- else if (lcore_id >= RTE_MAX_LCORE)
- return -1;
-
- if (dpaa2_io_portal[lcore_id].dpio_dev) {
- DPAA2_BUS_DP_INFO("DPAA Portal=%p (%d) is being shared"
- " between thread %" PRIu64 " and current "
- "%" PRIu64 "\n",
- dpaa2_io_portal[lcore_id].dpio_dev,
- dpaa2_io_portal[lcore_id].dpio_dev->index,
- dpaa2_io_portal[lcore_id].net_tid,
- tid);
- RTE_PER_LCORE(_dpaa2_io).dpio_dev
- = dpaa2_io_portal[lcore_id].dpio_dev;
- rte_atomic16_inc(&dpaa2_io_portal
- [lcore_id].dpio_dev->ref_count);
- dpaa2_io_portal[lcore_id].net_tid = tid;
-
- DPAA2_BUS_DP_DEBUG("Old Portal=%p (%d) affined thread - "
- "%" PRIu64 "\n",
- dpaa2_io_portal[lcore_id].dpio_dev,
- dpaa2_io_portal[lcore_id].dpio_dev->index,
- tid);
- return 0;
- }
-
/* Populate the dpaa2_io_portal structure */
- dpaa2_io_portal[lcore_id].dpio_dev = dpaa2_get_qbman_swp(lcore_id);
-
- if (dpaa2_io_portal[lcore_id].dpio_dev) {
- RTE_PER_LCORE(_dpaa2_io).dpio_dev
- = dpaa2_io_portal[lcore_id].dpio_dev;
- dpaa2_io_portal[lcore_id].net_tid = tid;
+ if (!RTE_PER_LCORE(_dpaa2_io).dpio_dev) {
+ dpio_dev = dpaa2_get_qbman_swp();
+ if (!dpio_dev) {
+ DPAA2_BUS_ERR("No software portal resource left");
+ return -1;
+ }
+ RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
- return 0;
- } else {
- return -1;
+ DPAA2_BUS_INFO(
+ "DPAA Portal=%p (%d) is affined to thread %" PRIu64,
+ dpio_dev, dpio_dev->index, tid);
}
+ return 0;
}
int
dpaa2_affine_qbman_ethrx_swp(void)
{
- unsigned int lcore_id = rte_lcore_id();
+ struct dpaa2_dpio_dev *dpio_dev;
uint64_t tid = syscall(SYS_gettid);
- if (lcore_id == LCORE_ID_ANY)
- lcore_id = rte_get_master_lcore();
- /* if the core id is not supported */
- else if (lcore_id >= RTE_MAX_LCORE)
- return -1;
+ /* Populate the dpaa2_io_portal structure */
+ if (!RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev) {
+ dpio_dev = dpaa2_get_qbman_swp();
+ if (!dpio_dev) {
+ DPAA2_BUS_ERR("No software portal resource left");
+ return -1;
+ }
+ RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
- if (dpaa2_io_portal[lcore_id].ethrx_dpio_dev) {
- DPAA2_BUS_DP_INFO(
- "DPAA Portal=%p (%d) is being shared between thread"
- " %" PRIu64 " and current %" PRIu64 "\n",
- dpaa2_io_portal[lcore_id].ethrx_dpio_dev,
- dpaa2_io_portal[lcore_id].ethrx_dpio_dev->index,
- dpaa2_io_portal[lcore_id].sec_tid,
- tid);
- RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
- = dpaa2_io_portal[lcore_id].ethrx_dpio_dev;
- rte_atomic16_inc(&dpaa2_io_portal
- [lcore_id].ethrx_dpio_dev->ref_count);
- dpaa2_io_portal[lcore_id].sec_tid = tid;
-
- DPAA2_BUS_DP_DEBUG(
- "Old Portal=%p (%d) affined thread"
- " - %" PRIu64 "\n",
- dpaa2_io_portal[lcore_id].ethrx_dpio_dev,
- dpaa2_io_portal[lcore_id].ethrx_dpio_dev->index,
- tid);
- return 0;
+ DPAA2_BUS_INFO(
+ "DPAA Portal=%p (%d) is affined for eth rx to thread %"
+ PRIu64, dpio_dev, dpio_dev->index, tid);
}
+ return 0;
+}
- /* Populate the dpaa2_io_portal structure */
- dpaa2_io_portal[lcore_id].ethrx_dpio_dev =
- dpaa2_get_qbman_swp(lcore_id);
-
- if (dpaa2_io_portal[lcore_id].ethrx_dpio_dev) {
- RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
- = dpaa2_io_portal[lcore_id].ethrx_dpio_dev;
- dpaa2_io_portal[lcore_id].sec_tid = tid;
- return 0;
- } else {
- return -1;
- }
+static void dpaa2_portal_finish(void *arg)
+{
+ RTE_SET_USED(arg);
+
+ dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).dpio_dev);
+ dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev);
+
+ pthread_setspecific(dpaa2_portal_key, NULL);
}
/*
@@ -398,6 +409,7 @@ dpaa2_create_dpio_device(int vdev_fd,
struct vfio_region_info reg_info = { .argsz = sizeof(reg_info)};
struct qbman_swp_desc p_des;
struct dpio_attr attr;
+ int ret;
static int check_lcore_cpuset;
if (obj_info->num_regions < NUM_DPIO_REGIONS) {
@@ -547,12 +559,26 @@ dpaa2_create_dpio_device(int vdev_fd,
TAILQ_INSERT_TAIL(&dpio_dev_list, dpio_dev, next);
+ if (!dpaa2_portal_key) {
+ /* create the key, supplying a function that'll be invoked
+ * when a portal affined thread will be deleted.
+ */
+ ret = pthread_key_create(&dpaa2_portal_key,
+ dpaa2_portal_finish);
+ if (ret) {
+ DPAA2_BUS_DEBUG("Unable to create pthread key (%d)",
+ ret);
+ goto err;
+ }
+ }
+
return 0;
err:
if (dpio_dev->dpio) {
dpio_disable(dpio_dev->dpio, CMD_PRI_LOW, dpio_dev->token);
dpio_close(dpio_dev->dpio, CMD_PRI_LOW, dpio_dev->token);
+ rte_free(dpio_dev->eqresp);
rte_free(dpio_dev->dpio);
}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index f6436f2e5..b8eb8ee0a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -14,9 +14,6 @@
struct dpaa2_io_portal_t {
struct dpaa2_dpio_dev *dpio_dev;
struct dpaa2_dpio_dev *ethrx_dpio_dev;
- uint64_t net_tid;
- uint64_t sec_tid;
- void *eventdev;
};
/*! Global per thread DPIO portal */
--
2.17.1
* [dpdk-dev] [PATCH v2 06/29] bus/fslmc: support handle portal alloc failure
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (4 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 05/29] bus/fslmc: rework portal allocation to a per thread basis Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 07/29] bus/fslmc: support portal migration Hemant Agrawal
` (23 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Nipun Gupta, Hemant Agrawal
Add error handling for software portal allocation failure.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 47ae72749..5a12ff35d 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -284,8 +284,10 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
if (dpio_dev && rte_atomic16_test_and_set(&dpio_dev->ref_count))
break;
}
- if (!dpio_dev)
+ if (!dpio_dev) {
+ DPAA2_BUS_ERR("No software portal resource left");
return NULL;
+ }
DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
dpio_dev, dpio_dev->index, syscall(SYS_gettid));
@@ -293,6 +295,7 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
ret = dpaa2_configure_stashing(dpio_dev);
if (ret) {
DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+ rte_atomic16_clear(&dpio_dev->ref_count);
return NULL;
}
@@ -316,7 +319,7 @@ dpaa2_affine_qbman_swp(void)
if (!RTE_PER_LCORE(_dpaa2_io).dpio_dev) {
dpio_dev = dpaa2_get_qbman_swp();
if (!dpio_dev) {
- DPAA2_BUS_ERR("No software portal resource left");
+ DPAA2_BUS_ERR("Error in software portal allocation");
return -1;
}
RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
@@ -338,7 +341,7 @@ dpaa2_affine_qbman_ethrx_swp(void)
if (!RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev) {
dpio_dev = dpaa2_get_qbman_swp();
if (!dpio_dev) {
- DPAA2_BUS_ERR("No software portal resource left");
+ DPAA2_BUS_ERR("Error in software portal allocation");
return -1;
}
RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
--
2.17.1
* [dpdk-dev] [PATCH v2 07/29] bus/fslmc: support portal migration
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (5 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 06/29] bus/fslmc: support handle portal alloc failure Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 08/29] bus/fslmc: rename the cinh read functions used for ls1088 Hemant Agrawal
` (22 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
The patch adds support for portal migration by disabling stashing
for portals used by non-affined threads, or by threads affined
to multiple cores.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
doc/guides/rel_notes/release_20_08.rst | 5 +
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 83 +----
.../bus/fslmc/qbman/include/fsl_qbman_debug.h | 1 +
.../fslmc/qbman/include/fsl_qbman_portal.h | 8 +-
drivers/bus/fslmc/qbman/qbman_portal.c | 340 +++++++++++++++++-
drivers/bus/fslmc/qbman/qbman_portal.h | 19 +-
drivers/bus/fslmc/qbman/qbman_sys.h | 135 ++++++-
7 files changed, 508 insertions(+), 83 deletions(-)
diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index ffae463f4..d915fce12 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -119,6 +119,11 @@ New Features
See the :doc:`../sample_app_ug/l2_forward_real_virtual` for more
details of this parameter usage.
+* **Updated NXP dpaa2 ethdev PMD.**
+
+ Updated the NXP dpaa2 ethdev with new features and improvements, including:
+
+ * Added support to use datapath APIs from non-EAL pthread
Removed Items
-------------
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 5a12ff35d..97be76116 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -53,10 +53,6 @@ static uint32_t io_space_count;
/* Variable to store DPAA2 platform type */
uint32_t dpaa2_svr_family;
-/* Physical core id for lcores running on dpaa2. */
-/* DPAA2 only support 1 lcore to 1 phy cpu mapping */
-static unsigned int dpaa2_cpu[RTE_MAX_LCORE];
-
/* Variable to store DPAA2 DQRR size */
uint8_t dpaa2_dqrr_size;
/* Variable to store DPAA2 EQCR size */
@@ -159,7 +155,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int cpu_id)
return;
}
- cpu_mask = cpu_mask << dpaa2_cpu[cpu_id];
+ cpu_mask = cpu_mask << cpu_id;
snprintf(command, COMMAND_LEN, "echo %X > /proc/irq/%s/smp_affinity",
cpu_mask, token);
ret = system(command);
@@ -228,17 +224,9 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
#endif
static int
-dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
+dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
{
int sdest, ret;
- int cpu_id;
-
- /* Set the Stashing Destination */
- cpu_id = dpaa2_get_core_id();
- if (cpu_id < 0) {
- DPAA2_BUS_ERR("Thread not affined to a single core");
- return -1;
- }
/* Set the STASH Destination depending on Current CPU ID.
* Valid values of SDEST are 4,5,6,7. Where,
@@ -277,6 +265,7 @@ static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
{
struct dpaa2_dpio_dev *dpio_dev = NULL;
+ int cpu_id;
int ret;
/* Get DPIO dev handle from list using index */
@@ -292,11 +281,19 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
dpio_dev, dpio_dev->index, syscall(SYS_gettid));
- ret = dpaa2_configure_stashing(dpio_dev);
- if (ret) {
- DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
- rte_atomic16_clear(&dpio_dev->ref_count);
- return NULL;
+ /* Set the Stashing Destination */
+ cpu_id = dpaa2_get_core_id();
+ if (cpu_id < 0) {
+ DPAA2_BUS_WARN("Thread not affined to a single core");
+ if (dpaa2_svr_family != SVR_LX2160A)
+ qbman_swp_update(dpio_dev->sw_portal, 1);
+ } else {
+ ret = dpaa2_configure_stashing(dpio_dev, cpu_id);
+ if (ret) {
+ DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+ rte_atomic16_clear(&dpio_dev->ref_count);
+ return NULL;
+ }
}
ret = pthread_setspecific(dpaa2_portal_key, (void *)dpio_dev);
@@ -363,46 +360,6 @@ static void dpaa2_portal_finish(void *arg)
pthread_setspecific(dpaa2_portal_key, NULL);
}
-/*
- * This checks for not supported lcore mappings as well as get the physical
- * cpuid for the lcore.
- * one lcore can only map to 1 cpu i.e. 1@10-14 not supported.
- * one cpu can be mapped to more than one lcores.
- */
-static int
-dpaa2_check_lcore_cpuset(void)
-{
- unsigned int lcore_id, i;
- int ret = 0;
-
- for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
- dpaa2_cpu[lcore_id] = 0xffffffff;
-
- for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- rte_cpuset_t cpuset = rte_lcore_cpuset(lcore_id);
-
- for (i = 0; i < CPU_SETSIZE; i++) {
- if (!CPU_ISSET(i, &cpuset))
- continue;
- if (i >= RTE_MAX_LCORE) {
- DPAA2_BUS_ERR("ERR:lcore map to core %u (>= %u) not supported",
- i, RTE_MAX_LCORE);
- ret = -1;
- continue;
- }
- RTE_LOG(DEBUG, EAL, "lcore id = %u cpu=%u\n",
- lcore_id, i);
- if (dpaa2_cpu[lcore_id] != 0xffffffff) {
- DPAA2_BUS_ERR("ERR:lcore map to multi-cpu not supported");
- ret = -1;
- continue;
- }
- dpaa2_cpu[lcore_id] = i;
- }
- }
- return ret;
-}
-
static int
dpaa2_create_dpio_device(int vdev_fd,
struct vfio_device_info *obj_info,
@@ -413,7 +370,6 @@ dpaa2_create_dpio_device(int vdev_fd,
struct qbman_swp_desc p_des;
struct dpio_attr attr;
int ret;
- static int check_lcore_cpuset;
if (obj_info->num_regions < NUM_DPIO_REGIONS) {
DPAA2_BUS_ERR("Not sufficient number of DPIO regions");
@@ -433,13 +389,6 @@ dpaa2_create_dpio_device(int vdev_fd,
/* Using single portal for all devices */
dpio_dev->mc_portal = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
- if (!check_lcore_cpuset) {
- check_lcore_cpuset = 1;
-
- if (dpaa2_check_lcore_cpuset() < 0)
- goto err;
- }
-
dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
RTE_CACHE_LINE_SIZE);
if (!dpio_dev->dpio) {
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 11267d439..54096e877 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2015 Freescale Semiconductor, Inc.
+ * Copyright 2020 NXP
*/
#ifndef _FSL_QBMAN_DEBUG_H
#define _FSL_QBMAN_DEBUG_H
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
index f820077d2..eb68c9cab 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (C) 2014 Freescale Semiconductor, Inc.
- * Copyright 2015-2019 NXP
+ * Copyright 2015-2020 NXP
*
*/
#ifndef _FSL_QBMAN_PORTAL_H
@@ -44,6 +44,12 @@ extern uint32_t dpaa2_svr_family;
*/
struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d);
+/**
+ * qbman_swp_update() - Update portal cacheability attributes.
+ * @p: the given qbman swp portal
+ */
+int qbman_swp_update(struct qbman_swp *p, int stash_off);
+
/**
* qbman_swp_finish() - Create and destroy a functional object representing
* the given QBMan portal descriptor.
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index d7ff74c7a..57f50b0d8 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2020 NXP
*
*/
@@ -82,6 +82,10 @@ qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd);
static int
+qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ const struct qbman_fd *fd);
+static int
qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd);
@@ -99,6 +103,12 @@ qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
uint32_t *flags,
int num_frames);
static int
+qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ const struct qbman_fd *fd,
+ uint32_t *flags,
+ int num_frames);
+static int
qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
@@ -118,6 +128,12 @@ qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
uint32_t *flags,
int num_frames);
static int
+qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ struct qbman_fd **fd,
+ uint32_t *flags,
+ int num_frames);
+static int
qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
struct qbman_fd **fd,
@@ -135,6 +151,11 @@ qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
const struct qbman_fd *fd,
int num_frames);
static int
+qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ const struct qbman_fd *fd,
+ int num_frames);
+static int
qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
@@ -143,9 +164,12 @@ qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
static int
qbman_swp_pull_direct(struct qbman_swp *s, struct qbman_pull_desc *d);
static int
+qbman_swp_pull_cinh_direct(struct qbman_swp *s, struct qbman_pull_desc *d);
+static int
qbman_swp_pull_mem_back(struct qbman_swp *s, struct qbman_pull_desc *d);
const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s);
+const struct qbman_result *qbman_swp_dqrr_next_cinh_direct(struct qbman_swp *s);
const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s);
static int
@@ -153,6 +177,10 @@ qbman_swp_release_direct(struct qbman_swp *s,
const struct qbman_release_desc *d,
const uint64_t *buffers, unsigned int num_buffers);
static int
+qbman_swp_release_cinh_direct(struct qbman_swp *s,
+ const struct qbman_release_desc *d,
+ const uint64_t *buffers, unsigned int num_buffers);
+static int
qbman_swp_release_mem_back(struct qbman_swp *s,
const struct qbman_release_desc *d,
const uint64_t *buffers, unsigned int num_buffers);
@@ -327,6 +355,28 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
return p;
}
+int qbman_swp_update(struct qbman_swp *p, int stash_off)
+{
+ const struct qbman_swp_desc *d = &p->desc;
+ struct qbman_swp_sys *s = &p->sys;
+ int ret;
+
+ /* Nothing needs to be done for QBMAN rev > 5000 with fast access */
+ if ((qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+ && (d->cena_access_mode == qman_cena_fastest_access))
+ return 0;
+
+ ret = qbman_swp_sys_update(s, d, p->dqrr.dqrr_size, stash_off);
+ if (ret) {
+ pr_err("qbman_swp_sys_init() failed %d\n", ret);
+ return ret;
+ }
+
+ p->stash_off = stash_off;
+
+ return 0;
+}
+
void qbman_swp_finish(struct qbman_swp *p)
{
#ifdef QBMAN_CHECKING
@@ -462,6 +512,27 @@ void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint8_t cmd_verb)
#endif
}
+void qbman_swp_mc_submit_cinh(struct qbman_swp *p, void *cmd, uint8_t cmd_verb)
+{
+ uint8_t *v = cmd;
+#ifdef QBMAN_CHECKING
+ QBMAN_BUG_ON(!(p->mc.check != swp_mc_can_submit));
+#endif
+ /* TBD: "|=" is going to hurt performance. Need to move as many fields
+ * out of word zero, and for those that remain, the "OR" needs to occur
+ * at the caller side. This debug check helps to catch cases where the
+ * caller wants to OR but has forgotten to do so.
+ */
+ QBMAN_BUG_ON((*v & cmd_verb) != *v);
+ dma_wmb();
+ *v = cmd_verb | p->mc.valid_bit;
+ qbman_cinh_write_complete(&p->sys, QBMAN_CENA_SWP_CR, cmd);
+ clean(cmd);
+#ifdef QBMAN_CHECKING
+ p->mc.check = swp_mc_can_poll;
+#endif
+}
+
void *qbman_swp_mc_result(struct qbman_swp *p)
{
uint32_t *ret, verb;
@@ -500,6 +571,27 @@ void *qbman_swp_mc_result(struct qbman_swp *p)
return ret;
}
+void *qbman_swp_mc_result_cinh(struct qbman_swp *p)
+{
+ uint32_t *ret, verb;
+#ifdef QBMAN_CHECKING
+ QBMAN_BUG_ON(p->mc.check != swp_mc_can_poll);
+#endif
+ ret = qbman_cinh_read_shadow(&p->sys,
+ QBMAN_CENA_SWP_RR(p->mc.valid_bit));
+ /* Remove the valid-bit -
+ * command completed iff the rest is non-zero
+ */
+ verb = ret[0] & ~QB_VALID_BIT;
+ if (!verb)
+ return NULL;
+ p->mc.valid_bit ^= QB_VALID_BIT;
+#ifdef QBMAN_CHECKING
+ p->mc.check = swp_mc_can_start;
+#endif
+ return ret;
+}
+
/***********/
/* Enqueue */
/***********/
@@ -640,6 +732,16 @@ static inline void qbman_write_eqcr_am_rt_register(struct qbman_swp *p,
QMAN_RT_MODE);
}
+static void memcpy_byte_by_byte(void *to, const void *from, size_t n)
+{
+ const uint8_t *src = from;
+ volatile uint8_t *dest = to;
+ size_t i;
+
+ for (i = 0; i < n; i++)
+ dest[i] = src[i];
+}
+
static int qbman_swp_enqueue_array_mode_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
@@ -754,7 +856,7 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
return -EBUSY;
}
- p = qbman_cena_write_start_wo_shadow(&s->sys,
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
memcpy(&p[1], &cl[1], 28);
memcpy(&p[8], fd, sizeof(*fd));
@@ -762,8 +864,6 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
/* Set the verb byte, have to substitute in the valid-bit */
p[0] = cl[0] | s->eqcr.pi_vb;
- qbman_cena_write_complete_wo_shadow(&s->sys,
- QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
s->eqcr.pi++;
s->eqcr.pi &= full_mask;
s->eqcr.available--;
@@ -815,7 +915,10 @@ static int qbman_swp_enqueue_ring_mode(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd)
{
- return qbman_swp_enqueue_ring_mode_ptr(s, d, fd);
+ if (!s->stash_off)
+ return qbman_swp_enqueue_ring_mode_ptr(s, d, fd);
+ else
+ return qbman_swp_enqueue_ring_mode_cinh_direct(s, d, fd);
}
int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
@@ -1025,7 +1128,12 @@ int qbman_swp_enqueue_multiple(struct qbman_swp *s,
uint32_t *flags,
int num_frames)
{
- return qbman_swp_enqueue_multiple_ptr(s, d, fd, flags, num_frames);
+ if (!s->stash_off)
+ return qbman_swp_enqueue_multiple_ptr(s, d, fd, flags,
+ num_frames);
+ else
+ return qbman_swp_enqueue_multiple_cinh_direct(s, d, fd, flags,
+ num_frames);
}
static int qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
@@ -1233,7 +1341,12 @@ int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
uint32_t *flags,
int num_frames)
{
- return qbman_swp_enqueue_multiple_fd_ptr(s, d, fd, flags, num_frames);
+ if (!s->stash_off)
+ return qbman_swp_enqueue_multiple_fd_ptr(s, d, fd, flags,
+ num_frames);
+ else
+ return qbman_swp_enqueue_multiple_fd_cinh_direct(s, d, fd,
+ flags, num_frames);
}
static int qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
@@ -1426,7 +1539,13 @@ int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s,
const struct qbman_fd *fd,
int num_frames)
{
- return qbman_swp_enqueue_multiple_desc_ptr(s, d, fd, num_frames);
+ if (!s->stash_off)
+ return qbman_swp_enqueue_multiple_desc_ptr(s, d, fd,
+ num_frames);
+ else
+ return qbman_swp_enqueue_multiple_desc_cinh_direct(s, d, fd,
+ num_frames);
+
}
/*************************/
@@ -1574,6 +1693,30 @@ static int qbman_swp_pull_direct(struct qbman_swp *s,
return 0;
}
+static int qbman_swp_pull_cinh_direct(struct qbman_swp *s,
+ struct qbman_pull_desc *d)
+{
+ uint32_t *p;
+ uint32_t *cl = qb_cl(d);
+
+ if (!atomic_dec_and_test(&s->vdq.busy)) {
+ atomic_inc(&s->vdq.busy);
+ return -EBUSY;
+ }
+
+ d->pull.tok = s->sys.idx + 1;
+ s->vdq.storage = (void *)(size_t)d->pull.rsp_addr_virt;
+ p = qbman_cinh_write_start_wo_shadow(&s->sys, QBMAN_CENA_SWP_VDQCR);
+ memcpy_byte_by_byte(&p[1], &cl[1], 12);
+
+ /* Set the verb byte, have to substitute in the valid-bit */
+ lwsync();
+ p[0] = cl[0] | s->vdq.valid_bit;
+ s->vdq.valid_bit ^= QB_VALID_BIT;
+
+ return 0;
+}
+
static int qbman_swp_pull_mem_back(struct qbman_swp *s,
struct qbman_pull_desc *d)
{
@@ -1601,7 +1744,10 @@ static int qbman_swp_pull_mem_back(struct qbman_swp *s,
int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d)
{
- return qbman_swp_pull_ptr(s, d);
+ if (!s->stash_off)
+ return qbman_swp_pull_ptr(s, d);
+ else
+ return qbman_swp_pull_cinh_direct(s, d);
}
/****************/
@@ -1638,7 +1784,10 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s)
*/
const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *s)
{
- return qbman_swp_dqrr_next_ptr(s);
+ if (!s->stash_off)
+ return qbman_swp_dqrr_next_ptr(s);
+ else
+ return qbman_swp_dqrr_next_cinh_direct(s);
}
const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s)
@@ -1718,6 +1867,81 @@ const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s)
return p;
}
+const struct qbman_result *qbman_swp_dqrr_next_cinh_direct(struct qbman_swp *s)
+{
+ uint32_t verb;
+ uint32_t response_verb;
+ uint32_t flags;
+ const struct qbman_result *p;
+
+ /* Before using valid-bit to detect if something is there, we have to
+ * handle the case of the DQRR reset bug...
+ */
+ if (s->dqrr.reset_bug) {
+ /* We pick up new entries by cache-inhibited producer index,
+ * which means that a non-coherent mapping would require us to
+ * invalidate and read *only* once that PI has indicated that
+ * there's an entry here. The first trip around the DQRR ring
+ * will be much less efficient than all subsequent trips around
+ * it...
+ */
+ uint8_t pi = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_DQPI) &
+ QMAN_DQRR_PI_MASK;
+
+ /* there are new entries if pi != next_idx */
+ if (pi == s->dqrr.next_idx)
+ return NULL;
+
+ /* if next_idx is/was the last ring index, and 'pi' is
+ * different, we can disable the workaround as all the ring
+ * entries have now been DMA'd to so valid-bit checking is
+ * repaired. Note: this logic needs to be based on next_idx
+ * (which increments one at a time), rather than on pi (which
+ * can burst and wrap-around between our snapshots of it).
+ */
+ QBMAN_BUG_ON((s->dqrr.dqrr_size - 1) < 0);
+ if (s->dqrr.next_idx == (s->dqrr.dqrr_size - 1u)) {
+ pr_debug("DEBUG: next_idx=%d, pi=%d, clear reset bug\n",
+ s->dqrr.next_idx, pi);
+ s->dqrr.reset_bug = 0;
+ }
+ }
+ p = qbman_cinh_read_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
+
+ verb = p->dq.verb;
+
+ /* If the valid-bit isn't of the expected polarity, nothing there. Note,
+ * in the DQRR reset bug workaround, we shouldn't need to skip these
+ * checks, because we've already determined that a new entry is available
+ * and we've invalidated the cacheline before reading it, so the
+ * valid-bit behaviour is repaired and should tell us what we already
+ * knew from reading PI.
+ */
+ if ((verb & QB_VALID_BIT) != s->dqrr.valid_bit)
+ return NULL;
+
+ /* There's something there. Move "next_idx" attention to the next ring
+ * entry (and prefetch it) before returning what we found.
+ */
+ s->dqrr.next_idx++;
+ if (s->dqrr.next_idx == s->dqrr.dqrr_size) {
+ s->dqrr.next_idx = 0;
+ s->dqrr.valid_bit ^= QB_VALID_BIT;
+ }
+ /* If this is the final response to a volatile dequeue command
+ * indicate that the vdq is no longer busy
+ */
+ flags = p->dq.stat;
+ response_verb = verb & QBMAN_RESPONSE_VERB_MASK;
+ if ((response_verb == QBMAN_RESULT_DQ) &&
+ (flags & QBMAN_DQ_STAT_VOLATILE) &&
+ (flags & QBMAN_DQ_STAT_EXPIRED))
+ atomic_inc(&s->vdq.busy);
+
+ return p;
+}
+
const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s)
{
uint32_t verb;
@@ -2096,6 +2320,37 @@ static int qbman_swp_release_direct(struct qbman_swp *s,
return 0;
}
+static int qbman_swp_release_cinh_direct(struct qbman_swp *s,
+ const struct qbman_release_desc *d,
+ const uint64_t *buffers,
+ unsigned int num_buffers)
+{
+ uint32_t *p;
+ const uint32_t *cl = qb_cl(d);
+ uint32_t rar = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_RAR);
+
+ pr_debug("RAR=%08x\n", rar);
+ if (!RAR_SUCCESS(rar))
+ return -EBUSY;
+
+ QBMAN_BUG_ON(!num_buffers || (num_buffers > 7));
+
+ /* Start the release command */
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_RCR(RAR_IDX(rar)));
+
+ /* Copy the caller's buffer pointers to the command */
+ memcpy_byte_by_byte(&p[2], buffers, num_buffers * sizeof(uint64_t));
+
+ /* Set the verb byte, have to substitute in the valid-bit and the
+ * number of buffers.
+ */
+ lwsync();
+ p[0] = cl[0] | RAR_VB(rar) | num_buffers;
+
+ return 0;
+}
+
static int qbman_swp_release_mem_back(struct qbman_swp *s,
const struct qbman_release_desc *d,
const uint64_t *buffers,
@@ -2134,7 +2389,11 @@ int qbman_swp_release(struct qbman_swp *s,
const uint64_t *buffers,
unsigned int num_buffers)
{
- return qbman_swp_release_ptr(s, d, buffers, num_buffers);
+ if (!s->stash_off)
+ return qbman_swp_release_ptr(s, d, buffers, num_buffers);
+ else
+ return qbman_swp_release_cinh_direct(s, d, buffers,
+ num_buffers);
}
/*******************/
@@ -2157,8 +2416,8 @@ struct qbman_acquire_rslt {
uint64_t buf[7];
};
-int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
- unsigned int num_buffers)
+static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
+ uint64_t *buffers, unsigned int num_buffers)
{
struct qbman_acquire_desc *p;
struct qbman_acquire_rslt *r;
@@ -2202,6 +2461,61 @@ int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
return (int)r->num;
}
+static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
+ uint64_t *buffers, unsigned int num_buffers)
+{
+ struct qbman_acquire_desc *p;
+ struct qbman_acquire_rslt *r;
+
+ if (!num_buffers || (num_buffers > 7))
+ return -EINVAL;
+
+ /* Start the management command */
+ p = qbman_swp_mc_start(s);
+
+ if (!p)
+ return -EBUSY;
+
+ /* Encode the caller-provided attributes */
+ p->bpid = bpid;
+ p->num = num_buffers;
+
+ /* Complete the management command */
+ r = qbman_swp_mc_complete_cinh(s, p, QBMAN_MC_ACQUIRE);
+ if (!r) {
+ pr_err("qbman: acquire from BPID %d failed, no response\n",
+ bpid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_MC_ACQUIRE);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Acquire buffers from BPID 0x%x failed, code=0x%02x\n",
+ bpid, r->rslt);
+ return -EIO;
+ }
+
+ QBMAN_BUG_ON(r->num > num_buffers);
+
+ /* Copy the acquired buffers to the caller's array */
+ u64_from_le32_copy(buffers, &r->buf[0], r->num);
+
+ return (int)r->num;
+}
+
+int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
+ unsigned int num_buffers)
+{
+ if (!s->stash_off)
+ return qbman_swp_acquire_direct(s, bpid, buffers, num_buffers);
+ else
+ return qbman_swp_acquire_cinh_direct(s, bpid, buffers,
+ num_buffers);
+}
+
/*****************/
/* FQ management */
/*****************/
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.h b/drivers/bus/fslmc/qbman/qbman_portal.h
index 3aaacae52..1cf791830 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/qbman_portal.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2020 NXP
*
*/
@@ -102,6 +102,7 @@ struct qbman_swp {
uint32_t ci;
int available;
} eqcr;
+ uint8_t stash_off;
};
/* -------------------------- */
@@ -118,7 +119,9 @@ struct qbman_swp {
*/
void *qbman_swp_mc_start(struct qbman_swp *p);
void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint8_t cmd_verb);
+void qbman_swp_mc_submit_cinh(struct qbman_swp *p, void *cmd, uint8_t cmd_verb);
void *qbman_swp_mc_result(struct qbman_swp *p);
+void *qbman_swp_mc_result_cinh(struct qbman_swp *p);
/* Wraps up submit + poll-for-result */
static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd,
@@ -135,6 +138,20 @@ static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd,
return cmd;
}
+static inline void *qbman_swp_mc_complete_cinh(struct qbman_swp *swp, void *cmd,
+ uint8_t cmd_verb)
+{
+ int loopvar = 1000;
+
+ qbman_swp_mc_submit_cinh(swp, cmd, cmd_verb);
+ do {
+ cmd = qbman_swp_mc_result_cinh(swp);
+ } while (!cmd && loopvar--);
+ QBMAN_BUG_ON(!loopvar);
+
+ return cmd;
+}
+
/* ---------------------- */
/* Descriptors/cachelines */
/* ---------------------- */
diff --git a/drivers/bus/fslmc/qbman/qbman_sys.h b/drivers/bus/fslmc/qbman/qbman_sys.h
index 55449edf3..61f817c47 100644
--- a/drivers/bus/fslmc/qbman/qbman_sys.h
+++ b/drivers/bus/fslmc/qbman/qbman_sys.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2019 NXP
+ * Copyright 2019-2020 NXP
*/
/* qbman_sys_decl.h and qbman_sys.h are the two platform-specific files in the
* driver. They are only included via qbman_private.h, which is itself a
@@ -190,6 +190,34 @@ static inline void qbman_cinh_write(struct qbman_swp_sys *s, uint32_t offset,
#endif
}
+static inline void *qbman_cinh_write_start_wo_shadow(struct qbman_swp_sys *s,
+ uint32_t offset)
+{
+#ifdef QBMAN_CINH_TRACE
+ pr_info("qbman_cinh_write_start(%p:%d:0x%03x)\n",
+ s->addr_cinh, s->idx, offset);
+#endif
+ QBMAN_BUG_ON(offset & 63);
+ return (s->addr_cinh + offset);
+}
+
+static inline void qbman_cinh_write_complete(struct qbman_swp_sys *s,
+ uint32_t offset, void *cmd)
+{
+ const uint32_t *shadow = cmd;
+ int loop;
+#ifdef QBMAN_CINH_TRACE
+ pr_info("qbman_cinh_write_complete(%p:%d:0x%03x) %p\n",
+ s->addr_cinh, s->idx, offset, shadow);
+ hexdump(cmd, 64);
+#endif
+ for (loop = 15; loop >= 1; loop--)
+ __raw_writel(shadow[loop], s->addr_cinh +
+ offset + loop * 4);
+ lwsync();
+ __raw_writel(shadow[0], s->addr_cinh + offset);
+}
+
static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset)
{
uint32_t reg = __raw_readl(s->addr_cinh + offset);
@@ -200,6 +228,35 @@ static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset)
return reg;
}
+static inline void *qbman_cinh_read_shadow(struct qbman_swp_sys *s,
+ uint32_t offset)
+{
+ uint32_t *shadow = (uint32_t *)(s->cena + offset);
+ unsigned int loop;
+#ifdef QBMAN_CINH_TRACE
+ pr_info(" %s (%p:%d:0x%03x) %p\n", __func__,
+ s->addr_cinh, s->idx, offset, shadow);
+#endif
+
+ for (loop = 0; loop < 16; loop++)
+ shadow[loop] = __raw_readl(s->addr_cinh + offset
+ + loop * 4);
+#ifdef QBMAN_CINH_TRACE
+ hexdump(shadow, 64);
+#endif
+ return shadow;
+}
+
+static inline void *qbman_cinh_read_wo_shadow(struct qbman_swp_sys *s,
+ uint32_t offset)
+{
+#ifdef QBMAN_CINH_TRACE
+ pr_info("qbman_cinh_read(%p:%d:0x%03x)\n",
+ s->addr_cinh, s->idx, offset);
+#endif
+ return s->addr_cinh + offset;
+}
+
static inline void *qbman_cena_write_start(struct qbman_swp_sys *s,
uint32_t offset)
{
@@ -476,6 +533,82 @@ static inline int qbman_swp_sys_init(struct qbman_swp_sys *s,
return 0;
}
+static inline int qbman_swp_sys_update(struct qbman_swp_sys *s,
+ const struct qbman_swp_desc *d,
+ uint8_t dqrr_size,
+ int stash_off)
+{
+ uint32_t reg;
+ int i;
+ int cena_region_size = 4*1024;
+ uint8_t est = 1;
+#ifdef RTE_ARCH_64
+ uint8_t wn = CENA_WRITE_ENABLE;
+#else
+ uint8_t wn = CINH_WRITE_ENABLE;
+#endif
+
+ if (stash_off)
+ wn = CINH_WRITE_ENABLE;
+
+ QBMAN_BUG_ON(d->idx < 0);
+#ifdef QBMAN_CHECKING
+ /* We should never be asked to initialise for a portal that isn't in
+ * the power-on state. (Ie. don't forget to reset portals when they are
+ * decommissioned!)
+ */
+ reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
+ QBMAN_BUG_ON(reg);
+#endif
+ if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+ && (d->cena_access_mode == qman_cena_fastest_access))
+ memset(s->addr_cena, 0, cena_region_size);
+ else {
+ /* Invalidate the portal memory.
+ * This ensures no stale cache lines
+ */
+ for (i = 0; i < cena_region_size; i += 64)
+ dccivac(s->addr_cena + i);
+ }
+
+ if (dpaa2_svr_family == SVR_LS1080A)
+ est = 0;
+
+ if (s->eqcr_mode == qman_eqcr_vb_array) {
+ reg = qbman_set_swp_cfg(dqrr_size, wn,
+ 0, 3, 2, 3, 1, 1, 1, 1, 1, 1);
+ } else {
+ if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000 &&
+ (d->cena_access_mode == qman_cena_fastest_access))
+ reg = qbman_set_swp_cfg(dqrr_size, wn,
+ 1, 3, 2, 0, 1, 1, 1, 1, 1, 1);
+ else
+ reg = qbman_set_swp_cfg(dqrr_size, wn,
+ est, 3, 2, 2, 1, 1, 1, 1, 1, 1);
+ }
+
+ if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+ && (d->cena_access_mode == qman_cena_fastest_access))
+ reg |= 1 << SWP_CFG_CPBS_SHIFT | /* memory-backed mode */
+ 1 << SWP_CFG_VPM_SHIFT | /* VDQCR read triggered mode */
+ 1 << SWP_CFG_CPM_SHIFT; /* CR read triggered mode */
+
+ qbman_cinh_write(s, QBMAN_CINH_SWP_CFG, reg);
+ reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
+ if (!reg) {
+ pr_err("The portal %d is not enabled!\n", s->idx);
+ return -1;
+ }
+
+ if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+ && (d->cena_access_mode == qman_cena_fastest_access)) {
+ qbman_cinh_write(s, QBMAN_CINH_SWP_EQCR_PI, QMAN_RT_MODE);
+ qbman_cinh_write(s, QBMAN_CINH_SWP_RCR_PI, QMAN_RT_MODE);
+ }
+
+ return 0;
+}
+
static inline void qbman_swp_sys_finish(struct qbman_swp_sys *s)
{
free(s->cena);
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH v2 08/29] bus/fslmc: rename the cinh read functions used for ls1088
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (6 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 07/29] bus/fslmc: support portal migration Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 09/29] net/dpaa: enable Tx queue taildrop Hemant Agrawal
` (21 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
This patch renames the qbman I/O functions, as they only read from the
cinh register while still writing to cena registers. This paves the way
for adding functions that work purely in cinh mode.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/bus/fslmc/qbman/qbman_portal.c | 250 +++++++++++++++++++++++--
1 file changed, 233 insertions(+), 17 deletions(-)
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 57f50b0d8..0a2af7be4 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -78,7 +78,7 @@ qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd);
static int
-qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_ring_mode_cinh_read_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd);
static int
@@ -97,7 +97,7 @@ qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
uint32_t *flags,
int num_frames);
static int
-qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_cinh_read_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
uint32_t *flags,
@@ -122,7 +122,7 @@ qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
uint32_t *flags,
int num_frames);
static int
-qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_fd_cinh_read_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
struct qbman_fd **fd,
uint32_t *flags,
@@ -146,7 +146,7 @@ qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
const struct qbman_fd *fd,
int num_frames);
static int
-qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_desc_cinh_read_direct(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
int num_frames);
@@ -309,15 +309,15 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
&& (d->cena_access_mode == qman_cena_fastest_access)) {
p->eqcr.pi_ring_size = 32;
qbman_swp_enqueue_array_mode_ptr =
- qbman_swp_enqueue_array_mode_mem_back;
+ qbman_swp_enqueue_array_mode_mem_back;
qbman_swp_enqueue_ring_mode_ptr =
- qbman_swp_enqueue_ring_mode_mem_back;
+ qbman_swp_enqueue_ring_mode_mem_back;
qbman_swp_enqueue_multiple_ptr =
- qbman_swp_enqueue_multiple_mem_back;
+ qbman_swp_enqueue_multiple_mem_back;
qbman_swp_enqueue_multiple_fd_ptr =
- qbman_swp_enqueue_multiple_fd_mem_back;
+ qbman_swp_enqueue_multiple_fd_mem_back;
qbman_swp_enqueue_multiple_desc_ptr =
- qbman_swp_enqueue_multiple_desc_mem_back;
+ qbman_swp_enqueue_multiple_desc_mem_back;
qbman_swp_pull_ptr = qbman_swp_pull_mem_back;
qbman_swp_dqrr_next_ptr = qbman_swp_dqrr_next_mem_back;
qbman_swp_release_ptr = qbman_swp_release_mem_back;
@@ -325,13 +325,13 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
if (dpaa2_svr_family == SVR_LS1080A) {
qbman_swp_enqueue_ring_mode_ptr =
- qbman_swp_enqueue_ring_mode_cinh_direct;
+ qbman_swp_enqueue_ring_mode_cinh_read_direct;
qbman_swp_enqueue_multiple_ptr =
- qbman_swp_enqueue_multiple_cinh_direct;
+ qbman_swp_enqueue_multiple_cinh_read_direct;
qbman_swp_enqueue_multiple_fd_ptr =
- qbman_swp_enqueue_multiple_fd_cinh_direct;
+ qbman_swp_enqueue_multiple_fd_cinh_read_direct;
qbman_swp_enqueue_multiple_desc_ptr =
- qbman_swp_enqueue_multiple_desc_cinh_direct;
+ qbman_swp_enqueue_multiple_desc_cinh_read_direct;
}
for (mask_size = p->eqcr.pi_ring_size; mask_size > 0; mask_size >>= 1)
@@ -835,7 +835,7 @@ static int qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s,
return 0;
}
-static int qbman_swp_enqueue_ring_mode_cinh_direct(
+static int qbman_swp_enqueue_ring_mode_cinh_read_direct(
struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd)
@@ -873,6 +873,44 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
return 0;
}
+static int qbman_swp_enqueue_ring_mode_cinh_direct(
+ struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ const struct qbman_fd *fd)
+{
+ uint32_t *p;
+ const uint32_t *cl = qb_cl(d);
+ uint32_t eqcr_ci, full_mask, half_mask;
+
+ half_mask = (s->eqcr.pi_ci_mask>>1);
+ full_mask = s->eqcr.pi_ci_mask;
+ if (!s->eqcr.available) {
+ eqcr_ci = s->eqcr.ci;
+ s->eqcr.ci = qbman_cinh_read(&s->sys,
+ QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+ s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+ eqcr_ci, s->eqcr.ci);
+ if (!s->eqcr.available)
+ return -EBUSY;
+ }
+
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
+ memcpy_byte_by_byte(&p[1], &cl[1], 28);
+ memcpy_byte_by_byte(&p[8], fd, sizeof(*fd));
+ lwsync();
+
+ /* Set the verb byte, have to substitute in the valid-bit */
+ p[0] = cl[0] | s->eqcr.pi_vb;
+ s->eqcr.pi++;
+ s->eqcr.pi &= full_mask;
+ s->eqcr.available--;
+ if (!(s->eqcr.pi & half_mask))
+ s->eqcr.pi_vb ^= QB_VALID_BIT;
+
+ return 0;
+}
+
static int qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd)
@@ -999,7 +1037,7 @@ static int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
return num_enqueued;
}
-static int qbman_swp_enqueue_multiple_cinh_direct(
+static int qbman_swp_enqueue_multiple_cinh_read_direct(
struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
@@ -1069,6 +1107,67 @@ static int qbman_swp_enqueue_multiple_cinh_direct(
return num_enqueued;
}
+static int qbman_swp_enqueue_multiple_cinh_direct(
+ struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ const struct qbman_fd *fd,
+ uint32_t *flags,
+ int num_frames)
+{
+ uint32_t *p = NULL;
+ const uint32_t *cl = qb_cl(d);
+ uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+ int i, num_enqueued = 0;
+
+ half_mask = (s->eqcr.pi_ci_mask>>1);
+ full_mask = s->eqcr.pi_ci_mask;
+ if (!s->eqcr.available) {
+ eqcr_ci = s->eqcr.ci;
+ s->eqcr.ci = qbman_cinh_read(&s->sys,
+ QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+ s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+ eqcr_ci, s->eqcr.ci);
+ if (!s->eqcr.available)
+ return 0;
+ }
+
+ eqcr_pi = s->eqcr.pi;
+ num_enqueued = (s->eqcr.available < num_frames) ?
+ s->eqcr.available : num_frames;
+ s->eqcr.available -= num_enqueued;
+ /* Fill in the EQCR ring */
+ for (i = 0; i < num_enqueued; i++) {
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+ memcpy_byte_by_byte(&p[1], &cl[1], 28);
+ memcpy_byte_by_byte(&p[8], &fd[i], sizeof(*fd));
+ eqcr_pi++;
+ }
+
+ lwsync();
+
+ /* Set the verb byte, have to substitute in the valid-bit */
+ eqcr_pi = s->eqcr.pi;
+ for (i = 0; i < num_enqueued; i++) {
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+ p[0] = cl[0] | s->eqcr.pi_vb;
+ if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
+ struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+
+ d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+ ((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
+ }
+ eqcr_pi++;
+ if (!(eqcr_pi & half_mask))
+ s->eqcr.pi_vb ^= QB_VALID_BIT;
+ }
+
+ s->eqcr.pi = eqcr_pi & full_mask;
+
+ return num_enqueued;
+}
+
static int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
@@ -1205,7 +1304,7 @@ static int qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
return num_enqueued;
}
-static int qbman_swp_enqueue_multiple_fd_cinh_direct(
+static int qbman_swp_enqueue_multiple_fd_cinh_read_direct(
struct qbman_swp *s,
const struct qbman_eq_desc *d,
struct qbman_fd **fd,
@@ -1275,6 +1374,67 @@ static int qbman_swp_enqueue_multiple_fd_cinh_direct(
return num_enqueued;
}
+static int qbman_swp_enqueue_multiple_fd_cinh_direct(
+ struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ struct qbman_fd **fd,
+ uint32_t *flags,
+ int num_frames)
+{
+ uint32_t *p = NULL;
+ const uint32_t *cl = qb_cl(d);
+ uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+ int i, num_enqueued = 0;
+
+ half_mask = (s->eqcr.pi_ci_mask>>1);
+ full_mask = s->eqcr.pi_ci_mask;
+ if (!s->eqcr.available) {
+ eqcr_ci = s->eqcr.ci;
+ s->eqcr.ci = qbman_cinh_read(&s->sys,
+ QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+ s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+ eqcr_ci, s->eqcr.ci);
+ if (!s->eqcr.available)
+ return 0;
+ }
+
+ eqcr_pi = s->eqcr.pi;
+ num_enqueued = (s->eqcr.available < num_frames) ?
+ s->eqcr.available : num_frames;
+ s->eqcr.available -= num_enqueued;
+ /* Fill in the EQCR ring */
+ for (i = 0; i < num_enqueued; i++) {
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+ memcpy_byte_by_byte(&p[1], &cl[1], 28);
+ memcpy_byte_by_byte(&p[8], fd[i], sizeof(struct qbman_fd));
+ eqcr_pi++;
+ }
+
+ lwsync();
+
+ /* Set the verb byte, have to substitute in the valid-bit */
+ eqcr_pi = s->eqcr.pi;
+ for (i = 0; i < num_enqueued; i++) {
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+ p[0] = cl[0] | s->eqcr.pi_vb;
+ if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
+ struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+
+ d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+ ((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
+ }
+ eqcr_pi++;
+ if (!(eqcr_pi & half_mask))
+ s->eqcr.pi_vb ^= QB_VALID_BIT;
+ }
+
+ s->eqcr.pi = eqcr_pi & full_mask;
+
+ return num_enqueued;
+}
+
static int qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
struct qbman_fd **fd,
@@ -1413,7 +1573,7 @@ static int qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
return num_enqueued;
}
-static int qbman_swp_enqueue_multiple_desc_cinh_direct(
+static int qbman_swp_enqueue_multiple_desc_cinh_read_direct(
struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
@@ -1478,6 +1638,62 @@ static int qbman_swp_enqueue_multiple_desc_cinh_direct(
return num_enqueued;
}
+static int qbman_swp_enqueue_multiple_desc_cinh_direct(
+ struct qbman_swp *s,
+ const struct qbman_eq_desc *d,
+ const struct qbman_fd *fd,
+ int num_frames)
+{
+ uint32_t *p;
+ const uint32_t *cl;
+ uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+ int i, num_enqueued = 0;
+
+ half_mask = (s->eqcr.pi_ci_mask>>1);
+ full_mask = s->eqcr.pi_ci_mask;
+ if (!s->eqcr.available) {
+ eqcr_ci = s->eqcr.ci;
+ s->eqcr.ci = qbman_cinh_read(&s->sys,
+ QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+ s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+ eqcr_ci, s->eqcr.ci);
+ if (!s->eqcr.available)
+ return 0;
+ }
+
+ eqcr_pi = s->eqcr.pi;
+ num_enqueued = (s->eqcr.available < num_frames) ?
+ s->eqcr.available : num_frames;
+ s->eqcr.available -= num_enqueued;
+ /* Fill in the EQCR ring */
+ for (i = 0; i < num_enqueued; i++) {
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+ cl = qb_cl(&d[i]);
+ memcpy_byte_by_byte(&p[1], &cl[1], 28);
+ memcpy_byte_by_byte(&p[8], &fd[i], sizeof(*fd));
+ eqcr_pi++;
+ }
+
+ lwsync();
+
+ /* Set the verb byte, have to substitute in the valid-bit */
+ eqcr_pi = s->eqcr.pi;
+ for (i = 0; i < num_enqueued; i++) {
+ p = qbman_cinh_write_start_wo_shadow(&s->sys,
+ QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+ cl = qb_cl(&d[i]);
+ p[0] = cl[0] | s->eqcr.pi_vb;
+ eqcr_pi++;
+ if (!(eqcr_pi & half_mask))
+ s->eqcr.pi_vb ^= QB_VALID_BIT;
+ }
+
+ s->eqcr.pi = eqcr_pi & full_mask;
+
+ return num_enqueued;
+}
+
static int qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
const struct qbman_eq_desc *d,
const struct qbman_fd *fd,
--
2.17.1
* [dpdk-dev] [PATCH v2 09/29] net/dpaa: enable Tx queue taildrop
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (7 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 08/29] bus/fslmc: rename the cinh read functions used for ls1088 Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 10/29] net/dpaa: add 2.5G support Hemant Agrawal
` (20 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
Enable congestion handling/tail drop for TX queues.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 43 +++++++++
drivers/bus/dpaa/include/fsl_qman.h | 17 ++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 2 +
drivers/net/dpaa/dpaa_ethdev.c | 111 ++++++++++++++++++++--
drivers/net/dpaa/dpaa_ethdev.h | 1 +
drivers/net/dpaa/dpaa_rxtx.c | 71 ++++++++++++++
drivers/net/dpaa/dpaa_rxtx.h | 3 +
7 files changed, 242 insertions(+), 6 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index b596e79c2..447c09177 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -40,6 +40,8 @@
spin_unlock(&__fq478->fqlock); \
} while (0)
+static qman_cb_free_mbuf qman_free_mbuf_cb;
+
static inline void fq_set(struct qman_fq *fq, u32 mask)
{
dpaa_set_bits(mask, &fq->flags);
@@ -790,6 +792,47 @@ static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq,
FQUNLOCK(fq);
}
+void
+qman_ern_register_cb(qman_cb_free_mbuf cb)
+{
+ qman_free_mbuf_cb = cb;
+}
+
+
+void
+qman_ern_poll_free(void)
+{
+ struct qman_portal *p = get_affine_portal();
+ u8 verb, num = 0;
+ const struct qm_mr_entry *msg;
+ const struct qm_fd *fd;
+ struct qm_mr_entry swapped_msg;
+
+ qm_mr_pvb_update(&p->p);
+ msg = qm_mr_current(&p->p);
+
+ while (msg != NULL) {
+ swapped_msg = *msg;
+ hw_fd_to_cpu(&swapped_msg.ern.fd);
+ verb = msg->ern.verb & QM_MR_VERB_TYPE_MASK;
+ fd = &swapped_msg.ern.fd;
+
+ if (unlikely(verb & 0x20)) {
+ printf("HW ERN notification, Nothing to do\n");
+ } else {
+ if ((fd->bpid & 0xff) != 0xff)
+ qman_free_mbuf_cb(fd);
+ }
+
+ num++;
+ qm_mr_next(&p->p);
+ qm_mr_pvb_update(&p->p);
+ msg = qm_mr_current(&p->p);
+ }
+
+ qm_mr_cci_consume(&p->p, num);
+}
+
static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
{
const struct qm_mr_entry *msg;
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 78b698f39..0d9cfc339 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1158,6 +1158,10 @@ typedef void (*qman_cb_mr)(struct qman_portal *qm, struct qman_fq *fq,
/* This callback type is used when handling DCP ERNs */
typedef void (*qman_cb_dc_ern)(struct qman_portal *qm,
const struct qm_mr_entry *msg);
+
+/* This callback function will be used to free mbufs of ERN */
+typedef uint16_t (*qman_cb_free_mbuf)(const struct qm_fd *fd);
+
/*
* s/w-visible states. Ie. tentatively scheduled + truly scheduled + active +
* held-active + held-suspended are just "sched". Things like "retired" will not
@@ -1808,6 +1812,19 @@ __rte_internal
int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
int frames_to_send);
+/**
+ * qman_ern_poll_free - Polling on MR and calling a callback function to free
+ * mbufs when SW ERNs are received.
+ */
+__rte_internal
+void qman_ern_poll_free(void);
+
+/**
+ * qman_ern_register_cb - Register a callback function to free buffers.
+ */
+__rte_internal
+void qman_ern_register_cb(qman_cb_free_mbuf cb);
+
/**
* qman_enqueue_multi_fq - Enqueue multiple frames to their respective frame
* queues.
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 46d42f7d6..8069b05af 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -61,6 +61,8 @@ INTERNAL {
qman_enqueue;
qman_enqueue_multi;
qman_enqueue_multi_fq;
+ qman_ern_poll_free;
+ qman_ern_register_cb;
qman_fq_fqid;
qman_fq_portal_irqsource_add;
qman_fq_portal_irqsource_remove;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index f1c9a7151..fd2c0c681 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2020 NXP
*
*/
/* System headers */
@@ -86,9 +86,12 @@ static int dpaa_push_mode_max_queue = DPAA_DEFAULT_PUSH_MODE_QUEUE;
static int dpaa_push_queue_idx; /* Queue index which are in push mode*/
-/* Per FQ Taildrop in frame count */
+/* Per RX FQ Taildrop in frame count */
static unsigned int td_threshold = CGR_RX_PERFQ_THRESH;
+/* Per TX FQ Taildrop in frame count, disabled by default */
+static unsigned int td_tx_threshold;
+
struct rte_dpaa_xstats_name_off {
char name[RTE_ETH_XSTATS_NAME_SIZE];
uint32_t offset;
@@ -275,7 +278,11 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* Change tx callback to the real one */
- dev->tx_pkt_burst = dpaa_eth_queue_tx;
+ if (dpaa_intf->cgr_tx)
+ dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+ else
+ dev->tx_pkt_burst = dpaa_eth_queue_tx;
+
fman_if_enable_rx(dpaa_intf->fif);
return 0;
@@ -867,6 +874,7 @@ int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_PMD_INFO("Tx queue setup for queue index: %d fq_id (0x%x)",
queue_idx, dpaa_intf->tx_queues[queue_idx].fqid);
dev->data->tx_queues[queue_idx] = &dpaa_intf->tx_queues[queue_idx];
+
return 0;
}
@@ -1236,9 +1244,19 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
/* Initialise a Tx FQ */
static int dpaa_tx_queue_init(struct qman_fq *fq,
- struct fman_if *fman_intf)
+ struct fman_if *fman_intf,
+ struct qman_cgr *cgr_tx)
{
struct qm_mcc_initfq opts = {0};
+ struct qm_mcc_initcgr cgr_opts = {
+ .we_mask = QM_CGR_WE_CS_THRES |
+ QM_CGR_WE_CSTD_EN |
+ QM_CGR_WE_MODE,
+ .cgr = {
+ .cstd_en = QM_CGR_EN,
+ .mode = QMAN_CGR_MODE_FRAME
+ }
+ };
int ret;
ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID |
@@ -1257,6 +1275,27 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
DPAA_PMD_DEBUG("init tx fq %p, fqid 0x%x", fq, fq->fqid);
+
+ if (cgr_tx) {
+ /* Enable tail drop with cgr on this queue */
+ qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres,
+ td_tx_threshold, 0);
+ cgr_tx->cb = NULL;
+ ret = qman_create_cgr(cgr_tx, QMAN_CGR_FLAG_USE_INIT,
+ &cgr_opts);
+ if (ret) {
+ DPAA_PMD_WARN(
+ "tx taildrop init fail on tx fqid 0x%x(ret=%d)",
+ fq->fqid, ret);
+ goto without_cgr;
+ }
+ opts.we_mask |= QM_INITFQ_WE_CGID;
+ opts.fqd.cgid = cgr_tx->cgrid;
+ opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+ DPAA_PMD_DEBUG("Tx FQ tail drop enabled, threshold = %d\n",
+ td_tx_threshold);
+ }
+without_cgr:
ret = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &opts);
if (ret)
DPAA_PMD_ERR("init tx fqid 0x%x failed %d", fq->fqid, ret);
@@ -1309,6 +1348,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
struct fman_if *fman_intf;
struct fman_if_bpool *bp, *tmp_bp;
uint32_t cgrid[DPAA_MAX_NUM_PCD_QUEUES];
+ uint32_t cgrid_tx[MAX_DPAA_CORES];
char eth_buf[RTE_ETHER_ADDR_FMT_SIZE];
PMD_INIT_FUNC_TRACE();
@@ -1319,7 +1359,10 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->dev_ops = &dpaa_devops;
/* Plugging of UCODE burst API not supported in Secondary */
eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
- eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
+ if (dpaa_intf->cgr_tx)
+ eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+ else
+ eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
qman_set_fq_lookup_table(
dpaa_intf->rx_queues->qman_fq_lookup_table);
@@ -1366,6 +1409,21 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
+ memset(cgrid, 0, sizeof(cgrid));
+ memset(cgrid_tx, 0, sizeof(cgrid_tx));
+
+ /* if DPAA_TX_TAILDROP_THRESHOLD is set, use that value; if 0, it means
+ * Tx tail drop is disabled.
+ */
+ if (getenv("DPAA_TX_TAILDROP_THRESHOLD")) {
+ td_tx_threshold = atoi(getenv("DPAA_TX_TAILDROP_THRESHOLD"));
+ DPAA_PMD_DEBUG("Tail drop threshold env configured: %u",
+ td_tx_threshold);
+ /* if a very large value is being configured */
+ if (td_tx_threshold > UINT16_MAX)
+ td_tx_threshold = CGR_RX_PERFQ_THRESH;
+ }
+
/* If congestion control is enabled globally*/
if (td_threshold) {
dpaa_intf->cgr_rx = rte_zmalloc(NULL,
@@ -1414,9 +1472,36 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
goto free_rx;
}
+ /* If congestion control is enabled globally*/
+ if (td_tx_threshold) {
+ dpaa_intf->cgr_tx = rte_zmalloc(NULL,
+ sizeof(struct qman_cgr) * MAX_DPAA_CORES,
+ MAX_CACHELINE);
+ if (!dpaa_intf->cgr_tx) {
+ DPAA_PMD_ERR("Failed to alloc mem for cgr_tx\n");
+ ret = -ENOMEM;
+ goto free_rx;
+ }
+
+ ret = qman_alloc_cgrid_range(&cgrid_tx[0], MAX_DPAA_CORES,
+ 1, 0);
+ if (ret != MAX_DPAA_CORES) {
+ DPAA_PMD_WARN("insufficient CGRIDs available");
+ ret = -EINVAL;
+ goto free_rx;
+ }
+ } else {
+ dpaa_intf->cgr_tx = NULL;
+ }
+
+
for (loop = 0; loop < MAX_DPAA_CORES; loop++) {
+ if (dpaa_intf->cgr_tx)
+ dpaa_intf->cgr_tx[loop].cgrid = cgrid_tx[loop];
+
ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
- fman_intf);
+ fman_intf,
+ dpaa_intf->cgr_tx ? &dpaa_intf->cgr_tx[loop] : NULL);
if (ret)
goto free_tx;
dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
@@ -1487,6 +1572,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
free_rx:
rte_free(dpaa_intf->cgr_rx);
+ rte_free(dpaa_intf->cgr_tx);
rte_free(dpaa_intf->rx_queues);
dpaa_intf->rx_queues = NULL;
dpaa_intf->nb_rx_queues = 0;
@@ -1527,6 +1613,17 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
rte_free(dpaa_intf->cgr_rx);
dpaa_intf->cgr_rx = NULL;
+ /* Release TX congestion Groups */
+ if (dpaa_intf->cgr_tx) {
+ for (loop = 0; loop < MAX_DPAA_CORES; loop++)
+ qman_delete_cgr(&dpaa_intf->cgr_tx[loop]);
+
+ qman_release_cgrid_range(dpaa_intf->cgr_tx[0].cgrid,
+ MAX_DPAA_CORES);
+ rte_free(dpaa_intf->cgr_tx);
+ dpaa_intf->cgr_tx = NULL;
+ }
+
rte_free(dpaa_intf->rx_queues);
dpaa_intf->rx_queues = NULL;
@@ -1631,6 +1728,8 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
eth_dev->device = &dpaa_dev->device;
dpaa_dev->eth_dev = eth_dev;
+ qman_ern_register_cb(dpaa_free_mbuf);
+
/* Invoke PMD device initialization function */
diag = dpaa_dev_init(eth_dev);
if (diag == 0) {
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 6a6477ac8..d4261f885 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -111,6 +111,7 @@ struct dpaa_if {
struct qman_fq *rx_queues;
struct qman_cgr *cgr_rx;
struct qman_fq *tx_queues;
+ struct qman_cgr *cgr_tx;
struct qman_fq debug_queues[2];
uint16_t nb_rx_queues;
uint16_t nb_tx_queues;
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 3aeecb7d2..819cad7c6 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -398,6 +398,69 @@ dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
return mbuf;
}
+uint16_t
+dpaa_free_mbuf(const struct qm_fd *fd)
+{
+ struct rte_mbuf *mbuf;
+ struct dpaa_bp_info *bp_info;
+ uint8_t format;
+ void *ptr;
+
+ bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+ format = (fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
+ if (unlikely(format == qm_fd_sg)) {
+ struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
+ struct qm_sg_entry *sgt, *sg_temp;
+ void *vaddr, *sg_vaddr;
+ int i = 0;
+ uint16_t fd_offset = fd->offset;
+
+ vaddr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd));
+ if (!vaddr) {
+ DPAA_PMD_ERR("unable to convert physical address");
+ return -1;
+ }
+ sgt = vaddr + fd_offset;
+ sg_temp = &sgt[i++];
+ hw_sg_to_cpu(sg_temp);
+ temp = (struct rte_mbuf *)
+ ((char *)vaddr - bp_info->meta_data_size);
+ sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
+ qm_sg_entry_get64(sg_temp));
+
+ first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+ bp_info->meta_data_size);
+ first_seg->nb_segs = 1;
+ prev_seg = first_seg;
+ while (i < DPAA_SGT_MAX_ENTRIES) {
+ sg_temp = &sgt[i++];
+ hw_sg_to_cpu(sg_temp);
+ sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
+ qm_sg_entry_get64(sg_temp));
+ cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+ bp_info->meta_data_size);
+ first_seg->nb_segs += 1;
+ prev_seg->next = cur_seg;
+ if (sg_temp->final) {
+ cur_seg->next = NULL;
+ break;
+ }
+ prev_seg = cur_seg;
+ }
+
+ rte_pktmbuf_free_seg(temp);
+ rte_pktmbuf_free_seg(first_seg);
+ return 0;
+ }
+
+ ptr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd));
+ mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
+
+ rte_pktmbuf_free(mbuf);
+
+ return 0;
+}
+
/* Specific for LS1043 */
void
dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
@@ -1011,6 +1074,14 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
return sent;
}
+uint16_t
+dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+ qman_ern_poll_free();
+
+ return dpaa_eth_queue_tx(q, bufs, nb_bufs);
+}
+
uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
struct rte_mbuf **bufs __rte_unused,
uint16_t nb_bufs __rte_unused)
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 4f896fba1..fe8eb6dc7 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -254,6 +254,8 @@ struct annotations_t {
uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+uint16_t dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs,
+ uint16_t nb_bufs);
uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
@@ -266,6 +268,7 @@ int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
struct qm_fd *fd,
uint32_t bpid);
+uint16_t dpaa_free_mbuf(const struct qm_fd *fd);
void dpaa_rx_cb(struct qman_fq **fq,
struct qm_dqrr_entry **dqrr, void **bufs, int num_bufs);
--
2.17.1
* [dpdk-dev] [PATCH v2 10/29] net/dpaa: add 2.5G support
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (8 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 09/29] net/dpaa: enable Tx queue taildrop Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 11/29] net/dpaa: update process specific device info Hemant Agrawal
` (19 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Sachin Saxena, Gagandeep Singh
From: Sachin Saxena <sachin.saxena@nxp.com>
Handle 2.5Gbps Ethernet ports as well.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/dpaa.ini | 2 +-
drivers/bus/dpaa/base/fman/fman.c | 6 ++++--
drivers/bus/dpaa/base/fman/netcfg_layer.c | 3 ++-
drivers/bus/dpaa/include/fman.h | 1 +
drivers/net/dpaa/dpaa_ethdev.c | 9 ++++++++-
5 files changed, 16 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 24cfd8566..b00f46a97 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,7 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
-Speed capabilities = P
+Speed capabilities = Y
Link status = Y
Jumbo frame = Y
MTU update = Y
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 6d77a7e39..ae26041ca 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -263,7 +263,7 @@ fman_if_init(const struct device_node *dpa_node)
fman_dealloc_bufs_mask_hi = 0;
fman_dealloc_bufs_mask_lo = 0;
}
- /* Is the MAC node 1G, 10G? */
+ /* Is the MAC node 1G, 2.5G, 10G? */
__if->__if.is_memac = 0;
if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
@@ -279,7 +279,9 @@ fman_if_init(const struct device_node *dpa_node)
/* Right now forcing memac to 1g in case of error*/
__if->__if.mac_type = fman_mac_1g;
} else {
- if (strstr(char_prop, "sgmii"))
+ if (strstr(char_prop, "sgmii-2500"))
+ __if->__if.mac_type = fman_mac_2_5g;
+ else if (strstr(char_prop, "sgmii"))
__if->__if.mac_type = fman_mac_1g;
else if (strstr(char_prop, "rgmii")) {
__if->__if.mac_type = fman_mac_1g;
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index 36eca88cd..b7009f229 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -44,7 +44,8 @@ dump_netcfg(struct netcfg_info *cfg_ptr)
printf("\n+ Fman %d, MAC %d (%s);\n",
__if->fman_idx, __if->mac_idx,
- (__if->mac_type == fman_mac_1g) ? "1G" : "10G");
+ (__if->mac_type == fman_mac_1g) ? "1G" :
+ (__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G");
printf("\tmac_addr: %02x:%02x:%02x:%02x:%02x:%02x\n",
(&__if->mac_addr)->addr_bytes[0],
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index c02d32d22..b6293b61c 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -72,6 +72,7 @@ enum fman_mac_type {
fman_offline = 0,
fman_mac_1g,
fman_mac_10g,
+ fman_mac_2_5g,
};
struct mac_addr {
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index fd2c0c681..c0ded9086 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -356,8 +356,13 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
if (dpaa_intf->fif->mac_type == fman_mac_1g) {
dev_info->speed_capa = ETH_LINK_SPEED_1G;
+ } else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) {
+ dev_info->speed_capa = ETH_LINK_SPEED_1G
+ | ETH_LINK_SPEED_2_5G;
} else if (dpaa_intf->fif->mac_type == fman_mac_10g) {
- dev_info->speed_capa = (ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G);
+ dev_info->speed_capa = ETH_LINK_SPEED_1G
+ | ETH_LINK_SPEED_2_5G
+ | ETH_LINK_SPEED_10G;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, dpaa_intf->fif->mac_type);
@@ -388,6 +393,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
if (dpaa_intf->fif->mac_type == fman_mac_1g)
link->link_speed = ETH_SPEED_NUM_1G;
+ else if (dpaa_intf->fif->mac_type == fman_mac_2_5g)
+ link->link_speed = ETH_SPEED_NUM_2_5G;
else if (dpaa_intf->fif->mac_type == fman_mac_10g)
link->link_speed = ETH_SPEED_NUM_10G;
else
--
2.17.1
* [dpdk-dev] [PATCH v2 11/29] net/dpaa: update process specific device info
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (9 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 10/29] net/dpaa: add 2.5G support Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 12/29] drivers: optimize thread local storage for dpaa Hemant Agrawal
` (18 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
For DPAA devices, the memory maps stored in the FMAN interface
information are per process. Store them in the device's process
specific area.
This is required to support multi-process applications.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 207 ++++++++++++++++-----------------
drivers/net/dpaa/dpaa_ethdev.h | 1 -
2 files changed, 102 insertions(+), 106 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c0ded9086..6c94fd396 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -149,7 +149,6 @@ dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts)
static int
dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
+ VLAN_TAG_SIZE;
uint32_t buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
@@ -185,7 +184,7 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- fman_if_set_maxfrm(dpaa_intf->fif, frame_size);
+ fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
}
@@ -193,7 +192,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
static int
dpaa_eth_dev_configure(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
uint64_t rx_offloads = eth_conf->rxmode.offloads;
uint64_t tx_offloads = eth_conf->txmode.offloads;
@@ -232,14 +230,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
max_len = DPAA_MAX_RX_PKT_LEN;
}
- fman_if_set_maxfrm(dpaa_intf->fif, max_len);
+ fman_if_set_maxfrm(dev->process_private, max_len);
dev->data->mtu = max_len
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
}
if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
- fman_if_set_sg(dpaa_intf->fif, 1);
+ fman_if_set_sg(dev->process_private, 1);
dev->data->scattered_rx = 1;
}
@@ -283,18 +281,18 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
else
dev->tx_pkt_burst = dpaa_eth_queue_tx;
- fman_if_enable_rx(dpaa_intf->fif);
+ fman_if_enable_rx(dev->process_private);
return 0;
}
static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct fman_if *fif = dev->process_private;
PMD_INIT_FUNC_TRACE();
- fman_if_disable_rx(dpaa_intf->fif);
+ fman_if_disable_rx(fif);
dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
}
@@ -342,6 +340,7 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct fman_if *fif = dev->process_private;
DPAA_PMD_DEBUG(": %s", dpaa_intf->name);
@@ -354,18 +353,18 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
dev_info->max_vmdq_pools = ETH_16_POOLS;
dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
- if (dpaa_intf->fif->mac_type == fman_mac_1g) {
+ if (fif->mac_type == fman_mac_1g) {
dev_info->speed_capa = ETH_LINK_SPEED_1G;
- } else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) {
+ } else if (fif->mac_type == fman_mac_2_5g) {
dev_info->speed_capa = ETH_LINK_SPEED_1G
| ETH_LINK_SPEED_2_5G;
- } else if (dpaa_intf->fif->mac_type == fman_mac_10g) {
+ } else if (fif->mac_type == fman_mac_10g) {
dev_info->speed_capa = ETH_LINK_SPEED_1G
| ETH_LINK_SPEED_2_5G
| ETH_LINK_SPEED_10G;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
- dpaa_intf->name, dpaa_intf->fif->mac_type);
+ dpaa_intf->name, fif->mac_type);
return -EINVAL;
}
@@ -388,18 +387,19 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
struct rte_eth_link *link = &dev->data->dev_link;
+ struct fman_if *fif = dev->process_private;
PMD_INIT_FUNC_TRACE();
- if (dpaa_intf->fif->mac_type == fman_mac_1g)
+ if (fif->mac_type == fman_mac_1g)
link->link_speed = ETH_SPEED_NUM_1G;
- else if (dpaa_intf->fif->mac_type == fman_mac_2_5g)
+ else if (fif->mac_type == fman_mac_2_5g)
link->link_speed = ETH_SPEED_NUM_2_5G;
- else if (dpaa_intf->fif->mac_type == fman_mac_10g)
+ else if (fif->mac_type == fman_mac_10g)
link->link_speed = ETH_SPEED_NUM_10G;
else
DPAA_PMD_ERR("invalid link_speed: %s, %d",
- dpaa_intf->name, dpaa_intf->fif->mac_type);
+ dpaa_intf->name, fif->mac_type);
link->link_status = dpaa_intf->valid;
link->link_duplex = ETH_LINK_FULL_DUPLEX;
@@ -410,21 +410,17 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
static int dpaa_eth_stats_get(struct rte_eth_dev *dev,
struct rte_eth_stats *stats)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_stats_get(dpaa_intf->fif, stats);
+ fman_if_stats_get(dev->process_private, stats);
return 0;
}
static int dpaa_eth_stats_reset(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_stats_reset(dpaa_intf->fif);
+ fman_if_stats_reset(dev->process_private);
return 0;
}
@@ -433,7 +429,6 @@ static int
dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
unsigned int n)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
uint64_t values[sizeof(struct dpaa_if_stats) / 8];
@@ -443,7 +438,7 @@ dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
if (xstats == NULL)
return 0;
- fman_if_stats_get_all(dpaa_intf->fif, values,
+ fman_if_stats_get_all(dev->process_private, values,
sizeof(struct dpaa_if_stats) / 8);
for (i = 0; i < num; i++) {
@@ -480,15 +475,13 @@ dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
uint64_t values_copy[sizeof(struct dpaa_if_stats) / 8];
if (!ids) {
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
if (n < stat_cnt)
return stat_cnt;
if (!values)
return 0;
- fman_if_stats_get_all(dpaa_intf->fif, values_copy,
+ fman_if_stats_get_all(dev->process_private, values_copy,
sizeof(struct dpaa_if_stats) / 8);
for (i = 0; i < stat_cnt; i++)
@@ -537,44 +530,36 @@ dpaa_xstats_get_names_by_id(
static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_promiscuous_enable(dpaa_intf->fif);
+ fman_if_promiscuous_enable(dev->process_private);
return 0;
}
static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_promiscuous_disable(dpaa_intf->fif);
+ fman_if_promiscuous_disable(dev->process_private);
return 0;
}
static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_set_mcast_filter_table(dpaa_intf->fif);
+ fman_if_set_mcast_filter_table(dev->process_private);
return 0;
}
static int dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_reset_mcast_filter_table(dpaa_intf->fif);
+ fman_if_reset_mcast_filter_table(dev->process_private);
return 0;
}
@@ -587,6 +572,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct rte_mempool *mp)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct fman_if *fif = dev->process_private;
struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
struct qm_mcc_initfq opts = {0};
u32 flags = 0;
@@ -643,22 +629,22 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
icp.iciof = DEFAULT_ICIOF;
icp.iceof = DEFAULT_RX_ICEOF;
icp.icsz = DEFAULT_ICSZ;
- fman_if_set_ic_params(dpaa_intf->fif, &icp);
+ fman_if_set_ic_params(fif, &icp);
fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE;
- fman_if_set_fdoff(dpaa_intf->fif, fd_offset);
+ fman_if_set_fdoff(fif, fd_offset);
/* Buffer pool size should be equal to Dataroom Size*/
bp_size = rte_pktmbuf_data_room_size(mp);
- fman_if_set_bp(dpaa_intf->fif, mp->size,
+ fman_if_set_bp(fif, mp->size,
dpaa_intf->bp_info->bpid, bp_size);
dpaa_intf->valid = 1;
DPAA_PMD_DEBUG("if:%s fd_offset = %d offset = %d",
dpaa_intf->name, fd_offset,
- fman_if_get_fdoff(dpaa_intf->fif));
+ fman_if_get_fdoff(fif));
}
DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
- fman_if_get_sg_enable(dpaa_intf->fif),
+ fman_if_get_sg_enable(fif),
dev->data->dev_conf.rxmode.max_rx_pkt_len);
/* checking if push mode only, no error check for now */
if (!rxq->is_static &&
@@ -950,11 +936,12 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
return 0;
} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
fc_conf->mode == RTE_FC_FULL) {
- fman_if_set_fc_threshold(dpaa_intf->fif, fc_conf->high_water,
+ fman_if_set_fc_threshold(dev->process_private,
+ fc_conf->high_water,
fc_conf->low_water,
- dpaa_intf->bp_info->bpid);
+ dpaa_intf->bp_info->bpid);
if (fc_conf->pause_time)
- fman_if_set_fc_quanta(dpaa_intf->fif,
+ fman_if_set_fc_quanta(dev->process_private,
fc_conf->pause_time);
}
@@ -990,10 +977,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
fc_conf->autoneg = net_fc->autoneg;
return 0;
}
- ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+ ret = fman_if_get_fc_threshold(dev->process_private);
if (ret) {
fc_conf->mode = RTE_FC_TX_PAUSE;
- fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+ fc_conf->pause_time =
+ fman_if_get_fc_quanta(dev->process_private);
} else {
fc_conf->mode = RTE_FC_NONE;
}
@@ -1008,11 +996,11 @@ dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
__rte_unused uint32_t pool)
{
int ret;
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
PMD_INIT_FUNC_TRACE();
- ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, index);
+ ret = fman_if_add_mac_addr(dev->process_private,
+ addr->addr_bytes, index);
if (ret)
DPAA_PMD_ERR("Adding the MAC ADDR failed: err = %d", ret);
@@ -1023,11 +1011,9 @@ static void
dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
uint32_t index)
{
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
- fman_if_clear_mac_addr(dpaa_intf->fif, index);
+ fman_if_clear_mac_addr(dev->process_private, index);
}
static int
@@ -1035,11 +1021,10 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
struct rte_ether_addr *addr)
{
int ret;
- struct dpaa_if *dpaa_intf = dev->data->dev_private;
PMD_INIT_FUNC_TRACE();
- ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, 0);
+ ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0);
if (ret)
DPAA_PMD_ERR("Setting the MAC ADDR failed %d", ret);
@@ -1142,7 +1127,6 @@ int
rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on)
{
struct rte_eth_dev *dev;
- struct dpaa_if *dpaa_intf;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV);
@@ -1151,17 +1135,16 @@ rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on)
if (!is_dpaa_supported(dev))
return -ENOTSUP;
- dpaa_intf = dev->data->dev_private;
-
if (on)
- fman_if_loopback_enable(dpaa_intf->fif);
+ fman_if_loopback_enable(dev->process_private);
else
- fman_if_loopback_disable(dpaa_intf->fif);
+ fman_if_loopback_disable(dev->process_private);
return 0;
}
-static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
+static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
+ struct fman_if *fman_intf)
{
struct rte_eth_fc_conf *fc_conf;
int ret;
@@ -1177,10 +1160,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
}
}
fc_conf = dpaa_intf->fc_conf;
- ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+ ret = fman_if_get_fc_threshold(fman_intf);
if (ret) {
fc_conf->mode = RTE_FC_TX_PAUSE;
- fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+ fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
} else {
fc_conf->mode = RTE_FC_NONE;
}
@@ -1342,6 +1325,39 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
}
#endif
+/* Initialise a network interface */
+static int
+dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev)
+{
+ struct rte_dpaa_device *dpaa_device;
+ struct fm_eth_port_cfg *cfg;
+ struct dpaa_if *dpaa_intf;
+ struct fman_if *fman_intf;
+ int dev_id;
+
+ PMD_INIT_FUNC_TRACE();
+
+ dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
+ dev_id = dpaa_device->id.dev_id;
+ cfg = dpaa_get_eth_port_cfg(dev_id);
+ fman_intf = cfg->fman_if;
+ eth_dev->process_private = fman_intf;
+
+ /* Plugging of UCODE burst API not supported in Secondary */
+ dpaa_intf = eth_dev->data->dev_private;
+ eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
+ if (dpaa_intf->cgr_tx)
+ eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+ else
+ eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+ qman_set_fq_lookup_table(
+ dpaa_intf->rx_queues->qman_fq_lookup_table);
+#endif
+
+ return 0;
+}
+
/* Initialise a network interface */
static int
dpaa_dev_init(struct rte_eth_dev *eth_dev)
@@ -1360,23 +1376,6 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE();
- dpaa_intf = eth_dev->data->dev_private;
- /* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- eth_dev->dev_ops = &dpaa_devops;
- /* Plugging of UCODE burst API not supported in Secondary */
- eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
- if (dpaa_intf->cgr_tx)
- eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
- else
- eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
- qman_set_fq_lookup_table(
- dpaa_intf->rx_queues->qman_fq_lookup_table);
-#endif
- return 0;
- }
-
dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
dev_id = dpaa_device->id.dev_id;
dpaa_intf = eth_dev->data->dev_private;
@@ -1386,7 +1385,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
dpaa_intf->name = dpaa_device->name;
/* save fman_if & cfg in the interface struture */
- dpaa_intf->fif = fman_intf;
+ eth_dev->process_private = fman_intf;
dpaa_intf->ifid = dev_id;
dpaa_intf->cfg = cfg;
@@ -1455,7 +1454,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
if (default_q)
fqid = cfg->rx_def;
else
- fqid = DPAA_PCD_FQID_START + dpaa_intf->fif->mac_idx *
+ fqid = DPAA_PCD_FQID_START + fman_intf->mac_idx *
DPAA_PCD_FQID_MULTIPLIER + loop;
if (dpaa_intf->cgr_rx)
@@ -1527,7 +1526,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
- dpaa_fc_set_default(dpaa_intf);
+ dpaa_fc_set_default(dpaa_intf, fman_intf);
/* reset bpool list, initialize bpool dynamically */
list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
@@ -1674,6 +1673,13 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
return -ENOMEM;
eth_dev->device = &dpaa_dev->device;
eth_dev->dev_ops = &dpaa_devops;
+
+ ret = dpaa_dev_init_secondary(eth_dev);
+ if (ret != 0) {
+ RTE_LOG(ERR, PMD, "secondary dev init failed\n");
+ return ret;
+ }
+
rte_eth_dev_probing_finish(eth_dev);
return 0;
}
@@ -1709,29 +1715,20 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
}
}
- /* In case of secondary process, the device is already configured
- * and no further action is required, except portal initialization
- * and verifying secondary attachment to port name.
- */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- eth_dev = rte_eth_dev_attach_secondary(dpaa_dev->name);
- if (!eth_dev)
- return -ENOMEM;
- } else {
- eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
- if (eth_dev == NULL)
- return -ENOMEM;
+ eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
+ if (!eth_dev)
+ return -ENOMEM;
- eth_dev->data->dev_private = rte_zmalloc(
- "ethdev private structure",
- sizeof(struct dpaa_if),
- RTE_CACHE_LINE_SIZE);
- if (!eth_dev->data->dev_private) {
- DPAA_PMD_ERR("Cannot allocate memzone for port data");
- rte_eth_dev_release_port(eth_dev);
- return -ENOMEM;
- }
+ eth_dev->data->dev_private =
+ rte_zmalloc("ethdev private structure",
+ sizeof(struct dpaa_if),
+ RTE_CACHE_LINE_SIZE);
+ if (!eth_dev->data->dev_private) {
+ DPAA_PMD_ERR("Cannot allocate memzone for port data");
+ rte_eth_dev_release_port(eth_dev);
+ return -ENOMEM;
}
+
eth_dev->device = &dpaa_dev->device;
dpaa_dev->eth_dev = eth_dev;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index d4261f885..4c40ff86a 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -116,7 +116,6 @@ struct dpaa_if {
uint16_t nb_rx_queues;
uint16_t nb_tx_queues;
uint32_t ifid;
- struct fman_if *fif;
struct dpaa_bp_info *bp_info;
struct rte_eth_fc_conf *fc_conf;
};
--
2.17.1
* [dpdk-dev] [PATCH v2 12/29] drivers: optimize thread local storage for dpaa
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (10 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 11/29] net/dpaa: update process specific device info Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 13/29] bus/dpaa: enable link state interrupt Hemant Agrawal
` (17 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
Minimize the number of different thread-local variables by moving
all the thread-specific variables into the dpaa_portal structure,
optimizing TLS usage.
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
doc/guides/rel_notes/release_20_08.rst | 6 ++++
drivers/bus/dpaa/dpaa_bus.c | 24 ++++++-------
drivers/bus/dpaa/rte_bus_dpaa_version.map | 1 -
drivers/bus/dpaa/rte_dpaa_bus.h | 42 ++++++++++++++---------
drivers/crypto/dpaa_sec/dpaa_sec.c | 11 +++---
drivers/event/dpaa/dpaa_eventdev.c | 4 +--
drivers/mempool/dpaa/dpaa_mempool.c | 6 ++--
drivers/net/dpaa/dpaa_ethdev.c | 2 +-
drivers/net/dpaa/dpaa_rxtx.c | 4 +--
9 files changed, 54 insertions(+), 46 deletions(-)
diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index d915fce12..b1e039d03 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -119,6 +119,12 @@ New Features
See the :doc:`../sample_app_ug/l2_forward_real_virtual` for more
details of this parameter usage.
+* **Updated NXP dpaa ethdev PMD.**
+
+ Updated the NXP dpaa ethdev with new features and improvements, including:
+
+ * Added support to use datapath APIs from non-EAL pthread
+
* **Updated NXP dpaa2 ethdev PMD.**
Updated the NXP dpaa2 ethdev with new features and improvements, including:
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 6770fbc52..aa906c34e 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -52,8 +52,7 @@ unsigned int dpaa_svr_family;
#define FSL_DPAA_BUS_NAME dpaa_bus
-RTE_DEFINE_PER_LCORE(bool, dpaa_io);
-RTE_DEFINE_PER_LCORE(struct dpaa_portal_dqrr, held_bufs);
+RTE_DEFINE_PER_LCORE(struct dpaa_portal *, dpaa_io);
struct fm_eth_port_cfg *
dpaa_get_eth_port_cfg(int dev_id)
@@ -253,7 +252,6 @@ int rte_dpaa_portal_init(void *arg)
{
unsigned int cpu, lcore = rte_lcore_id();
int ret;
- struct dpaa_portal *dpaa_io_portal;
BUS_INIT_FUNC_TRACE();
@@ -288,20 +286,21 @@ int rte_dpaa_portal_init(void *arg)
DPAA_BUS_LOG(DEBUG, "QMAN thread initialized - CPU=%d lcore=%d",
cpu, lcore);
- dpaa_io_portal = rte_malloc(NULL, sizeof(struct dpaa_portal),
+ DPAA_PER_LCORE_PORTAL = rte_malloc(NULL, sizeof(struct dpaa_portal),
RTE_CACHE_LINE_SIZE);
- if (!dpaa_io_portal) {
+ if (!DPAA_PER_LCORE_PORTAL) {
DPAA_BUS_LOG(ERR, "Unable to allocate memory");
bman_thread_finish();
qman_thread_finish();
return -ENOMEM;
}
- dpaa_io_portal->qman_idx = qman_get_portal_index();
- dpaa_io_portal->bman_idx = bman_get_portal_index();
- dpaa_io_portal->tid = syscall(SYS_gettid);
+ DPAA_PER_LCORE_PORTAL->qman_idx = qman_get_portal_index();
+ DPAA_PER_LCORE_PORTAL->bman_idx = bman_get_portal_index();
+ DPAA_PER_LCORE_PORTAL->tid = syscall(SYS_gettid);
- ret = pthread_setspecific(dpaa_portal_key, (void *)dpaa_io_portal);
+ ret = pthread_setspecific(dpaa_portal_key,
+ (void *)DPAA_PER_LCORE_PORTAL);
if (ret) {
DPAA_BUS_LOG(ERR, "pthread_setspecific failed on core %u"
" (lcore=%u) with ret: %d", cpu, lcore, ret);
@@ -310,8 +309,6 @@ int rte_dpaa_portal_init(void *arg)
return ret;
}
- RTE_PER_LCORE(dpaa_io) = true;
-
DPAA_BUS_LOG(DEBUG, "QMAN thread initialized");
return 0;
@@ -324,7 +321,7 @@ rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq)
u32 sdqcr;
int ret;
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init(arg);
if (ret < 0) {
DPAA_BUS_LOG(ERR, "portal initialization failure");
@@ -367,8 +364,7 @@ dpaa_portal_finish(void *arg)
rte_free(dpaa_io_portal);
dpaa_io_portal = NULL;
-
- RTE_PER_LCORE(dpaa_io) = false;
+ DPAA_PER_LCORE_PORTAL = NULL;
}
static int
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 8069b05af..2defa7992 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -48,7 +48,6 @@ INTERNAL {
netcfg_acquire;
netcfg_release;
per_lcore_dpaa_io;
- per_lcore_held_bufs;
qman_alloc_cgrid_range;
qman_alloc_pool_range;
qman_clear_irq;
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 2a186d83f..25aff2d30 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -35,8 +35,6 @@
extern unsigned int dpaa_svr_family;
-extern RTE_DEFINE_PER_LCORE(bool, dpaa_io);
-
struct rte_dpaa_device;
struct rte_dpaa_driver;
@@ -90,12 +88,38 @@ struct rte_dpaa_driver {
rte_dpaa_remove_t remove;
};
+/* Create storage for dqrr entries per lcore */
+#define DPAA_PORTAL_DEQUEUE_DEPTH 16
+struct dpaa_portal_dqrr {
+ void *mbuf[DPAA_PORTAL_DEQUEUE_DEPTH];
+ uint64_t dqrr_held;
+ uint8_t dqrr_size;
+};
+
struct dpaa_portal {
uint32_t bman_idx; /**< BMAN Portal ID*/
uint32_t qman_idx; /**< QMAN Portal ID*/
+ struct dpaa_portal_dqrr dpaa_held_bufs;
+ struct rte_crypto_op **dpaa_sec_ops;
+ int dpaa_sec_op_nb;
uint64_t tid;/**< Parent Thread id for this portal */
};
+RTE_DECLARE_PER_LCORE(struct dpaa_portal *, dpaa_io);
+
+#define DPAA_PER_LCORE_PORTAL \
+ RTE_PER_LCORE(dpaa_io)
+#define DPAA_PER_LCORE_DQRR_SIZE \
+ RTE_PER_LCORE(dpaa_io)->dpaa_held_bufs.dqrr_size
+#define DPAA_PER_LCORE_DQRR_HELD \
+ RTE_PER_LCORE(dpaa_io)->dpaa_held_bufs.dqrr_held
+#define DPAA_PER_LCORE_DQRR_MBUF(i) \
+ RTE_PER_LCORE(dpaa_io)->dpaa_held_bufs.mbuf[i]
+#define DPAA_PER_LCORE_RTE_CRYPTO_OP \
+ RTE_PER_LCORE(dpaa_io)->dpaa_sec_ops
+#define DPAA_PER_LCORE_DPAA_SEC_OP_NB \
+ RTE_PER_LCORE(dpaa_io)->dpaa_sec_op_nb
+
/* Various structures representing contiguous memory maps */
struct dpaa_memseg {
TAILQ_ENTRY(dpaa_memseg) next;
@@ -200,20 +224,6 @@ RTE_INIT(dpaainitfn_ ##nm) \
} \
RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
-/* Create storage for dqrr entries per lcore */
-#define DPAA_PORTAL_DEQUEUE_DEPTH 16
-struct dpaa_portal_dqrr {
- void *mbuf[DPAA_PORTAL_DEQUEUE_DEPTH];
- uint64_t dqrr_held;
- uint8_t dqrr_size;
-};
-
-RTE_DECLARE_PER_LCORE(struct dpaa_portal_dqrr, held_bufs);
-
-#define DPAA_PER_LCORE_DQRR_SIZE RTE_PER_LCORE(held_bufs).dqrr_size
-#define DPAA_PER_LCORE_DQRR_HELD RTE_PER_LCORE(held_bufs).dqrr_held
-#define DPAA_PER_LCORE_DQRR_MBUF(i) RTE_PER_LCORE(held_bufs).mbuf[i]
-
__rte_internal
struct fm_eth_port_cfg *dpaa_get_eth_port_cfg(int dev_id);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index d9fa8bb36..8fcd57373 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -45,9 +45,6 @@
static uint8_t cryptodev_driver_id;
-static __thread struct rte_crypto_op **dpaa_sec_ops;
-static __thread int dpaa_sec_op_nb;
-
static int
dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess);
@@ -143,7 +140,7 @@ dqrr_out_fq_cb_rx(struct qman_portal *qm __always_unused,
struct dpaa_sec_job *job;
struct dpaa_sec_op_ctx *ctx;
- if (dpaa_sec_op_nb >= DPAA_SEC_BURST)
+ if (DPAA_PER_LCORE_DPAA_SEC_OP_NB >= DPAA_SEC_BURST)
return qman_cb_dqrr_defer;
if (!(dqrr->stat & QM_DQRR_STAT_FD_VALID))
@@ -174,7 +171,7 @@ dqrr_out_fq_cb_rx(struct qman_portal *qm __always_unused,
}
mbuf->data_len = len;
}
- dpaa_sec_ops[dpaa_sec_op_nb++] = ctx->op;
+ DPAA_PER_LCORE_RTE_CRYPTO_OP[DPAA_PER_LCORE_DPAA_SEC_OP_NB++] = ctx->op;
dpaa_sec_op_ending(ctx);
return qman_cb_dqrr_consume;
@@ -2301,7 +2298,7 @@ dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess)
DPAA_SEC_ERR("Unable to prepare sec cdb");
return ret;
}
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_SEC_ERR("Failure in affining portal");
@@ -3463,7 +3460,7 @@ cryptodev_dpaa_sec_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
}
}
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
retval = rte_dpaa_portal_init((void *)1);
if (retval) {
DPAA_SEC_ERR("Unable to initialize portal");
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index e78728b7e..a3c138b7a 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -179,7 +179,7 @@ dpaa_event_dequeue_burst(void *port, struct rte_event ev[],
struct dpaa_port *portal = (struct dpaa_port *)port;
struct rte_mbuf *mbuf;
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
/* Affine current thread context to a qman portal */
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
@@ -251,7 +251,7 @@ dpaa_event_dequeue_burst_intr(void *port, struct rte_event ev[],
struct dpaa_port *portal = (struct dpaa_port *)port;
struct rte_mbuf *mbuf;
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
/* Affine current thread context to a qman portal */
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 8d1da8028..e6b06f057 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -53,7 +53,7 @@ dpaa_mbuf_create_pool(struct rte_mempool *mp)
MEMPOOL_INIT_FUNC_TRACE();
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_MEMPOOL_ERR(
@@ -169,7 +169,7 @@ dpaa_mbuf_free_bulk(struct rte_mempool *pool,
DPAA_MEMPOOL_DPDEBUG("Request to free %d buffers in bpid = %d",
n, bp_info->bpid);
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
@@ -224,7 +224,7 @@ dpaa_mbuf_alloc_bulk(struct rte_mempool *pool,
return -1;
}
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d",
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 6c94fd396..c9f828a7c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1707,7 +1707,7 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
is_global_init = 1;
}
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)1);
if (ret) {
DPAA_PMD_ERR("Unable to initialize portal");
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 819cad7c6..5303c9b76 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -670,7 +670,7 @@ uint16_t dpaa_eth_queue_rx(void *q,
if (likely(fq->is_static))
return dpaa_eth_queue_portal_rx(fq, bufs, nb_bufs);
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_PMD_ERR("Failure in affining portal");
@@ -970,7 +970,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
int ret, realloc_mbuf = 0;
uint32_t seqn, index, flags[DPAA_TX_BURST_SIZE] = {0};
- if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_PMD_ERR("Failure in affining portal");
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH v2 13/29] bus/dpaa: enable link state interrupt
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (11 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 12/29] drivers: optimize thread local storage for dpaa Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 14/29] bus/dpaa: enable set link status Hemant Agrawal
` (16 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
APIs to enable/disable the link state interrupt and to get the link
state are defined using IOCTL calls to the kernel driver.
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
doc/guides/nics/features/dpaa.ini | 1 +
doc/guides/rel_notes/release_20_08.rst | 1 +
drivers/bus/dpaa/base/fman/fman.c | 4 +-
drivers/bus/dpaa/base/qbman/process.c | 72 ++++++++++++++++-
drivers/bus/dpaa/dpaa_bus.c | 28 ++++++-
drivers/bus/dpaa/include/fman.h | 2 +
drivers/bus/dpaa/include/process.h | 20 +++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 3 +
drivers/bus/dpaa/rte_dpaa_bus.h | 6 +-
drivers/common/dpaax/compat.h | 5 +-
drivers/net/dpaa/dpaa_ethdev.c | 97 ++++++++++++++++++++++-
11 files changed, 233 insertions(+), 6 deletions(-)
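The notification plumbing works in two halves: the bus layer creates an
eventfd and hands it to the kernel driver via ioctl, and the interrupt handler
later drains the eventfd with a single 8-byte `read()` before processing the
link change. A minimal sketch of the eventfd side only (the ioctl numbers and
the usdpaa kernel interface are NXP-specific and not reproduced here):

```c
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Create the notification fd the same way rte_dpaa_setup_intr() does:
 * non-blocking so the handler never stalls, close-on-exec for hygiene. */
static int setup_event_fd(void)
{
	return eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
}

/* The kernel side write()s a counter increment to the fd on a link state
 * change; the handler drains it with one 8-byte read, as in
 * dpaa_interrupt_handler(). eventfd reads are all-or-nothing. */
static int drain_event_fd(int fd, uint64_t *count)
{
	ssize_t n = read(fd, count, sizeof(*count));

	return n == (ssize_t)sizeof(*count) ? 0 : -1;
}
```

Because the fd is non-blocking, a spurious wakeup with nothing pending returns
an error from `read()` instead of hanging the interrupt thread.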
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index b00f46a97..816a6e08e 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -6,6 +6,7 @@
[Features]
Speed capabilities = Y
Link status = Y
+Link status event = Y
Jumbo frame = Y
MTU update = Y
Scattered Rx = Y
diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index b1e039d03..e5bc5cfd8 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -123,6 +123,7 @@ New Features
Updated the NXP dpaa ethdev with new features and improvements, including:
+ * Added support for link status and interrupt
* Added support to use datapath APIs from non-EAL pthread
* **Updated NXP dpaa2 ethdev PMD.**
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index ae26041ca..33be9e5d7 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2010-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2020 NXP
*
*/
@@ -185,6 +185,8 @@ fman_if_init(const struct device_node *dpa_node)
}
memset(__if, 0, sizeof(*__if));
INIT_LIST_HEAD(&__if->__if.bpool_list);
+ strlcpy(__if->node_name, dpa_node->name, IF_NAME_MAX_LEN - 1);
+ __if->node_name[IF_NAME_MAX_LEN - 1] = '\0';
strlcpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
__if->node_path[PATH_MAX - 1] = '\0';
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
index 2c23c98df..68b7af243 100644
--- a/drivers/bus/dpaa/base/qbman/process.c
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2011-2016 Freescale Semiconductor Inc.
- * Copyright 2017 NXP
+ * Copyright 2017,2020 NXP
*
*/
#include <assert.h>
@@ -296,3 +296,73 @@ int bman_free_raw_portal(struct dpaa_raw_portal *portal)
return process_portal_free(&input);
}
+
+#define DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT \
+ _IOW(DPAA_IOCTL_MAGIC, 0x0E, struct usdpaa_ioctl_link_status)
+
+#define DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT \
+ _IOW(DPAA_IOCTL_MAGIC, 0x0F, char*)
+
+int dpaa_intr_enable(char *if_name, int efd)
+{
+ struct usdpaa_ioctl_link_status args;
+
+ int ret = check_fd();
+
+ if (ret)
+ return ret;
+
+ args.efd = (uint32_t)efd;
+ strcpy(args.if_name, if_name);
+
+ ret = ioctl(fd, DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT, &args);
+ if (ret)
+ return errno;
+
+ return 0;
+}
+
+int dpaa_intr_disable(char *if_name)
+{
+ int ret = check_fd();
+
+ if (ret)
+ return ret;
+
+ ret = ioctl(fd, DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT, &if_name);
+ if (ret) {
+ if (errno == EINVAL)
+ printf("Failed to disable interrupt: Not Supported\n");
+ else
+ printf("Failed to disable interrupt\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+#define DPAA_IOCTL_GET_LINK_STATUS \
+ _IOWR(DPAA_IOCTL_MAGIC, 0x10, struct usdpaa_ioctl_link_status_args)
+
+int dpaa_get_link_status(char *if_name)
+{
+ int ret = check_fd();
+ struct usdpaa_ioctl_link_status_args args;
+
+ if (ret)
+ return ret;
+
+ strcpy(args.if_name, if_name);
+ args.link_status = 0;
+
+ ret = ioctl(fd, DPAA_IOCTL_GET_LINK_STATUS, &args);
+ if (ret) {
+ if (errno == EINVAL)
+ printf("Failed to get link status: Not Supported\n");
+ else
+ printf("Failed to get link status\n");
+ return ret;
+ }
+
+ return args.link_status;
+}
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index aa906c34e..32e872da5 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2020 NXP
*
*/
/* System headers */
@@ -13,6 +13,7 @@
#include <pthread.h>
#include <sys/types.h>
#include <sys/syscall.h>
+#include <sys/eventfd.h>
#include <rte_byteorder.h>
#include <rte_common.h>
@@ -542,6 +543,23 @@ rte_dpaa_bus_dev_build(void)
return 0;
}
+static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle)
+{
+ int fd;
+
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd < 0) {
+ DPAA_BUS_ERR("Cannot set up eventfd, error %i (%s)",
+ errno, strerror(errno));
+ return errno;
+ }
+
+ intr_handle->fd = fd;
+ intr_handle->type = RTE_INTR_HANDLE_EXT;
+
+ return 0;
+}
+
static int
rte_dpaa_bus_probe(void)
{
@@ -589,6 +607,14 @@ rte_dpaa_bus_probe(void)
fclose(svr_file);
}
+ TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+ if (dev->device_type == FSL_DPAA_ETH) {
+ ret = rte_dpaa_setup_intr(&dev->intr_handle);
+ if (ret)
+ DPAA_BUS_ERR("Error setting up interrupt.\n");
+ }
+ }
+
/* And initialize the PA->VA translation table */
dpaax_iova_table_populate();
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index b6293b61c..7a0a7d405 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -2,6 +2,7 @@
*
* Copyright 2010-2012 Freescale Semiconductor, Inc.
* All rights reserved.
+ * Copyright 2019-2020 NXP
*
*/
@@ -361,6 +362,7 @@ struct fman_if_ic_params {
*/
struct __fman_if {
struct fman_if __if;
+ char node_name[IF_NAME_MAX_LEN];
char node_path[PATH_MAX];
uint64_t regs_size;
void *ccsr_map;
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index d9ec94ee2..7305762c2 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -2,6 +2,7 @@
*
* Copyright 2010-2011 Freescale Semiconductor, Inc.
* All rights reserved.
+ * Copyright 2020 NXP
*
*/
@@ -74,4 +75,23 @@ struct dpaa_ioctl_irq_map {
int process_portal_irq_map(int fd, struct dpaa_ioctl_irq_map *irq);
int process_portal_irq_unmap(int fd);
+struct usdpaa_ioctl_link_status {
+ char if_name[IF_NAME_MAX_LEN];
+ uint32_t efd;
+};
+
+__rte_internal
+int dpaa_intr_enable(char *if_name, int efd);
+
+__rte_internal
+int dpaa_intr_disable(char *if_name);
+
+struct usdpaa_ioctl_link_status_args {
+ /* network device node name */
+ char if_name[IF_NAME_MAX_LEN];
+ int link_status;
+};
+__rte_internal
+int dpaa_get_link_status(char *if_name);
+
#endif /* __PROCESS_H */
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 2defa7992..96662d7be 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -15,6 +15,9 @@ INTERNAL {
dpaa_get_eth_port_cfg;
dpaa_get_qm_channel_caam;
dpaa_get_qm_channel_pool;
+ dpaa_get_link_status;
+ dpaa_intr_disable;
+ dpaa_intr_enable;
dpaa_svr_family;
fman_dealloc_bufs_mask_hi;
fman_dealloc_bufs_mask_lo;
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 25aff2d30..fdaa63a09 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2020 NXP
*
*/
#ifndef __RTE_DPAA_BUS_H__
@@ -30,6 +30,9 @@
#define SVR_LS1046A_FAMILY 0x87070000
#define SVR_MASK 0xffff0000
+/** Device driver supports link state interrupt */
+#define RTE_DPAA_DRV_INTR_LSC 0x0008
+
#define RTE_DEV_TO_DPAA_CONST(ptr) \
container_of(ptr, const struct rte_dpaa_device, device)
@@ -86,6 +89,7 @@ struct rte_dpaa_driver {
enum rte_dpaa_type drv_type;
rte_dpaa_probe_t probe;
rte_dpaa_remove_t remove;
+ uint32_t drv_flags; /**< Flags for controlling device.*/
};
/* Create storage for dqrr entries per lcore */
diff --git a/drivers/common/dpaax/compat.h b/drivers/common/dpaax/compat.h
index 90db68ce7..6793cb256 100644
--- a/drivers/common/dpaax/compat.h
+++ b/drivers/common/dpaax/compat.h
@@ -2,7 +2,7 @@
*
* Copyright 2011 Freescale Semiconductor, Inc.
* All rights reserved.
- * Copyright 2019 NXP
+ * Copyright 2019-2020 NXP
*
*/
@@ -390,4 +390,7 @@ static inline unsigned long get_zeroed_page(gfp_t __foo __rte_unused)
#define atomic_dec_return(v) rte_atomic32_sub_return(v, 1)
#define atomic_sub_and_test(i, v) (rte_atomic32_sub_return(v, i) == 0)
+/* Interface name len*/
+#define IF_NAME_MAX_LEN 16
+
#endif /* __COMPAT_H */
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c9f828a7c..3f805b2b0 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -45,6 +45,7 @@
#include <fsl_qman.h>
#include <fsl_bman.h>
#include <fsl_fman.h>
+#include <process.h>
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
@@ -131,6 +132,11 @@ static struct rte_dpaa_driver rte_dpaa_pmd;
static int
dpaa_eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info);
+static int dpaa_eth_link_update(struct rte_eth_dev *dev,
+ int wait_to_complete __rte_unused);
+
+static void dpaa_interrupt_handler(void *param);
+
static inline void
dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts)
{
@@ -195,9 +201,19 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
uint64_t rx_offloads = eth_conf->rxmode.offloads;
uint64_t tx_offloads = eth_conf->txmode.offloads;
+ struct rte_device *rdev = dev->device;
+ struct rte_dpaa_device *dpaa_dev;
+ struct fman_if *fif = dev->process_private;
+ struct __fman_if *__fif;
+ struct rte_intr_handle *intr_handle;
+ int ret;
PMD_INIT_FUNC_TRACE();
+ dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+ intr_handle = &dpaa_dev->intr_handle;
+ __fif = container_of(fif, struct __fman_if, __if);
+
/* Rx offloads which are enabled by default */
if (dev_rx_offloads_nodis & ~rx_offloads) {
DPAA_PMD_INFO(
@@ -241,6 +257,28 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
dev->data->scattered_rx = 1;
}
+ /* if the interrupts were configured on this devices*/
+ if (intr_handle && intr_handle->fd) {
+ if (dev->data->dev_conf.intr_conf.lsc != 0)
+ rte_intr_callback_register(intr_handle,
+ dpaa_interrupt_handler,
+ (void *)dev);
+
+ ret = dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+ if (ret) {
+ if (dev->data->dev_conf.intr_conf.lsc != 0) {
+ rte_intr_callback_unregister(intr_handle,
+ dpaa_interrupt_handler,
+ (void *)dev);
+ if (ret == EINVAL)
+ printf("Failed to enable interrupt: Not Supported\n");
+ else
+ printf("Failed to enable interrupt\n");
+ }
+ dev->data->dev_conf.intr_conf.lsc = 0;
+ dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC;
+ }
+ }
return 0;
}
@@ -269,6 +307,25 @@ dpaa_supported_ptypes_get(struct rte_eth_dev *dev)
return NULL;
}
+static void dpaa_interrupt_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct rte_device *rdev = dev->device;
+ struct rte_dpaa_device *dpaa_dev;
+ struct rte_intr_handle *intr_handle;
+ uint64_t buf;
+ int bytes_read;
+
+ dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+ intr_handle = &dpaa_dev->intr_handle;
+
+ bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+ if (bytes_read < 0)
+ DPAA_PMD_ERR("Error reading eventfd\n");
+ dpaa_eth_link_update(dev, 0);
+ _rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+}
+
static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -298,9 +355,27 @@ static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+ struct __fman_if *__fif;
+ struct rte_device *rdev = dev->device;
+ struct rte_dpaa_device *dpaa_dev;
+ struct rte_intr_handle *intr_handle;
+
PMD_INIT_FUNC_TRACE();
+ dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+ intr_handle = &dpaa_dev->intr_handle;
+ __fif = container_of(fif, struct __fman_if, __if);
+
dpaa_eth_dev_stop(dev);
+
+ if (intr_handle && intr_handle->fd &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
+ dpaa_intr_disable(__fif->node_name);
+ rte_intr_callback_unregister(intr_handle,
+ dpaa_interrupt_handler,
+ (void *)dev);
+ }
}
static int
@@ -388,6 +463,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
struct dpaa_if *dpaa_intf = dev->data->dev_private;
struct rte_eth_link *link = &dev->data->dev_link;
struct fman_if *fif = dev->process_private;
+ struct __fman_if *__fif = container_of(fif, struct __fman_if, __if);
+ int ret;
PMD_INIT_FUNC_TRACE();
@@ -401,9 +478,23 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
- link->link_status = dpaa_intf->valid;
+ ret = dpaa_get_link_status(__fif->node_name);
+ if (ret < 0) {
+ if (ret == -EINVAL) {
+ DPAA_PMD_DEBUG("Using default link status-No Support");
+ ret = 1;
+ } else {
+ DPAA_PMD_ERR("rte_dpaa_get_link_status %d", ret);
+ return ret;
+ }
+ }
+
+ link->link_status = ret;
link->link_duplex = ETH_LINK_FULL_DUPLEX;
link->link_autoneg = ETH_LINK_AUTONEG;
+
+ DPAA_PMD_INFO("Port %d Link is %s\n", dev->data->port_id,
+ link->link_status ? "Up" : "Down");
return 0;
}
@@ -1734,6 +1825,9 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
qman_ern_register_cb(dpaa_free_mbuf);
+ if (dpaa_drv->drv_flags & RTE_DPAA_DRV_INTR_LSC)
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
+
/* Invoke PMD device initialization function */
diag = dpaa_dev_init(eth_dev);
if (diag == 0) {
@@ -1761,6 +1855,7 @@ rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
}
static struct rte_dpaa_driver rte_dpaa_pmd = {
+ .drv_flags = RTE_DPAA_DRV_INTR_LSC,
.drv_type = FSL_DPAA_ETH,
.probe = rte_dpaa_probe,
.remove = rte_dpaa_remove,
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH v2 14/29] bus/dpaa: enable set link status
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (12 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 13/29] bus/dpaa: enable link state interrupt Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 15/29] net/dpaa2: support dynamic flow control Hemant Agrawal
` (15 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
Enable the set link status API so the application can start/stop the
PHY device.
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
drivers/bus/dpaa/base/qbman/process.c | 27 +++++++++++++++++
drivers/bus/dpaa/include/process.h | 11 +++++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 1 +
drivers/net/dpaa/dpaa_ethdev.c | 35 ++++++++++++++++-------
4 files changed, 63 insertions(+), 11 deletions(-)
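With this patch `dpaa_link_up()`/`dpaa_link_down()` branch on the LSC
capability: when the kernel driver supports it, only the PHY is toggled via
`dpaa_update_link_status()`; otherwise the PMD falls back to stopping or
starting the whole device. A sketch of that branching with a hypothetical
stand-in device (field names are illustrative, not the ethdev's):

```c
#include <stdbool.h>

enum link_state { LINK_DOWN = 0, LINK_UP = 1 };

/* Stand-in for the ethdev, modelling only the two paths the patch
 * chooses between. */
struct fake_dev {
	bool intr_lsc;	/* kernel link-state support (RTE_ETH_DEV_INTR_LSC) */
	int  phy_state;	/* toggled via the dpaa_update_link_status() path */
	bool running;	/* toggled via the dev start/stop fallback */
};

static void set_link(struct fake_dev *dev, enum link_state st)
{
	if (dev->intr_lsc)
		dev->phy_state = st;		/* ioctl to the PHY only */
	else
		dev->running = (st == LINK_UP);	/* stop/start whole device */
}
```

The PHY-only path keeps queues and portal state intact across a link flap,
which is the point of delegating to the kernel driver instead of tearing the
device down.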
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
index 68b7af243..6f7e37957 100644
--- a/drivers/bus/dpaa/base/qbman/process.c
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -366,3 +366,30 @@ int dpaa_get_link_status(char *if_name)
return args.link_status;
}
+
+#define DPAA_IOCTL_UPDATE_LINK_STATUS \
+ _IOW(DPAA_IOCTL_MAGIC, 0x11, struct usdpaa_ioctl_update_link_status_args)
+
+int dpaa_update_link_status(char *if_name, int link_status)
+{
+ struct usdpaa_ioctl_update_link_status_args args;
+ int ret;
+
+ ret = check_fd();
+ if (ret)
+ return ret;
+
+ strcpy(args.if_name, if_name);
+ args.link_status = link_status;
+
+ ret = ioctl(fd, DPAA_IOCTL_UPDATE_LINK_STATUS, &args);
+ if (ret) {
+ if (errno == EINVAL)
+ printf("Failed to set link status: Not Supported\n");
+ else
+ printf("Failed to set link status");
+ return ret;
+ }
+
+ return 0;
+}
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index 7305762c2..f52ea1635 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -91,7 +91,18 @@ struct usdpaa_ioctl_link_status_args {
char if_name[IF_NAME_MAX_LEN];
int link_status;
};
+
+struct usdpaa_ioctl_update_link_status_args {
+ /* network device node name */
+ char if_name[IF_NAME_MAX_LEN];
+ /* link status(ETH_LINK_UP/DOWN) */
+ int link_status;
+};
+
__rte_internal
int dpaa_get_link_status(char *if_name);
+__rte_internal
+int dpaa_update_link_status(char *if_name, int link_status);
+
#endif /* __PROCESS_H */
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 96662d7be..5dec8d9e5 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -19,6 +19,7 @@ INTERNAL {
dpaa_intr_disable;
dpaa_intr_enable;
dpaa_svr_family;
+ dpaa_update_link_status;
fman_dealloc_bufs_mask_hi;
fman_dealloc_bufs_mask_lo;
fman_if_add_mac_addr;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 3f805b2b0..3a5b319d4 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -478,18 +478,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
- ret = dpaa_get_link_status(__fif->node_name);
- if (ret < 0) {
- if (ret == -EINVAL) {
- DPAA_PMD_DEBUG("Using default link status-No Support");
- ret = 1;
- } else {
- DPAA_PMD_ERR("rte_dpaa_get_link_status %d", ret);
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+ ret = dpaa_get_link_status(__fif->node_name);
+ if (ret < 0)
return ret;
- }
+ link->link_status = ret;
+ } else {
+ link->link_status = dpaa_intf->valid;
}
- link->link_status = ret;
link->link_duplex = ETH_LINK_FULL_DUPLEX;
link->link_autoneg = ETH_LINK_AUTONEG;
@@ -985,17 +982,33 @@ dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
static int dpaa_link_down(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+ struct __fman_if *__fif;
+
PMD_INIT_FUNC_TRACE();
- dpaa_eth_dev_stop(dev);
+ __fif = container_of(fif, struct __fman_if, __if);
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
+ else
+ dpaa_eth_dev_stop(dev);
return 0;
}
static int dpaa_link_up(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+ struct __fman_if *__fif;
+
PMD_INIT_FUNC_TRACE();
- dpaa_eth_dev_start(dev);
+ __fif = container_of(fif, struct __fman_if, __if);
+
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
+ else
+ dpaa_eth_dev_start(dev);
return 0;
}
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH v2 15/29] net/dpaa2: support dynamic flow control
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (13 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 14/29] bus/dpaa: enable set link status Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 16/29] net/dpaa2: support key extracts of flow API Hemant Agrawal
` (14 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Use a dynamically built key layout instead of a predefined one. The
actual key/mask size depends on the protocols and/or fields of the
patterns specified. Also, the key and mask start from the beginning
of the IOVA.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
doc/guides/nics/features/dpaa2.ini | 1 +
doc/guides/rel_notes/release_20_08.rst | 1 +
drivers/net/dpaa2/dpaa2_flow.c | 146 ++++++-------------------
3 files changed, 36 insertions(+), 112 deletions(-)
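Instead of the fixed `DPAA2_CLS_RULE_OFFSET_*` macros, each matched pattern
now appends its fields at the current end of the key and advances a running
`key_size`, exactly as the reworked `dpaa2_configure_flow_eth()` writes at
`key_iova + flow->key_size`. A sketch of the accumulation (struct and helper
names are illustrative, not the driver's):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define KEY_MAX 256

struct flow_rule {
	uint8_t key[KEY_MAX];
	uint8_t mask[KEY_MAX];
	size_t  key_size;	/* running offset: next field lands here */
};

/* Append one field's spec/mask at the current end of the key/mask
 * buffers and grow key_size, the way each dpaa2_configure_flow_*()
 * helper does after this patch. */
static int rule_append(struct flow_rule *r, const void *spec,
		       const void *mask, size_t len)
{
	if (r->key_size + len > KEY_MAX)
		return -1;
	memcpy(r->key + r->key_size, spec, len);
	memcpy(r->mask + r->key_size, mask, len);
	r->key_size += len;
	return 0;
}
```

With this scheme a rule matching only, say, IPv4 no longer carries dead bytes
reserved for Ethernet and VLAN fields, which is why the fixed offset macros
could be deleted wholesale.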
diff --git a/doc/guides/nics/features/dpaa2.ini b/doc/guides/nics/features/dpaa2.ini
index c2214fbd5..3685e2e02 100644
--- a/doc/guides/nics/features/dpaa2.ini
+++ b/doc/guides/nics/features/dpaa2.ini
@@ -16,6 +16,7 @@ Unicast MAC filter = Y
RSS hash = Y
VLAN filter = Y
Flow control = Y
+Flow API = Y
VLAN offload = Y
L3 checksum offload = Y
L4 checksum offload = Y
diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index e5bc5cfd8..97267f7b7 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -131,6 +131,7 @@ New Features
Updated the NXP dpaa2 ethdev with new features and improvements, including:
* Added support to use datapath APIs from non-EAL pthread
+ * Added support for dynamic flow management
Removed Items
-------------
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 8aa65db30..05d115c78 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -33,29 +33,6 @@ struct rte_flow {
uint16_t flow_id;
};
-/* Layout for rule compositions for supported patterns */
-/* TODO: Current design only supports Ethernet + IPv4 based classification. */
-/* So corresponding offset macros are valid only. Rest are placeholder for */
-/* now. Once support for other netwrok headers will be added then */
-/* corresponding macros will be updated with correct values*/
-#define DPAA2_CLS_RULE_OFFSET_ETH 0 /*Start of buffer*/
-#define DPAA2_CLS_RULE_OFFSET_VLAN 14 /* DPAA2_CLS_RULE_OFFSET_ETH */
- /* + Sizeof Eth fields */
-#define DPAA2_CLS_RULE_OFFSET_IPV4 14 /* DPAA2_CLS_RULE_OFFSET_VLAN */
- /* + Sizeof VLAN fields */
-#define DPAA2_CLS_RULE_OFFSET_IPV6 25 /* DPAA2_CLS_RULE_OFFSET_IPV4 */
- /* + Sizeof IPV4 fields */
-#define DPAA2_CLS_RULE_OFFSET_ICMP 58 /* DPAA2_CLS_RULE_OFFSET_IPV6 */
- /* + Sizeof IPV6 fields */
-#define DPAA2_CLS_RULE_OFFSET_UDP 60 /* DPAA2_CLS_RULE_OFFSET_ICMP */
- /* + Sizeof ICMP fields */
-#define DPAA2_CLS_RULE_OFFSET_TCP 64 /* DPAA2_CLS_RULE_OFFSET_UDP */
- /* + Sizeof UDP fields */
-#define DPAA2_CLS_RULE_OFFSET_SCTP 68 /* DPAA2_CLS_RULE_OFFSET_TCP */
- /* + Sizeof TCP fields */
-#define DPAA2_CLS_RULE_OFFSET_GRE 72 /* DPAA2_CLS_RULE_OFFSET_SCTP */
- /* + Sizeof SCTP fields */
-
static const
enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
RTE_FLOW_ITEM_TYPE_END,
@@ -212,7 +189,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
(pattern->mask ? pattern->mask : default_mask);
/* Key rule */
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_ETH;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(spec->src.addr_bytes),
sizeof(struct rte_ether_addr));
key_iova += sizeof(struct rte_ether_addr);
@@ -223,7 +200,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
sizeof(rte_be16_t));
/* Key mask */
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_ETH;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(mask->src.addr_bytes),
sizeof(struct rte_ether_addr));
mask_iova += sizeof(struct rte_ether_addr);
@@ -233,9 +210,9 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
memcpy((void *)mask_iova, (const void *)(&mask->type),
sizeof(rte_be16_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_ETH +
- ((2 * sizeof(struct rte_ether_addr)) +
- sizeof(rte_be16_t)));
+ flow->key_size += ((2 * sizeof(struct rte_ether_addr)) +
+ sizeof(rte_be16_t));
+
return device_configured;
}
@@ -335,15 +312,15 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
mask = (const struct rte_flow_item_vlan *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_VLAN;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(&spec->tci),
sizeof(rte_be16_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_VLAN;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(&mask->tci),
sizeof(rte_be16_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_VLAN + sizeof(rte_be16_t));
+ flow->key_size += sizeof(rte_be16_t);
return device_configured;
}
@@ -474,7 +451,7 @@ dpaa2_configure_flow_ipv4(struct rte_flow *flow,
mask = (const struct rte_flow_item_ipv4 *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)&spec->hdr.src_addr,
sizeof(uint32_t));
key_iova += sizeof(uint32_t);
@@ -484,7 +461,7 @@ dpaa2_configure_flow_ipv4(struct rte_flow *flow,
memcpy((void *)key_iova, (const void *)&spec->hdr.next_proto_id,
sizeof(uint8_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_IPV4;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)&mask->hdr.src_addr,
sizeof(uint32_t));
mask_iova += sizeof(uint32_t);
@@ -494,9 +471,7 @@ dpaa2_configure_flow_ipv4(struct rte_flow *flow,
memcpy((void *)mask_iova, (const void *)&mask->hdr.next_proto_id,
sizeof(uint8_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_IPV4 +
- (2 * sizeof(uint32_t)) + sizeof(uint8_t));
-
+ flow->key_size += (2 * sizeof(uint32_t)) + sizeof(uint8_t);
return device_configured;
}
@@ -613,23 +588,22 @@ dpaa2_configure_flow_ipv6(struct rte_flow *flow,
mask = (const struct rte_flow_item_ipv6 *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV6;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(spec->hdr.src_addr),
sizeof(spec->hdr.src_addr));
key_iova += sizeof(spec->hdr.src_addr);
memcpy((void *)key_iova, (const void *)(spec->hdr.dst_addr),
sizeof(spec->hdr.dst_addr));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_IPV6;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(mask->hdr.src_addr),
sizeof(mask->hdr.src_addr));
mask_iova += sizeof(mask->hdr.src_addr);
memcpy((void *)mask_iova, (const void *)(mask->hdr.dst_addr),
sizeof(mask->hdr.dst_addr));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_IPV6 +
- sizeof(spec->hdr.src_addr) +
- sizeof(mask->hdr.dst_addr));
+ flow->key_size += sizeof(spec->hdr.src_addr) +
+ sizeof(mask->hdr.dst_addr);
return device_configured;
}
@@ -746,22 +720,21 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
mask = (const struct rte_flow_item_icmp *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_ICMP;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_type,
sizeof(uint8_t));
key_iova += sizeof(uint8_t);
memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_code,
sizeof(uint8_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_ICMP;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_type,
sizeof(uint8_t));
key_iova += sizeof(uint8_t);
memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_code,
sizeof(uint8_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_ICMP +
- (2 * sizeof(uint8_t)));
+ flow->key_size += 2 * sizeof(uint8_t);
return device_configured;
}
@@ -837,13 +810,6 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
priv->extract.qos_key_cfg.extracts[index].type =
DPKG_EXTRACT_FROM_HDR;
priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -862,13 +828,6 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
priv->extract.fs_key_cfg[group].extracts[index].type =
DPKG_EXTRACT_FROM_HDR;
priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -892,25 +851,21 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
mask = (const struct rte_flow_item_udp *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4 +
- (2 * sizeof(uint32_t));
- memset((void *)key_iova, 0x11, sizeof(uint8_t));
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_UDP;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
sizeof(uint16_t));
key_iova += sizeof(uint16_t);
memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
sizeof(uint16_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_UDP;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
sizeof(uint16_t));
mask_iova += sizeof(uint16_t);
memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
sizeof(uint16_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_UDP +
- (2 * sizeof(uint16_t)));
+ flow->key_size += (2 * sizeof(uint16_t));
return device_configured;
}
@@ -986,13 +941,6 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
priv->extract.qos_key_cfg.extracts[index].type =
DPKG_EXTRACT_FROM_HDR;
priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -1012,13 +960,6 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
priv->extract.fs_key_cfg[group].extracts[index].type =
DPKG_EXTRACT_FROM_HDR;
priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -1042,25 +983,21 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
mask = (const struct rte_flow_item_tcp *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4 +
- (2 * sizeof(uint32_t));
- memset((void *)key_iova, 0x06, sizeof(uint8_t));
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_TCP;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
sizeof(uint16_t));
key_iova += sizeof(uint16_t);
memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
sizeof(uint16_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_TCP;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
sizeof(uint16_t));
mask_iova += sizeof(uint16_t);
memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
sizeof(uint16_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_TCP +
- (2 * sizeof(uint16_t)));
+ flow->key_size += 2 * sizeof(uint16_t);
return device_configured;
}
@@ -1136,13 +1073,6 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
priv->extract.qos_key_cfg.extracts[index].type =
DPKG_EXTRACT_FROM_HDR;
priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -1162,13 +1092,6 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
priv->extract.fs_key_cfg[group].extracts[index].type =
DPKG_EXTRACT_FROM_HDR;
priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -1192,25 +1115,22 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
mask = (const struct rte_flow_item_sctp *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4 +
- (2 * sizeof(uint32_t));
- memset((void *)key_iova, 0x84, sizeof(uint8_t));
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_SCTP;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
sizeof(uint16_t));
key_iova += sizeof(uint16_t);
memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
sizeof(uint16_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_SCTP;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
sizeof(uint16_t));
mask_iova += sizeof(uint16_t);
memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
sizeof(uint16_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_SCTP +
- (2 * sizeof(uint16_t)));
+ flow->key_size += 2 * sizeof(uint16_t);
+
return device_configured;
}
@@ -1313,15 +1233,15 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
mask = (const struct rte_flow_item_gre *)
(pattern->mask ? pattern->mask : default_mask);
- key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_GRE;
+ key_iova = flow->rule.key_iova + flow->key_size;
memcpy((void *)key_iova, (const void *)(&spec->protocol),
sizeof(rte_be16_t));
- mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_GRE;
+ mask_iova = flow->rule.mask_iova + flow->key_size;
memcpy((void *)mask_iova, (const void *)(&mask->protocol),
sizeof(rte_be16_t));
- flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_GRE + sizeof(rte_be16_t));
+ flow->key_size += sizeof(rte_be16_t);
return device_configured;
}
@@ -1503,6 +1423,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
action.flow_id = action.flow_id % nic_attr.num_rx_tcs;
index = flow->index + (flow->tc_id * nic_attr.fs_entries);
+ flow->rule.key_size = flow->key_size;
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &flow->rule,
flow->tc_id, index,
@@ -1606,6 +1527,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
/* Add Rule into QoS table */
index = flow->index + (flow->tc_id * nic_attr.fs_entries);
+ flow->rule.key_size = flow->key_size;
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
&flow->rule, flow->tc_id,
index, 0, 0);
@@ -1862,7 +1784,7 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
flow->rule.key_iova = key_iova;
flow->rule.mask_iova = mask_iova;
- flow->rule.key_size = 0;
+ flow->key_size = 0;
switch (dpaa2_filter_type) {
case RTE_ETH_FILTER_GENERIC:
--
2.17.1
* [dpdk-dev] [PATCH v2 16/29] net/dpaa2: support key extracts of flow API
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (14 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 15/29] net/dpaa2: support dynamic flow control Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 17/29] net/dpaa2: add sanity check for flow extracts Hemant Agrawal
` (13 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
1) Support QoS extracts and TC extracts for multiple TCs.
2) The protocol type of the L2 extract is used to parse L3, and the
next-protocol field of the L3 extract is used to parse L4.
3) Use generic IP key extracts instead of separate IPv4 and IPv6 extracts.
4) Special handling for IP address extracts:
put the IP(v4/v6) address extract(s)/rule(s) at the end of the extracts
array so that the remaining fields stay at fixed positions.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 35 +-
drivers/net/dpaa2/dpaa2_ethdev.h | 43 +-
drivers/net/dpaa2/dpaa2_flow.c | 3628 +++++++++++++++++++++---------
3 files changed, 2665 insertions(+), 1041 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 8edd4b3cd..492b65840 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1,7 +1,7 @@
/* * SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016 NXP
+ * Copyright 2016-2020 NXP
*
*/
@@ -2501,23 +2501,41 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->tx_pkt_burst = dpaa2_dev_tx;
/*Init fields w.r.t. classficaition*/
- memset(&priv->extract.qos_key_cfg, 0, sizeof(struct dpkg_profile_cfg));
+ memset(&priv->extract.qos_key_extract, 0,
+ sizeof(struct dpaa2_key_extract));
priv->extract.qos_extract_param = (size_t)rte_malloc(NULL, 256, 64);
if (!priv->extract.qos_extract_param) {
DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow "
" classificaiton ", ret);
goto init_err;
}
+ priv->extract.qos_key_extract.key_info.ipv4_src_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ priv->extract.qos_key_extract.key_info.ipv4_dst_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ priv->extract.qos_key_extract.key_info.ipv6_src_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ priv->extract.qos_key_extract.key_info.ipv6_dst_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+
for (i = 0; i < MAX_TCS; i++) {
- memset(&priv->extract.fs_key_cfg[i], 0,
- sizeof(struct dpkg_profile_cfg));
- priv->extract.fs_extract_param[i] =
+ memset(&priv->extract.tc_key_extract[i], 0,
+ sizeof(struct dpaa2_key_extract));
+ priv->extract.tc_extract_param[i] =
(size_t)rte_malloc(NULL, 256, 64);
- if (!priv->extract.fs_extract_param[i]) {
+ if (!priv->extract.tc_extract_param[i]) {
DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow classificaiton",
ret);
goto init_err;
}
+ priv->extract.tc_key_extract[i].key_info.ipv4_src_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ priv->extract.tc_key_extract[i].key_info.ipv4_dst_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ priv->extract.tc_key_extract[i].key_info.ipv6_src_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ priv->extract.tc_key_extract[i].key_info.ipv6_dst_offset =
+ IP_ADDRESS_OFFSET_INVALID;
}
ret = dpni_set_max_frame_length(dpni_dev, CMD_PRI_LOW, priv->token,
@@ -2593,8 +2611,9 @@ dpaa2_dev_uninit(struct rte_eth_dev *eth_dev)
rte_free(dpni);
for (i = 0; i < MAX_TCS; i++) {
- if (priv->extract.fs_extract_param[i])
- rte_free((void *)(size_t)priv->extract.fs_extract_param[i]);
+ if (priv->extract.tc_extract_param[i])
+ rte_free((void *)
+ (size_t)priv->extract.tc_extract_param[i]);
}
if (priv->extract.qos_extract_param)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index c7fb6539f..030c625e3 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -96,10 +96,39 @@ extern enum pmd_dpaa2_ts dpaa2_enable_ts;
#define DPAA2_QOS_TABLE_RECONFIGURE 1
#define DPAA2_FS_TABLE_RECONFIGURE 2
+#define DPAA2_QOS_TABLE_IPADDR_EXTRACT 4
+#define DPAA2_FS_TABLE_IPADDR_EXTRACT 8
+
+
/*Externaly defined*/
extern const struct rte_flow_ops dpaa2_flow_ops;
extern enum rte_filter_type dpaa2_filter_type;
+#define IP_ADDRESS_OFFSET_INVALID (-1)
+
+struct dpaa2_key_info {
+ uint8_t key_offset[DPKG_MAX_NUM_OF_EXTRACTS];
+ uint8_t key_size[DPKG_MAX_NUM_OF_EXTRACTS];
+ /* Special for IP address. */
+ int ipv4_src_offset;
+ int ipv4_dst_offset;
+ int ipv6_src_offset;
+ int ipv6_dst_offset;
+ uint8_t key_total_size;
+};
+
+struct dpaa2_key_extract {
+ struct dpkg_profile_cfg dpkg;
+ struct dpaa2_key_info key_info;
+};
+
+struct extract_s {
+ struct dpaa2_key_extract qos_key_extract;
+ struct dpaa2_key_extract tc_key_extract[MAX_TCS];
+ uint64_t qos_extract_param;
+ uint64_t tc_extract_param[MAX_TCS];
+};
+
struct dpaa2_dev_priv {
void *hw;
int32_t hw_id;
@@ -122,17 +151,9 @@ struct dpaa2_dev_priv {
uint8_t max_cgs;
uint8_t cgid_in_use[MAX_RX_QUEUES];
- struct pattern_s {
- uint8_t item_count;
- uint8_t pattern_type[DPKG_MAX_NUM_OF_EXTRACTS];
- } pattern[MAX_TCS + 1];
-
- struct extract_s {
- struct dpkg_profile_cfg qos_key_cfg;
- struct dpkg_profile_cfg fs_key_cfg[MAX_TCS];
- uint64_t qos_extract_param;
- uint64_t fs_extract_param[MAX_TCS];
- } extract;
+ struct extract_s extract;
+ uint8_t *qos_index;
+ uint8_t *fs_index;
uint16_t ss_offset;
uint64_t ss_iova;
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 05d115c78..779cb64ab 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1,5 +1,5 @@
-/* * SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2018-2020 NXP
*/
#include <sys/queue.h>
@@ -22,15 +22,44 @@
#include <dpaa2_ethdev.h>
#include <dpaa2_pmd_logs.h>
+/* Workaround to discriminate the UDP/TCP/SCTP
+ * with next protocol of l3.
+ * MC/WRIOP are not able to identify
+ * the l4 protocol with l4 ports.
+ */
+int mc_l4_port_identification;
+
+enum flow_rule_ipaddr_type {
+ FLOW_NONE_IPADDR,
+ FLOW_IPV4_ADDR,
+ FLOW_IPV6_ADDR
+};
+
+struct flow_rule_ipaddr {
+ enum flow_rule_ipaddr_type ipaddr_type;
+ int qos_ipsrc_offset;
+ int qos_ipdst_offset;
+ int fs_ipsrc_offset;
+ int fs_ipdst_offset;
+};
+
struct rte_flow {
LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
- struct dpni_rule_cfg rule;
+ struct dpni_rule_cfg qos_rule;
+ struct dpni_rule_cfg fs_rule;
+ uint16_t qos_index;
+ uint16_t fs_index;
uint8_t key_size;
- uint8_t tc_id;
+ uint8_t tc_id; /** Traffic Class ID. */
uint8_t flow_type;
- uint8_t index;
+ uint8_t tc_index; /** index within this Traffic Class. */
enum rte_flow_action_type action;
uint16_t flow_id;
+ /* Special for IP address to specify the offset
+ * in key/mask.
+ */
+ struct flow_rule_ipaddr ipaddr_rule;
+ struct dpni_fs_action_cfg action_cfg;
};
static const
@@ -54,166 +83,717 @@ enum rte_flow_action_type dpaa2_supported_action_type[] = {
RTE_FLOW_ACTION_TYPE_RSS
};
+/* Max of enum rte_flow_item_type + 1, for both IPv4 and IPv6*/
+#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1)
+
enum rte_filter_type dpaa2_filter_type = RTE_ETH_FILTER_NONE;
static const void *default_mask;
+static inline void dpaa2_flow_extract_key_set(
+ struct dpaa2_key_info *key_info, int index, uint8_t size)
+{
+ key_info->key_size[index] = size;
+ if (index > 0) {
+ key_info->key_offset[index] =
+ key_info->key_offset[index - 1] +
+ key_info->key_size[index - 1];
+ } else {
+ key_info->key_offset[index] = 0;
+ }
+ key_info->key_total_size += size;
+}
+
+static int dpaa2_flow_extract_add(
+ struct dpaa2_key_extract *key_extract,
+ enum net_prot prot,
+ uint32_t field, uint8_t field_size)
+{
+ int index, ip_src = -1, ip_dst = -1;
+ struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
+ struct dpaa2_key_info *key_info = &key_extract->key_info;
+
+ if (dpkg->num_extracts >=
+ DPKG_MAX_NUM_OF_EXTRACTS) {
+ DPAA2_PMD_WARN("Number of extracts overflows");
+ return -1;
+ }
+ /* Before reorder, the IP SRC and IP DST are already last
+ * extract(s).
+ */
+ for (index = 0; index < dpkg->num_extracts; index++) {
+ if (dpkg->extracts[index].extract.from_hdr.prot ==
+ NET_PROT_IP) {
+ if (dpkg->extracts[index].extract.from_hdr.field ==
+ NH_FLD_IP_SRC) {
+ ip_src = index;
+ }
+ if (dpkg->extracts[index].extract.from_hdr.field ==
+ NH_FLD_IP_DST) {
+ ip_dst = index;
+ }
+ }
+ }
+
+ if (ip_src >= 0)
+ RTE_ASSERT((ip_src + 2) >= dpkg->num_extracts);
+
+ if (ip_dst >= 0)
+ RTE_ASSERT((ip_dst + 2) >= dpkg->num_extracts);
+
+ if (prot == NET_PROT_IP &&
+ (field == NH_FLD_IP_SRC ||
+ field == NH_FLD_IP_DST)) {
+ index = dpkg->num_extracts;
+ } else {
+ if (ip_src >= 0 && ip_dst >= 0)
+ index = dpkg->num_extracts - 2;
+ else if (ip_src >= 0 || ip_dst >= 0)
+ index = dpkg->num_extracts - 1;
+ else
+ index = dpkg->num_extracts;
+ }
+
+ dpkg->extracts[index].type = DPKG_EXTRACT_FROM_HDR;
+ dpkg->extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
+ dpkg->extracts[index].extract.from_hdr.prot = prot;
+ dpkg->extracts[index].extract.from_hdr.field = field;
+ if (prot == NET_PROT_IP &&
+ (field == NH_FLD_IP_SRC ||
+ field == NH_FLD_IP_DST)) {
+ dpaa2_flow_extract_key_set(key_info, index, 0);
+ } else {
+ dpaa2_flow_extract_key_set(key_info, index, field_size);
+ }
+
+ if (prot == NET_PROT_IP) {
+ if (field == NH_FLD_IP_SRC) {
+ if (key_info->ipv4_dst_offset >= 0) {
+ key_info->ipv4_src_offset =
+ key_info->ipv4_dst_offset +
+ NH_FLD_IPV4_ADDR_SIZE;
+ } else {
+ key_info->ipv4_src_offset =
+ key_info->key_offset[index - 1] +
+ key_info->key_size[index - 1];
+ }
+ if (key_info->ipv6_dst_offset >= 0) {
+ key_info->ipv6_src_offset =
+ key_info->ipv6_dst_offset +
+ NH_FLD_IPV6_ADDR_SIZE;
+ } else {
+ key_info->ipv6_src_offset =
+ key_info->key_offset[index - 1] +
+ key_info->key_size[index - 1];
+ }
+ } else if (field == NH_FLD_IP_DST) {
+ if (key_info->ipv4_src_offset >= 0) {
+ key_info->ipv4_dst_offset =
+ key_info->ipv4_src_offset +
+ NH_FLD_IPV4_ADDR_SIZE;
+ } else {
+ key_info->ipv4_dst_offset =
+ key_info->key_offset[index - 1] +
+ key_info->key_size[index - 1];
+ }
+ if (key_info->ipv6_src_offset >= 0) {
+ key_info->ipv6_dst_offset =
+ key_info->ipv6_src_offset +
+ NH_FLD_IPV6_ADDR_SIZE;
+ } else {
+ key_info->ipv6_dst_offset =
+ key_info->key_offset[index - 1] +
+ key_info->key_size[index - 1];
+ }
+ }
+ }
+
+ if (index == dpkg->num_extracts) {
+ dpkg->num_extracts++;
+ return 0;
+ }
+
+ if (ip_src >= 0) {
+ ip_src++;
+ dpkg->extracts[ip_src].type =
+ DPKG_EXTRACT_FROM_HDR;
+ dpkg->extracts[ip_src].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ dpkg->extracts[ip_src].extract.from_hdr.prot =
+ NET_PROT_IP;
+ dpkg->extracts[ip_src].extract.from_hdr.field =
+ NH_FLD_IP_SRC;
+ dpaa2_flow_extract_key_set(key_info, ip_src, 0);
+ key_info->ipv4_src_offset += field_size;
+ key_info->ipv6_src_offset += field_size;
+ }
+ if (ip_dst >= 0) {
+ ip_dst++;
+ dpkg->extracts[ip_dst].type =
+ DPKG_EXTRACT_FROM_HDR;
+ dpkg->extracts[ip_dst].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ dpkg->extracts[ip_dst].extract.from_hdr.prot =
+ NET_PROT_IP;
+ dpkg->extracts[ip_dst].extract.from_hdr.field =
+ NH_FLD_IP_DST;
+ dpaa2_flow_extract_key_set(key_info, ip_dst, 0);
+ key_info->ipv4_dst_offset += field_size;
+ key_info->ipv6_dst_offset += field_size;
+ }
+
+ dpkg->num_extracts++;
+
+ return 0;
+}
+
+/* Protocol discrimination.
+ * Discriminate IPv4/IPv6/vLan by Eth type.
+ * Discriminate UDP/TCP/ICMP by next proto of IP.
+ */
+static inline int
+dpaa2_flow_proto_discrimination_extract(
+ struct dpaa2_key_extract *key_extract,
+ enum rte_flow_item_type type)
+{
+ if (type == RTE_FLOW_ITEM_TYPE_ETH) {
+ return dpaa2_flow_extract_add(
+ key_extract, NET_PROT_ETH,
+ NH_FLD_ETH_TYPE,
+ sizeof(rte_be16_t));
+ } else if (type == (enum rte_flow_item_type)
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
+ return dpaa2_flow_extract_add(
+ key_extract, NET_PROT_IP,
+ NH_FLD_IP_PROTO,
+ NH_FLD_IP_PROTO_SIZE);
+ }
+
+ return -1;
+}
+
+static inline int dpaa2_flow_extract_search(
+ struct dpkg_profile_cfg *dpkg,
+ enum net_prot prot, uint32_t field)
+{
+ int i;
+
+ for (i = 0; i < dpkg->num_extracts; i++) {
+ if (dpkg->extracts[i].extract.from_hdr.prot == prot &&
+ dpkg->extracts[i].extract.from_hdr.field == field) {
+ return i;
+ }
+ }
+
+ return -1;
+}
+
+static inline int dpaa2_flow_extract_key_offset(
+ struct dpaa2_key_extract *key_extract,
+ enum net_prot prot, uint32_t field)
+{
+ int i;
+ struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
+ struct dpaa2_key_info *key_info = &key_extract->key_info;
+
+ if (prot == NET_PROT_IPV4 ||
+ prot == NET_PROT_IPV6)
+ i = dpaa2_flow_extract_search(dpkg, NET_PROT_IP, field);
+ else
+ i = dpaa2_flow_extract_search(dpkg, prot, field);
+
+ if (i >= 0) {
+ if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_SRC)
+ return key_info->ipv4_src_offset;
+ else if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_DST)
+ return key_info->ipv4_dst_offset;
+ else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_SRC)
+ return key_info->ipv6_src_offset;
+ else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_DST)
+ return key_info->ipv6_dst_offset;
+ else
+ return key_info->key_offset[i];
+ } else {
+ return -1;
+ }
+}
+
+struct proto_discrimination {
+ enum rte_flow_item_type type;
+ union {
+ rte_be16_t eth_type;
+ uint8_t ip_proto;
+ };
+};
+
+static int
+dpaa2_flow_proto_discrimination_rule(
+ struct dpaa2_dev_priv *priv, struct rte_flow *flow,
+ struct proto_discrimination proto, int group)
+{
+ enum net_prot prot;
+ uint32_t field;
+ int offset;
+ size_t key_iova;
+ size_t mask_iova;
+ rte_be16_t eth_type;
+ uint8_t ip_proto;
+
+ if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
+ prot = NET_PROT_ETH;
+ field = NH_FLD_ETH_TYPE;
+ } else if (proto.type == DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) {
+ prot = NET_PROT_IP;
+ field = NH_FLD_IP_PROTO;
+ } else {
+ DPAA2_PMD_ERR(
+ "Only Eth and IP support to discriminate next proto.");
+ return -1;
+ }
+
+ offset = dpaa2_flow_extract_key_offset(&priv->extract.qos_key_extract,
+ prot, field);
+ if (offset < 0) {
+ DPAA2_PMD_ERR("QoS prot %d field %d extract failed",
+ prot, field);
+ return -1;
+ }
+ key_iova = flow->qos_rule.key_iova + offset;
+ mask_iova = flow->qos_rule.mask_iova + offset;
+ if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
+ eth_type = proto.eth_type;
+ memcpy((void *)key_iova, (const void *)(&eth_type),
+ sizeof(rte_be16_t));
+ eth_type = 0xffff;
+ memcpy((void *)mask_iova, (const void *)(&eth_type),
+ sizeof(rte_be16_t));
+ } else {
+ ip_proto = proto.ip_proto;
+ memcpy((void *)key_iova, (const void *)(&ip_proto),
+ sizeof(uint8_t));
+ ip_proto = 0xff;
+ memcpy((void *)mask_iova, (const void *)(&ip_proto),
+ sizeof(uint8_t));
+ }
+
+ offset = dpaa2_flow_extract_key_offset(
+ &priv->extract.tc_key_extract[group],
+ prot, field);
+ if (offset < 0) {
+ DPAA2_PMD_ERR("FS prot %d field %d extract failed",
+ prot, field);
+ return -1;
+ }
+ key_iova = flow->fs_rule.key_iova + offset;
+ mask_iova = flow->fs_rule.mask_iova + offset;
+
+ if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) {
+ eth_type = proto.eth_type;
+ memcpy((void *)key_iova, (const void *)(&eth_type),
+ sizeof(rte_be16_t));
+ eth_type = 0xffff;
+ memcpy((void *)mask_iova, (const void *)(&eth_type),
+ sizeof(rte_be16_t));
+ } else {
+ ip_proto = proto.ip_proto;
+ memcpy((void *)key_iova, (const void *)(&ip_proto),
+ sizeof(uint8_t));
+ ip_proto = 0xff;
+ memcpy((void *)mask_iova, (const void *)(&ip_proto),
+ sizeof(uint8_t));
+ }
+
+ return 0;
+}
+
+static inline int
+dpaa2_flow_rule_data_set(
+ struct dpaa2_key_extract *key_extract,
+ struct dpni_rule_cfg *rule,
+ enum net_prot prot, uint32_t field,
+ const void *key, const void *mask, int size)
+{
+ int offset = dpaa2_flow_extract_key_offset(key_extract,
+ prot, field);
+
+ if (offset < 0) {
+ DPAA2_PMD_ERR("prot %d, field %d extract failed",
+ prot, field);
+ return -1;
+ }
+ memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
+ memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+
+ return 0;
+}
+
+static inline int
+_dpaa2_flow_rule_move_ipaddr_tail(
+ struct dpaa2_key_extract *key_extract,
+ struct dpni_rule_cfg *rule, int src_offset,
+ uint32_t field, bool ipv4)
+{
+ size_t key_src;
+ size_t mask_src;
+ size_t key_dst;
+ size_t mask_dst;
+ int dst_offset, len;
+ enum net_prot prot;
+ char tmp[NH_FLD_IPV6_ADDR_SIZE];
+
+ if (field != NH_FLD_IP_SRC &&
+ field != NH_FLD_IP_DST) {
+ DPAA2_PMD_ERR("Field of IP addr reorder must be IP SRC/DST");
+ return -1;
+ }
+ if (ipv4)
+ prot = NET_PROT_IPV4;
+ else
+ prot = NET_PROT_IPV6;
+ dst_offset = dpaa2_flow_extract_key_offset(key_extract,
+ prot, field);
+ if (dst_offset < 0) {
+ DPAA2_PMD_ERR("Field %d reorder extract failed", field);
+ return -1;
+ }
+ key_src = rule->key_iova + src_offset;
+ mask_src = rule->mask_iova + src_offset;
+ key_dst = rule->key_iova + dst_offset;
+ mask_dst = rule->mask_iova + dst_offset;
+ if (ipv4)
+ len = sizeof(rte_be32_t);
+ else
+ len = NH_FLD_IPV6_ADDR_SIZE;
+
+ memcpy(tmp, (char *)key_src, len);
+ memcpy((char *)key_dst, tmp, len);
+
+ memcpy(tmp, (char *)mask_src, len);
+ memcpy((char *)mask_dst, tmp, len);
+
+ return 0;
+}
+
+static inline int
+dpaa2_flow_rule_move_ipaddr_tail(
+ struct rte_flow *flow, struct dpaa2_dev_priv *priv,
+ int fs_group)
+{
+ int ret;
+ enum net_prot prot;
+
+ if (flow->ipaddr_rule.ipaddr_type == FLOW_NONE_IPADDR)
+ return 0;
+
+ if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR)
+ prot = NET_PROT_IPV4;
+ else
+ prot = NET_PROT_IPV6;
+
+ if (flow->ipaddr_rule.qos_ipsrc_offset >= 0) {
+ ret = _dpaa2_flow_rule_move_ipaddr_tail(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ flow->ipaddr_rule.qos_ipsrc_offset,
+ NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS src address reorder failed");
+ return -1;
+ }
+ flow->ipaddr_rule.qos_ipsrc_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.qos_key_extract,
+ prot, NH_FLD_IP_SRC);
+ }
+
+ if (flow->ipaddr_rule.qos_ipdst_offset >= 0) {
+ ret = _dpaa2_flow_rule_move_ipaddr_tail(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ flow->ipaddr_rule.qos_ipdst_offset,
+ NH_FLD_IP_DST, prot == NET_PROT_IPV4);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS dst address reorder failed");
+ return -1;
+ }
+ flow->ipaddr_rule.qos_ipdst_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.qos_key_extract,
+ prot, NH_FLD_IP_DST);
+ }
+
+ if (flow->ipaddr_rule.fs_ipsrc_offset >= 0) {
+ ret = _dpaa2_flow_rule_move_ipaddr_tail(
+ &priv->extract.tc_key_extract[fs_group],
+ &flow->fs_rule,
+ flow->ipaddr_rule.fs_ipsrc_offset,
+ NH_FLD_IP_SRC, prot == NET_PROT_IPV4);
+ if (ret) {
+ DPAA2_PMD_ERR("FS src address reorder failed");
+ return -1;
+ }
+ flow->ipaddr_rule.fs_ipsrc_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.tc_key_extract[fs_group],
+ prot, NH_FLD_IP_SRC);
+ }
+ if (flow->ipaddr_rule.fs_ipdst_offset >= 0) {
+ ret = _dpaa2_flow_rule_move_ipaddr_tail(
+ &priv->extract.tc_key_extract[fs_group],
+ &flow->fs_rule,
+ flow->ipaddr_rule.fs_ipdst_offset,
+ NH_FLD_IP_DST, prot == NET_PROT_IPV4);
+ if (ret) {
+ DPAA2_PMD_ERR("FS dst address reorder failed");
+ return -1;
+ }
+ flow->ipaddr_rule.fs_ipdst_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.tc_key_extract[fs_group],
+ prot, NH_FLD_IP_DST);
+ }
+
+ return 0;
+}
+
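For reviewers, the relocation these helpers perform can be sketched in isolation: fixed-size fields keep stable offsets at the front of the key, while the variable-size (IPv4 or IPv6) address is bounced through a temporary buffer to its new offset at the tail, as `_dpaa2_flow_rule_move_ipaddr_tail()` does for both key and mask. The names below (`toy_move_field_tail`, `TOY_IPV6_ADDR_SIZE`) are illustrative, not driver API:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TOY_IPV6_ADDR_SIZE 16	/* upper bound, like NH_FLD_IPV6_ADDR_SIZE */

/* Relocate a field inside a key buffer by bouncing it through a
 * temporary, so the copy stays safe even if the source and destination
 * regions overlap.
 */
static int toy_move_field_tail(uint8_t *key, int src_offset,
			       int dst_offset, int len)
{
	uint8_t tmp[TOY_IPV6_ADDR_SIZE];

	if (len > TOY_IPV6_ADDR_SIZE || src_offset < 0 || dst_offset < 0)
		return -1;
	memcpy(tmp, key + src_offset, len);
	memcpy(key + dst_offset, tmp, len);
	return 0;
}
```

Keeping the address extract last is what lets IPv4 and IPv6 rules share one QoS/FS table layout: earlier fixed-size extracts never shift when the address size changes.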
static int
dpaa2_configure_flow_eth(struct rte_flow *flow,
struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_eth *spec, *mask;
/* TODO: Currently upper bound of range parameter is not implemented */
const struct rte_flow_item_eth *last __rte_unused;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
+ const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0};
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- /* TODO: pattern is an array of 9 elements where 9th pattern element */
- /* is for QoS table and 1-8th pattern element is for FS tables. */
- /* It can be changed to macro. */
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_eth *)pattern->spec;
+ last = (const struct rte_flow_item_eth *)pattern->last;
+ mask = (const struct rte_flow_item_eth *)
+ (pattern->mask ? pattern->mask : default_mask);
+ if (!spec) {
+ /* Don't care about any field of the eth header,
+ * only the eth protocol matters.
+ */
+ DPAA2_PMD_WARN("No pattern spec for Eth flow, just skip");
+ return 0;
}
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_SA);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_ETH, NH_FLD_ETH_SA,
+ RTE_ETHER_ADDR_LEN);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add ETH_SA failed.");
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_SA);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_ETH, NH_FLD_ETH_SA,
+ RTE_ETHER_ADDR_LEN);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add ETH_SA failed.");
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before ETH_SA rule set failed");
+ return -1;
+ }
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_ETH,
+ NH_FLD_ETH_SA,
+ &spec->src.addr_bytes,
+ &mask->src.addr_bytes,
+ sizeof(struct rte_ether_addr));
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
+ return -1;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_ETH,
+ NH_FLD_ETH_SA,
+ &spec->src.addr_bytes,
+ &mask->src.addr_bytes,
+ sizeof(struct rte_ether_addr));
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
+ return -1;
+ }
}
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ETH;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ETH_SA;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ETH;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ETH_DA;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ETH;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ETH_TYPE;
- index++;
-
- priv->extract.qos_key_cfg.num_extracts = index;
- }
-
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ETH;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ETH_SA;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ETH;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ETH_DA;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ETH;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ETH_TYPE;
- index++;
-
- priv->extract.fs_key_cfg[group].num_extracts = index;
+ if (memcmp((const char *)&mask->dst, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_DA);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_ETH, NH_FLD_ETH_DA,
+ RTE_ETHER_ADDR_LEN);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add ETH_DA failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_DA);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_ETH, NH_FLD_ETH_DA,
+ RTE_ETHER_ADDR_LEN);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add ETH_DA failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before ETH_DA rule set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_ETH,
+ NH_FLD_ETH_DA,
+ &spec->dst.addr_bytes,
+ &mask->dst.addr_bytes,
+ sizeof(struct rte_ether_addr));
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_ETH,
+ NH_FLD_ETH_DA,
+ &spec->dst.addr_bytes,
+ &mask->dst.addr_bytes,
+ sizeof(struct rte_ether_addr));
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
+ return -1;
+ }
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_eth *)pattern->spec;
- last = (const struct rte_flow_item_eth *)pattern->last;
- mask = (const struct rte_flow_item_eth *)
- (pattern->mask ? pattern->mask : default_mask);
+ if (memcmp((const char *)&mask->type, zero_cmp, sizeof(rte_be16_t))) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE,
+ RTE_ETHER_TYPE_LEN);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add ETH_TYPE failed.");
- /* Key rule */
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(spec->src.addr_bytes),
- sizeof(struct rte_ether_addr));
- key_iova += sizeof(struct rte_ether_addr);
- memcpy((void *)key_iova, (const void *)(spec->dst.addr_bytes),
- sizeof(struct rte_ether_addr));
- key_iova += sizeof(struct rte_ether_addr);
- memcpy((void *)key_iova, (const void *)(&spec->type),
- sizeof(rte_be16_t));
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_ETH, NH_FLD_ETH_TYPE,
+ RTE_ETHER_TYPE_LEN);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add ETH_TYPE failed.");
- /* Key mask */
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(mask->src.addr_bytes),
- sizeof(struct rte_ether_addr));
- mask_iova += sizeof(struct rte_ether_addr);
- memcpy((void *)mask_iova, (const void *)(mask->dst.addr_bytes),
- sizeof(struct rte_ether_addr));
- mask_iova += sizeof(struct rte_ether_addr);
- memcpy((void *)mask_iova, (const void *)(&mask->type),
- sizeof(rte_be16_t));
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before ETH_TYPE rule set failed");
+ return -1;
+ }
- flow->key_size += ((2 * sizeof(struct rte_ether_addr)) +
- sizeof(rte_be16_t));
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_ETH,
+ NH_FLD_ETH_TYPE,
+ &spec->type,
+ &mask->type,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
+ return -1;
+ }
- return device_configured;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_ETH,
+ NH_FLD_ETH_TYPE,
+ &spec->type,
+ &mask->type,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
+ return -1;
+ }
+ }
+
+ (*device_configured) |= local_cfg;
+
+ return 0;
}
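A reviewer-side sketch of the guard used throughout `dpaa2_configure_flow_eth()`: an extract and rule entry are only programmed for a field whose mask is non-zero, detected by comparing the mask against a zero buffer. The helper name `toy_mask_is_set` is hypothetical, mirroring the `zero_cmp` memcmp checks above:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TOY_ETHER_ADDR_LEN 6	/* like RTE_ETHER_ADDR_LEN */

/* Return 1 when at least one bit of the mask is set, i.e. the field
 * actually participates in the match and deserves an extract.
 */
static int toy_mask_is_set(const uint8_t *mask, int len)
{
	const uint8_t zero_cmp[TOY_ETHER_ADDR_LEN] = {0};

	return len <= TOY_ETHER_ADDR_LEN &&
	       memcmp(mask, zero_cmp, len) != 0;
}
```

Skipping all-zero masks keeps the per-table extract count under DPKG_MAX_NUM_OF_EXTRACTS by not spending slots on don't-care fields.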
static int
@@ -222,12 +802,11 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_vlan *spec, *mask;
@@ -236,375 +815,524 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_vlan *)pattern->spec;
+ last = (const struct rte_flow_item_vlan *)pattern->last;
+ mask = (const struct rte_flow_item_vlan *)
+ (pattern->mask ? pattern->mask : default_mask);
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec) {
+ /* Don't care about any field of the vlan header,
+ * only the vlan protocol matters.
+ */
+ /* Eth type is actually used for VLAN classification.
+ */
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Ext ETH_TYPE to discriminate VLAN failed");
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Ext ETH_TYPE to discriminate VLAN failed.");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before VLAN discrimination set failed");
+ return -1;
+ }
+
+ proto.type = RTE_FLOW_ITEM_TYPE_ETH;
+ proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("VLAN discrimination rule set failed");
+ return -1;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
+ (*device_configured) |= local_cfg;
+
+ return 0;
}
+ if (!mask->tci)
+ return 0;
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_VLAN, NH_FLD_VLAN_TCI);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_VLAN,
+ NH_FLD_VLAN_TCI,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add VLAN_TCI failed.");
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_VLAN;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_VLAN_TCI;
- priv->extract.qos_key_cfg.num_extracts++;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_VLAN, NH_FLD_VLAN_TCI);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_VLAN,
+ NH_FLD_VLAN_TCI,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add VLAN_TCI failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_VLAN;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_VLAN_TCI;
- priv->extract.fs_key_cfg[group].num_extracts++;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before VLAN TCI rule set failed");
+ return -1;
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_vlan *)pattern->spec;
- last = (const struct rte_flow_item_vlan *)pattern->last;
- mask = (const struct rte_flow_item_vlan *)
- (pattern->mask ? pattern->mask : default_mask);
+ ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_VLAN,
+ NH_FLD_VLAN_TCI,
+ &spec->tci,
+ &mask->tci,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
+ return -1;
+ }
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(&spec->tci),
- sizeof(rte_be16_t));
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_VLAN,
+ NH_FLD_VLAN_TCI,
+ &spec->tci,
+ &mask->tci,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
+ return -1;
+ }
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(&mask->tci),
- sizeof(rte_be16_t));
+ (*device_configured) |= local_cfg;
- flow->key_size += sizeof(rte_be16_t);
- return device_configured;
+ return 0;
}
static int
-dpaa2_configure_flow_ipv4(struct rte_flow *flow,
- struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item *pattern,
- const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+dpaa2_configure_flow_generic_ip(
+ struct rte_flow *flow,
+ struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item *pattern,
+ const struct rte_flow_action actions[] __rte_unused,
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
- const struct rte_flow_item_ipv4 *spec, *mask;
+ const struct rte_flow_item_ipv4 *spec_ipv4 = 0,
+ *mask_ipv4 = 0;
+ const struct rte_flow_item_ipv6 *spec_ipv6 = 0,
+ *mask_ipv6 = 0;
+ const void *key, *mask;
+ enum net_prot prot;
- const struct rte_flow_item_ipv4 *last __rte_unused;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
+ const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0};
+ int size;
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
+ /* Parse pattern list to get the matching parameters */
+ if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+ spec_ipv4 = (const struct rte_flow_item_ipv4 *)pattern->spec;
+ mask_ipv4 = (const struct rte_flow_item_ipv4 *)
+ (pattern->mask ? pattern->mask : default_mask);
+ } else {
+ spec_ipv6 = (const struct rte_flow_item_ipv6 *)pattern->spec;
+ mask_ipv6 = (const struct rte_flow_item_ipv6 *)
+ (pattern->mask ? pattern->mask : default_mask);
}
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec_ipv4 && !spec_ipv6) {
+ /* Don't care about any field of the IP header,
+ * only the IP protocol matters.
+ * Example: flow create 0 ingress pattern ipv6 /
+ */
+ /* Eth type is actually used for IP identification.
+ */
+ /* TODO: Current design only supports Eth + IP;
+ * Eth + VLAN + IP support needs to be added.
+ */
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Ext ETH_TYPE to discriminate IP failed.");
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Ext ETH_TYPE to discriminate IP failed");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
- }
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before IP discrimination set failed");
+ return -1;
+ }
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_SRC;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_DST;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
- priv->extract.qos_key_cfg.num_extracts = index;
- }
-
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_SRC;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_DST;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
- index++;
-
- priv->extract.fs_key_cfg[group].num_extracts = index;
- }
+ proto.type = RTE_FLOW_ITEM_TYPE_ETH;
+ if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4)
+ proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+ else
+ proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("IP discrimination rule set failed");
+ return -1;
+ }
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_ipv4 *)pattern->spec;
- last = (const struct rte_flow_item_ipv4 *)pattern->last;
- mask = (const struct rte_flow_item_ipv4 *)
- (pattern->mask ? pattern->mask : default_mask);
+ (*device_configured) |= local_cfg;
+
+ return 0;
+ }
+
+ if (mask_ipv4 && (mask_ipv4->hdr.src_addr ||
+ mask_ipv4->hdr.dst_addr)) {
+ flow->ipaddr_rule.ipaddr_type = FLOW_IPV4_ADDR;
+ } else if (mask_ipv6 &&
+ (memcmp((const char *)mask_ipv6->hdr.src_addr,
+ zero_cmp, NH_FLD_IPV6_ADDR_SIZE) ||
+ memcmp((const char *)mask_ipv6->hdr.dst_addr,
+ zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
+ flow->ipaddr_rule.ipaddr_type = FLOW_IPV6_ADDR;
+ }
+
+ if ((mask_ipv4 && mask_ipv4->hdr.src_addr) ||
+ (mask_ipv6 &&
+ memcmp((const char *)mask_ipv6->hdr.src_addr,
+ zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_IP,
+ NH_FLD_IP_SRC,
+ 0);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add IP_SRC failed.");
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)&spec->hdr.src_addr,
- sizeof(uint32_t));
- key_iova += sizeof(uint32_t);
- memcpy((void *)key_iova, (const void *)&spec->hdr.dst_addr,
- sizeof(uint32_t));
- key_iova += sizeof(uint32_t);
- memcpy((void *)key_iova, (const void *)&spec->hdr.next_proto_id,
- sizeof(uint8_t));
-
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)&mask->hdr.src_addr,
- sizeof(uint32_t));
- mask_iova += sizeof(uint32_t);
- memcpy((void *)mask_iova, (const void *)&mask->hdr.dst_addr,
- sizeof(uint32_t));
- mask_iova += sizeof(uint32_t);
- memcpy((void *)mask_iova, (const void *)&mask->hdr.next_proto_id,
- sizeof(uint8_t));
-
- flow->key_size += (2 * sizeof(uint32_t)) + sizeof(uint8_t);
- return device_configured;
-}
+ return -1;
+ }
+ local_cfg |= (DPAA2_QOS_TABLE_RECONFIGURE |
+ DPAA2_QOS_TABLE_IPADDR_EXTRACT);
+ }
-static int
-dpaa2_configure_flow_ipv6(struct rte_flow *flow,
- struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item *pattern,
- const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
-{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
- uint32_t group;
- const struct rte_flow_item_ipv6 *spec, *mask;
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_IP,
+ NH_FLD_IP_SRC,
+ 0);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add IP_SRC failed.");
- const struct rte_flow_item_ipv6 *last __rte_unused;
- struct dpaa2_dev_priv *priv = dev->data->dev_private;
+ return -1;
+ }
+ local_cfg |= (DPAA2_FS_TABLE_RECONFIGURE |
+ DPAA2_FS_TABLE_IPADDR_EXTRACT);
+ }
- group = attr->group;
+ if (spec_ipv4)
+ key = &spec_ipv4->hdr.src_addr;
+ else
+ key = &spec_ipv6->hdr.src_addr[0];
+ if (mask_ipv4) {
+ mask = &mask_ipv4->hdr.src_addr;
+ size = NH_FLD_IPV4_ADDR_SIZE;
+ prot = NET_PROT_IPV4;
+ } else {
+ mask = &mask_ipv6->hdr.src_addr[0];
+ size = NH_FLD_IPV6_ADDR_SIZE;
+ prot = NET_PROT_IPV6;
+ }
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ prot, NH_FLD_IP_SRC,
+ key, mask, size);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_IP_SRC rule data set failed");
+ return -1;
+ }
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ prot, NH_FLD_IP_SRC,
+ key, mask, size);
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_IP_SRC rule data set failed");
+ return -1;
+ }
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ flow->ipaddr_rule.qos_ipsrc_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.qos_key_extract,
+ prot, NH_FLD_IP_SRC);
+ flow->ipaddr_rule.fs_ipsrc_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.tc_key_extract[group],
+ prot, NH_FLD_IP_SRC);
+ }
+
+ if ((mask_ipv4 && mask_ipv4->hdr.dst_addr) ||
+ (mask_ipv6 &&
+ memcmp((const char *)mask_ipv6->hdr.dst_addr,
+ zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_DST);
+ if (index < 0) {
+ if (mask_ipv4)
+ size = NH_FLD_IPV4_ADDR_SIZE;
+ else
+ size = NH_FLD_IPV6_ADDR_SIZE;
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_IP,
+ NH_FLD_IP_DST,
+ size);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
+
+ return -1;
+ }
+ local_cfg |= (DPAA2_QOS_TABLE_RECONFIGURE |
+ DPAA2_QOS_TABLE_IPADDR_EXTRACT);
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_DST);
+ if (index < 0) {
+ if (mask_ipv4)
+ size = NH_FLD_IPV4_ADDR_SIZE;
+ else
+ size = NH_FLD_IPV6_ADDR_SIZE;
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_IP,
+ NH_FLD_IP_DST,
+ size);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add IP_DST failed.");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
+ return -1;
+ }
+ local_cfg |= (DPAA2_FS_TABLE_RECONFIGURE |
+ DPAA2_FS_TABLE_IPADDR_EXTRACT);
+ }
+
+ if (spec_ipv4)
+ key = &spec_ipv4->hdr.dst_addr;
+ else
+ key = spec_ipv6->hdr.dst_addr;
+ if (mask_ipv4) {
+ mask = &mask_ipv4->hdr.dst_addr;
+ size = NH_FLD_IPV4_ADDR_SIZE;
+ prot = NET_PROT_IPV4;
} else {
- entry_found = 1;
- break;
+ mask = &mask_ipv6->hdr.dst_addr[0];
+ size = NH_FLD_IPV6_ADDR_SIZE;
+ prot = NET_PROT_IPV6;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
- }
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ prot, NH_FLD_IP_DST,
+ key, mask, size);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_IP_DST rule data set failed");
+ return -1;
+ }
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_SRC;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_DST;
- index++;
-
- priv->extract.qos_key_cfg.num_extracts = index;
- }
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_SRC;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_DST;
- index++;
-
- priv->extract.fs_key_cfg[group].num_extracts = index;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ prot, NH_FLD_IP_DST,
+ key, mask, size);
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_IP_DST rule data set failed");
+ return -1;
+ }
+ flow->ipaddr_rule.qos_ipdst_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.qos_key_extract,
+ prot, NH_FLD_IP_DST);
+ flow->ipaddr_rule.fs_ipdst_offset =
+ dpaa2_flow_extract_key_offset(
+ &priv->extract.tc_key_extract[group],
+ prot, NH_FLD_IP_DST);
+ }
+
+ if ((mask_ipv4 && mask_ipv4->hdr.next_proto_id) ||
+ (mask_ipv6 && mask_ipv6->hdr.proto)) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_IP,
+ NH_FLD_IP_PROTO,
+ NH_FLD_IP_PROTO_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add IP_PROTO failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_IP,
+ NH_FLD_IP_PROTO,
+ NH_FLD_IP_PROTO_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add IP_PROTO failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before NH_FLD_IP_PROTO rule set failed");
+ return -1;
+ }
+
+ if (spec_ipv4)
+ key = &spec_ipv4->hdr.next_proto_id;
+ else
+ key = &spec_ipv6->hdr.proto;
+ if (mask_ipv4)
+ mask = &mask_ipv4->hdr.next_proto_id;
+ else
+ mask = &mask_ipv6->hdr.proto;
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_IP,
+ NH_FLD_IP_PROTO,
+ key, mask, NH_FLD_IP_PROTO_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_IP_PROTO rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_IP,
+ NH_FLD_IP_PROTO,
+ key, mask, NH_FLD_IP_PROTO_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_IP_PROTO rule data set failed");
+ return -1;
+ }
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_ipv6 *)pattern->spec;
- last = (const struct rte_flow_item_ipv6 *)pattern->last;
- mask = (const struct rte_flow_item_ipv6 *)
- (pattern->mask ? pattern->mask : default_mask);
+ (*device_configured) |= local_cfg;
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(spec->hdr.src_addr),
- sizeof(spec->hdr.src_addr));
- key_iova += sizeof(spec->hdr.src_addr);
- memcpy((void *)key_iova, (const void *)(spec->hdr.dst_addr),
- sizeof(spec->hdr.dst_addr));
-
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(mask->hdr.src_addr),
- sizeof(mask->hdr.src_addr));
- mask_iova += sizeof(mask->hdr.src_addr);
- memcpy((void *)mask_iova, (const void *)(mask->hdr.dst_addr),
- sizeof(mask->hdr.dst_addr));
-
- flow->key_size += sizeof(spec->hdr.src_addr) +
- sizeof(mask->hdr.dst_addr);
- return device_configured;
+ return 0;
}
static int
@@ -613,12 +1341,11 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_icmp *spec, *mask;
@@ -627,116 +1354,220 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_icmp *)pattern->spec;
+ last = (const struct rte_flow_item_icmp *)pattern->last;
+ mask = (const struct rte_flow_item_icmp *)
+ (pattern->mask ? pattern->mask : default_mask);
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec) {
+ /* No field of the ICMP header is matched,
+ * only the ICMP protocol itself.
+ * Example: flow create 0 ingress pattern icmp /
+ */
+ /* The next-proto field of the generic IP header
+ * is actually used for ICMP identification.
+ */
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Extract IP protocol to discriminate ICMP failed.");
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Extract IP protocol to discriminate ICMP failed.");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move IP addr before ICMP discrimination set failed");
+ return -1;
+ }
+
+ proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
+ proto.ip_proto = IPPROTO_ICMP;
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("ICMP discrimination rule set failed");
+ return -1;
+ }
+
+ (*device_configured) |= local_cfg;
+
+ return 0;
}
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ICMP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ICMP_TYPE;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ICMP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ICMP_CODE;
- index++;
-
- priv->extract.qos_key_cfg.num_extracts = index;
- }
-
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ICMP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ICMP_TYPE;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ICMP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ICMP_CODE;
- index++;
-
- priv->extract.fs_key_cfg[group].num_extracts = index;
+ if (mask->hdr.icmp_type) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_TYPE,
+ NH_FLD_ICMP_TYPE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add ICMP_TYPE failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ICMP, NH_FLD_ICMP_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_TYPE,
+ NH_FLD_ICMP_TYPE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add ICMP_TYPE failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before ICMP TYPE set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_TYPE,
+ &spec->hdr.icmp_type,
+ &mask->hdr.icmp_type,
+ NH_FLD_ICMP_TYPE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_ICMP_TYPE rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_TYPE,
+ &spec->hdr.icmp_type,
+ &mask->hdr.icmp_type,
+ NH_FLD_ICMP_TYPE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_ICMP_TYPE rule data set failed");
+ return -1;
+ }
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_icmp *)pattern->spec;
- last = (const struct rte_flow_item_icmp *)pattern->last;
- mask = (const struct rte_flow_item_icmp *)
- (pattern->mask ? pattern->mask : default_mask);
+ if (mask->hdr.icmp_code) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ICMP, NH_FLD_ICMP_CODE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_CODE,
+ NH_FLD_ICMP_CODE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add ICMP_CODE failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_type,
- sizeof(uint8_t));
- key_iova += sizeof(uint8_t);
- memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_code,
- sizeof(uint8_t));
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ICMP, NH_FLD_ICMP_CODE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_CODE,
+ NH_FLD_ICMP_CODE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add ICMP_CODE failed.");
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_type,
- sizeof(uint8_t));
- key_iova += sizeof(uint8_t);
- memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_code,
- sizeof(uint8_t));
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
- flow->key_size += 2 * sizeof(uint8_t);
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before ICMP CODE set failed");
+ return -1;
+ }
- return device_configured;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_CODE,
+ &spec->hdr.icmp_code,
+ &mask->hdr.icmp_code,
+ NH_FLD_ICMP_CODE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS NH_FLD_ICMP_CODE rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_ICMP,
+ NH_FLD_ICMP_CODE,
+ &spec->hdr.icmp_code,
+ &mask->hdr.icmp_code,
+ NH_FLD_ICMP_CODE_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS NH_FLD_ICMP_CODE rule data set failed");
+ return -1;
+ }
+ }
+
+ (*device_configured) |= local_cfg;
+
+ return 0;
}
static int
@@ -745,12 +1576,11 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_udp *spec, *mask;
@@ -759,115 +1589,217 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_udp *)pattern->spec;
+ last = (const struct rte_flow_item_udp *)pattern->last;
+ mask = (const struct rte_flow_item_udp *)
+ (pattern->mask ? pattern->mask : default_mask);
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec || !mc_l4_port_identification) {
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Extract IP protocol to discriminate UDP failed.");
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Extract IP protocol to discriminate UDP failed.");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move IP addr before UDP discrimination set failed");
+ return -1;
+ }
+
+ proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
+ proto.ip_proto = IPPROTO_UDP;
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("UDP discrimination rule set failed");
+ return -1;
+ }
+
+ (*device_configured) |= local_cfg;
+
+ if (!spec)
+ return 0;
}
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
+ if (mask->hdr.src_port) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_SRC,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add UDP_SRC failed.");
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_UDP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_UDP_PORT_SRC;
- index++;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
- priv->extract.qos_key_cfg.extracts[index].type = DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_UDP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
- index++;
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_UDP, NH_FLD_UDP_PORT_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_SRC,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add UDP_SRC failed.");
- priv->extract.qos_key_cfg.num_extracts = index;
- }
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_UDP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_UDP_PORT_SRC;
- index++;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before UDP_PORT_SRC set failed");
+ return -1;
+ }
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_UDP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
- index++;
+ ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_SRC,
+ &spec->hdr.src_port,
+ &mask->hdr.src_port,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_UDP_PORT_SRC rule data set failed");
+ return -1;
+ }
- priv->extract.fs_key_cfg[group].num_extracts = index;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_SRC,
+ &spec->hdr.src_port,
+ &mask->hdr.src_port,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_UDP_PORT_SRC rule data set failed");
+ return -1;
+ }
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_udp *)pattern->spec;
- last = (const struct rte_flow_item_udp *)pattern->last;
- mask = (const struct rte_flow_item_udp *)
- (pattern->mask ? pattern->mask : default_mask);
+ if (mask->hdr.dst_port) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_DST,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add UDP_DST failed.");
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
- sizeof(uint16_t));
- key_iova += sizeof(uint16_t);
- memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
- sizeof(uint16_t));
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
- sizeof(uint16_t));
- mask_iova += sizeof(uint16_t);
- memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
- sizeof(uint16_t));
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_UDP, NH_FLD_UDP_PORT_DST);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_DST,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add UDP_DST failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before UDP_PORT_DST set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_DST,
+ &spec->hdr.dst_port,
+ &mask->hdr.dst_port,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_UDP_PORT_DST rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_UDP,
+ NH_FLD_UDP_PORT_DST,
+ &spec->hdr.dst_port,
+ &mask->hdr.dst_port,
+ NH_FLD_UDP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_UDP_PORT_DST rule data set failed");
+ return -1;
+ }
+ }
- flow->key_size += (2 * sizeof(uint16_t));
+ (*device_configured) |= local_cfg;
- return device_configured;
+ return 0;
}
static int
@@ -876,130 +1808,231 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_tcp *spec, *mask;
- const struct rte_flow_item_tcp *last __rte_unused;
- struct dpaa2_dev_priv *priv = dev->data->dev_private;
+ const struct rte_flow_item_tcp *last __rte_unused;
+ struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+ group = attr->group;
+
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_tcp *)pattern->spec;
+ last = (const struct rte_flow_item_tcp *)pattern->last;
+ mask = (const struct rte_flow_item_tcp *)
+ (pattern->mask ? pattern->mask : default_mask);
+
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec || !mc_l4_port_identification) {
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Extract IP protocol to discriminate TCP failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Extract IP protocol to discriminate TCP failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move IP addr before TCP discrimination set failed");
+ return -1;
+ }
+
+ proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
+ proto.ip_proto = IPPROTO_TCP;
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("TCP discrimination rule set failed");
+ return -1;
+ }
- group = attr->group;
+ (*device_configured) |= local_cfg;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
+ if (!spec)
+ return 0;
}
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ if (mask->hdr.src_port) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_SRC,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add TCP_SRC failed.");
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_TCP, NH_FLD_TCP_PORT_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_SRC,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add TCP_SRC failed.");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
- }
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before TCP_PORT_SRC set failed");
+ return -1;
+ }
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_TCP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_TCP_PORT_SRC;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_TCP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_TCP_PORT_DST;
- index++;
-
- priv->extract.qos_key_cfg.num_extracts = index;
- }
-
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_TCP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_TCP_PORT_SRC;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_TCP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_TCP_PORT_DST;
- index++;
-
- priv->extract.fs_key_cfg[group].num_extracts = index;
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_SRC,
+ &spec->hdr.src_port,
+ &mask->hdr.src_port,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_TCP_PORT_SRC rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_SRC,
+ &spec->hdr.src_port,
+ &mask->hdr.src_port,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_TCP_PORT_SRC rule data set failed");
+ return -1;
+ }
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_tcp *)pattern->spec;
- last = (const struct rte_flow_item_tcp *)pattern->last;
- mask = (const struct rte_flow_item_tcp *)
- (pattern->mask ? pattern->mask : default_mask);
+ if (mask->hdr.dst_port) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_DST,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add TCP_DST failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_TCP, NH_FLD_TCP_PORT_DST);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_DST,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add TCP_DST failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
- sizeof(uint16_t));
- key_iova += sizeof(uint16_t);
- memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
- sizeof(uint16_t));
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before TCP_PORT_DST set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_DST,
+ &spec->hdr.dst_port,
+ &mask->hdr.dst_port,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_TCP_PORT_DST rule data set failed");
+ return -1;
+ }
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
- sizeof(uint16_t));
- mask_iova += sizeof(uint16_t);
- memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
- sizeof(uint16_t));
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_TCP,
+ NH_FLD_TCP_PORT_DST,
+ &spec->hdr.dst_port,
+ &mask->hdr.dst_port,
+ NH_FLD_TCP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_TCP_PORT_DST rule data set failed");
+ return -1;
+ }
+ }
- flow->key_size += 2 * sizeof(uint16_t);
+ (*device_configured) |= local_cfg;
- return device_configured;
+ return 0;
}
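The TCP (and the following SCTP/GRE) configure functions all repeat one idiom: search the key-generation profile for a (protocol, field) extract, append it if absent, and flag the hardware table for reconfiguration. A minimal standalone sketch of that idiom, using simplified hypothetical types rather than the real `dpaa2_key_extract`/`dpkg` structures:

```c
#include <assert.h>
#include <string.h>

#define MAX_EXTRACTS 10  /* stand-in for DPKG_MAX_NUM_OF_EXTRACTS */

/* Simplified model of a key-generation profile: an ordered list of
 * (protocol, field) extract entries. */
struct key_profile {
	int prot[MAX_EXTRACTS];
	int field[MAX_EXTRACTS];
	int num;
};

/* Return the extract's index, or -1 if it is not yet in the profile. */
static int extract_search(const struct key_profile *p, int prot, int field)
{
	for (int i = 0; i < p->num; i++)
		if (p->prot[i] == prot && p->field[i] == field)
			return i;
	return -1;
}

/* Append a new extract; return 0 on success, -1 when the profile is full. */
static int extract_add(struct key_profile *p, int prot, int field)
{
	if (p->num >= MAX_EXTRACTS)
		return -1;
	p->prot[p->num] = prot;
	p->field[p->num] = field;
	p->num++;
	return 0;
}

/* Ensure an extract exists; set *reconfigure only when the profile changed,
 * mirroring how local_cfg accumulates DPAA2_QOS/FS_TABLE_RECONFIGURE. */
static int extract_ensure(struct key_profile *p, int prot, int field,
			  int *reconfigure)
{
	if (extract_search(p, prot, field) >= 0)
		return 0;
	if (extract_add(p, prot, field))
		return -1;
	*reconfigure = 1;
	return 0;
}
```

In the patch this runs twice per field, once against the QoS profile and once against the per-traffic-class FS profile, which is why each `mask->hdr.*` branch contains two near-identical search/add pairs.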
static int
@@ -1008,12 +2041,11 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_sctp *spec, *mask;
@@ -1022,116 +2054,218 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too. */
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_sctp *)pattern->spec;
+ last = (const struct rte_flow_item_sctp *)pattern->last;
+ mask = (const struct rte_flow_item_sctp *)
+ (pattern->mask ? pattern->mask : default_mask);
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec || !mc_l4_port_identification) {
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Extract IP protocol to discriminate SCTP failed.");
+
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Extract IP protocol to discriminate SCTP failed.");
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before SCTP discrimination set failed");
+ return -1;
+ }
+
+ proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
+ proto.ip_proto = IPPROTO_SCTP;
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("SCTP discrimination rule set failed");
+ return -1;
+ }
+
+ (*device_configured) |= local_cfg;
+
+ if (!spec)
+ return 0;
}
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
-
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_SCTP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_SCTP_PORT_SRC;
- index++;
-
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_SCTP;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_SCTP_PORT_DST;
- index++;
-
- priv->extract.qos_key_cfg.num_extracts = index;
- }
-
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_SCTP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_SCTP_PORT_SRC;
- index++;
-
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_SCTP;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_SCTP_PORT_DST;
- index++;
-
- priv->extract.fs_key_cfg[group].num_extracts = index;
+ if (mask->hdr.src_port) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_SRC,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add SCTP_SRC failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_SRC,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add SCTP_SRC failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before SCTP_PORT_SRC set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_SRC,
+ &spec->hdr.src_port,
+ &mask->hdr.src_port,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_SCTP_PORT_SRC rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_SRC,
+ &spec->hdr.src_port,
+ &mask->hdr.src_port,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_SCTP_PORT_SRC rule data set failed");
+ return -1;
+ }
}
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_sctp *)pattern->spec;
- last = (const struct rte_flow_item_sctp *)pattern->last;
- mask = (const struct rte_flow_item_sctp *)
- (pattern->mask ? pattern->mask : default_mask);
+ if (mask->hdr.dst_port) {
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_DST,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add SCTP_DST failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_DST,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add SCTP_DST failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before SCTP_PORT_DST set failed");
+ return -1;
+ }
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
- sizeof(uint16_t));
- key_iova += sizeof(uint16_t);
- memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
- sizeof(uint16_t));
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_DST,
+ &spec->hdr.dst_port,
+ &mask->hdr.dst_port,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_SCTP_PORT_DST rule data set failed");
+ return -1;
+ }
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
- sizeof(uint16_t));
- mask_iova += sizeof(uint16_t);
- memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
- sizeof(uint16_t));
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_SCTP,
+ NH_FLD_SCTP_PORT_DST,
+ &spec->hdr.dst_port,
+ &mask->hdr.dst_port,
+ NH_FLD_SCTP_PORT_SIZE);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_SCTP_PORT_DST rule data set failed");
+ return -1;
+ }
+ }
- flow->key_size += 2 * sizeof(uint16_t);
+ (*device_configured) |= local_cfg;
- return device_configured;
+ return 0;
}
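When no item spec is given (or the MC firmware cannot identify L4 ports), the SCTP/TCP/GRE handlers above fall back to matching the IP header's protocol number. A hypothetical sketch of that discrimination rule, assuming a simple full-byte mask (the driver programs the equivalent through `dpaa2_flow_proto_discrimination_rule`):

```c
#include <assert.h>
#include <netinet/in.h>	/* IPPROTO_TCP, IPPROTO_SCTP, IPPROTO_GRE */

/* One key byte matched against the IP header's protocol field. */
struct proto_rule {
	unsigned char ip_proto;	/* value programmed into the key */
	unsigned char mask;	/* full-byte match */
};

static struct proto_rule make_discrimination_rule(unsigned char ip_proto)
{
	struct proto_rule r = { .ip_proto = ip_proto, .mask = 0xff };
	return r;
}

/* Would a packet carrying pkt_proto hit this rule? */
static int rule_matches(const struct proto_rule *r, unsigned char pkt_proto)
{
	return (pkt_proto & r->mask) == (r->ip_proto & r->mask);
}
```

This is why a spec-less SCTP item still steers traffic correctly: SCTP (132), TCP (6) and GRE (47) flows remain distinguishable on the protocol byte alone.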
static int
@@ -1140,12 +2274,11 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
{
- int index, j = 0;
- size_t key_iova;
- size_t mask_iova;
- int device_configured = 0, entry_found = 0;
+ int index, ret;
+ int local_cfg = 0;
uint32_t group;
const struct rte_flow_item_gre *spec, *mask;
@@ -1154,96 +2287,413 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
group = attr->group;
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too. */
- if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
- }
+ /* Parse pattern list to get the matching parameters */
+ spec = (const struct rte_flow_item_gre *)pattern->spec;
+ last = (const struct rte_flow_item_gre *)pattern->last;
+ mask = (const struct rte_flow_item_gre *)
+ (pattern->mask ? pattern->mask : default_mask);
+
+ /* Get traffic class index and flow id to be configured */
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (!spec) {
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Extract IP protocol to discriminate GRE failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_IP, NH_FLD_IP_PROTO);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ DPAA2_FLOW_ITEM_TYPE_GENERIC_IP);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Extract IP protocol to discriminate GRE failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
- if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
- DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
- DPKG_MAX_NUM_OF_EXTRACTS);
- return -ENOTSUP;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before GRE discrimination set failed");
+ return -1;
+ }
+
+ proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP;
+ proto.ip_proto = IPPROTO_GRE;
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
+ proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("GRE discrimination rule set failed");
+ return -1;
+ }
+
+ (*device_configured) |= local_cfg;
+
+ return 0;
}
- for (j = 0; j < priv->pattern[8].item_count; j++) {
- if (priv->pattern[8].pattern_type[j] != pattern->type) {
- continue;
- } else {
- entry_found = 1;
- break;
+ if (!mask->protocol)
+ return 0;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_GRE, NH_FLD_GRE_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.qos_key_extract,
+ NET_PROT_GRE,
+ NH_FLD_GRE_TYPE,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract add GRE_TYPE failed.");
+
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_GRE, NH_FLD_GRE_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_extract_add(
+ &priv->extract.tc_key_extract[group],
+ NET_PROT_GRE,
+ NH_FLD_GRE_TYPE,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract add GRE_TYPE failed.");
+
+ return -1;
}
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
- if (!entry_found) {
- priv->pattern[8].pattern_type[j] = pattern->type;
- priv->pattern[8].item_count++;
- device_configured |= DPAA2_QOS_TABLE_RECONFIGURE;
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before GRE_TYPE set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.qos_key_extract,
+ &flow->qos_rule,
+ NET_PROT_GRE,
+ NH_FLD_GRE_TYPE,
+ &spec->protocol,
+ &mask->protocol,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS NH_FLD_GRE_TYPE rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set(
+ &priv->extract.tc_key_extract[group],
+ &flow->fs_rule,
+ NET_PROT_GRE,
+ NH_FLD_GRE_TYPE,
+ &spec->protocol,
+ &mask->protocol,
+ sizeof(rte_be16_t));
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS NH_FLD_GRE_TYPE rule data set failed");
+ return -1;
}
- entry_found = 0;
- for (j = 0; j < priv->pattern[group].item_count; j++) {
- if (priv->pattern[group].pattern_type[j] != pattern->type) {
+ (*device_configured) |= local_cfg;
+
+ return 0;
+}
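Because the IP address extracts are always kept at the tail of the key (see `dpaa2_flow_rule_move_ipaddr_tail` above), the effective rule key size used later in `dpaa2_generic_flow_set` is simply the higher of the two address offsets plus the address width. A hedged sketch of that computation, with hypothetical parameter names:

```c
#include <assert.h>

/* Effective key size when the IP source/destination extracts sit at the
 * tail: the key ends right after whichever address has the higher offset. */
static int rule_key_size(int ipsrc_offset, int ipdst_offset, int addr_size)
{
	if (ipdst_offset >= ipsrc_offset)
		return ipdst_offset + addr_size;
	return ipsrc_offset + addr_size;
}
```

The same formula is applied twice in the patch, with `NH_FLD_IPV4_ADDR_SIZE` or `NH_FLD_IPV6_ADDR_SIZE` depending on `ipaddr_rule.ipaddr_type`.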
+
+/* The existing QoS/FS entry with IP address(es)
+ * needs update after
+ * new extract(s) are inserted before IP
+ * address(es) extract(s).
+ */
+static int
+dpaa2_flow_entry_update(
+ struct dpaa2_dev_priv *priv, uint8_t tc_id)
+{
+ struct rte_flow *curr = LIST_FIRST(&priv->flows);
+ struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
+ int ret;
+ int qos_ipsrc_offset = -1, qos_ipdst_offset = -1;
+ int fs_ipsrc_offset = -1, fs_ipdst_offset = -1;
+ struct dpaa2_key_extract *qos_key_extract =
+ &priv->extract.qos_key_extract;
+ struct dpaa2_key_extract *tc_key_extract =
+ &priv->extract.tc_key_extract[tc_id];
+ char ipsrc_key[NH_FLD_IPV6_ADDR_SIZE];
+ char ipdst_key[NH_FLD_IPV6_ADDR_SIZE];
+ char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE];
+ char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE];
+ int extend = -1, extend1, size;
+
+ while (curr) {
+ if (curr->ipaddr_rule.ipaddr_type ==
+ FLOW_NONE_IPADDR) {
+ curr = LIST_NEXT(curr, next);
continue;
+ }
+
+ if (curr->ipaddr_rule.ipaddr_type ==
+ FLOW_IPV4_ADDR) {
+ qos_ipsrc_offset =
+ qos_key_extract->key_info.ipv4_src_offset;
+ qos_ipdst_offset =
+ qos_key_extract->key_info.ipv4_dst_offset;
+ fs_ipsrc_offset =
+ tc_key_extract->key_info.ipv4_src_offset;
+ fs_ipdst_offset =
+ tc_key_extract->key_info.ipv4_dst_offset;
+ size = NH_FLD_IPV4_ADDR_SIZE;
} else {
- entry_found = 1;
- break;
+ qos_ipsrc_offset =
+ qos_key_extract->key_info.ipv6_src_offset;
+ qos_ipdst_offset =
+ qos_key_extract->key_info.ipv6_dst_offset;
+ fs_ipsrc_offset =
+ tc_key_extract->key_info.ipv6_src_offset;
+ fs_ipdst_offset =
+ tc_key_extract->key_info.ipv6_dst_offset;
+ size = NH_FLD_IPV6_ADDR_SIZE;
}
- }
- if (!entry_found) {
- priv->pattern[group].pattern_type[j] = pattern->type;
- priv->pattern[group].item_count++;
- device_configured |= DPAA2_FS_TABLE_RECONFIGURE;
- }
+ ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+ priv->token, &curr->qos_rule);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS entry remove failed.");
+ return -1;
+ }
- /* Get traffic class index and flow id to be configured */
- flow->tc_id = group;
- flow->index = attr->priority;
+ extend = -1;
+
+ if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
+ RTE_ASSERT(qos_ipsrc_offset >=
+ curr->ipaddr_rule.qos_ipsrc_offset);
+ extend1 = qos_ipsrc_offset -
+ curr->ipaddr_rule.qos_ipsrc_offset;
+ if (extend >= 0)
+ RTE_ASSERT(extend == extend1);
+ else
+ extend = extend1;
+
+ memcpy(ipsrc_key,
+ (char *)(size_t)curr->qos_rule.key_iova +
+ curr->ipaddr_rule.qos_ipsrc_offset,
+ size);
+ memset((char *)(size_t)curr->qos_rule.key_iova +
+ curr->ipaddr_rule.qos_ipsrc_offset,
+ 0, size);
+
+ memcpy(ipsrc_mask,
+ (char *)(size_t)curr->qos_rule.mask_iova +
+ curr->ipaddr_rule.qos_ipsrc_offset,
+ size);
+ memset((char *)(size_t)curr->qos_rule.mask_iova +
+ curr->ipaddr_rule.qos_ipsrc_offset,
+ 0, size);
+
+ curr->ipaddr_rule.qos_ipsrc_offset = qos_ipsrc_offset;
+ }
- if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- index = priv->extract.qos_key_cfg.num_extracts;
- priv->extract.qos_key_cfg.extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_GRE;
- priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_GRE_TYPE;
- index++;
+ if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
+ RTE_ASSERT(qos_ipdst_offset >=
+ curr->ipaddr_rule.qos_ipdst_offset);
+ extend1 = qos_ipdst_offset -
+ curr->ipaddr_rule.qos_ipdst_offset;
+ if (extend >= 0)
+ RTE_ASSERT(extend == extend1);
+ else
+ extend = extend1;
+
+ memcpy(ipdst_key,
+ (char *)(size_t)curr->qos_rule.key_iova +
+ curr->ipaddr_rule.qos_ipdst_offset,
+ size);
+ memset((char *)(size_t)curr->qos_rule.key_iova +
+ curr->ipaddr_rule.qos_ipdst_offset,
+ 0, size);
+
+ memcpy(ipdst_mask,
+ (char *)(size_t)curr->qos_rule.mask_iova +
+ curr->ipaddr_rule.qos_ipdst_offset,
+ size);
+ memset((char *)(size_t)curr->qos_rule.mask_iova +
+ curr->ipaddr_rule.qos_ipdst_offset,
+ 0, size);
+
+ curr->ipaddr_rule.qos_ipdst_offset = qos_ipdst_offset;
+ }
- priv->extract.qos_key_cfg.num_extracts = index;
- }
+ if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
+ memcpy((char *)(size_t)curr->qos_rule.key_iova +
+ curr->ipaddr_rule.qos_ipsrc_offset,
+ ipsrc_key,
+ size);
+ memcpy((char *)(size_t)curr->qos_rule.mask_iova +
+ curr->ipaddr_rule.qos_ipsrc_offset,
+ ipsrc_mask,
+ size);
+ }
+ if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
+ memcpy((char *)(size_t)curr->qos_rule.key_iova +
+ curr->ipaddr_rule.qos_ipdst_offset,
+ ipdst_key,
+ size);
+ memcpy((char *)(size_t)curr->qos_rule.mask_iova +
+ curr->ipaddr_rule.qos_ipdst_offset,
+ ipdst_mask,
+ size);
+ }
- if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- index = priv->extract.fs_key_cfg[group].num_extracts;
- priv->extract.fs_key_cfg[group].extracts[index].type =
- DPKG_EXTRACT_FROM_HDR;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_GRE;
- priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_GRE_TYPE;
- index++;
+ if (extend >= 0)
+ curr->qos_rule.key_size += extend;
- priv->extract.fs_key_cfg[group].num_extracts = index;
- }
+ ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
+ priv->token, &curr->qos_rule,
+ curr->tc_id, curr->qos_index,
+ 0, 0);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS entry update failed.");
+ return -1;
+ }
- /* Parse pattern list to get the matching parameters */
- spec = (const struct rte_flow_item_gre *)pattern->spec;
- last = (const struct rte_flow_item_gre *)pattern->last;
- mask = (const struct rte_flow_item_gre *)
- (pattern->mask ? pattern->mask : default_mask);
+ if (curr->action != RTE_FLOW_ACTION_TYPE_QUEUE) {
+ curr = LIST_NEXT(curr, next);
+ continue;
+ }
+
+ extend = -1;
+
+ ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW,
+ priv->token, curr->tc_id, &curr->fs_rule);
+ if (ret) {
+ DPAA2_PMD_ERR("FS entry remove failed.");
+ return -1;
+ }
+
+ if (curr->ipaddr_rule.fs_ipsrc_offset >= 0 &&
+ tc_id == curr->tc_id) {
+ RTE_ASSERT(fs_ipsrc_offset >=
+ curr->ipaddr_rule.fs_ipsrc_offset);
+ extend1 = fs_ipsrc_offset -
+ curr->ipaddr_rule.fs_ipsrc_offset;
+ if (extend >= 0)
+ RTE_ASSERT(extend == extend1);
+ else
+ extend = extend1;
+
+ memcpy(ipsrc_key,
+ (char *)(size_t)curr->fs_rule.key_iova +
+ curr->ipaddr_rule.fs_ipsrc_offset,
+ size);
+ memset((char *)(size_t)curr->fs_rule.key_iova +
+ curr->ipaddr_rule.fs_ipsrc_offset,
+ 0, size);
+
+ memcpy(ipsrc_mask,
+ (char *)(size_t)curr->fs_rule.mask_iova +
+ curr->ipaddr_rule.fs_ipsrc_offset,
+ size);
+ memset((char *)(size_t)curr->fs_rule.mask_iova +
+ curr->ipaddr_rule.fs_ipsrc_offset,
+ 0, size);
+
+ curr->ipaddr_rule.fs_ipsrc_offset = fs_ipsrc_offset;
+ }
+
+ if (curr->ipaddr_rule.fs_ipdst_offset >= 0 &&
+ tc_id == curr->tc_id) {
+ RTE_ASSERT(fs_ipdst_offset >=
+ curr->ipaddr_rule.fs_ipdst_offset);
+ extend1 = fs_ipdst_offset -
+ curr->ipaddr_rule.fs_ipdst_offset;
+ if (extend >= 0)
+ RTE_ASSERT(extend == extend1);
+ else
+ extend = extend1;
+
+ memcpy(ipdst_key,
+ (char *)(size_t)curr->fs_rule.key_iova +
+ curr->ipaddr_rule.fs_ipdst_offset,
+ size);
+ memset((char *)(size_t)curr->fs_rule.key_iova +
+ curr->ipaddr_rule.fs_ipdst_offset,
+ 0, size);
+
+ memcpy(ipdst_mask,
+ (char *)(size_t)curr->fs_rule.mask_iova +
+ curr->ipaddr_rule.fs_ipdst_offset,
+ size);
+ memset((char *)(size_t)curr->fs_rule.mask_iova +
+ curr->ipaddr_rule.fs_ipdst_offset,
+ 0, size);
+
+ curr->ipaddr_rule.fs_ipdst_offset = fs_ipdst_offset;
+ }
+
+ if (curr->ipaddr_rule.fs_ipsrc_offset >= 0) {
+ memcpy((char *)(size_t)curr->fs_rule.key_iova +
+ curr->ipaddr_rule.fs_ipsrc_offset,
+ ipsrc_key,
+ size);
+ memcpy((char *)(size_t)curr->fs_rule.mask_iova +
+ curr->ipaddr_rule.fs_ipsrc_offset,
+ ipsrc_mask,
+ size);
+ }
+ if (curr->ipaddr_rule.fs_ipdst_offset >= 0) {
+ memcpy((char *)(size_t)curr->fs_rule.key_iova +
+ curr->ipaddr_rule.fs_ipdst_offset,
+ ipdst_key,
+ size);
+ memcpy((char *)(size_t)curr->fs_rule.mask_iova +
+ curr->ipaddr_rule.fs_ipdst_offset,
+ ipdst_mask,
+ size);
+ }
- key_iova = flow->rule.key_iova + flow->key_size;
- memcpy((void *)key_iova, (const void *)(&spec->protocol),
- sizeof(rte_be16_t));
+ if (extend >= 0)
+ curr->fs_rule.key_size += extend;
- mask_iova = flow->rule.mask_iova + flow->key_size;
- memcpy((void *)mask_iova, (const void *)(&mask->protocol),
- sizeof(rte_be16_t));
+ ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
+ priv->token, curr->tc_id, curr->fs_index,
+ &curr->fs_rule, &curr->action_cfg);
+ if (ret) {
+ DPAA2_PMD_ERR("FS entry update failed.");
+ return -1;
+ }
- flow->key_size += sizeof(rte_be16_t);
+ curr = LIST_NEXT(curr, next);
+ }
- return device_configured;
+ return 0;
}
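The heart of `dpaa2_flow_entry_update` is a byte re-layout: when new extracts are inserted before the IP address extracts, the address bytes stored in every existing rule key must move from their old offset to the new, larger one, and the rule's key size grows by the shift. A simplified standalone sketch of that move (an assumed simplification, not the driver's exact IOVA buffer handling):

```c
#include <assert.h>
#include <string.h>

/* Move the ip_size address bytes at old_off up to new_off inside a rule key,
 * zeroing the vacated bytes. Returns the key-size growth, or -1 on a bad
 * request. Mirrors the memcpy/memset/memcpy dance in dpaa2_flow_entry_update. */
static int shift_ip_field(unsigned char *key, size_t key_size,
			  size_t old_off, size_t new_off, size_t ip_size)
{
	unsigned char tmp[16];	/* big enough for an IPv6 address */

	if (new_off < old_off)	/* extracts are only ever inserted before */
		return -1;
	if (ip_size > sizeof(tmp) || new_off + ip_size > key_size)
		return -1;

	memcpy(tmp, key + old_off, ip_size);	/* save the address bytes */
	memset(key + old_off, 0, ip_size);	/* clear the old location */
	memcpy(key + new_off, tmp, ip_size);	/* place at the new offset */

	return (int)(new_off - old_off);	/* growth of the key size */
}
```

In the real function this is done once for the QoS rule and once per matching traffic class for the FS rule, removing and re-adding each hardware entry around the move.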
static int
@@ -1262,7 +2712,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
struct dpni_attr nic_attr;
struct dpni_rx_tc_dist_cfg tc_cfg;
struct dpni_qos_tbl_cfg qos_cfg;
- struct dpkg_profile_cfg key_cfg;
struct dpni_fs_action_cfg action;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
@@ -1273,75 +2722,77 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
while (!end_of_list) {
switch (pattern[i].type) {
case RTE_FLOW_ITEM_TYPE_ETH:
- is_keycfg_configured = dpaa2_configure_flow_eth(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_eth(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("ETH flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
- is_keycfg_configured = dpaa2_configure_flow_vlan(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_vlan(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("VLAN flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
- is_keycfg_configured = dpaa2_configure_flow_ipv4(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
- break;
case RTE_FLOW_ITEM_TYPE_IPV6:
- is_keycfg_configured = dpaa2_configure_flow_ipv6(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_generic_ip(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("IP flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_ICMP:
- is_keycfg_configured = dpaa2_configure_flow_icmp(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_icmp(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("ICMP flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_UDP:
- is_keycfg_configured = dpaa2_configure_flow_udp(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_udp(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("UDP flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_TCP:
- is_keycfg_configured = dpaa2_configure_flow_tcp(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_tcp(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("TCP flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_SCTP:
- is_keycfg_configured = dpaa2_configure_flow_sctp(flow,
- dev, attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_sctp(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("SCTP flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_GRE:
- is_keycfg_configured = dpaa2_configure_flow_gre(flow,
- dev,
- attr,
- &pattern[i],
- actions,
- error);
+ ret = dpaa2_configure_flow_gre(flow,
+ dev, attr, &pattern[i], actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("GRE flow configuration failed!");
+ return ret;
+ }
break;
case RTE_FLOW_ITEM_TYPE_END:
end_of_list = 1;
@@ -1365,8 +2816,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
action.flow_id = flow->flow_id;
if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- if (dpkg_prepare_key_cfg(&priv->extract.qos_key_cfg,
- (uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
+ if (dpkg_prepare_key_cfg(&priv->extract.qos_key_extract.dpkg,
+ (uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
DPAA2_PMD_ERR(
"Unable to prepare extract parameters");
return -1;
@@ -1377,7 +2828,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
qos_cfg.keep_entries = true;
qos_cfg.key_cfg_iova = (size_t)priv->extract.qos_extract_param;
ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
- priv->token, &qos_cfg);
+ priv->token, &qos_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
"Distribution cannot be configured.(%d)"
@@ -1386,8 +2837,10 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
}
if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- if (dpkg_prepare_key_cfg(&priv->extract.fs_key_cfg[flow->tc_id],
- (uint8_t *)(size_t)priv->extract.fs_extract_param[flow->tc_id]) < 0) {
+ if (dpkg_prepare_key_cfg(
+ &priv->extract.tc_key_extract[flow->tc_id].dpkg,
+ (uint8_t *)(size_t)priv->extract
+ .tc_extract_param[flow->tc_id]) < 0) {
DPAA2_PMD_ERR(
"Unable to prepare extract parameters");
return -1;
@@ -1397,7 +2850,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
tc_cfg.dist_size = priv->nb_rx_queues / priv->num_rx_tc;
tc_cfg.dist_mode = DPNI_DIST_MODE_FS;
tc_cfg.key_cfg_iova =
- (uint64_t)priv->extract.fs_extract_param[flow->tc_id];
+ (uint64_t)priv->extract.tc_extract_param[flow->tc_id];
tc_cfg.fs_cfg.miss_action = DPNI_FS_MISS_DROP;
tc_cfg.fs_cfg.keep_entries = true;
ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW,
@@ -1422,27 +2875,114 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
action.flow_id = action.flow_id % nic_attr.num_rx_tcs;
- index = flow->index + (flow->tc_id * nic_attr.fs_entries);
- flow->rule.key_size = flow->key_size;
+
+ if (!priv->qos_index) {
+ priv->qos_index = rte_zmalloc(0,
+ nic_attr.qos_entries, 64);
+ }
+ for (index = 0; index < nic_attr.qos_entries; index++) {
+ if (!priv->qos_index[index]) {
+ priv->qos_index[index] = 1;
+ break;
+ }
+ }
+ if (index >= nic_attr.qos_entries) {
+ DPAA2_PMD_ERR("QoS table with %d entries full",
+ nic_attr.qos_entries);
+ return -1;
+ }
+ flow->qos_rule.key_size = priv->extract
+ .qos_key_extract.key_info.key_total_size;
+ if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) {
+ if (flow->ipaddr_rule.qos_ipdst_offset >=
+ flow->ipaddr_rule.qos_ipsrc_offset) {
+ flow->qos_rule.key_size =
+ flow->ipaddr_rule.qos_ipdst_offset +
+ NH_FLD_IPV4_ADDR_SIZE;
+ } else {
+ flow->qos_rule.key_size =
+ flow->ipaddr_rule.qos_ipsrc_offset +
+ NH_FLD_IPV4_ADDR_SIZE;
+ }
+ } else if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV6_ADDR) {
+ if (flow->ipaddr_rule.qos_ipdst_offset >=
+ flow->ipaddr_rule.qos_ipsrc_offset) {
+ flow->qos_rule.key_size =
+ flow->ipaddr_rule.qos_ipdst_offset +
+ NH_FLD_IPV6_ADDR_SIZE;
+ } else {
+ flow->qos_rule.key_size =
+ flow->ipaddr_rule.qos_ipsrc_offset +
+ NH_FLD_IPV6_ADDR_SIZE;
+ }
+ }
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
- priv->token, &flow->rule,
+ priv->token, &flow->qos_rule,
flow->tc_id, index,
0, 0);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in addnig entry to QoS table(%d)", ret);
+ priv->qos_index[index] = 0;
return ret;
}
+ flow->qos_index = index;
/* Then Configure FS table */
+ if (!priv->fs_index) {
+ priv->fs_index = rte_zmalloc(0,
+ nic_attr.fs_entries, 64);
+ }
+ for (index = 0; index < nic_attr.fs_entries; index++) {
+ if (!priv->fs_index[index]) {
+ priv->fs_index[index] = 1;
+ break;
+ }
+ }
+ if (index >= nic_attr.fs_entries) {
+ DPAA2_PMD_ERR("FS table with %d entries full",
+ nic_attr.fs_entries);
+ return -1;
+ }
+ flow->fs_rule.key_size = priv->extract
+ .tc_key_extract[attr->group].key_info.key_total_size;
+ if (flow->ipaddr_rule.ipaddr_type ==
+ FLOW_IPV4_ADDR) {
+ if (flow->ipaddr_rule.fs_ipdst_offset >=
+ flow->ipaddr_rule.fs_ipsrc_offset) {
+ flow->fs_rule.key_size =
+ flow->ipaddr_rule.fs_ipdst_offset +
+ NH_FLD_IPV4_ADDR_SIZE;
+ } else {
+ flow->fs_rule.key_size =
+ flow->ipaddr_rule.fs_ipsrc_offset +
+ NH_FLD_IPV4_ADDR_SIZE;
+ }
+ } else if (flow->ipaddr_rule.ipaddr_type ==
+ FLOW_IPV6_ADDR) {
+ if (flow->ipaddr_rule.fs_ipdst_offset >=
+ flow->ipaddr_rule.fs_ipsrc_offset) {
+ flow->fs_rule.key_size =
+ flow->ipaddr_rule.fs_ipdst_offset +
+ NH_FLD_IPV6_ADDR_SIZE;
+ } else {
+ flow->fs_rule.key_size =
+ flow->ipaddr_rule.fs_ipsrc_offset +
+ NH_FLD_IPV6_ADDR_SIZE;
+ }
+ }
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token,
- flow->tc_id, flow->index,
- &flow->rule, &action);
+ flow->tc_id, index,
+ &flow->fs_rule, &action);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in adding entry to FS table(%d)", ret);
+ priv->fs_index[index] = 0;
return ret;
}
+ flow->fs_index = index;
+ memcpy(&flow->action_cfg, &action,
+ sizeof(struct dpni_fs_action_cfg));
break;
case RTE_FLOW_ACTION_TYPE_RSS:
ret = dpni_get_attributes(dpni, CMD_PRI_LOW,
@@ -1465,7 +3005,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
flow->action = RTE_FLOW_ACTION_TYPE_RSS;
ret = dpaa2_distset_to_dpkg_profile_cfg(rss_conf->types,
- &key_cfg);
+ &priv->extract.tc_key_extract[flow->tc_id].dpkg);
if (ret < 0) {
DPAA2_PMD_ERR(
"unable to set flow distribution.please check queue config\n");
@@ -1479,7 +3019,9 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return -1;
}
- if (dpkg_prepare_key_cfg(&key_cfg, (uint8_t *)param) < 0) {
+ if (dpkg_prepare_key_cfg(
+ &priv->extract.tc_key_extract[flow->tc_id].dpkg,
+ (uint8_t *)param) < 0) {
DPAA2_PMD_ERR(
"Unable to prepare extract parameters");
rte_free((void *)param);
@@ -1503,8 +3045,9 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
rte_free((void *)param);
- if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
- if (dpkg_prepare_key_cfg(&priv->extract.qos_key_cfg,
+ if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
+ if (dpkg_prepare_key_cfg(
+ &priv->extract.qos_key_extract.dpkg,
(uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
DPAA2_PMD_ERR(
"Unable to prepare extract parameters");
@@ -1514,29 +3057,47 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
sizeof(struct dpni_qos_tbl_cfg));
qos_cfg.discard_on_miss = true;
qos_cfg.keep_entries = true;
- qos_cfg.key_cfg_iova = (size_t)priv->extract.qos_extract_param;
+ qos_cfg.key_cfg_iova =
+ (size_t)priv->extract.qos_extract_param;
ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
priv->token, &qos_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Distribution can not be configured(%d)\n",
+ "Distribution can't be configured %d\n",
ret);
return -1;
}
}
/* Add Rule into QoS table */
- index = flow->index + (flow->tc_id * nic_attr.fs_entries);
- flow->rule.key_size = flow->key_size;
+ if (!priv->qos_index) {
+ priv->qos_index = rte_zmalloc(0,
+ nic_attr.qos_entries, 64);
+ }
+ for (index = 0; index < nic_attr.qos_entries; index++) {
+ if (!priv->qos_index[index]) {
+ priv->qos_index[index] = 1;
+ break;
+ }
+ }
+ if (index >= nic_attr.qos_entries) {
+ DPAA2_PMD_ERR("QoS table with %d entries full",
+ nic_attr.qos_entries);
+ return -1;
+ }
+ flow->qos_rule.key_size =
+ priv->extract.qos_key_extract.key_info.key_total_size;
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
- &flow->rule, flow->tc_id,
+ &flow->qos_rule, flow->tc_id,
index, 0, 0);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in entry addition in QoS table(%d)",
ret);
+ priv->qos_index[index] = 0;
return ret;
}
+ flow->qos_index = index;
break;
case RTE_FLOW_ACTION_TYPE_END:
end_of_list = 1;
@@ -1550,6 +3111,12 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
if (!ret) {
+ ret = dpaa2_flow_entry_update(priv, flow->tc_id);
+ if (ret) {
+ DPAA2_PMD_ERR("Flow entry update failed.");
+
+ return -1;
+ }
/* New rules are inserted. */
if (!curr) {
LIST_INSERT_HEAD(&priv->flows, flow, next);
@@ -1625,15 +3192,15 @@ dpaa2_dev_update_default_mask(const struct rte_flow_item *pattern)
}
static inline int
-dpaa2_dev_verify_patterns(struct dpaa2_dev_priv *dev_priv,
- const struct rte_flow_item pattern[])
+dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[])
{
- unsigned int i, j, k, is_found = 0;
+ unsigned int i, j, is_found = 0;
int ret = 0;
for (j = 0; pattern[j].type != RTE_FLOW_ITEM_TYPE_END; j++) {
for (i = 0; i < RTE_DIM(dpaa2_supported_pattern_type); i++) {
- if (dpaa2_supported_pattern_type[i] == pattern[j].type) {
+ if (dpaa2_supported_pattern_type[i]
+ == pattern[j].type) {
is_found = 1;
break;
}
@@ -1653,18 +3220,6 @@ dpaa2_dev_verify_patterns(struct dpaa2_dev_priv *dev_priv,
dpaa2_dev_update_default_mask(&pattern[j]);
}
- /* DPAA2 platform has a limitation that extract parameter can not be */
- /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too. */
- for (i = 0; pattern[i].type != RTE_FLOW_ITEM_TYPE_END; i++) {
- for (j = 0; j < MAX_TCS + 1; j++) {
- for (k = 0; k < DPKG_MAX_NUM_OF_EXTRACTS; k++) {
- if (dev_priv->pattern[j].pattern_type[k] == pattern[i].type)
- break;
- }
- if (dev_priv->pattern[j].item_count >= DPKG_MAX_NUM_OF_EXTRACTS)
- ret = -ENOTSUP;
- }
- }
return ret;
}
@@ -1687,7 +3242,8 @@ dpaa2_dev_verify_actions(const struct rte_flow_action actions[])
}
}
for (j = 0; actions[j].type != RTE_FLOW_ACTION_TYPE_END; j++) {
- if ((actions[j].type != RTE_FLOW_ACTION_TYPE_DROP) && (!actions[j].conf))
+ if ((actions[j].type
+ != RTE_FLOW_ACTION_TYPE_DROP) && (!actions[j].conf))
ret = -EINVAL;
}
return ret;
@@ -1729,7 +3285,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
goto not_valid_params;
}
/* Verify input pattern list */
- ret = dpaa2_dev_verify_patterns(priv, pattern);
+ ret = dpaa2_dev_verify_patterns(pattern);
if (ret < 0) {
DPAA2_PMD_ERR(
"Invalid pattern list is given\n");
@@ -1763,28 +3319,54 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
size_t key_iova = 0, mask_iova = 0;
int ret;
- flow = rte_malloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
+ flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
if (!flow) {
DPAA2_PMD_ERR("Failure to allocate memory for flow");
goto mem_failure;
}
/* Allocate DMA'ble memory to write the rules */
- key_iova = (size_t)rte_malloc(NULL, 256, 64);
+ key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
+ if (!key_iova) {
+ DPAA2_PMD_ERR(
+ "Memory allocation failure for rule configration\n");
+ goto mem_failure;
+ }
+ mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
+ if (!mask_iova) {
+ DPAA2_PMD_ERR(
+ "Memory allocation failure for rule configration\n");
+ goto mem_failure;
+ }
+
+ flow->qos_rule.key_iova = key_iova;
+ flow->qos_rule.mask_iova = mask_iova;
+
+ /* Allocate DMA'ble memory to write the rules */
+ key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
if (!key_iova) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configration\n");
goto mem_failure;
}
- mask_iova = (size_t)rte_malloc(NULL, 256, 64);
+ mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
if (!mask_iova) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configration\n");
goto mem_failure;
}
- flow->rule.key_iova = key_iova;
- flow->rule.mask_iova = mask_iova;
- flow->key_size = 0;
+ flow->fs_rule.key_iova = key_iova;
+ flow->fs_rule.mask_iova = mask_iova;
+
+ flow->ipaddr_rule.ipaddr_type = FLOW_NONE_IPADDR;
+ flow->ipaddr_rule.qos_ipsrc_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ flow->ipaddr_rule.qos_ipdst_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ flow->ipaddr_rule.fs_ipsrc_offset =
+ IP_ADDRESS_OFFSET_INVALID;
+ flow->ipaddr_rule.fs_ipdst_offset =
+ IP_ADDRESS_OFFSET_INVALID;
switch (dpaa2_filter_type) {
case RTE_ETH_FILTER_GENERIC:
@@ -1832,25 +3414,27 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
case RTE_FLOW_ACTION_TYPE_QUEUE:
/* Remove entry from QoS table first */
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
- &flow->rule);
+ &flow->qos_rule);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in adding entry to QoS table(%d)", ret);
goto error;
}
+ priv->qos_index[flow->qos_index] = 0;
/* Then remove entry from FS table */
ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW, priv->token,
- flow->tc_id, &flow->rule);
+ flow->tc_id, &flow->fs_rule);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in entry addition in FS table(%d)", ret);
goto error;
}
+ priv->fs_index[flow->fs_index] = 0;
break;
case RTE_FLOW_ACTION_TYPE_RSS:
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
- &flow->rule);
+ &flow->qos_rule);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in entry addition in QoS table(%d)", ret);
--
2.17.1
* [dpdk-dev] [PATCH v2 17/29] net/dpaa2: add sanity check for flow extracts
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (15 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 16/29] net/dpaa2: support key extracts of flow API Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 18/29] net/dpaa2: free flow rule memory Hemant Agrawal
` (12 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Define the supported extract fields for each protocol and validate the fields
of each pattern before building the extracts of the QoS/FS table.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 7 +-
drivers/net/dpaa2/dpaa2_flow.c | 250 +++++++++++++++++++++++++------
2 files changed, 204 insertions(+), 53 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 492b65840..fd3097c7d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2610,11 +2610,8 @@ dpaa2_dev_uninit(struct rte_eth_dev *eth_dev)
eth_dev->process_private = NULL;
rte_free(dpni);
- for (i = 0; i < MAX_TCS; i++) {
- if (priv->extract.tc_extract_param[i])
- rte_free((void *)
- (size_t)priv->extract.tc_extract_param[i]);
- }
+ for (i = 0; i < MAX_TCS; i++)
+ rte_free((void *)(size_t)priv->extract.tc_extract_param[i]);
if (priv->extract.qos_extract_param)
rte_free((void *)(size_t)priv->extract.qos_extract_param);
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 779cb64ab..507a5d0e3 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -87,7 +87,68 @@ enum rte_flow_action_type dpaa2_supported_action_type[] = {
#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1)
enum rte_filter_type dpaa2_filter_type = RTE_ETH_FILTER_NONE;
-static const void *default_mask;
+
+#ifndef __cplusplus
+static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
+ .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+ .src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+ .type = RTE_BE16(0xffff),
+};
+
+static const struct rte_flow_item_vlan dpaa2_flow_item_vlan_mask = {
+ .tci = RTE_BE16(0xffff),
+};
+
+static const struct rte_flow_item_ipv4 dpaa2_flow_item_ipv4_mask = {
+ .hdr.src_addr = RTE_BE32(0xffffffff),
+ .hdr.dst_addr = RTE_BE32(0xffffffff),
+ .hdr.next_proto_id = 0xff,
+};
+
+static const struct rte_flow_item_ipv6 dpaa2_flow_item_ipv6_mask = {
+ .hdr = {
+ .src_addr =
+ "\xff\xff\xff\xff\xff\xff\xff\xff"
+ "\xff\xff\xff\xff\xff\xff\xff\xff",
+ .dst_addr =
+ "\xff\xff\xff\xff\xff\xff\xff\xff"
+ "\xff\xff\xff\xff\xff\xff\xff\xff",
+ .proto = 0xff
+ },
+};
+
+static const struct rte_flow_item_icmp dpaa2_flow_item_icmp_mask = {
+ .hdr.icmp_type = 0xff,
+ .hdr.icmp_code = 0xff,
+};
+
+static const struct rte_flow_item_udp dpaa2_flow_item_udp_mask = {
+ .hdr = {
+ .src_port = RTE_BE16(0xffff),
+ .dst_port = RTE_BE16(0xffff),
+ },
+};
+
+static const struct rte_flow_item_tcp dpaa2_flow_item_tcp_mask = {
+ .hdr = {
+ .src_port = RTE_BE16(0xffff),
+ .dst_port = RTE_BE16(0xffff),
+ },
+};
+
+static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
+ .hdr = {
+ .src_port = RTE_BE16(0xffff),
+ .dst_port = RTE_BE16(0xffff),
+ },
+};
+
+static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
+ .protocol = RTE_BE16(0xffff),
+};
+
+#endif
+
static inline void dpaa2_flow_extract_key_set(
struct dpaa2_key_info *key_info, int index, uint8_t size)
@@ -555,6 +616,67 @@ dpaa2_flow_rule_move_ipaddr_tail(
return 0;
}
+static int
+dpaa2_flow_extract_support(
+ const uint8_t *mask_src,
+ enum rte_flow_item_type type)
+{
+ char mask[64];
+ int i, size = 0;
+ const char *mask_support = 0;
+
+ switch (type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ mask_support = (const char *)&dpaa2_flow_item_eth_mask;
+ size = sizeof(struct rte_flow_item_eth);
+ break;
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ mask_support = (const char *)&dpaa2_flow_item_vlan_mask;
+ size = sizeof(struct rte_flow_item_vlan);
+ break;
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ mask_support = (const char *)&dpaa2_flow_item_ipv4_mask;
+ size = sizeof(struct rte_flow_item_ipv4);
+ break;
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ mask_support = (const char *)&dpaa2_flow_item_ipv6_mask;
+ size = sizeof(struct rte_flow_item_ipv6);
+ break;
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ mask_support = (const char *)&dpaa2_flow_item_icmp_mask;
+ size = sizeof(struct rte_flow_item_icmp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ mask_support = (const char *)&dpaa2_flow_item_udp_mask;
+ size = sizeof(struct rte_flow_item_udp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ mask_support = (const char *)&dpaa2_flow_item_tcp_mask;
+ size = sizeof(struct rte_flow_item_tcp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ mask_support = (const char *)&dpaa2_flow_item_sctp_mask;
+ size = sizeof(struct rte_flow_item_sctp);
+ break;
+ case RTE_FLOW_ITEM_TYPE_GRE:
+ mask_support = (const char *)&dpaa2_flow_item_gre_mask;
+ size = sizeof(struct rte_flow_item_gre);
+ break;
+ default:
+ return -1;
+ }
+
+ memcpy(mask, mask_support, size);
+
+ for (i = 0; i < size; i++)
+ mask[i] = (mask[i] | mask_src[i]);
+
+ if (memcmp(mask, mask_support, size))
+ return -1;
+
+ return 0;
+}
+
static int
dpaa2_configure_flow_eth(struct rte_flow *flow,
struct rte_eth_dev *dev,
@@ -580,7 +702,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
spec = (const struct rte_flow_item_eth *)pattern->spec;
last = (const struct rte_flow_item_eth *)pattern->last;
mask = (const struct rte_flow_item_eth *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask : &dpaa2_flow_item_eth_mask);
if (!spec) {
/* Don't care any field of eth header,
* only care eth protocol.
@@ -593,6 +715,13 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
flow->tc_id = group;
flow->tc_index = attr->priority;
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_ETH)) {
+ DPAA2_PMD_WARN("Extract field(s) of ethernet not support.");
+
+ return -1;
+ }
+
if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) {
index = dpaa2_flow_extract_search(
&priv->extract.qos_key_extract.dpkg,
@@ -819,7 +948,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
spec = (const struct rte_flow_item_vlan *)pattern->spec;
last = (const struct rte_flow_item_vlan *)pattern->last;
mask = (const struct rte_flow_item_vlan *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask : &dpaa2_flow_item_vlan_mask);
/* Get traffic class index and flow id to be configured */
flow->tc_id = group;
@@ -886,6 +1015,13 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
return 0;
}
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_VLAN)) {
+ DPAA2_PMD_WARN("Extract field(s) of vlan not support.");
+
+ return -1;
+ }
+
if (!mask->tci)
return 0;
@@ -990,11 +1126,13 @@ dpaa2_configure_flow_generic_ip(
if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
spec_ipv4 = (const struct rte_flow_item_ipv4 *)pattern->spec;
mask_ipv4 = (const struct rte_flow_item_ipv4 *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask :
+ &dpaa2_flow_item_ipv4_mask);
} else {
spec_ipv6 = (const struct rte_flow_item_ipv6 *)pattern->spec;
mask_ipv6 = (const struct rte_flow_item_ipv6 *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask :
+ &dpaa2_flow_item_ipv6_mask);
}
/* Get traffic class index and flow id to be configured */
@@ -1069,6 +1207,24 @@ dpaa2_configure_flow_generic_ip(
return 0;
}
+ if (mask_ipv4) {
+ if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
+ RTE_FLOW_ITEM_TYPE_IPV4)) {
+ DPAA2_PMD_WARN("Extract field(s) of IPv4 not support.");
+
+ return -1;
+ }
+ }
+
+ if (mask_ipv6) {
+ if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6,
+ RTE_FLOW_ITEM_TYPE_IPV6)) {
+ DPAA2_PMD_WARN("Extract field(s) of IPv6 not support.");
+
+ return -1;
+ }
+ }
+
if (mask_ipv4 && (mask_ipv4->hdr.src_addr ||
mask_ipv4->hdr.dst_addr)) {
flow->ipaddr_rule.ipaddr_type = FLOW_IPV4_ADDR;
@@ -1358,7 +1514,7 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
spec = (const struct rte_flow_item_icmp *)pattern->spec;
last = (const struct rte_flow_item_icmp *)pattern->last;
mask = (const struct rte_flow_item_icmp *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask : &dpaa2_flow_item_icmp_mask);
/* Get traffic class index and flow id to be configured */
flow->tc_id = group;
@@ -1427,6 +1583,13 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
return 0;
}
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_ICMP)) {
+ DPAA2_PMD_WARN("Extract field(s) of ICMP not support.");
+
+ return -1;
+ }
+
if (mask->hdr.icmp_type) {
index = dpaa2_flow_extract_search(
&priv->extract.qos_key_extract.dpkg,
@@ -1593,7 +1756,7 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
spec = (const struct rte_flow_item_udp *)pattern->spec;
last = (const struct rte_flow_item_udp *)pattern->last;
mask = (const struct rte_flow_item_udp *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask : &dpaa2_flow_item_udp_mask);
/* Get traffic class index and flow id to be configured */
flow->tc_id = group;
@@ -1656,6 +1819,13 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
return 0;
}
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_UDP)) {
+ DPAA2_PMD_WARN("Extract field(s) of UDP not support.");
+
+ return -1;
+ }
+
if (mask->hdr.src_port) {
index = dpaa2_flow_extract_search(
&priv->extract.qos_key_extract.dpkg,
@@ -1825,7 +1995,7 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
spec = (const struct rte_flow_item_tcp *)pattern->spec;
last = (const struct rte_flow_item_tcp *)pattern->last;
mask = (const struct rte_flow_item_tcp *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask : &dpaa2_flow_item_tcp_mask);
/* Get traffic class index and flow id to be configured */
flow->tc_id = group;
@@ -1888,6 +2058,13 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
return 0;
}
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_TCP)) {
+ DPAA2_PMD_WARN("Extract field(s) of TCP not support.");
+
+ return -1;
+ }
+
if (mask->hdr.src_port) {
index = dpaa2_flow_extract_search(
&priv->extract.qos_key_extract.dpkg,
@@ -2058,7 +2235,8 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
spec = (const struct rte_flow_item_sctp *)pattern->spec;
last = (const struct rte_flow_item_sctp *)pattern->last;
mask = (const struct rte_flow_item_sctp *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask :
+ &dpaa2_flow_item_sctp_mask);
/* Get traffic class index and flow id to be configured */
flow->tc_id = group;
@@ -2121,6 +2299,13 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
return 0;
}
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_SCTP)) {
+ DPAA2_PMD_WARN("Extract field(s) of SCTP not support.");
+
+ return -1;
+ }
+
if (mask->hdr.src_port) {
index = dpaa2_flow_extract_search(
&priv->extract.qos_key_extract.dpkg,
@@ -2291,7 +2476,7 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
spec = (const struct rte_flow_item_gre *)pattern->spec;
last = (const struct rte_flow_item_gre *)pattern->last;
mask = (const struct rte_flow_item_gre *)
- (pattern->mask ? pattern->mask : default_mask);
+ (pattern->mask ? pattern->mask : &dpaa2_flow_item_gre_mask);
/* Get traffic class index and flow id to be configured */
flow->tc_id = group;
@@ -2353,6 +2538,13 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
return 0;
}
+ if (dpaa2_flow_extract_support((const uint8_t *)mask,
+ RTE_FLOW_ITEM_TYPE_GRE)) {
+ DPAA2_PMD_WARN("Extract field(s) of GRE not support.");
+
+ return -1;
+ }
+
if (!mask->protocol)
return 0;
@@ -3155,42 +3347,6 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
return ret;
}
-static inline void
-dpaa2_dev_update_default_mask(const struct rte_flow_item *pattern)
-{
- switch (pattern->type) {
- case RTE_FLOW_ITEM_TYPE_ETH:
- default_mask = (const void *)&rte_flow_item_eth_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_VLAN:
- default_mask = (const void *)&rte_flow_item_vlan_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_IPV4:
- default_mask = (const void *)&rte_flow_item_ipv4_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- default_mask = (const void *)&rte_flow_item_ipv6_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_ICMP:
- default_mask = (const void *)&rte_flow_item_icmp_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- default_mask = (const void *)&rte_flow_item_udp_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- default_mask = (const void *)&rte_flow_item_tcp_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_SCTP:
- default_mask = (const void *)&rte_flow_item_sctp_mask;
- break;
- case RTE_FLOW_ITEM_TYPE_GRE:
- default_mask = (const void *)&rte_flow_item_gre_mask;
- break;
- default:
- DPAA2_PMD_ERR("Invalid pattern type");
- }
-}
-
static inline int
dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[])
{
@@ -3216,8 +3372,6 @@ dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[])
ret = -EINVAL;
break;
}
- if ((pattern[j].last) && (!pattern[j].mask))
- dpaa2_dev_update_default_mask(&pattern[j]);
}
return ret;
--
2.17.1
* [dpdk-dev] [PATCH v2 18/29] net/dpaa2: free flow rule memory
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (16 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 17/29] net/dpaa2: add sanity check for flow extracts Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 19/29] net/dpaa2: support QoS or FS table entry indexing Hemant Agrawal
` (11 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Free rule memory when the flow is destroyed.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 507a5d0e3..941d62b80 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -3594,6 +3594,7 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
"Error in entry addition in QoS table(%d)", ret);
goto error;
}
+ priv->qos_index[flow->qos_index] = 0;
break;
default:
DPAA2_PMD_ERR(
@@ -3603,6 +3604,10 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
}
LIST_REMOVE(flow, next);
+ rte_free((void *)(size_t)flow->qos_rule.key_iova);
+ rte_free((void *)(size_t)flow->qos_rule.mask_iova);
+ rte_free((void *)(size_t)flow->fs_rule.key_iova);
+ rte_free((void *)(size_t)flow->fs_rule.mask_iova);
/* Now free the flow */
rte_free(flow);
--
2.17.1
* [dpdk-dev] [PATCH v2 19/29] net/dpaa2: support QoS or FS table entry indexing
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (17 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 18/29] net/dpaa2: free flow rule memory Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 20/29] net/dpaa2: define the size of table entry Hemant Agrawal
` (10 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Calculate the QoS/FS entry index from the group and priority of the flow:
1) The lower the entry index, the higher the priority of the flow.
2) Before creating a flow, verify that no flow with the same group and
priority has already been added.
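The index derivation above can be sketched as a standalone helper (hypothetical name; the patch computes the same expression inline as `flow->tc_id * priv->fs_entries + flow->tc_index`):

```c
#include <assert.h>
#include <stdint.h>

/* QoS entry index derived from the flow's traffic class (group) and its
 * priority within that class: each class occupies a contiguous block of
 * fs_entries slots, and a lower index means a higher-priority flow.
 */
static uint16_t qos_entry_index(uint8_t tc_id, uint16_t tc_index,
				uint16_t fs_entries)
{
	return (uint16_t)(tc_id * fs_entries + tc_index);
}
```

Because the index is a pure function of (group, priority), two flows with the same pair would collide in the table, which is why the duplicate check in `dpaa2_flow_verify_attr()` is needed.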
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 4 +
drivers/net/dpaa2/dpaa2_ethdev.h | 5 +-
drivers/net/dpaa2/dpaa2_flow.c | 127 +++++++++++++------------------
3 files changed, 59 insertions(+), 77 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index fd3097c7d..008e1c570 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2392,6 +2392,10 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
}
priv->num_rx_tc = attr.num_rx_tcs;
+ priv->qos_entries = attr.qos_entries;
+ priv->fs_entries = attr.fs_entries;
+ priv->dist_queues = attr.num_queues;
+
/* only if the custom CG is enabled */
if (attr.options & DPNI_OPT_CUSTOM_CG)
priv->max_cgs = attr.num_cgs;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 030c625e3..b49b88a2d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -145,6 +145,9 @@ struct dpaa2_dev_priv {
uint8_t max_mac_filters;
uint8_t max_vlan_filters;
uint8_t num_rx_tc;
+ uint16_t qos_entries;
+ uint16_t fs_entries;
+ uint8_t dist_queues;
uint8_t flags; /*dpaa2 config flags */
uint8_t en_ordered;
uint8_t en_loose_ordered;
@@ -152,8 +155,6 @@ struct dpaa2_dev_priv {
uint8_t cgid_in_use[MAX_RX_QUEUES];
struct extract_s extract;
- uint8_t *qos_index;
- uint8_t *fs_index;
uint16_t ss_offset;
uint64_t ss_iova;
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 941d62b80..760a8a793 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -47,11 +47,8 @@ struct rte_flow {
LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
struct dpni_rule_cfg qos_rule;
struct dpni_rule_cfg fs_rule;
- uint16_t qos_index;
- uint16_t fs_index;
uint8_t key_size;
uint8_t tc_id; /** Traffic Class ID. */
- uint8_t flow_type;
uint8_t tc_index; /** index within this Traffic Class. */
enum rte_flow_action_type action;
uint16_t flow_id;
@@ -2645,6 +2642,7 @@ dpaa2_flow_entry_update(
char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE];
char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE];
int extend = -1, extend1, size;
+ uint16_t qos_index;
while (curr) {
if (curr->ipaddr_rule.ipaddr_type ==
@@ -2676,6 +2674,9 @@ dpaa2_flow_entry_update(
size = NH_FLD_IPV6_ADDR_SIZE;
}
+ qos_index = curr->tc_id * priv->fs_entries +
+ curr->tc_index;
+
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &curr->qos_rule);
if (ret) {
@@ -2769,7 +2770,7 @@ dpaa2_flow_entry_update(
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &curr->qos_rule,
- curr->tc_id, curr->qos_index,
+ curr->tc_id, qos_index,
0, 0);
if (ret) {
DPAA2_PMD_ERR("Qos entry update failed.");
@@ -2875,7 +2876,7 @@ dpaa2_flow_entry_update(
curr->fs_rule.key_size += extend;
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
- priv->token, curr->tc_id, curr->fs_index,
+ priv->token, curr->tc_id, curr->tc_index,
&curr->fs_rule, &curr->action_cfg);
if (ret) {
DPAA2_PMD_ERR("FS entry update failed.");
@@ -2888,6 +2889,28 @@ dpaa2_flow_entry_update(
return 0;
}
+static inline int
+dpaa2_flow_verify_attr(
+ struct dpaa2_dev_priv *priv,
+ const struct rte_flow_attr *attr)
+{
+ struct rte_flow *curr = LIST_FIRST(&priv->flows);
+
+ while (curr) {
+ if (curr->tc_id == attr->group &&
+ curr->tc_index == attr->priority) {
+ DPAA2_PMD_ERR(
+ "Flow with group %d and priority %d already exists.",
+ attr->group, attr->priority);
+
+ return -1;
+ }
+ curr = LIST_NEXT(curr, next);
+ }
+
+ return 0;
+}
+
static int
dpaa2_generic_flow_set(struct rte_flow *flow,
struct rte_eth_dev *dev,
@@ -2898,10 +2921,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
{
const struct rte_flow_action_queue *dest_queue;
const struct rte_flow_action_rss *rss_conf;
- uint16_t index;
int is_keycfg_configured = 0, end_of_list = 0;
int ret = 0, i = 0, j = 0;
- struct dpni_attr nic_attr;
struct dpni_rx_tc_dist_cfg tc_cfg;
struct dpni_qos_tbl_cfg qos_cfg;
struct dpni_fs_action_cfg action;
@@ -2909,6 +2930,11 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
size_t param;
struct rte_flow *curr = LIST_FIRST(&priv->flows);
+ uint16_t qos_index;
+
+ ret = dpaa2_flow_verify_attr(priv, attr);
+ if (ret)
+ return ret;
/* Parse pattern list to get the matching parameters */
while (!end_of_list) {
@@ -3056,31 +3082,15 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
}
/* Configure QoS table first */
- memset(&nic_attr, 0, sizeof(struct dpni_attr));
- ret = dpni_get_attributes(dpni, CMD_PRI_LOW,
- priv->token, &nic_attr);
- if (ret < 0) {
- DPAA2_PMD_ERR(
- "Failure to get attribute. dpni@%p err code(%d)\n",
- dpni, ret);
- return ret;
- }
- action.flow_id = action.flow_id % nic_attr.num_rx_tcs;
+ action.flow_id = action.flow_id % priv->num_rx_tc;
- if (!priv->qos_index) {
- priv->qos_index = rte_zmalloc(0,
- nic_attr.qos_entries, 64);
- }
- for (index = 0; index < nic_attr.qos_entries; index++) {
- if (!priv->qos_index[index]) {
- priv->qos_index[index] = 1;
- break;
- }
- }
- if (index >= nic_attr.qos_entries) {
+ qos_index = flow->tc_id * priv->fs_entries +
+ flow->tc_index;
+
+ if (qos_index >= priv->qos_entries) {
DPAA2_PMD_ERR("QoS table with %d entries full",
- nic_attr.qos_entries);
+ priv->qos_entries);
return -1;
}
flow->qos_rule.key_size = priv->extract
@@ -3110,30 +3120,18 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &flow->qos_rule,
- flow->tc_id, index,
+ flow->tc_id, qos_index,
0, 0);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in adding entry to QoS table(%d)", ret);
- priv->qos_index[index] = 0;
return ret;
}
- flow->qos_index = index;
/* Then Configure FS table */
- if (!priv->fs_index) {
- priv->fs_index = rte_zmalloc(0,
- nic_attr.fs_entries, 64);
- }
- for (index = 0; index < nic_attr.fs_entries; index++) {
- if (!priv->fs_index[index]) {
- priv->fs_index[index] = 1;
- break;
- }
- }
- if (index >= nic_attr.fs_entries) {
+ if (flow->tc_index >= priv->fs_entries) {
DPAA2_PMD_ERR("FS table with %d entries full",
- nic_attr.fs_entries);
+ priv->fs_entries);
return -1;
}
flow->fs_rule.key_size = priv->extract
@@ -3164,31 +3162,23 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
}
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token,
- flow->tc_id, index,
+ flow->tc_id, flow->tc_index,
&flow->fs_rule, &action);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in adding entry to FS table(%d)", ret);
- priv->fs_index[index] = 0;
return ret;
}
- flow->fs_index = index;
memcpy(&flow->action_cfg, &action,
sizeof(struct dpni_fs_action_cfg));
break;
case RTE_FLOW_ACTION_TYPE_RSS:
- ret = dpni_get_attributes(dpni, CMD_PRI_LOW,
- priv->token, &nic_attr);
- if (ret < 0) {
- DPAA2_PMD_ERR(
- "Failure to get attribute. dpni@%p err code(%d)\n",
- dpni, ret);
- return ret;
- }
rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf);
for (i = 0; i < (int)rss_conf->queue_num; i++) {
- if (rss_conf->queue[i] < (attr->group * nic_attr.num_queues) ||
- rss_conf->queue[i] >= ((attr->group + 1) * nic_attr.num_queues)) {
+ if (rss_conf->queue[i] <
+ (attr->group * priv->dist_queues) ||
+ rss_conf->queue[i] >=
+ ((attr->group + 1) * priv->dist_queues)) {
DPAA2_PMD_ERR(
"Queue/Group combination are not supported\n");
return -ENOTSUP;
@@ -3262,34 +3252,24 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
/* Add Rule into QoS table */
- if (!priv->qos_index) {
- priv->qos_index = rte_zmalloc(0,
- nic_attr.qos_entries, 64);
- }
- for (index = 0; index < nic_attr.qos_entries; index++) {
- if (!priv->qos_index[index]) {
- priv->qos_index[index] = 1;
- break;
- }
- }
- if (index >= nic_attr.qos_entries) {
+ qos_index = flow->tc_id * priv->fs_entries +
+ flow->tc_index;
+ if (qos_index >= priv->qos_entries) {
DPAA2_PMD_ERR("QoS table with %d entries full",
- nic_attr.qos_entries);
+ priv->qos_entries);
return -1;
}
flow->qos_rule.key_size =
priv->extract.qos_key_extract.key_info.key_total_size;
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
&flow->qos_rule, flow->tc_id,
- index, 0, 0);
+ qos_index, 0, 0);
if (ret < 0) {
DPAA2_PMD_ERR(
"Error in entry addition in QoS table(%d)",
ret);
- priv->qos_index[index] = 0;
return ret;
}
- flow->qos_index = index;
break;
case RTE_FLOW_ACTION_TYPE_END:
end_of_list = 1;
@@ -3574,7 +3554,6 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
"Error in adding entry to QoS table(%d)", ret);
goto error;
}
- priv->qos_index[flow->qos_index] = 0;
/* Then remove entry from FS table */
ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW, priv->token,
@@ -3584,7 +3563,6 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
"Error in entry addition in FS table(%d)", ret);
goto error;
}
- priv->fs_index[flow->fs_index] = 0;
break;
case RTE_FLOW_ACTION_TYPE_RSS:
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
@@ -3594,7 +3572,6 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
"Error in entry addition in QoS table(%d)", ret);
goto error;
}
- priv->qos_index[flow->qos_index] = 0;
break;
default:
DPAA2_PMD_ERR(
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
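The indexing scheme introduced in the hunks above replaces the free-slot bitmap scan with a direct computation from the flow's TC id and its index within the TC. A minimal sketch of that computation (the helper name and parameter names are illustrative, mirroring the `priv->fs_entries`/`priv->qos_entries` fields used in the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mirror of the patch's indexing: a flow's QoS entry index
 * is derived from its TC id and its index within that TC, instead of
 * scanning a per-device bitmap for a free slot. Returns -1 when the
 * computed index would overflow the QoS table. */
static int
qos_index_of(uint16_t tc_id, uint16_t tc_index,
	     uint16_t fs_entries, uint16_t qos_entries)
{
	uint16_t qos_index = tc_id * fs_entries + tc_index;

	if (qos_index >= qos_entries)
		return -1;	/* QoS table full */
	return qos_index;
}
```

Because the index is a pure function of `(tc_id, tc_index)`, deleting a flow no longer needs to clear any allocator state, which is why the patch can drop the `priv->qos_index[]`/`priv->fs_index[]` arrays entirely.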
* [dpdk-dev] [PATCH v2 20/29] net/dpaa2: define the size of table entry
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (18 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 19/29] net/dpaa2: support QoS or FS table entry indexing Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 21/29] net/dpaa2: add logging of flow extracts and rules Hemant Agrawal
` (9 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
If the entry size is not bigger than 27 bytes, the MC allocates one TCAM
entry; otherwise it allocates two TCAM entries.
The size of the extracts performed by hardware must not exceed the TCAM
entry size (27 or 54 bytes), so define the flow entry size as 54.
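The sizing rule described above can be sketched as follows (the constant names mirror the patch's `FIXED_ENTRY_SIZE`; the helper itself is illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

#define TCAM_ENTRY_SIZE  27	/* one MC TCAM entry, in bytes */
#define FIXED_ENTRY_SIZE 54	/* two TCAM entries, as the patch defines */

/* Illustrative helper: number of TCAM entries the MC would allocate
 * for a key of the given size. */
static int
tcam_entries_for(uint8_t key_size)
{
	return (key_size <= TCAM_ENTRY_SIZE) ? 1 : 2;
}
```

Always programming rules with `key_size = FIXED_ENTRY_SIZE` while tracking the actual extract length separately (`qos_real_key_size`/`fs_real_key_size`) keeps every entry at the two-TCAM-entry size, so later key-size growth never forces a table-wide re-allocation.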
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 90 ++++++++++++++++++++++------------
1 file changed, 60 insertions(+), 30 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 760a8a793..bcbd5977a 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -29,6 +29,8 @@
*/
int mc_l4_port_identification;
+#define FIXED_ENTRY_SIZE 54
+
enum flow_rule_ipaddr_type {
FLOW_NONE_IPADDR,
FLOW_IPV4_ADDR,
@@ -47,7 +49,8 @@ struct rte_flow {
LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
struct dpni_rule_cfg qos_rule;
struct dpni_rule_cfg fs_rule;
- uint8_t key_size;
+ uint8_t qos_real_key_size;
+ uint8_t fs_real_key_size;
uint8_t tc_id; /** Traffic Class ID. */
uint8_t tc_index; /** index within this Traffic Class. */
enum rte_flow_action_type action;
@@ -478,6 +481,7 @@ dpaa2_flow_rule_data_set(
prot, field);
return -1;
}
+
memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
@@ -523,9 +527,11 @@ _dpaa2_flow_rule_move_ipaddr_tail(
len = NH_FLD_IPV6_ADDR_SIZE;
memcpy(tmp, (char *)key_src, len);
+ memset((char *)key_src, 0, len);
memcpy((char *)key_dst, tmp, len);
memcpy(tmp, (char *)mask_src, len);
+ memset((char *)mask_src, 0, len);
memcpy((char *)mask_dst, tmp, len);
return 0;
@@ -1251,8 +1257,7 @@ dpaa2_configure_flow_generic_ip(
return -1;
}
- local_cfg |= (DPAA2_QOS_TABLE_RECONFIGURE |
- DPAA2_QOS_TABLE_IPADDR_EXTRACT);
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
index = dpaa2_flow_extract_search(
@@ -1269,8 +1274,7 @@ dpaa2_configure_flow_generic_ip(
return -1;
}
- local_cfg |= (DPAA2_FS_TABLE_RECONFIGURE |
- DPAA2_FS_TABLE_IPADDR_EXTRACT);
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
if (spec_ipv4)
@@ -1339,8 +1343,7 @@ dpaa2_configure_flow_generic_ip(
return -1;
}
- local_cfg |= (DPAA2_QOS_TABLE_RECONFIGURE |
- DPAA2_QOS_TABLE_IPADDR_EXTRACT);
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
}
index = dpaa2_flow_extract_search(
@@ -1361,8 +1364,7 @@ dpaa2_configure_flow_generic_ip(
return -1;
}
- local_cfg |= (DPAA2_FS_TABLE_RECONFIGURE |
- DPAA2_FS_TABLE_IPADDR_EXTRACT);
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
}
if (spec_ipv4)
@@ -2641,7 +2643,7 @@ dpaa2_flow_entry_update(
char ipdst_key[NH_FLD_IPV6_ADDR_SIZE];
char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE];
char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE];
- int extend = -1, extend1, size;
+ int extend = -1, extend1, size = -1;
uint16_t qos_index;
while (curr) {
@@ -2696,6 +2698,9 @@ dpaa2_flow_entry_update(
else
extend = extend1;
+ RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
+ (size == NH_FLD_IPV6_ADDR_SIZE));
+
memcpy(ipsrc_key,
(char *)(size_t)curr->qos_rule.key_iova +
curr->ipaddr_rule.qos_ipsrc_offset,
@@ -2725,6 +2730,9 @@ dpaa2_flow_entry_update(
else
extend = extend1;
+ RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
+ (size == NH_FLD_IPV6_ADDR_SIZE));
+
memcpy(ipdst_key,
(char *)(size_t)curr->qos_rule.key_iova +
curr->ipaddr_rule.qos_ipdst_offset,
@@ -2745,6 +2753,8 @@ dpaa2_flow_entry_update(
}
if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) {
+ RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
+ (size == NH_FLD_IPV6_ADDR_SIZE));
memcpy((char *)(size_t)curr->qos_rule.key_iova +
curr->ipaddr_rule.qos_ipsrc_offset,
ipsrc_key,
@@ -2755,6 +2765,8 @@ dpaa2_flow_entry_update(
size);
}
if (curr->ipaddr_rule.qos_ipdst_offset >= 0) {
+ RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) ||
+ (size == NH_FLD_IPV6_ADDR_SIZE));
memcpy((char *)(size_t)curr->qos_rule.key_iova +
curr->ipaddr_rule.qos_ipdst_offset,
ipdst_key,
@@ -2766,7 +2778,9 @@ dpaa2_flow_entry_update(
}
if (extend >= 0)
- curr->qos_rule.key_size += extend;
+ curr->qos_real_key_size += extend;
+
+ curr->qos_rule.key_size = FIXED_ENTRY_SIZE;
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &curr->qos_rule,
@@ -2873,7 +2887,8 @@ dpaa2_flow_entry_update(
}
if (extend >= 0)
- curr->fs_rule.key_size += extend;
+ curr->fs_real_key_size += extend;
+ curr->fs_rule.key_size = FIXED_ENTRY_SIZE;
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
priv->token, curr->tc_id, curr->tc_index,
@@ -3093,31 +3108,34 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->qos_entries);
return -1;
}
- flow->qos_rule.key_size = priv->extract
- .qos_key_extract.key_info.key_total_size;
+ flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) {
if (flow->ipaddr_rule.qos_ipdst_offset >=
flow->ipaddr_rule.qos_ipsrc_offset) {
- flow->qos_rule.key_size =
+ flow->qos_real_key_size =
flow->ipaddr_rule.qos_ipdst_offset +
NH_FLD_IPV4_ADDR_SIZE;
} else {
- flow->qos_rule.key_size =
+ flow->qos_real_key_size =
flow->ipaddr_rule.qos_ipsrc_offset +
NH_FLD_IPV4_ADDR_SIZE;
}
- } else if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV6_ADDR) {
+ } else if (flow->ipaddr_rule.ipaddr_type ==
+ FLOW_IPV6_ADDR) {
if (flow->ipaddr_rule.qos_ipdst_offset >=
flow->ipaddr_rule.qos_ipsrc_offset) {
- flow->qos_rule.key_size =
+ flow->qos_real_key_size =
flow->ipaddr_rule.qos_ipdst_offset +
NH_FLD_IPV6_ADDR_SIZE;
} else {
- flow->qos_rule.key_size =
+ flow->qos_real_key_size =
flow->ipaddr_rule.qos_ipsrc_offset +
NH_FLD_IPV6_ADDR_SIZE;
}
}
+
+ flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
+
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &flow->qos_rule,
flow->tc_id, qos_index,
@@ -3134,17 +3152,20 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->fs_entries);
return -1;
}
- flow->fs_rule.key_size = priv->extract
- .tc_key_extract[attr->group].key_info.key_total_size;
+
+ flow->fs_real_key_size =
+ priv->extract.tc_key_extract[flow->tc_id]
+ .key_info.key_total_size;
+
if (flow->ipaddr_rule.ipaddr_type ==
FLOW_IPV4_ADDR) {
if (flow->ipaddr_rule.fs_ipdst_offset >=
flow->ipaddr_rule.fs_ipsrc_offset) {
- flow->fs_rule.key_size =
+ flow->fs_real_key_size =
flow->ipaddr_rule.fs_ipdst_offset +
NH_FLD_IPV4_ADDR_SIZE;
} else {
- flow->fs_rule.key_size =
+ flow->fs_real_key_size =
flow->ipaddr_rule.fs_ipsrc_offset +
NH_FLD_IPV4_ADDR_SIZE;
}
@@ -3152,15 +3173,18 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
FLOW_IPV6_ADDR) {
if (flow->ipaddr_rule.fs_ipdst_offset >=
flow->ipaddr_rule.fs_ipsrc_offset) {
- flow->fs_rule.key_size =
+ flow->fs_real_key_size =
flow->ipaddr_rule.fs_ipdst_offset +
NH_FLD_IPV6_ADDR_SIZE;
} else {
- flow->fs_rule.key_size =
+ flow->fs_real_key_size =
flow->ipaddr_rule.fs_ipsrc_offset +
NH_FLD_IPV6_ADDR_SIZE;
}
}
+
+ flow->fs_rule.key_size = FIXED_ENTRY_SIZE;
+
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token,
flow->tc_id, flow->tc_index,
&flow->fs_rule, &action);
@@ -3259,8 +3283,10 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->qos_entries);
return -1;
}
- flow->qos_rule.key_size =
+
+ flow->qos_real_key_size =
priv->extract.qos_key_extract.key_info.key_total_size;
+ flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
&flow->qos_rule, flow->tc_id,
qos_index, 0, 0);
@@ -3283,11 +3309,15 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
if (!ret) {
- ret = dpaa2_flow_entry_update(priv, flow->tc_id);
- if (ret) {
- DPAA2_PMD_ERR("Flow entry update failed.");
+ if (is_keycfg_configured &
+ (DPAA2_QOS_TABLE_RECONFIGURE |
+ DPAA2_FS_TABLE_RECONFIGURE)) {
+ ret = dpaa2_flow_entry_update(priv, flow->tc_id);
+ if (ret) {
+ DPAA2_PMD_ERR("Flow entry update failed.");
- return -1;
+ return -1;
+ }
}
/* New rules are inserted. */
if (!curr) {
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH v2 21/29] net/dpaa2: add logging of flow extracts and rules
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (19 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 20/29] net/dpaa2: define the size of table entry Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 22/29] net/dpaa2: support discrimination between IPv4 and IPv6 Hemant Agrawal
` (8 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
This patch adds support for logging the flow extracts and rules.
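The logging added by the patch is gated on an environment variable checked at flow-creation time. A minimal sketch of that pattern, assuming the same `DPAA2_FLOW_CONTROL_LOG` variable the patch reads (the `hex_dump()` helper is illustrative; the patch prints key/mask bytes in a similar loop):

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Non-NULL when DPAA2_FLOW_CONTROL_LOG is set in the environment. */
static char *flow_log_enabled;

/* Re-read the switch; the patch does this on each dpaa2_flow_create(). */
static int
flow_log_init(void)
{
	flow_log_enabled = getenv("DPAA2_FLOW_CONTROL_LOG");
	return flow_log_enabled != NULL;
}

/* Illustrative dump helper: no-op unless logging is enabled. */
static void
hex_dump(const uint8_t *buf, int len)
{
	if (!flow_log_enabled)
		return;
	for (int i = 0; i < len; i++)
		printf("%02x ", buf[i]);
	printf("\n");
}
```

Using an environment variable rather than a build-time flag lets the dumps be enabled per application run without recompiling the PMD.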
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 213 ++++++++++++++++++++++++++++++++-
1 file changed, 209 insertions(+), 4 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index bcbd5977a..95756bf7b 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -29,6 +29,8 @@
*/
int mc_l4_port_identification;
+static char *dpaa2_flow_control_log;
+
#define FIXED_ENTRY_SIZE 54
enum flow_rule_ipaddr_type {
@@ -149,6 +151,189 @@ static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
#endif
+static inline void dpaa2_prot_field_string(
+ enum net_prot prot, uint32_t field,
+ char *string)
+{
+ if (!dpaa2_flow_control_log)
+ return;
+
+ if (prot == NET_PROT_ETH) {
+ strcpy(string, "eth");
+ if (field == NH_FLD_ETH_DA)
+ strcat(string, ".dst");
+ else if (field == NH_FLD_ETH_SA)
+ strcat(string, ".src");
+ else if (field == NH_FLD_ETH_TYPE)
+ strcat(string, ".type");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_VLAN) {
+ strcpy(string, "vlan");
+ if (field == NH_FLD_VLAN_TCI)
+ strcat(string, ".tci");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_IP) {
+ strcpy(string, "ip");
+ if (field == NH_FLD_IP_SRC)
+ strcat(string, ".src");
+ else if (field == NH_FLD_IP_DST)
+ strcat(string, ".dst");
+ else if (field == NH_FLD_IP_PROTO)
+ strcat(string, ".proto");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_TCP) {
+ strcpy(string, "tcp");
+ if (field == NH_FLD_TCP_PORT_SRC)
+ strcat(string, ".src");
+ else if (field == NH_FLD_TCP_PORT_DST)
+ strcat(string, ".dst");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_UDP) {
+ strcpy(string, "udp");
+ if (field == NH_FLD_UDP_PORT_SRC)
+ strcat(string, ".src");
+ else if (field == NH_FLD_UDP_PORT_DST)
+ strcat(string, ".dst");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_ICMP) {
+ strcpy(string, "icmp");
+ if (field == NH_FLD_ICMP_TYPE)
+ strcat(string, ".type");
+ else if (field == NH_FLD_ICMP_CODE)
+ strcat(string, ".code");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_SCTP) {
+ strcpy(string, "sctp");
+ if (field == NH_FLD_SCTP_PORT_SRC)
+ strcat(string, ".src");
+ else if (field == NH_FLD_SCTP_PORT_DST)
+ strcat(string, ".dst");
+ else
+ strcat(string, ".unknown field");
+ } else if (prot == NET_PROT_GRE) {
+ strcpy(string, "gre");
+ if (field == NH_FLD_GRE_TYPE)
+ strcat(string, ".type");
+ else
+ strcat(string, ".unknown field");
+ } else {
+ strcpy(string, "unknown protocol");
+ }
+}
+
+static inline void dpaa2_flow_qos_table_extracts_log(
+ const struct dpaa2_dev_priv *priv)
+{
+ int idx;
+ char string[32];
+
+ if (!dpaa2_flow_control_log)
+ return;
+
+ printf("Setup QoS table: number of extracts: %d\r\n",
+ priv->extract.qos_key_extract.dpkg.num_extracts);
+ for (idx = 0; idx < priv->extract.qos_key_extract.dpkg.num_extracts;
+ idx++) {
+ dpaa2_prot_field_string(priv->extract.qos_key_extract.dpkg
+ .extracts[idx].extract.from_hdr.prot,
+ priv->extract.qos_key_extract.dpkg.extracts[idx]
+ .extract.from_hdr.field,
+ string);
+ printf("%s", string);
+ if ((idx + 1) < priv->extract.qos_key_extract.dpkg.num_extracts)
+ printf(" / ");
+ }
+ printf("\r\n");
+}
+
+static inline void dpaa2_flow_fs_table_extracts_log(
+ const struct dpaa2_dev_priv *priv, int tc_id)
+{
+ int idx;
+ char string[32];
+
+ if (!dpaa2_flow_control_log)
+ return;
+
+ printf("Setup FS table: number of extracts of TC[%d]: %d\r\n",
+ tc_id, priv->extract.tc_key_extract[tc_id]
+ .dpkg.num_extracts);
+ for (idx = 0; idx < priv->extract.tc_key_extract[tc_id]
+ .dpkg.num_extracts; idx++) {
+ dpaa2_prot_field_string(priv->extract.tc_key_extract[tc_id]
+ .dpkg.extracts[idx].extract.from_hdr.prot,
+ priv->extract.tc_key_extract[tc_id].dpkg.extracts[idx]
+ .extract.from_hdr.field,
+ string);
+ printf("%s", string);
+ if ((idx + 1) < priv->extract.tc_key_extract[tc_id]
+ .dpkg.num_extracts)
+ printf(" / ");
+ }
+ printf("\r\n");
+}
+
+static inline void dpaa2_flow_qos_entry_log(
+ const char *log_info, const struct rte_flow *flow, int qos_index)
+{
+ int idx;
+ uint8_t *key, *mask;
+
+ if (!dpaa2_flow_control_log)
+ return;
+
+ printf("\r\n%s QoS entry[%d] for TC[%d], extracts size is %d\r\n",
+ log_info, qos_index, flow->tc_id, flow->qos_real_key_size);
+
+ key = (uint8_t *)(size_t)flow->qos_rule.key_iova;
+ mask = (uint8_t *)(size_t)flow->qos_rule.mask_iova;
+
+ printf("key:\r\n");
+ for (idx = 0; idx < flow->qos_real_key_size; idx++)
+ printf("%02x ", key[idx]);
+
+ printf("\r\nmask:\r\n");
+ for (idx = 0; idx < flow->qos_real_key_size; idx++)
+ printf("%02x ", mask[idx]);
+
+ printf("\r\n%s QoS ipsrc: %d, ipdst: %d\r\n", log_info,
+ flow->ipaddr_rule.qos_ipsrc_offset,
+ flow->ipaddr_rule.qos_ipdst_offset);
+}
+
+static inline void dpaa2_flow_fs_entry_log(
+ const char *log_info, const struct rte_flow *flow)
+{
+ int idx;
+ uint8_t *key, *mask;
+
+ if (!dpaa2_flow_control_log)
+ return;
+
+ printf("\r\n%s FS/TC entry[%d] of TC[%d], extracts size is %d\r\n",
+ log_info, flow->tc_index, flow->tc_id, flow->fs_real_key_size);
+
+ key = (uint8_t *)(size_t)flow->fs_rule.key_iova;
+ mask = (uint8_t *)(size_t)flow->fs_rule.mask_iova;
+
+ printf("key:\r\n");
+ for (idx = 0; idx < flow->fs_real_key_size; idx++)
+ printf("%02x ", key[idx]);
+
+ printf("\r\nmask:\r\n");
+ for (idx = 0; idx < flow->fs_real_key_size; idx++)
+ printf("%02x ", mask[idx]);
+
+ printf("\r\n%s FS ipsrc: %d, ipdst: %d\r\n", log_info,
+ flow->ipaddr_rule.fs_ipsrc_offset,
+ flow->ipaddr_rule.fs_ipdst_offset);
+}
static inline void dpaa2_flow_extract_key_set(
struct dpaa2_key_info *key_info, int index, uint8_t size)
@@ -2679,6 +2864,8 @@ dpaa2_flow_entry_update(
qos_index = curr->tc_id * priv->fs_entries +
curr->tc_index;
+ dpaa2_flow_qos_entry_log("Before update", curr, qos_index);
+
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &curr->qos_rule);
if (ret) {
@@ -2782,6 +2969,8 @@ dpaa2_flow_entry_update(
curr->qos_rule.key_size = FIXED_ENTRY_SIZE;
+ dpaa2_flow_qos_entry_log("Start update", curr, qos_index);
+
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &curr->qos_rule,
curr->tc_id, qos_index,
@@ -2796,6 +2985,7 @@ dpaa2_flow_entry_update(
continue;
}
+ dpaa2_flow_fs_entry_log("Before update", curr);
extend = -1;
ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW,
@@ -2890,6 +3080,8 @@ dpaa2_flow_entry_update(
curr->fs_real_key_size += extend;
curr->fs_rule.key_size = FIXED_ENTRY_SIZE;
+ dpaa2_flow_fs_entry_log("Start update", curr);
+
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW,
priv->token, curr->tc_id, curr->tc_index,
&curr->fs_rule, &curr->action_cfg);
@@ -3043,14 +3235,18 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
while (!end_of_list) {
switch (actions[j].type) {
case RTE_FLOW_ACTION_TYPE_QUEUE:
- dest_queue = (const struct rte_flow_action_queue *)(actions[j].conf);
+ dest_queue =
+ (const struct rte_flow_action_queue *)(actions[j].conf);
flow->flow_id = dest_queue->index;
flow->action = RTE_FLOW_ACTION_TYPE_QUEUE;
memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
action.flow_id = flow->flow_id;
if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- if (dpkg_prepare_key_cfg(&priv->extract.qos_key_extract.dpkg,
- (uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
+ dpaa2_flow_qos_table_extracts_log(priv);
+ if (dpkg_prepare_key_cfg(
+ &priv->extract.qos_key_extract.dpkg,
+ (uint8_t *)(size_t)priv->extract.qos_extract_param)
+ < 0) {
DPAA2_PMD_ERR(
"Unable to prepare extract parameters");
return -1;
@@ -3059,7 +3255,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
qos_cfg.discard_on_miss = true;
qos_cfg.keep_entries = true;
- qos_cfg.key_cfg_iova = (size_t)priv->extract.qos_extract_param;
+ qos_cfg.key_cfg_iova =
+ (size_t)priv->extract.qos_extract_param;
ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
priv->token, &qos_cfg);
if (ret < 0) {
@@ -3070,6 +3267,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
}
if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
+ dpaa2_flow_fs_table_extracts_log(priv, flow->tc_id);
if (dpkg_prepare_key_cfg(
&priv->extract.tc_key_extract[flow->tc_id].dpkg,
(uint8_t *)(size_t)priv->extract
@@ -3136,6 +3334,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
+ dpaa2_flow_qos_entry_log("Start add", flow, qos_index);
+
ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &flow->qos_rule,
flow->tc_id, qos_index,
@@ -3185,6 +3385,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
flow->fs_rule.key_size = FIXED_ENTRY_SIZE;
+ dpaa2_flow_fs_entry_log("Start add", flow);
+
ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token,
flow->tc_id, flow->tc_index,
&flow->fs_rule, &action);
@@ -3483,6 +3685,9 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
size_t key_iova = 0, mask_iova = 0;
int ret;
+ dpaa2_flow_control_log =
+ getenv("DPAA2_FLOW_CONTROL_LOG");
+
flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
if (!flow) {
DPAA2_PMD_ERR("Failure to allocate memory for flow");
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH v2 22/29] net/dpaa2: support discrimination between IPv4 and IPv6
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (20 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 21/29] net/dpaa2: add logging of flow extracts and rules Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 23/29] net/dpaa2: support distribution size set on multiple TCs Hemant Agrawal
` (7 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Discriminate between IPv4 and IPv6 in generic IP flow setup.
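The refactored helper's core decision is to discriminate IP version by matching the EtherType in the extract, since a generic IP pattern carries no version field of its own. A simplified sketch (the constants stand in for `rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4/IPV6)`; host/network byte order is omitted here for brevity):

```c
#include <assert.h>
#include <stdint.h>

#define ETHER_TYPE_IPV4 0x0800
#define ETHER_TYPE_IPV6 0x86DD

enum ip_pattern { PATTERN_IPV4, PATTERN_IPV6 };

/* Pick the EtherType value that the QoS/FS rule must match to
 * distinguish IPv4 from IPv6 traffic. */
static uint16_t
ip_discrimination_ethtype(enum ip_pattern p)
{
	return (p == PATTERN_IPV4) ? ETHER_TYPE_IPV4 : ETHER_TYPE_IPV6;
}
```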
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 153 +++++++++++++++++----------------
1 file changed, 80 insertions(+), 73 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 95756bf7b..6f3139f86 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1284,6 +1284,70 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
return 0;
}
+static int
+dpaa2_configure_flow_ip_discrimation(
+ struct dpaa2_dev_priv *priv, struct rte_flow *flow,
+ const struct rte_flow_item *pattern,
+ int *local_cfg, int *device_configured,
+ uint32_t group)
+{
+ int index, ret;
+ struct proto_discrimination proto;
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.qos_key_extract.dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.qos_key_extract,
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "QoS Extract ETH_TYPE to discriminate IP failed.");
+ return -1;
+ }
+ (*local_cfg) |= DPAA2_QOS_TABLE_RECONFIGURE;
+ }
+
+ index = dpaa2_flow_extract_search(
+ &priv->extract.tc_key_extract[group].dpkg,
+ NET_PROT_ETH, NH_FLD_ETH_TYPE);
+ if (index < 0) {
+ ret = dpaa2_flow_proto_discrimination_extract(
+ &priv->extract.tc_key_extract[group],
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "FS Extract ETH_TYPE to discriminate IP failed.");
+ return -1;
+ }
+ (*local_cfg) |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Move ipaddr before IP discrimination set failed");
+ return -1;
+ }
+
+ proto.type = RTE_FLOW_ITEM_TYPE_ETH;
+ if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4)
+ proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+ else
+ proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+ ret = dpaa2_flow_proto_discrimination_rule(priv, flow, proto, group);
+ if (ret) {
+ DPAA2_PMD_ERR("IP discrimination rule set failed");
+ return -1;
+ }
+
+ (*device_configured) |= (*local_cfg);
+
+ return 0;
+}
+
+
static int
dpaa2_configure_flow_generic_ip(
struct rte_flow *flow,
@@ -1327,73 +1391,16 @@ dpaa2_configure_flow_generic_ip(
flow->tc_id = group;
flow->tc_index = attr->priority;
- if (!spec_ipv4 && !spec_ipv6) {
- /* Don't care any field of IP header,
- * only care IP protocol.
- * Example: flow create 0 ingress pattern ipv6 /
- */
- /* Eth type is actually used for IP identification.
- */
- /* TODO: Current design only supports Eth + IP,
- * Eth + vLan + IP needs to add.
- */
- struct proto_discrimination proto;
-
- index = dpaa2_flow_extract_search(
- &priv->extract.qos_key_extract.dpkg,
- NET_PROT_ETH, NH_FLD_ETH_TYPE);
- if (index < 0) {
- ret = dpaa2_flow_proto_discrimination_extract(
- &priv->extract.qos_key_extract,
- RTE_FLOW_ITEM_TYPE_ETH);
- if (ret) {
- DPAA2_PMD_ERR(
- "QoS Ext ETH_TYPE to discriminate IP failed.");
-
- return -1;
- }
- local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
- }
-
- index = dpaa2_flow_extract_search(
- &priv->extract.tc_key_extract[group].dpkg,
- NET_PROT_ETH, NH_FLD_ETH_TYPE);
- if (index < 0) {
- ret = dpaa2_flow_proto_discrimination_extract(
- &priv->extract.tc_key_extract[group],
- RTE_FLOW_ITEM_TYPE_ETH);
- if (ret) {
- DPAA2_PMD_ERR(
- "FS Ext ETH_TYPE to discriminate IP failed");
-
- return -1;
- }
- local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
- }
-
- ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
- if (ret) {
- DPAA2_PMD_ERR(
- "Move ipaddr before IP discrimination set failed");
- return -1;
- }
-
- proto.type = RTE_FLOW_ITEM_TYPE_ETH;
- if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4)
- proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
- else
- proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
- ret = dpaa2_flow_proto_discrimination_rule(priv, flow,
- proto, group);
- if (ret) {
- DPAA2_PMD_ERR("IP discrimination rule set failed");
- return -1;
- }
-
- (*device_configured) |= local_cfg;
+ ret = dpaa2_configure_flow_ip_discrimation(priv,
+ flow, pattern, &local_cfg,
+ device_configured, group);
+ if (ret) {
+ DPAA2_PMD_ERR("IP discrimination failed!");
+ return -1;
+ }
+ if (!spec_ipv4 && !spec_ipv6)
return 0;
- }
if (mask_ipv4) {
if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4,
@@ -1433,10 +1440,10 @@ dpaa2_configure_flow_generic_ip(
NET_PROT_IP, NH_FLD_IP_SRC);
if (index < 0) {
ret = dpaa2_flow_extract_add(
- &priv->extract.qos_key_extract,
- NET_PROT_IP,
- NH_FLD_IP_SRC,
- 0);
+ &priv->extract.qos_key_extract,
+ NET_PROT_IP,
+ NH_FLD_IP_SRC,
+ 0);
if (ret) {
DPAA2_PMD_ERR("QoS Extract add IP_SRC failed.");
@@ -1519,10 +1526,10 @@ dpaa2_configure_flow_generic_ip(
else
size = NH_FLD_IPV6_ADDR_SIZE;
ret = dpaa2_flow_extract_add(
- &priv->extract.qos_key_extract,
- NET_PROT_IP,
- NH_FLD_IP_DST,
- size);
+ &priv->extract.qos_key_extract,
+ NET_PROT_IP,
+ NH_FLD_IP_DST,
+ size);
if (ret) {
DPAA2_PMD_ERR("QoS Extract add IP_DST failed.");
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH v2 23/29] net/dpaa2: support distribution size set on multiple TCs
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (21 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 22/29] net/dpaa2: support discrimination between IPv4 and IPv6 Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 24/29] net/dpaa2: support index of queue action for flow Hemant Agrawal
` (6 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
The default distribution size of a TC is 1, as limited by the MC. We have to
set the distribution size of each TC to support multiple RXQs per TC.
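The per-TC loop the patch adds to `dpaa2_eth_dev_configure()` and `dpaa2_dev_rss_hash_update()` can be sketched as below; `setup_dist()` is a hypothetical stand-in for `dpaa2_setup_flow_dist()`, and `fake_priv` is a reduced stand-in for `struct dpaa2_dev_priv`:

```c
#include <assert.h>
#include <stdint.h>

struct fake_priv {
	int num_rx_tc;
	uint16_t dist_size[8];	/* recorded per-TC distribution size */
};

/* Stand-in for dpaa2_setup_flow_dist(): record the distribution size
 * applied to one TC. Returns 0 on success. */
static int
setup_dist(struct fake_priv *priv, uint16_t dist_queues, int tc_index)
{
	priv->dist_size[tc_index] = dist_queues;
	return 0;
}

/* Apply the distribution size to every TC, not just TC 0. */
static int
configure_all_tcs(struct fake_priv *priv, uint16_t dist_queues)
{
	for (int tc = 0; tc < priv->num_rx_tc; tc++) {
		if (setup_dist(priv, dist_queues, tc))
			return -1;
	}
	return 0;
}
```

Note the patch also switches `tc_cfg.dist_size` from `nb_rx_queues` to `priv->dist_queues`, since with multiple TCs the total RXQ count no longer equals the per-TC distribution size.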
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 6 +--
drivers/net/dpaa2/dpaa2_ethdev.c | 51 ++++++++++++++++----------
drivers/net/dpaa2/dpaa2_ethdev.h | 2 +-
3 files changed, 36 insertions(+), 23 deletions(-)
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 34de0d1f7..9f0dad6e7 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -81,14 +81,14 @@ rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
int
dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
- uint64_t req_dist_set)
+ uint64_t req_dist_set, int tc_index)
{
struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
struct fsl_mc_io *dpni = priv->hw;
struct dpni_rx_tc_dist_cfg tc_cfg;
struct dpkg_profile_cfg kg_cfg;
void *p_params;
- int ret, tc_index = 0;
+ int ret;
p_params = rte_malloc(
NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
@@ -107,7 +107,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
return ret;
}
tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
- tc_cfg.dist_size = eth_dev->data->nb_rx_queues;
+ tc_cfg.dist_size = priv->dist_queues;
tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
ret = dpkg_prepare_key_cfg(&kg_cfg, p_params);
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 008e1c570..020af4b03 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -453,7 +453,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
int rx_l4_csum_offload = false;
int tx_l3_csum_offload = false;
int tx_l4_csum_offload = false;
- int ret;
+ int ret, tc_index;
PMD_INIT_FUNC_TRACE();
@@ -493,12 +493,16 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
}
if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
- ret = dpaa2_setup_flow_dist(dev,
- eth_conf->rx_adv_conf.rss_conf.rss_hf);
- if (ret) {
- DPAA2_PMD_ERR("Unable to set flow distribution."
- "Check queue config");
- return ret;
+ for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
+ ret = dpaa2_setup_flow_dist(dev,
+ eth_conf->rx_adv_conf.rss_conf.rss_hf,
+ tc_index);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Unable to set flow distribution on tc%d."
+ "Check queue config", tc_index);
+ return ret;
+ }
}
}
@@ -755,11 +759,11 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
flow_id = 0;
ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
- tc_id, flow_id, options, &tx_flow_cfg);
+ tc_id, flow_id, options, &tx_flow_cfg);
if (ret) {
DPAA2_PMD_ERR("Error in setting the tx flow: "
- "tc_id=%d, flow=%d err=%d",
- tc_id, flow_id, ret);
+ "tc_id=%d, flow=%d err=%d",
+ tc_id, flow_id, ret);
return -1;
}
@@ -1984,22 +1988,31 @@ dpaa2_dev_rss_hash_update(struct rte_eth_dev *dev,
struct rte_eth_rss_conf *rss_conf)
{
struct rte_eth_dev_data *data = dev->data;
+ struct dpaa2_dev_priv *priv = data->dev_private;
struct rte_eth_conf *eth_conf = &data->dev_conf;
- int ret;
+ int ret, tc_index;
PMD_INIT_FUNC_TRACE();
if (rss_conf->rss_hf) {
- ret = dpaa2_setup_flow_dist(dev, rss_conf->rss_hf);
- if (ret) {
- DPAA2_PMD_ERR("Unable to set flow dist");
- return ret;
+ for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
+ ret = dpaa2_setup_flow_dist(dev, rss_conf->rss_hf,
+ tc_index);
+ if (ret) {
+ DPAA2_PMD_ERR("Unable to set flow dist on tc%d",
+ tc_index);
+ return ret;
+ }
}
} else {
- ret = dpaa2_remove_flow_dist(dev, 0);
- if (ret) {
- DPAA2_PMD_ERR("Unable to remove flow dist");
- return ret;
+ for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
+ ret = dpaa2_remove_flow_dist(dev, tc_index);
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Unable to remove flow dist on tc%d",
+ tc_index);
+ return ret;
+ }
}
}
eth_conf->rx_adv_conf.rss_conf.rss_hf = rss_conf->rss_hf;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index b49b88a2d..52faeeefe 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -179,7 +179,7 @@ int dpaa2_distset_to_dpkg_profile_cfg(uint64_t req_dist_set,
struct dpkg_profile_cfg *kg_cfg);
int dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
- uint64_t req_dist_set);
+ uint64_t req_dist_set, int tc_index);
int dpaa2_remove_flow_dist(struct rte_eth_dev *eth_dev,
uint8_t tc_index);
--
2.17.1
^ permalink raw reply [flat|nested] 83+ messages in thread
* [dpdk-dev] [PATCH v2 24/29] net/dpaa2: support index of queue action for flow
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (22 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 23/29] net/dpaa2: support distribution size set on multiple TCs Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 25/29] net/dpaa2: add flow data sanity check Hemant Agrawal
` (5 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
It makes more sense to use the RXQ index for queue distribution
instead of the flow ID.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 27 +++++++++++++++++----------
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 6f3139f86..76f68b903 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -56,7 +56,6 @@ struct rte_flow {
uint8_t tc_id; /** Traffic Class ID. */
uint8_t tc_index; /** index within this Traffic Class. */
enum rte_flow_action_type action;
- uint16_t flow_id;
/* Special for IP address to specify the offset
* in key/mask.
*/
@@ -3141,6 +3140,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
struct dpni_qos_tbl_cfg qos_cfg;
struct dpni_fs_action_cfg action;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
+ struct dpaa2_queue *rxq;
struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
size_t param;
struct rte_flow *curr = LIST_FIRST(&priv->flows);
@@ -3244,10 +3244,10 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
case RTE_FLOW_ACTION_TYPE_QUEUE:
dest_queue =
(const struct rte_flow_action_queue *)(actions[j].conf);
- flow->flow_id = dest_queue->index;
+ rxq = priv->rx_vq[dest_queue->index];
flow->action = RTE_FLOW_ACTION_TYPE_QUEUE;
memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
- action.flow_id = flow->flow_id;
+ action.flow_id = rxq->flow_id;
if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
dpaa2_flow_qos_table_extracts_log(priv);
if (dpkg_prepare_key_cfg(
@@ -3303,8 +3303,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
/* Configure QoS table first */
- action.flow_id = action.flow_id % priv->num_rx_tc;
-
qos_index = flow->tc_id * priv->fs_entries +
flow->tc_index;
@@ -3407,13 +3405,22 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
break;
case RTE_FLOW_ACTION_TYPE_RSS:
rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf);
+ if (rss_conf->queue_num > priv->dist_queues) {
+ DPAA2_PMD_ERR(
+ "RSS number exceeds the distrbution size");
+ return -ENOTSUP;
+ }
+
for (i = 0; i < (int)rss_conf->queue_num; i++) {
- if (rss_conf->queue[i] <
- (attr->group * priv->dist_queues) ||
- rss_conf->queue[i] >=
- ((attr->group + 1) * priv->dist_queues)) {
+ if (rss_conf->queue[i] >= priv->nb_rx_queues) {
+ DPAA2_PMD_ERR(
+ "RSS RXQ number exceeds the total number");
+ return -ENOTSUP;
+ }
+ rxq = priv->rx_vq[rss_conf->queue[i]];
+ if (rxq->tc_index != attr->group) {
DPAA2_PMD_ERR(
- "Queue/Group combination are not supported\n");
+ "RSS RXQ distributed is not in current group");
return -ENOTSUP;
}
}
--
2.17.1
* [dpdk-dev] [PATCH v2 25/29] net/dpaa2: add flow data sanity check
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (23 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 24/29] net/dpaa2: support index of queue action for flow Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 26/29] net/dpaa2: modify flow API QoS setup to follow FS setup Hemant Agrawal
` (4 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Check flow attributes and actions before creating a flow.
Otherwise, the QoS and FS tables need to be rebuilt
if the check fails.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 84 ++++++++++++++++++++++++++--------
1 file changed, 65 insertions(+), 19 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 76f68b903..3601829c9 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -3124,6 +3124,67 @@ dpaa2_flow_verify_attr(
return 0;
}
+static inline int
+dpaa2_flow_verify_action(
+ struct dpaa2_dev_priv *priv,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action actions[])
+{
+ int end_of_list = 0, i, j = 0;
+ const struct rte_flow_action_queue *dest_queue;
+ const struct rte_flow_action_rss *rss_conf;
+ struct dpaa2_queue *rxq;
+
+ while (!end_of_list) {
+ switch (actions[j].type) {
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ dest_queue = (const struct rte_flow_action_queue *)
+ (actions[j].conf);
+ rxq = priv->rx_vq[dest_queue->index];
+ if (attr->group != rxq->tc_index) {
+ DPAA2_PMD_ERR(
+ "RXQ[%d] does not belong to the group %d",
+ dest_queue->index, attr->group);
+
+ return -1;
+ }
+ break;
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ rss_conf = (const struct rte_flow_action_rss *)
+ (actions[j].conf);
+ if (rss_conf->queue_num > priv->dist_queues) {
+ DPAA2_PMD_ERR(
+ "RSS number exceeds the distrbution size");
+ return -ENOTSUP;
+ }
+ for (i = 0; i < (int)rss_conf->queue_num; i++) {
+ if (rss_conf->queue[i] >= priv->nb_rx_queues) {
+ DPAA2_PMD_ERR(
+ "RSS queue index exceeds the number of RXQs");
+ return -ENOTSUP;
+ }
+ rxq = priv->rx_vq[rss_conf->queue[i]];
+ if (rxq->tc_index != attr->group) {
+ DPAA2_PMD_ERR(
+ "Queue/Group combination are not supported\n");
+ return -ENOTSUP;
+ }
+ }
+
+ break;
+ case RTE_FLOW_ACTION_TYPE_END:
+ end_of_list = 1;
+ break;
+ default:
+ DPAA2_PMD_ERR("Invalid action type");
+ return -ENOTSUP;
+ }
+ j++;
+ }
+
+ return 0;
+}
+
static int
dpaa2_generic_flow_set(struct rte_flow *flow,
struct rte_eth_dev *dev,
@@ -3150,6 +3211,10 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
if (ret)
return ret;
+ ret = dpaa2_flow_verify_action(priv, attr, actions);
+ if (ret)
+ return ret;
+
/* Parse pattern list to get the matching parameters */
while (!end_of_list) {
switch (pattern[i].type) {
@@ -3405,25 +3470,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
break;
case RTE_FLOW_ACTION_TYPE_RSS:
rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf);
- if (rss_conf->queue_num > priv->dist_queues) {
- DPAA2_PMD_ERR(
- "RSS number exceeds the distrbution size");
- return -ENOTSUP;
- }
-
- for (i = 0; i < (int)rss_conf->queue_num; i++) {
- if (rss_conf->queue[i] >= priv->nb_rx_queues) {
- DPAA2_PMD_ERR(
- "RSS RXQ number exceeds the total number");
- return -ENOTSUP;
- }
- rxq = priv->rx_vq[rss_conf->queue[i]];
- if (rxq->tc_index != attr->group) {
- DPAA2_PMD_ERR(
- "RSS RXQ distributed is not in current group");
- return -ENOTSUP;
- }
- }
flow->action = RTE_FLOW_ACTION_TYPE_RSS;
ret = dpaa2_distset_to_dpkg_profile_cfg(rss_conf->types,
--
2.17.1
* [dpdk-dev] [PATCH v2 26/29] net/dpaa2: modify flow API QoS setup to follow FS setup
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (24 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 25/29] net/dpaa2: add flow data sanity check Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 27/29] net/dpaa2: support flow API FS miss action configuration Hemant Agrawal
` (3 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
In the HW/MC logic, QoS setup should follow FS setup.
In addition, skip QoS setup if the max TC number of the DPNI is set to 1.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_flow.c | 151 ++++++++++++++++++---------------
1 file changed, 84 insertions(+), 67 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 3601829c9..9239fa459 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -2872,11 +2872,13 @@ dpaa2_flow_entry_update(
dpaa2_flow_qos_entry_log("Before update", curr, qos_index);
- ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
- priv->token, &curr->qos_rule);
- if (ret) {
- DPAA2_PMD_ERR("Qos entry remove failed.");
- return -1;
+ if (priv->num_rx_tc > 1) {
+ ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+ priv->token, &curr->qos_rule);
+ if (ret) {
+ DPAA2_PMD_ERR("Qos entry remove failed.");
+ return -1;
+ }
}
extend = -1;
@@ -2977,13 +2979,15 @@ dpaa2_flow_entry_update(
dpaa2_flow_qos_entry_log("Start update", curr, qos_index);
- ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
- priv->token, &curr->qos_rule,
- curr->tc_id, qos_index,
- 0, 0);
- if (ret) {
- DPAA2_PMD_ERR("Qos entry update failed.");
- return -1;
+ if (priv->num_rx_tc > 1) {
+ ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
+ priv->token, &curr->qos_rule,
+ curr->tc_id, qos_index,
+ 0, 0);
+ if (ret) {
+ DPAA2_PMD_ERR("Qos entry update failed.");
+ return -1;
+ }
}
if (curr->action != RTE_FLOW_ACTION_TYPE_QUEUE) {
@@ -3313,31 +3317,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
flow->action = RTE_FLOW_ACTION_TYPE_QUEUE;
memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
action.flow_id = rxq->flow_id;
- if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
- dpaa2_flow_qos_table_extracts_log(priv);
- if (dpkg_prepare_key_cfg(
- &priv->extract.qos_key_extract.dpkg,
- (uint8_t *)(size_t)priv->extract.qos_extract_param)
- < 0) {
- DPAA2_PMD_ERR(
- "Unable to prepare extract parameters");
- return -1;
- }
- memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
- qos_cfg.discard_on_miss = true;
- qos_cfg.keep_entries = true;
- qos_cfg.key_cfg_iova =
- (size_t)priv->extract.qos_extract_param;
- ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
- priv->token, &qos_cfg);
- if (ret < 0) {
- DPAA2_PMD_ERR(
- "Distribution cannot be configured.(%d)"
- , ret);
- return -1;
- }
- }
+ /* Configure FS table first*/
if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
dpaa2_flow_fs_table_extracts_log(priv, flow->tc_id);
if (dpkg_prepare_key_cfg(
@@ -3366,17 +3347,39 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return -1;
}
}
- /* Configure QoS table first */
- qos_index = flow->tc_id * priv->fs_entries +
- flow->tc_index;
+ /* Configure QoS table then.*/
+ if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
+ dpaa2_flow_qos_table_extracts_log(priv);
+ if (dpkg_prepare_key_cfg(
+ &priv->extract.qos_key_extract.dpkg,
+ (uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
+ DPAA2_PMD_ERR(
+ "Unable to prepare extract parameters");
+ return -1;
+ }
- if (qos_index >= priv->qos_entries) {
- DPAA2_PMD_ERR("QoS table with %d entries full",
- priv->qos_entries);
- return -1;
+ memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
+ qos_cfg.discard_on_miss = false;
+ qos_cfg.default_tc = 0;
+ qos_cfg.keep_entries = true;
+ qos_cfg.key_cfg_iova =
+ (size_t)priv->extract.qos_extract_param;
+ /* QoS table is effecitive for multiple TCs.*/
+ if (priv->num_rx_tc > 1) {
+ ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
+ priv->token, &qos_cfg);
+ if (ret < 0) {
+ DPAA2_PMD_ERR(
+ "RSS QoS table can not be configured(%d)\n",
+ ret);
+ return -1;
+ }
+ }
}
- flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
+
+ flow->qos_real_key_size = priv->extract
+ .qos_key_extract.key_info.key_total_size;
if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) {
if (flow->ipaddr_rule.qos_ipdst_offset >=
flow->ipaddr_rule.qos_ipsrc_offset) {
@@ -3402,21 +3405,30 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
}
}
- flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
+ /* QoS entry added is only effective for multiple TCs.*/
+ if (priv->num_rx_tc > 1) {
+ qos_index = flow->tc_id * priv->fs_entries +
+ flow->tc_index;
+ if (qos_index >= priv->qos_entries) {
+ DPAA2_PMD_ERR("QoS table with %d entries full",
+ priv->qos_entries);
+ return -1;
+ }
+ flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
- dpaa2_flow_qos_entry_log("Start add", flow, qos_index);
+ dpaa2_flow_qos_entry_log("Start add", flow, qos_index);
- ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
+ ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
priv->token, &flow->qos_rule,
flow->tc_id, qos_index,
0, 0);
- if (ret < 0) {
- DPAA2_PMD_ERR(
- "Error in addnig entry to QoS table(%d)", ret);
- return ret;
+ if (ret < 0) {
+ DPAA2_PMD_ERR(
+ "Error in addnig entry to QoS table(%d)", ret);
+ return ret;
+ }
}
- /* Then Configure FS table */
if (flow->tc_index >= priv->fs_entries) {
DPAA2_PMD_ERR("FS table with %d entries full",
priv->fs_entries);
@@ -3507,7 +3519,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
&tc_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Distribution cannot be configured: %d\n", ret);
+ "RSS FS table cannot be configured: %d\n",
+ ret);
rte_free((void *)param);
return -1;
}
@@ -3841,13 +3854,15 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
switch (flow->action) {
case RTE_FLOW_ACTION_TYPE_QUEUE:
- /* Remove entry from QoS table first */
- ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
- &flow->qos_rule);
- if (ret < 0) {
- DPAA2_PMD_ERR(
- "Error in adding entry to QoS table(%d)", ret);
- goto error;
+ if (priv->num_rx_tc > 1) {
+ /* Remove entry from QoS table first */
+ ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
+ &flow->qos_rule);
+ if (ret < 0) {
+ DPAA2_PMD_ERR(
+ "Error in removing entry from QoS table(%d)", ret);
+ goto error;
+ }
}
/* Then remove entry from FS table */
@@ -3855,17 +3870,19 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
flow->tc_id, &flow->fs_rule);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Error in entry addition in FS table(%d)", ret);
+ "Error in removing entry from FS table(%d)", ret);
goto error;
}
break;
case RTE_FLOW_ACTION_TYPE_RSS:
- ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
- &flow->qos_rule);
- if (ret < 0) {
- DPAA2_PMD_ERR(
- "Error in entry addition in QoS table(%d)", ret);
- goto error;
+ if (priv->num_rx_tc > 1) {
+ ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
+ &flow->qos_rule);
+ if (ret < 0) {
+ DPAA2_PMD_ERR(
+ "Error in entry addition in QoS table(%d)", ret);
+ goto error;
+ }
}
break;
default:
--
2.17.1
* [dpdk-dev] [PATCH v2 27/29] net/dpaa2: support flow API FS miss action configuration
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (25 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 26/29] net/dpaa2: modify flow API QoS setup to follow FS setup Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 28/29] net/dpaa2: configure per class distribution size Hemant Agrawal
` (2 subsequent siblings)
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
1) Use dpni_set_rx_hash_dist and dpni_set_rx_fs_dist for TC configuration
instead of dpni_set_rx_tc_dist. Otherwise, re-configuration of the
default QoS TC fails.
2) The default miss action is to drop. Set
"export DPAA2_FLOW_CONTROL_MISS_FLOW=flow_id" to instead receive
missed packets on the flow with the specified flow ID.
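The environment-driven miss action described above boils down to a parse plus a bounds check against the number of distribution queues. A minimal sketch (hypothetical helper name; the driver performs this inline in dpaa2_flow_create(), and does not special-case negative values):

```c
#include <stdlib.h>

/* Parse DPAA2_FLOW_CONTROL_MISS_FLOW. Returns 0 when the variable is
 * unset (keep the default miss action, drop), 1 when a valid flow ID
 * was parsed into *flow_id, and -1 when the ID is out of range.
 */
static int parse_miss_flow_id(const char *env, int dist_queues, int *flow_id)
{
	if (!env)
		return 0; /* default: miss action drops the packet */

	*flow_id = atoi(env);
	/* The miss flow must index one of the distribution queues. */
	if (*flow_id < 0 || *flow_id >= dist_queues)
		return -1;
	return 1;
}
```

With dist_queues = 8, "3" is accepted while "8" is rejected, matching the "exceeds the max flow ID" error in the patch.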
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 30 +++++++------
drivers/net/dpaa2/dpaa2_flow.c | 62 ++++++++++++++++++--------
2 files changed, 60 insertions(+), 32 deletions(-)
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 9f0dad6e7..d69156bcc 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -85,7 +85,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
{
struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
struct fsl_mc_io *dpni = priv->hw;
- struct dpni_rx_tc_dist_cfg tc_cfg;
+ struct dpni_rx_dist_cfg tc_cfg;
struct dpkg_profile_cfg kg_cfg;
void *p_params;
int ret;
@@ -96,8 +96,9 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
return -ENOMEM;
}
+
memset(p_params, 0, DIST_PARAM_IOVA_SIZE);
- memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
+ memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
ret = dpaa2_distset_to_dpkg_profile_cfg(req_dist_set, &kg_cfg);
if (ret) {
@@ -106,9 +107,11 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
rte_free(p_params);
return ret;
}
+
tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
tc_cfg.dist_size = priv->dist_queues;
- tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
+ tc_cfg.enable = true;
+ tc_cfg.tc = tc_index;
ret = dpkg_prepare_key_cfg(&kg_cfg, p_params);
if (ret) {
@@ -117,8 +120,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
return ret;
}
- ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, priv->token, tc_index,
- &tc_cfg);
+ ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW, priv->token, &tc_cfg);
rte_free(p_params);
if (ret) {
DPAA2_PMD_ERR(
@@ -136,7 +138,7 @@ int dpaa2_remove_flow_dist(
{
struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
struct fsl_mc_io *dpni = priv->hw;
- struct dpni_rx_tc_dist_cfg tc_cfg;
+ struct dpni_rx_dist_cfg tc_cfg;
struct dpkg_profile_cfg kg_cfg;
void *p_params;
int ret;
@@ -147,13 +149,15 @@ int dpaa2_remove_flow_dist(
DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
return -ENOMEM;
}
- memset(p_params, 0, DIST_PARAM_IOVA_SIZE);
- memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
- kg_cfg.num_extracts = 0;
- tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+
+ memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
tc_cfg.dist_size = 0;
- tc_cfg.dist_mode = DPNI_DIST_MODE_NONE;
+ tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+ tc_cfg.enable = true;
+ tc_cfg.tc = tc_index;
+ memset(p_params, 0, DIST_PARAM_IOVA_SIZE);
+ kg_cfg.num_extracts = 0;
ret = dpkg_prepare_key_cfg(&kg_cfg, p_params);
if (ret) {
DPAA2_PMD_ERR("Unable to prepare extract parameters");
@@ -161,8 +165,8 @@ int dpaa2_remove_flow_dist(
return ret;
}
- ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, priv->token, tc_index,
- &tc_cfg);
+ ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW, priv->token,
+ &tc_cfg);
rte_free(p_params);
if (ret)
DPAA2_PMD_ERR(
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 9239fa459..cc789346a 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -30,6 +30,8 @@
int mc_l4_port_identification;
static char *dpaa2_flow_control_log;
+static int dpaa2_flow_miss_flow_id =
+ DPNI_FS_MISS_DROP;
#define FIXED_ENTRY_SIZE 54
@@ -3201,7 +3203,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
const struct rte_flow_action_rss *rss_conf;
int is_keycfg_configured = 0, end_of_list = 0;
int ret = 0, i = 0, j = 0;
- struct dpni_rx_tc_dist_cfg tc_cfg;
+ struct dpni_rx_dist_cfg tc_cfg;
struct dpni_qos_tbl_cfg qos_cfg;
struct dpni_fs_action_cfg action;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
@@ -3330,20 +3332,30 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return -1;
}
- memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
+ memset(&tc_cfg, 0,
+ sizeof(struct dpni_rx_dist_cfg));
tc_cfg.dist_size = priv->nb_rx_queues / priv->num_rx_tc;
- tc_cfg.dist_mode = DPNI_DIST_MODE_FS;
tc_cfg.key_cfg_iova =
(uint64_t)priv->extract.tc_extract_param[flow->tc_id];
- tc_cfg.fs_cfg.miss_action = DPNI_FS_MISS_DROP;
- tc_cfg.fs_cfg.keep_entries = true;
- ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW,
- priv->token,
- flow->tc_id, &tc_cfg);
+ tc_cfg.tc = flow->tc_id;
+ tc_cfg.enable = false;
+ ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
+ priv->token, &tc_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Distribution cannot be configured.(%d)"
- , ret);
+ "TC hash cannot be disabled.(%d)",
+ ret);
+ return -1;
+ }
+ tc_cfg.enable = true;
+ tc_cfg.fs_miss_flow_id =
+ dpaa2_flow_miss_flow_id;
+ ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
+ priv->token, &tc_cfg);
+ if (ret < 0) {
+ DPAA2_PMD_ERR(
+ "TC distribution cannot be configured.(%d)",
+ ret);
return -1;
}
}
@@ -3508,18 +3520,16 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return -1;
}
- memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
+ memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
tc_cfg.dist_size = rss_conf->queue_num;
- tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
tc_cfg.key_cfg_iova = (size_t)param;
- tc_cfg.fs_cfg.miss_action = DPNI_FS_MISS_DROP;
-
- ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW,
- priv->token, flow->tc_id,
- &tc_cfg);
+ tc_cfg.enable = true;
+ tc_cfg.tc = flow->tc_id;
+ ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
+ priv->token, &tc_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "RSS FS table cannot be configured: %d\n",
+ "RSS TC table cannot be configured: %d\n",
ret);
rte_free((void *)param);
return -1;
@@ -3544,7 +3554,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->token, &qos_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Distribution can't be configured %d\n",
+ "RSS QoS dist can't be configured-%d\n",
ret);
return -1;
}
@@ -3761,6 +3771,20 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
dpaa2_flow_control_log =
getenv("DPAA2_FLOW_CONTROL_LOG");
+ if (getenv("DPAA2_FLOW_CONTROL_MISS_FLOW")) {
+ struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+ dpaa2_flow_miss_flow_id =
+ atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
+ if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
+ DPAA2_PMD_ERR(
+ "The missed flow ID %d exceeds the max flow ID %d",
+ dpaa2_flow_miss_flow_id,
+ priv->dist_queues - 1);
+ return NULL;
+ }
+ }
+
flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
if (!flow) {
DPAA2_PMD_ERR("Failure to allocate memory for flow");
--
2.17.1
* [dpdk-dev] [PATCH v2 28/29] net/dpaa2: configure per class distribution size
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (26 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 27/29] net/dpaa2: support flow API FS miss action configuration Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 29/29] net/dpaa2: support raw flow classification Hemant Agrawal
2020-07-09 1:54 ` [dpdk-dev] [PATCH v2 00/29] NXP DPAAx enhancements Ferruh Yigit
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
The TC distribution size is set to dist_queues, or to the remainder
nb_rx_queues % dist_queues for the last populated TC, walking TCs
in priority order.
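The sizing rule above can be sketched as a small helper (hypothetical function name; the driver computes this inline in dpaa2_setup_flow_dist()):

```c
/* Distribution size for a given TC, following the rule in this patch:
 * TCs are filled in priority order with up to dist_queues queues each;
 * the last populated TC gets the remainder, later TCs get nothing.
 * Returns 0 when there is no distribution on this TC.
 */
static int tc_dist_size(int nb_rx_queues, int dist_queues, int tc_index)
{
	int remaining = nb_rx_queues - tc_index * dist_queues;

	if (remaining <= 0)
		return 0;		/* no queues left for this TC */
	if (remaining > dist_queues)
		remaining = dist_queues; /* fully populated TC */
	return remaining;
}
```

For example, with 10 Rx queues and dist_queues = 4, TC0 and TC1 each distribute over 4 queues, TC2 over the remaining 2, and TC3 gets none.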
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index d69156bcc..25b1d2bb6 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -88,7 +88,21 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
struct dpni_rx_dist_cfg tc_cfg;
struct dpkg_profile_cfg kg_cfg;
void *p_params;
- int ret;
+ int ret, tc_dist_queues;
+
+ /*TC distribution size is set with dist_queues or
+ * nb_rx_queues % dist_queues in order of TC priority index.
+ * Calculating dist size for this tc_index:-
+ */
+ tc_dist_queues = eth_dev->data->nb_rx_queues -
+ tc_index * priv->dist_queues;
+ if (tc_dist_queues <= 0) {
+ DPAA2_PMD_INFO("No distribution on TC%d", tc_index);
+ return 0;
+ }
+
+ if (tc_dist_queues > priv->dist_queues)
+ tc_dist_queues = priv->dist_queues;
p_params = rte_malloc(
NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
@@ -109,7 +123,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
}
tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
- tc_cfg.dist_size = priv->dist_queues;
+ tc_cfg.dist_size = tc_dist_queues;
tc_cfg.enable = true;
tc_cfg.tc = tc_index;
--
2.17.1
* [dpdk-dev] [PATCH v2 29/29] net/dpaa2: support raw flow classification
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (27 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 28/29] net/dpaa2: configure per class distribution size Hemant Agrawal
@ 2020-07-07 9:22 ` Hemant Agrawal
2020-07-09 1:54 ` [dpdk-dev] [PATCH v2 00/29] NXP DPAAx enhancements Ferruh Yigit
29 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-07 9:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Add support for raw flows, which can be used to create rules
for any protocol.
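A raw key larger than one DPKG extract is split into DPAA2_FLOW_MAX_KEY_SIZE-byte chunks, with the last extract carrying the remainder. The chunking arithmetic from dpaa2_flow_extract_add_raw() can be sketched in isolation (hypothetical helper name):

```c
#define DPAA2_FLOW_MAX_KEY_SIZE 16 /* max bytes per extract, per this patch */

/* Compute how many DPKG extracts a raw key of `size` bytes needs and
 * the size of the final extract: full-size chunks plus a remainder
 * chunk, or all full-size chunks when size divides evenly.
 */
static void raw_key_split(int size, int *num_extracts, int *last_size)
{
	*last_size = size % DPAA2_FLOW_MAX_KEY_SIZE;
	*num_extracts = size / DPAA2_FLOW_MAX_KEY_SIZE;
	if (*last_size)
		(*num_extracts)++;
	else
		*last_size = DPAA2_FLOW_MAX_KEY_SIZE;
}
```

A 40-byte raw pattern thus becomes three extracts of 16, 16 and 8 bytes at offsets 0, 16 and 32.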
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.h | 3 +-
drivers/net/dpaa2/dpaa2_flow.c | 135 +++++++++++++++++++++++++++++++
2 files changed, 137 insertions(+), 1 deletion(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 52faeeefe..2bc0f3f5a 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016-2019 NXP
+ * Copyright 2016-2020 NXP
*
*/
@@ -99,6 +99,7 @@ extern enum pmd_dpaa2_ts dpaa2_enable_ts;
#define DPAA2_QOS_TABLE_IPADDR_EXTRACT 4
#define DPAA2_FS_TABLE_IPADDR_EXTRACT 8
+#define DPAA2_FLOW_MAX_KEY_SIZE 16
/*Externaly defined*/
extern const struct rte_flow_ops dpaa2_flow_ops;
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index cc789346a..136bdd5fa 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -493,6 +493,42 @@ static int dpaa2_flow_extract_add(
return 0;
}
+static int dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
+ int size)
+{
+ struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
+ struct dpaa2_key_info *key_info = &key_extract->key_info;
+ int last_extract_size, index;
+
+ if (dpkg->num_extracts != 0 && dpkg->extracts[0].type !=
+ DPKG_EXTRACT_FROM_DATA) {
+ DPAA2_PMD_WARN("RAW extract cannot be combined with others");
+ return -1;
+ }
+
+ last_extract_size = (size % DPAA2_FLOW_MAX_KEY_SIZE);
+ dpkg->num_extracts = (size / DPAA2_FLOW_MAX_KEY_SIZE);
+ if (last_extract_size)
+ dpkg->num_extracts++;
+ else
+ last_extract_size = DPAA2_FLOW_MAX_KEY_SIZE;
+
+ for (index = 0; index < dpkg->num_extracts; index++) {
+ dpkg->extracts[index].type = DPKG_EXTRACT_FROM_DATA;
+ if (index == dpkg->num_extracts - 1)
+ dpkg->extracts[index].extract.from_data.size =
+ last_extract_size;
+ else
+ dpkg->extracts[index].extract.from_data.size =
+ DPAA2_FLOW_MAX_KEY_SIZE;
+ dpkg->extracts[index].extract.from_data.offset =
+ DPAA2_FLOW_MAX_KEY_SIZE * index;
+ }
+
+ key_info->key_total_size = size;
+ return 0;
+}
+
/* Protocol discrimination.
* Discriminate IPv4/IPv6/vLan by Eth type.
* Discriminate UDP/TCP/ICMP by next proto of IP.
@@ -674,6 +710,18 @@ dpaa2_flow_rule_data_set(
return 0;
}
+static inline int
+dpaa2_flow_rule_data_set_raw(struct dpni_rule_cfg *rule,
+ const void *key, const void *mask, int size)
+{
+ int offset = 0;
+
+ memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
+ memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+
+ return 0;
+}
+
static inline int
_dpaa2_flow_rule_move_ipaddr_tail(
struct dpaa2_key_extract *key_extract,
@@ -2814,6 +2862,83 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
return 0;
}
+static int
+dpaa2_configure_flow_raw(struct rte_flow *flow,
+ struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item *pattern,
+ const struct rte_flow_action actions[] __rte_unused,
+ struct rte_flow_error *error __rte_unused,
+ int *device_configured)
+{
+ struct dpaa2_dev_priv *priv = dev->data->dev_private;
+ const struct rte_flow_item_raw *spec = pattern->spec;
+ const struct rte_flow_item_raw *mask = pattern->mask;
+ int prev_key_size =
+ priv->extract.qos_key_extract.key_info.key_total_size;
+ int local_cfg = 0, ret;
+ uint32_t group;
+
+ /* Need both spec and mask */
+ if (!spec || !mask) {
+ DPAA2_PMD_ERR("spec or mask not present.");
+ return -EINVAL;
+ }
+ /* Only supports non-relative with offset 0 */
+ if (spec->relative || spec->offset != 0 ||
+ spec->search || spec->limit) {
+ DPAA2_PMD_ERR("relative and non zero offset not supported.");
+ return -EINVAL;
+ }
+ /* Spec len and mask len should be same */
+ if (spec->length != mask->length) {
+ DPAA2_PMD_ERR("Spec len and mask len mismatch.");
+ return -EINVAL;
+ }
+
+ /* Get traffic class index and flow id to be configured */
+ group = attr->group;
+ flow->tc_id = group;
+ flow->tc_index = attr->priority;
+
+ if (prev_key_size < spec->length) {
+ ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
+ spec->length);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS Extract RAW add failed.");
+ return -1;
+ }
+ local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+
+ ret = dpaa2_flow_extract_add_raw(
+ &priv->extract.tc_key_extract[group],
+ spec->length);
+ if (ret) {
+ DPAA2_PMD_ERR("FS Extract RAW add failed.");
+ return -1;
+ }
+ local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+ }
+
+ ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
+ mask->pattern, spec->length);
+ if (ret) {
+ DPAA2_PMD_ERR("QoS RAW rule data set failed");
+ return -1;
+ }
+
+ ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
+ mask->pattern, spec->length);
+ if (ret) {
+ DPAA2_PMD_ERR("FS RAW rule data set failed");
+ return -1;
+ }
+
+ (*device_configured) |= local_cfg;
+
+ return 0;
+}
+
/* The existing QoS/FS entry with IP address(es)
* needs update after
* new extract(s) are inserted before IP
@@ -3297,6 +3422,16 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return ret;
}
break;
+ case RTE_FLOW_ITEM_TYPE_RAW:
+ ret = dpaa2_configure_flow_raw(flow,
+ dev, attr, &pattern[i],
+ actions, error,
+ &is_keycfg_configured);
+ if (ret) {
+ DPAA2_PMD_ERR("RAW flow configuration failed!");
+ return ret;
+ }
+ break;
case RTE_FLOW_ITEM_TYPE_END:
end_of_list = 1;
break; /*End of List*/
--
2.17.1
* Re: [dpdk-dev] [PATCH v2 00/29] NXP DPAAx enhancements
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
` (28 preceding siblings ...)
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 29/29] net/dpaa2: support raw flow classification Hemant Agrawal
@ 2020-07-09 1:54 ` Ferruh Yigit
29 siblings, 0 replies; 83+ messages in thread
From: Ferruh Yigit @ 2020-07-09 1:54 UTC (permalink / raw)
To: Hemant Agrawal, dev
On 7/7/2020 10:22 AM, Hemant Agrawal wrote:
> v2: dropping the fmlib changes - we will send them separately
>
> This patch-set mainly address following enhancements
>
> 1. Supporting the non-EAL thread based I/O processing
> 2. Reducing the thread local storage
> 3. DPAA2 flow support
> 4. other minor fixes and enhancements
>
> Gagandeep Singh (3):
> net/dpaa2: enable timestamp for Rx offload case as well
> bus/fslmc: combine thread specific variables
> net/dpaa: enable Tx queue taildrop
>
> Hemant Agrawal (1):
> bus/fslmc: support handle portal alloc failure
>
> Jun Yang (14):
> net/dpaa2: support dynamic flow control
> net/dpaa2: support key extracts of flow API
> net/dpaa2: add sanity check for flow extracts
> net/dpaa2: free flow rule memory
> net/dpaa2: support QoS or FS table entry indexing
> net/dpaa2: define the size of table entry
> net/dpaa2: add logging of flow extracts and rules
> net/dpaa2: support discrimination between IPv4 and IPv6
> net/dpaa2: support distribution size set on multiple TCs
> net/dpaa2: support index of queue action for flow
> net/dpaa2: add flow data sanity check
> net/dpaa2: modify flow API QoS setup to follow FS setup
> net/dpaa2: support flow API FS miss action configuration
> net/dpaa2: configure per class distribution size
>
> Nipun Gupta (7):
> bus/fslmc: fix getting the FD error
> net/dpaa: fix fd offset data type
> bus/fslmc: rework portal allocation to a per thread basis
> bus/fslmc: support portal migration
> bus/fslmc: rename the cinh read functions used for ls1088
> net/dpaa: update process specific device info
> net/dpaa2: support raw flow classification
>
> Rohit Raj (3):
> drivers: optimize thread local storage for dpaa
> bus/dpaa: enable link state interrupt
> bus/dpaa: enable set link status
>
> Sachin Saxena (1):
> net/dpaa: add 2.5G support
Hi Hemant,
I guess you have your implicit ack since you have sent the patches; I am
converting it to an explicit ack for the ones that were not sent by their
maintainer:
For series,
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
(Component maintainers: please ack explicitly next time to prevent any confusion)
Series applied to dpdk-next-net/master, thanks.
* Re: [dpdk-dev] [PATCH v2 03/29] net/dpaa2: enable timestamp for Rx offload case as well
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 03/29] net/dpaa2: enable timestamp for Rx offload case as well Hemant Agrawal
@ 2020-07-11 13:46 ` Thomas Monjalon
2020-07-13 3:47 ` Hemant Agrawal
0 siblings, 1 reply; 83+ messages in thread
From: Thomas Monjalon @ 2020-07-11 13:46 UTC (permalink / raw)
To: Gagandeep Singh, Hemant Agrawal; +Cc: dev, ferruh.yigit
07/07/2020 11:22, Hemant Agrawal:
> From: Gagandeep Singh <g.singh@nxp.com>
>
> This patch enables the packet timestamping
> conditionally when Rx offload is enabled for timestamp.
>
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> ---
> drivers/net/dpaa2/dpaa2_ethdev.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index a1f19194d..8edd4b3cd 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -524,8 +524,10 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
> return ret;
> }
>
> +#if !defined(RTE_LIBRTE_IEEE1588)
> if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
> - dpaa2_enable_ts = true;
> +#endif
> + dpaa2_enable_ts = true;
I don't understand this patch at all.
There is no comment in the code, and the commit log
is not very explanatory.
You are lucky Ferruh is less strict than me.
I remember I already said I was bored of the lack of explanations
in NXP drivers.
* Re: [dpdk-dev] [PATCH v2 03/29] net/dpaa2: enable timestamp for Rx offload case as well
2020-07-11 13:46 ` Thomas Monjalon
@ 2020-07-13 3:47 ` Hemant Agrawal
0 siblings, 0 replies; 83+ messages in thread
From: Hemant Agrawal @ 2020-07-13 3:47 UTC (permalink / raw)
To: Thomas Monjalon, Gagandeep Singh; +Cc: dev, ferruh.yigit
-----Original Message-----
From: Thomas Monjalon <thomas@monjalon.net>
Sent: Saturday, July 11, 2020 7:16 PM
To: Gagandeep Singh <G.Singh@nxp.com>; Hemant Agrawal <hemant.agrawal@nxp.com>
Cc: dev@dpdk.org; ferruh.yigit@intel.com
Subject: Re: [dpdk-dev] [PATCH v2 03/29] net/dpaa2: enable timestamp for Rx offload case as well
Importance: High
07/07/2020 11:22, Hemant Agrawal:
> From: Gagandeep Singh <g.singh@nxp.com>
>
> This patch enables the packet timestamping conditionally when Rx
> offload is enabled for timestamp.
>
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> ---
> drivers/net/dpaa2/dpaa2_ethdev.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c
> b/drivers/net/dpaa2/dpaa2_ethdev.c
> index a1f19194d..8edd4b3cd 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -524,8 +524,10 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
> return ret;
> }
>
> +#if !defined(RTE_LIBRTE_IEEE1588)
> if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
> - dpaa2_enable_ts = true;
> +#endif
> + dpaa2_enable_ts = true;
I don't understand this patch at all.
There is no comment in the code, and the commit log is not very explanatory.
You are lucky Ferruh is less strict than me.
I remember I already said I was bored of the lack of explanations in NXP drivers.
[Hemant] We will improve next time.
The patch description says: "This patch enables the packet timestamping conditionally when Rx offload is enabled for timestamp."
It could be improved to: "Enable timestamping by default when IEEE1588 is enabled, irrespective of the offload flag."
end of thread, other threads:[~2020-07-13 3:47 UTC | newest]
Thread overview: 83+ messages
2020-05-27 13:22 [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 01/37] bus/fslmc: fix getting the FD error Hemant Agrawal
2020-05-27 18:07 ` Akhil Goyal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 02/37] net/dpaa: fix fd offset data type Hemant Agrawal
2020-05-27 18:08 ` Akhil Goyal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 03/37] net/dpaa2: enable timestamp for Rx offload case as well Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 04/37] bus/fslmc: combine thread specific variables Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 05/37] bus/fslmc: rework portal allocation to a per thread basis Hemant Agrawal
2020-07-01 7:23 ` Ferruh Yigit
2020-05-27 13:22 ` [dpdk-dev] [PATCH 06/37] bus/fslmc: support handle portal alloc failure Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 07/37] bus/fslmc: support portal migration Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 08/37] bus/fslmc: rename the cinh read functions used for ls1088 Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 09/37] net/dpaa: enable Tx queue taildrop Hemant Agrawal
2020-05-27 13:22 ` [dpdk-dev] [PATCH 10/37] net/dpaa: add 2.5G support Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 11/37] net/dpaa: update process specific device info Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 12/37] drivers: optimize thread local storage for dpaa Hemant Agrawal
2020-05-27 18:13 ` Akhil Goyal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 13/37] bus/dpaa: enable link state interrupt Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 14/37] bus/dpaa: enable set link status Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 15/37] net/dpaa: add support for fmlib in dpdk Hemant Agrawal
2020-06-30 17:00 ` Ferruh Yigit
2020-07-01 4:18 ` Hemant Agrawal
2020-07-01 7:35 ` Ferruh Yigit
2020-05-27 13:23 ` [dpdk-dev] [PATCH 16/37] net/dpaa: add VSP support in FMLIB Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 17/37] net/dpaa: add support for fmcless mode Hemant Agrawal
2020-06-30 17:01 ` Ferruh Yigit
2020-07-01 4:04 ` Hemant Agrawal
2020-07-01 7:37 ` Ferruh Yigit
2020-05-27 13:23 ` [dpdk-dev] [PATCH 18/37] bus/dpaa: add shared MAC support Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 19/37] bus/dpaa: add Virtual Storage Profile port init Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 20/37] net/dpaa: add support for Virtual Storage Profile Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 21/37] net/dpaa: add fmc parser support for VSP Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 22/37] net/dpaa: add RSS update func with FMCless Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 23/37] net/dpaa2: dynamic flow control support Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 24/37] net/dpaa2: key extracts of flow API Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 25/37] net/dpaa2: sanity check for flow extracts Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 26/37] net/dpaa2: free flow rule memory Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 27/37] net/dpaa2: flow QoS or FS table entry indexing Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 28/37] net/dpaa2: define the size of table entry Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 29/37] net/dpaa2: log of flow extracts and rules Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 30/37] net/dpaa2: discrimination between IPv4 and IPv6 Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 31/37] net/dpaa2: distribution size set on multiple TCs Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 32/37] net/dpaa2: index of queue action for flow Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 33/37] net/dpaa2: flow data sanity check Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 34/37] net/dpaa2: flow API QoS setup follows FS setup Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 35/37] net/dpaa2: flow API FS miss action configuration Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 36/37] net/dpaa2: configure per class distribution size Hemant Agrawal
2020-05-27 13:23 ` [dpdk-dev] [PATCH 37/37] net/dpaa2: support raw flow classification Hemant Agrawal
2020-06-30 17:01 ` [dpdk-dev] [PATCH 00/37] NXP DPAAx enhancements Ferruh Yigit
2020-07-01 4:08 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 00/29] " Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 01/29] bus/fslmc: fix getting the FD error Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 02/29] net/dpaa: fix fd offset data type Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 03/29] net/dpaa2: enable timestamp for Rx offload case as well Hemant Agrawal
2020-07-11 13:46 ` Thomas Monjalon
2020-07-13 3:47 ` Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 04/29] bus/fslmc: combine thread specific variables Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 05/29] bus/fslmc: rework portal allocation to a per thread basis Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 06/29] bus/fslmc: support handle portal alloc failure Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 07/29] bus/fslmc: support portal migration Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 08/29] bus/fslmc: rename the cinh read functions used for ls1088 Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 09/29] net/dpaa: enable Tx queue taildrop Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 10/29] net/dpaa: add 2.5G support Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 11/29] net/dpaa: update process specific device info Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 12/29] drivers: optimize thread local storage for dpaa Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 13/29] bus/dpaa: enable link state interrupt Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 14/29] bus/dpaa: enable set link status Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 15/29] net/dpaa2: support dynamic flow control Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 16/29] net/dpaa2: support key extracts of flow API Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 17/29] net/dpaa2: add sanity check for flow extracts Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 18/29] net/dpaa2: free flow rule memory Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 19/29] net/dpaa2: support QoS or FS table entry indexing Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 20/29] net/dpaa2: define the size of table entry Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 21/29] net/dpaa2: add logging of flow extracts and rules Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 22/29] net/dpaa2: support discrimination between IPv4 and IPv6 Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 23/29] net/dpaa2: support distribution size set on multiple TCs Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 24/29] net/dpaa2: support index of queue action for flow Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 25/29] net/dpaa2: add flow data sanity check Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 26/29] net/dpaa2: modify flow API QoS setup to follow FS setup Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 27/29] net/dpaa2: support flow API FS miss action configuration Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 28/29] net/dpaa2: configure per class distribution size Hemant Agrawal
2020-07-07 9:22 ` [dpdk-dev] [PATCH v2 29/29] net/dpaa2: support raw flow classification Hemant Agrawal
2020-07-09 1:54 ` [dpdk-dev] [PATCH v2 00/29] NXP DPAAx enhancements Ferruh Yigit