* [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes
@ 2021-09-27 12:26 nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 01/11] bus/fslmc: updated MC FW to 10.28 nipun.gupta
` (13 more replies)
0 siblings, 14 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 12:26 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Nipun Gupta <nipun.gupta@nxp.com>
This series adds new functionality related to flow redirection,
multiple ordered tx enqueues, generating HW hash key etc.
It also updates the MC firmware version and includes a fix in
dpaxx library.
Gagandeep Singh (1):
common/dpaax: fix paddr to vaddr invalid conversion
Hemant Agrawal (4):
bus/fslmc: updated MC FW to 10.28
bus/fslmc: add qbman debug APIs support
net/dpaa2: add debug print for MTU set for jumbo
net/dpaa2: add function to generate HW hash key
Jun Yang (2):
net/dpaa2: support Tx flow redirection action
net/dpaa2: support multiple Tx queues enqueue for ordered
Nipun Gupta (2):
raw/dpaa2_qdma: use correct params for config and queue setup
raw/dpaa2_qdma: remove checks for lcore ID
Rohit Raj (1):
net/dpaa: add comments to explain driver behaviour
Vanshika Shukla (1):
net/dpaa2: update RSS to support additional distributions
drivers/bus/fslmc/mc/dpdmai.c | 4 +-
drivers/bus/fslmc/mc/fsl_dpdmai.h | 21 +-
drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h | 15 +-
drivers/bus/fslmc/mc/fsl_dpmng.h | 4 +-
drivers/bus/fslmc/mc/fsl_dpopr.h | 7 +-
.../bus/fslmc/qbman/include/fsl_qbman_debug.h | 203 +++++-
drivers/bus/fslmc/qbman/qbman_debug.c | 623 ++++++++++++++++++
drivers/bus/fslmc/qbman/qbman_portal.c | 6 +
drivers/common/dpaax/dpaax_iova_table.h | 8 +-
drivers/event/dpaa2/dpaa2_eventdev.c | 12 +-
drivers/net/dpaa/dpaa_fmc.c | 8 +-
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 70 +-
drivers/net/dpaa2/base/dpaa2_tlu_hash.c | 149 +++++
drivers/net/dpaa2/dpaa2_ethdev.c | 9 +-
drivers/net/dpaa2/dpaa2_ethdev.h | 11 +-
drivers/net/dpaa2/dpaa2_flow.c | 116 +++-
drivers/net/dpaa2/dpaa2_rxtx.c | 142 ++++
drivers/net/dpaa2/mc/dpdmux.c | 43 ++
drivers/net/dpaa2/mc/dpni.c | 48 +-
drivers/net/dpaa2/mc/dprtc.c | 78 ++-
drivers/net/dpaa2/mc/fsl_dpdmux.h | 6 +
drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h | 9 +
drivers/net/dpaa2/mc/fsl_dpkg.h | 6 +-
drivers/net/dpaa2/mc/fsl_dpni.h | 147 ++++-
drivers/net/dpaa2/mc/fsl_dpni_cmd.h | 55 +-
drivers/net/dpaa2/mc/fsl_dprtc.h | 19 +-
drivers/net/dpaa2/mc/fsl_dprtc_cmd.h | 25 +-
drivers/net/dpaa2/meson.build | 1 +
drivers/net/dpaa2/rte_pmd_dpaa2.h | 19 +
drivers/net/dpaa2/version.map | 3 +
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 26 +-
drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h | 8 +-
32 files changed, 1788 insertions(+), 113 deletions(-)
create mode 100644 drivers/net/dpaa2/base/dpaa2_tlu_hash.c
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH 01/11] bus/fslmc: updated MC FW to 10.28
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
@ 2021-09-27 12:26 ` nipun.gupta
2021-10-06 13:28 ` Hemant Agrawal
2021-09-27 12:26 ` [dpdk-dev] [PATCH 02/11] net/dpaa2: support Tx flow redirection action nipun.gupta
` (12 subsequent siblings)
13 siblings, 1 reply; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 12:26 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Updating MC firmware support APIs to be latest. It supports
improved DPDMUX (SRIOV equivalent) for traffic split between
dpnis and additional PTP APIs.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/fslmc/mc/dpdmai.c | 4 +-
drivers/bus/fslmc/mc/fsl_dpdmai.h | 21 ++++-
drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h | 15 ++--
drivers/bus/fslmc/mc/fsl_dpmng.h | 4 +-
drivers/bus/fslmc/mc/fsl_dpopr.h | 7 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 2 +-
drivers/net/dpaa2/mc/dpdmux.c | 43 +++++++++
drivers/net/dpaa2/mc/dpni.c | 48 ++++++----
drivers/net/dpaa2/mc/dprtc.c | 78 +++++++++++++++-
drivers/net/dpaa2/mc/fsl_dpdmux.h | 6 ++
drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h | 9 ++
drivers/net/dpaa2/mc/fsl_dpkg.h | 6 +-
drivers/net/dpaa2/mc/fsl_dpni.h | 124 ++++++++++++++++++++++----
drivers/net/dpaa2/mc/fsl_dpni_cmd.h | 55 +++++++++---
drivers/net/dpaa2/mc/fsl_dprtc.h | 19 +++-
drivers/net/dpaa2/mc/fsl_dprtc_cmd.h | 25 +++++-
16 files changed, 401 insertions(+), 65 deletions(-)
diff --git a/drivers/bus/fslmc/mc/dpdmai.c b/drivers/bus/fslmc/mc/dpdmai.c
index dcb9d516a1..9c2f3bf9d5 100644
--- a/drivers/bus/fslmc/mc/dpdmai.c
+++ b/drivers/bus/fslmc/mc/dpdmai.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*/
#include <fsl_mc_sys.h>
@@ -116,6 +116,7 @@ int dpdmai_create(struct fsl_mc_io *mc_io,
cmd_params->num_queues = cfg->num_queues;
cmd_params->priorities[0] = cfg->priorities[0];
cmd_params->priorities[1] = cfg->priorities[1];
+ cmd_params->options = cpu_to_le32(cfg->adv.options);
/* send command to mc*/
err = mc_send_command(mc_io, &cmd);
@@ -299,6 +300,7 @@ int dpdmai_get_attributes(struct fsl_mc_io *mc_io,
attr->id = le32_to_cpu(rsp_params->id);
attr->num_of_priorities = rsp_params->num_of_priorities;
attr->num_of_queues = rsp_params->num_of_queues;
+ attr->options = le32_to_cpu(rsp_params->options);
return 0;
}
diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai.h b/drivers/bus/fslmc/mc/fsl_dpdmai.h
index 19328c00a0..5af8ed48c0 100644
--- a/drivers/bus/fslmc/mc/fsl_dpdmai.h
+++ b/drivers/bus/fslmc/mc/fsl_dpdmai.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*/
#ifndef __FSL_DPDMAI_H
@@ -36,15 +36,32 @@ int dpdmai_close(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token);
+/* DPDMAI options */
+
+/**
+ * Enable individual Congestion Groups usage per each priority queue
+ * If this option is not enabled then only one CG is used for all priority
+ * queues
+ * If this option is enabled then a separate specific CG is used for each
+ * individual priority queue.
+ * In this case the priority queue must be specified via congestion notification
+ * API
+ */
+#define DPDMAI_OPT_CG_PER_PRIORITY 0x00000001
+
/**
* struct dpdmai_cfg - Structure representing DPDMAI configuration
* @priorities: Priorities for the DMA hardware processing; valid priorities are
* configured with values 1-8; the entry following last valid entry
* should be configured with 0
+ * @options: dpdmai options
*/
struct dpdmai_cfg {
uint8_t num_queues;
uint8_t priorities[DPDMAI_PRIO_NUM];
+ struct {
+ uint32_t options;
+ } adv;
};
int dpdmai_create(struct fsl_mc_io *mc_io,
@@ -81,11 +98,13 @@ int dpdmai_reset(struct fsl_mc_io *mc_io,
* struct dpdmai_attr - Structure representing DPDMAI attributes
* @id: DPDMAI object ID
* @num_of_priorities: number of priorities
+ * @options: dpdmai options
*/
struct dpdmai_attr {
int id;
uint8_t num_of_priorities;
uint8_t num_of_queues;
+ uint32_t options;
};
__rte_internal
diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h b/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
index 7e122de4ef..c8f6b990f8 100644
--- a/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
@@ -1,32 +1,33 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2017-2018, 2020-2021 NXP
*/
-
#ifndef _FSL_DPDMAI_CMD_H
#define _FSL_DPDMAI_CMD_H
/* DPDMAI Version */
#define DPDMAI_VER_MAJOR 3
-#define DPDMAI_VER_MINOR 3
+#define DPDMAI_VER_MINOR 4
/* Command versioning */
#define DPDMAI_CMD_BASE_VERSION 1
#define DPDMAI_CMD_VERSION_2 2
+#define DPDMAI_CMD_VERSION_3 3
#define DPDMAI_CMD_ID_OFFSET 4
#define DPDMAI_CMD(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_BASE_VERSION)
#define DPDMAI_CMD_V2(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_VERSION_2)
+#define DPDMAI_CMD_V3(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_VERSION_3)
/* Command IDs */
#define DPDMAI_CMDID_CLOSE DPDMAI_CMD(0x800)
#define DPDMAI_CMDID_OPEN DPDMAI_CMD(0x80E)
-#define DPDMAI_CMDID_CREATE DPDMAI_CMD_V2(0x90E)
+#define DPDMAI_CMDID_CREATE DPDMAI_CMD_V3(0x90E)
#define DPDMAI_CMDID_DESTROY DPDMAI_CMD(0x98E)
#define DPDMAI_CMDID_GET_API_VERSION DPDMAI_CMD(0xa0E)
#define DPDMAI_CMDID_ENABLE DPDMAI_CMD(0x002)
#define DPDMAI_CMDID_DISABLE DPDMAI_CMD(0x003)
-#define DPDMAI_CMDID_GET_ATTR DPDMAI_CMD_V2(0x004)
+#define DPDMAI_CMDID_GET_ATTR DPDMAI_CMD_V3(0x004)
#define DPDMAI_CMDID_RESET DPDMAI_CMD(0x005)
#define DPDMAI_CMDID_IS_ENABLED DPDMAI_CMD(0x006)
@@ -51,6 +52,8 @@ struct dpdmai_cmd_open {
struct dpdmai_cmd_create {
uint8_t num_queues;
uint8_t priorities[2];
+ uint8_t pad;
+ uint32_t options;
};
struct dpdmai_cmd_destroy {
@@ -69,6 +72,8 @@ struct dpdmai_rsp_get_attr {
uint32_t id;
uint8_t num_of_priorities;
uint8_t num_of_queues;
+ uint16_t pad;
+ uint32_t options;
};
#define DPDMAI_DEST_TYPE_SHIFT 0
diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
index 8764ceaed9..7e9bd96429 100644
--- a/drivers/bus/fslmc/mc/fsl_dpmng.h
+++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2021 NXP
*
*/
#ifndef __FSL_DPMNG_H
@@ -20,7 +20,7 @@ struct fsl_mc_io;
* Management Complex firmware version information
*/
#define MC_VER_MAJOR 10
-#define MC_VER_MINOR 18
+#define MC_VER_MINOR 28
/**
* struct mc_version
diff --git a/drivers/bus/fslmc/mc/fsl_dpopr.h b/drivers/bus/fslmc/mc/fsl_dpopr.h
index fd727e011b..74dd32f783 100644
--- a/drivers/bus/fslmc/mc/fsl_dpopr.h
+++ b/drivers/bus/fslmc/mc/fsl_dpopr.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*
*/
#ifndef __FSL_DPOPR_H_
@@ -22,7 +22,10 @@
* Retire an existing Order Point Record option
*/
#define OPR_OPT_RETIRE 0x2
-
+/**
+ * Assign an existing Order Point Record to a queue
+ */
+#define OPR_OPT_ASSIGN 0x4
/**
* struct opr_cfg - Structure representing OPR configuration
* @oprrws: Order point record (OPR) restoration window size (0 to 5)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..560b79151b 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2273,7 +2273,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
ret = dpni_set_opr(dpni, CMD_PRI_LOW, eth_priv->token,
dpaa2_ethq->tc_index, flow_id,
- OPR_OPT_CREATE, &ocfg);
+ OPR_OPT_CREATE, &ocfg, 0);
if (ret) {
DPAA2_PMD_ERR("Error setting opr: ret: %d\n", ret);
return ret;
diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
index 93912ef9d3..edbb01b45b 100644
--- a/drivers/net/dpaa2/mc/dpdmux.c
+++ b/drivers/net/dpaa2/mc/dpdmux.c
@@ -491,6 +491,49 @@ int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
+/**
+ * dpdmux_get_max_frame_length() - Return the maximum frame length for DPDMUX
+ * interface
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPDMUX object
+ * @if_id: Interface id
+ * @max_frame_length: maximum frame length
+ *
+ * When dpdmux object is in VEPA mode this function will ignore if_id parameter
+ * and will return maximum frame length for uplink interface (if_id==0).
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dpdmux_get_max_frame_length(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint16_t if_id,
+ uint16_t *max_frame_length)
+{
+ struct mc_command cmd = { 0 };
+ struct dpdmux_cmd_get_max_frame_len *cmd_params;
+ struct dpdmux_rsp_get_max_frame_len *rsp_params;
+ int err = 0;
+
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_GET_MAX_FRAME_LENGTH,
+ cmd_flags,
+ token);
+ cmd_params = (struct dpdmux_cmd_get_max_frame_len *)cmd.params;
+ cmd_params->if_id = cpu_to_le16(if_id);
+
+ err = mc_send_command(mc_io, &cmd);
+ if (err)
+ return err;
+
+ rsp_params = (struct dpdmux_rsp_get_max_frame_len *)cmd.params;
+ *max_frame_length = le16_to_cpu(rsp_params->max_len);
+
+ /* send command to mc*/
+ return err;
+}
+
/**
* dpdmux_ul_reset_counters() - Function resets the uplink counter
* @mc_io: Pointer to MC portal's I/O object
diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c
index b254931386..60048d6c43 100644
--- a/drivers/net/dpaa2/mc/dpni.c
+++ b/drivers/net/dpaa2/mc/dpni.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
*
*/
#include <fsl_mc_sys.h>
@@ -126,6 +126,8 @@ int dpni_create(struct fsl_mc_io *mc_io,
cmd_params->qos_entries = cfg->qos_entries;
cmd_params->fs_entries = cpu_to_le16(cfg->fs_entries);
cmd_params->num_cgs = cfg->num_cgs;
+ cmd_params->num_opr = cfg->num_opr;
+ cmd_params->dist_key_size = cfg->dist_key_size;
/* send command to mc*/
err = mc_send_command(mc_io, &cmd);
@@ -1829,6 +1831,7 @@ int dpni_add_fs_entry(struct fsl_mc_io *mc_io,
cmd_params->options = cpu_to_le16(action->options);
cmd_params->flow_id = cpu_to_le16(action->flow_id);
cmd_params->flc = cpu_to_le64(action->flc);
+ cmd_params->redir_token = cpu_to_le16(action->redirect_obj_token);
/* send command to mc*/
return mc_send_command(mc_io, &cmd);
@@ -2442,7 +2445,7 @@ int dpni_reset_statistics(struct fsl_mc_io *mc_io,
}
/**
- * dpni_set_taildrop() - Set taildrop per queue or TC
+ * dpni_set_taildrop() - Set taildrop per congestion group
*
* Setting a per-TC taildrop (cg_point = DPNI_CP_GROUP) will reset any current
* congestion notification or early drop (WRED) configuration previously applied
@@ -2451,13 +2454,14 @@ int dpni_reset_statistics(struct fsl_mc_io *mc_io,
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
- * @cg_point: Congestion point, DPNI_CP_QUEUE is only supported in
+ * @cg_point: Congestion group identifier DPNI_CP_QUEUE is only supported in
* combination with DPNI_QUEUE_RX.
* @q_type: Queue type, can be DPNI_QUEUE_RX or DPNI_QUEUE_TX.
* @tc: Traffic class to apply this taildrop to
- * @q_index: Index of the queue if the DPNI supports multiple queues for
+ * @index/cgid: Index of the queue if the DPNI supports multiple queues for
* traffic distribution.
- * Ignored if CONGESTION_POINT is not DPNI_CP_QUEUE.
+ * If CONGESTION_POINT is DPNI_CP_CONGESTION_GROUP then it
+ * represent the cgid of the congestion point
* @taildrop: Taildrop structure
*
* Return: '0' on Success; Error code otherwise.
@@ -2577,7 +2581,8 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
uint8_t options,
- struct opr_cfg *cfg)
+ struct opr_cfg *cfg,
+ uint8_t opr_id)
{
struct dpni_cmd_set_opr *cmd_params;
struct mc_command cmd = { 0 };
@@ -2591,6 +2596,7 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
cmd_params->tc_id = tc;
cmd_params->index = index;
cmd_params->options = options;
+ cmd_params->opr_id = opr_id;
cmd_params->oloe = cfg->oloe;
cmd_params->oeane = cfg->oeane;
cmd_params->olws = cfg->olws;
@@ -2621,7 +2627,9 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
struct opr_cfg *cfg,
- struct opr_qry *qry)
+ struct opr_qry *qry,
+ uint8_t flags,
+ uint8_t opr_id)
{
struct dpni_rsp_get_opr *rsp_params;
struct dpni_cmd_get_opr *cmd_params;
@@ -2635,6 +2643,8 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
cmd_params = (struct dpni_cmd_get_opr *)cmd.params;
cmd_params->index = index;
cmd_params->tc_id = tc;
+ cmd_params->flags = flags;
+ cmd_params->opr_id = opr_id;
/* send command to mc*/
err = mc_send_command(mc_io, &cmd);
@@ -2673,7 +2683,7 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
* If the FS is already enabled with a previous call the classification
* key will be changed but all the table rules are kept. If the
* existing rules do not match the key the results will not be
- * predictable. It is the user responsibility to keep key integrity
+ * predictable. It is the user responsibility to keep keyintegrity.
* If cfg.enable is set to 1 the command will create a flow steering table
* and will classify packets according to this table. The packets
* that miss all the table rules will be classified according to
@@ -2695,7 +2705,7 @@ int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
cmd_params = (struct dpni_cmd_set_rx_fs_dist *)cmd.params;
cmd_params->dist_size = cpu_to_le16(cfg->dist_size);
dpni_set_field(cmd_params->enable, RX_FS_DIST_ENABLE, cfg->enable);
- cmd_params->tc = cfg->tc;
+ cmd_params->tc = cfg->tc;
cmd_params->miss_flow_id = cpu_to_le16(cfg->fs_miss_flow_id);
cmd_params->key_cfg_iova = cpu_to_le64(cfg->key_cfg_iova);
@@ -2710,9 +2720,9 @@ int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
* @token: Token of DPNI object
* @cfg: Distribution configuration
* If cfg.enable is set to 1 the packets will be classified using a hash
- * function based on the key received in cfg.key_cfg_iova parameter
+ * function based on the key received in cfg.key_cfg_iova parameter.
* If cfg.enable is set to 0 the packets will be sent to the queue configured in
- * dpni_set_rx_dist_default_queue() call
+ * dpni_set_rx_dist_default_queue() call
*/
int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
uint16_t token, const struct dpni_rx_dist_cfg *cfg)
@@ -2735,9 +2745,9 @@ int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
}
/**
- * dpni_add_custom_tpid() - Configures a distinct Ethertype value
- * (or TPID value) to indicate VLAN tag in addition to the common
- * TPID values 0x8100 and 0x88A8
+ * dpni_add_custom_tpid() - Configures a distinct Ethertype value (or TPID
+ * value) to indicate VLAN tag in adition to the common TPID values
+ * 0x81000 and 0x88A8
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
@@ -2745,8 +2755,8 @@ int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
*
* Only two custom values are accepted. If the function is called for the third
* time it will return error.
- * To replace an existing value use dpni_remove_custom_tpid() to remove
- * a previous TPID and after that use again the function.
+ * To replace an existing value use dpni_remove_custom_tpid() to remove a
+ * previous TPID and after that use again the function.
*
* Return: '0' on Success; Error code otherwise.
*/
@@ -2769,7 +2779,7 @@ int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
/**
* dpni_remove_custom_tpid() - Removes a distinct Ethertype value added
- * previously with dpni_add_custom_tpid()
+ * previously with dpni_add_custom_tpid()
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
@@ -2798,8 +2808,8 @@ int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
}
/**
- * dpni_get_custom_tpid() - Returns custom TPID (vlan tags) values configured
- * to detect 802.1q frames
+ * dpni_get_custom_tpid() - Returns custom TPID (vlan tags) values configured to
+ * detect 802.1q frames
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
diff --git a/drivers/net/dpaa2/mc/dprtc.c b/drivers/net/dpaa2/mc/dprtc.c
index 42ac89150e..36e62eb0c3 100644
--- a/drivers/net/dpaa2/mc/dprtc.c
+++ b/drivers/net/dpaa2/mc/dprtc.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
- * Copyright 2019 NXP
+ * Copyright 2019-2021 NXP
*/
#include <fsl_mc_sys.h>
#include <fsl_mc_cmd.h>
@@ -521,3 +521,79 @@ int dprtc_get_api_version(struct fsl_mc_io *mc_io,
return 0;
}
+
+/**
+ * dprtc_get_ext_trigger_timestamp - Retrieve the Ext trigger timestamp status
+ * (timestamp + flag for unread timestamp in FIFO).
+ *
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPRTC object
+ * @id: External trigger id.
+ * @status: Returned object's external trigger status
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dprtc_get_ext_trigger_timestamp(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ struct dprtc_ext_trigger_status *status)
+{
+ struct dprtc_rsp_ext_trigger_timestamp *rsp_params;
+ struct dprtc_cmd_ext_trigger_timestamp *cmd_params;
+ struct mc_command cmd = { 0 };
+ int err;
+
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_EXT_TRIGGER_TIMESTAMP,
+ cmd_flags,
+ token);
+
+ cmd_params = (struct dprtc_cmd_ext_trigger_timestamp *)cmd.params;
+ cmd_params->id = id;
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ if (err)
+ return err;
+
+ /* retrieve response parameters */
+ rsp_params = (struct dprtc_rsp_ext_trigger_timestamp *)cmd.params;
+ status->timestamp = le64_to_cpu(rsp_params->timestamp);
+ status->unread_valid_timestamp = rsp_params->unread_valid_timestamp;
+
+ return 0;
+}
+
+/**
+ * dprtc_set_fiper_loopback() - Set the fiper pulse as source of interrupt for
+ * External Trigger stamps
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPRTC object
+ * @id: External trigger id.
+ * @fiper_as_input: Bit used to control interrupt signal source:
+ * 0 = Normal operation, interrupt external source
+ * 1 = Fiper pulse is looped back into Trigger input
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dprtc_set_fiper_loopback(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ uint8_t fiper_as_input)
+{
+ struct dprtc_ext_trigger_cfg *cmd_params;
+ struct mc_command cmd = { 0 };
+
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_FIPER_LOOPBACK,
+ cmd_flags,
+ token);
+
+ cmd_params = (struct dprtc_ext_trigger_cfg *)cmd.params;
+ cmd_params->id = id;
+ cmd_params->fiper_as_input = fiper_as_input;
+
+ return mc_send_command(mc_io, &cmd);
+}
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index f4f9598a29..b01a98eb59 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -196,6 +196,12 @@ int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
uint16_t token,
uint16_t max_frame_length);
+int dpdmux_get_max_frame_length(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint16_t if_id,
+ uint16_t *max_frame_length);
+
/**
* enum dpdmux_counter_type - Counter types
* @DPDMUX_CNT_ING_FRAME: Counts ingress frames
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
index 2ab4d75dfb..f8a1b5b1ae 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
@@ -39,6 +39,7 @@
#define DPDMUX_CMDID_RESET DPDMUX_CMD(0x005)
#define DPDMUX_CMDID_IS_ENABLED DPDMUX_CMD(0x006)
#define DPDMUX_CMDID_SET_MAX_FRAME_LENGTH DPDMUX_CMD(0x0a1)
+#define DPDMUX_CMDID_GET_MAX_FRAME_LENGTH DPDMUX_CMD(0x0a2)
#define DPDMUX_CMDID_UL_RESET_COUNTERS DPDMUX_CMD(0x0a3)
@@ -124,6 +125,14 @@ struct dpdmux_cmd_set_max_frame_length {
uint16_t max_frame_length;
};
+struct dpdmux_cmd_get_max_frame_len {
+ uint16_t if_id;
+};
+
+struct dpdmux_rsp_get_max_frame_len {
+ uint16_t max_len;
+};
+
#define DPDMUX_ACCEPTED_FRAMES_TYPE_SHIFT 0
#define DPDMUX_ACCEPTED_FRAMES_TYPE_SIZE 4
#define DPDMUX_UNACCEPTED_FRAMES_ACTION_SHIFT 4
diff --git a/drivers/net/dpaa2/mc/fsl_dpkg.h b/drivers/net/dpaa2/mc/fsl_dpkg.h
index 02fe8d50e7..70f2339ea5 100644
--- a/drivers/net/dpaa2/mc/fsl_dpkg.h
+++ b/drivers/net/dpaa2/mc/fsl_dpkg.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
* Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2021 NXP
*
*/
#ifndef __FSL_DPKG_H_
@@ -21,7 +21,7 @@
/**
* Number of extractions per key profile
*/
-#define DPKG_MAX_NUM_OF_EXTRACTS 10
+#define DPKG_MAX_NUM_OF_EXTRACTS 20
/**
* enum dpkg_extract_from_hdr_type - Selecting extraction by header types
@@ -177,7 +177,7 @@ struct dpni_ext_set_rx_tc_dist {
uint8_t num_extracts;
uint8_t pad[7];
/* words 1..25 */
- struct dpni_dist_extract extracts[10];
+ struct dpni_dist_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
};
int dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index df42746c9a..34c6b20033 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
*
*/
#ifndef __FSL_DPNI_H
@@ -19,6 +19,11 @@ struct fsl_mc_io;
/** General DPNI macros */
+/**
+ * Maximum size of a key
+ */
+#define DPNI_MAX_KEY_SIZE 56
+
/**
* Maximum number of traffic classes
*/
@@ -95,8 +100,18 @@ struct fsl_mc_io;
* Define a custom number of congestion groups
*/
#define DPNI_OPT_CUSTOM_CG 0x000200
-
-
+/**
+ * Define a custom number of order point records
+ */
+#define DPNI_OPT_CUSTOM_OPR 0x000400
+/**
+ * Hash key is shared between all traffic classes
+ */
+#define DPNI_OPT_SHARED_HASH_KEY 0x000800
+/**
+ * Flow steering table is shared between all traffic classes
+ */
+#define DPNI_OPT_SHARED_FS 0x001000
/**
* Software sequence maximum layout size
*/
@@ -183,6 +198,8 @@ struct dpni_cfg {
uint8_t num_rx_tcs;
uint8_t qos_entries;
uint8_t num_cgs;
+ uint16_t num_opr;
+ uint8_t dist_key_size;
};
int dpni_create(struct fsl_mc_io *mc_io,
@@ -366,28 +383,45 @@ int dpni_get_attributes(struct fsl_mc_io *mc_io,
/**
* Extract out of frame header error
*/
-#define DPNI_ERROR_EOFHE 0x00020000
+#define DPNI_ERROR_MS 0x40000000
+#define DPNI_ERROR_PTP 0x08000000
+/* Ethernet multicast frame */
+#define DPNI_ERROR_MC 0x04000000
+/* Ethernet broadcast frame */
+#define DPNI_ERROR_BC 0x02000000
+#define DPNI_ERROR_KSE 0x00040000
+#define DPNI_ERROR_EOFHE 0x00020000
+#define DPNI_ERROR_MNLE 0x00010000
+#define DPNI_ERROR_TIDE 0x00008000
+#define DPNI_ERROR_PIEE 0x00004000
/**
* Frame length error
*/
-#define DPNI_ERROR_FLE 0x00002000
+#define DPNI_ERROR_FLE 0x00002000
/**
* Frame physical error
*/
-#define DPNI_ERROR_FPE 0x00001000
+#define DPNI_ERROR_FPE 0x00001000
+#define DPNI_ERROR_PTE 0x00000080
+#define DPNI_ERROR_ISP 0x00000040
/**
* Parsing header error
*/
-#define DPNI_ERROR_PHE 0x00000020
+#define DPNI_ERROR_PHE 0x00000020
+
+#define DPNI_ERROR_BLE 0x00000010
/**
* Parser L3 checksum error
*/
-#define DPNI_ERROR_L3CE 0x00000004
+#define DPNI_ERROR_L3CV 0x00000008
+
+#define DPNI_ERROR_L3CE 0x00000004
/**
- * Parser L3 checksum error
+ * Parser L4 checksum error
*/
-#define DPNI_ERROR_L4CE 0x00000001
+#define DPNI_ERROR_L4CV 0x00000002
+#define DPNI_ERROR_L4CE 0x00000001
/**
* enum dpni_error_action - Defines DPNI behavior for errors
* @DPNI_ERROR_ACTION_DISCARD: Discard the frame
@@ -455,6 +489,10 @@ int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
* Select to modify the sw-opaque value setting
*/
#define DPNI_BUF_LAYOUT_OPT_SW_OPAQUE 0x00000080
+/**
+ * Select to disable Scatter Gather and use single buffer
+ */
+#define DPNI_BUF_LAYOUT_OPT_NO_SG 0x00000100
/**
* struct dpni_buffer_layout - Structure representing DPNI buffer layout
@@ -733,7 +771,7 @@ int dpni_get_link_state(struct fsl_mc_io *mc_io,
/**
* struct dpni_tx_shaping - Structure representing DPNI tx shaping configuration
- * @rate_limit: Rate in Mbps
+ * @rate_limit: Rate in Mbits/s
* @max_burst_size: Burst size in bytes (up to 64KB)
*/
struct dpni_tx_shaping_cfg {
@@ -798,6 +836,11 @@ int dpni_get_primary_mac_addr(struct fsl_mc_io *mc_io,
uint16_t token,
uint8_t mac_addr[6]);
+/**
+ * Set mac addr queue action
+ */
+#define DPNI_MAC_SET_QUEUE_ACTION 1
+
int dpni_add_mac_addr(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
@@ -1464,6 +1507,7 @@ int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
struct dpni_fs_action_cfg {
uint64_t flc;
uint16_t flow_id;
+ uint16_t redirect_obj_token;
uint16_t options;
};
@@ -1595,7 +1639,8 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
uint8_t options,
- struct opr_cfg *cfg);
+ struct opr_cfg *cfg,
+ uint8_t opr_id);
int dpni_get_opr(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
@@ -1603,7 +1648,9 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
struct opr_cfg *cfg,
- struct opr_qry *qry);
+ struct opr_qry *qry,
+ uint8_t flags,
+ uint8_t opr_id);
/**
* When used for queue_idx in function dpni_set_rx_dist_default_queue will
@@ -1779,14 +1826,57 @@ int dpni_get_sw_sequence_layout(struct fsl_mc_io *mc_io,
/**
* dpni_extract_sw_sequence_layout() - extract the software sequence layout
- * @layout: software sequence layout
- * @sw_sequence_layout_buf: Zeroed 264 bytes of memory before mapping it
- * to DMA
+ * @layout: software sequence layout
+ * @sw_sequence_layout_buf:Zeroed 264 bytes of memory before mapping it to DMA
*
* This function has to be called after dpni_get_sw_sequence_layout
- *
*/
void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
const uint8_t *sw_sequence_layout_buf);
+/**
+ * struct dpni_ptp_cfg - configure single step PTP (IEEE 1588)
+ * @en: enable single step PTP. When enabled the PTPv1 functionality will
+ * not work. If the field is zero, offset and ch_update parameters
+ * will be ignored
+ * @offset: start offset from the beginning of the frame where timestamp
+ * field is found. The offset must respect all MAC headers, VLAN
+ * tags and other protocol headers
+ * @ch_update: when set UDP checksum will be updated inside packet
+ * @peer_delay: For peer-to-peer transparent clocks add this value to the
+ * correction field in addition to the transient time update. The
+ * value expresses nanoseconds.
+ */
+struct dpni_single_step_cfg {
+ uint8_t en;
+ uint8_t ch_update;
+ uint16_t offset;
+ uint32_t peer_delay;
+};
+
+int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, struct dpni_single_step_cfg *ptp_cfg);
+
+int dpni_get_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, struct dpni_single_step_cfg *ptp_cfg);
+
+/**
+ * loopback_en field is valid when calling function dpni_set_port_cfg
+ */
+#define DPNI_PORT_CFG_LOOPBACK 0x01
+
+/**
+ * struct dpni_port_cfg - custom configuration for dpni physical port
+ * @loopback_en: port loopback enabled
+ */
+struct dpni_port_cfg {
+ int loopback_en;
+};
+
+int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, uint32_t flags, struct dpni_port_cfg *port_cfg);
+
+int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, struct dpni_port_cfg *port_cfg);
+
#endif /* __FSL_DPNI_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
index c40090b8fe..6fbd93bb38 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
*
*/
#ifndef _FSL_DPNI_CMD_H
@@ -9,21 +9,25 @@
/* DPNI Version */
#define DPNI_VER_MAJOR 7
-#define DPNI_VER_MINOR 13
+#define DPNI_VER_MINOR 17
#define DPNI_CMD_BASE_VERSION 1
#define DPNI_CMD_VERSION_2 2
#define DPNI_CMD_VERSION_3 3
+#define DPNI_CMD_VERSION_4 4
+#define DPNI_CMD_VERSION_5 5
#define DPNI_CMD_ID_OFFSET 4
#define DPNI_CMD(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_BASE_VERSION)
#define DPNI_CMD_V2(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_2)
#define DPNI_CMD_V3(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_3)
+#define DPNI_CMD_V4(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_4)
+#define DPNI_CMD_V5(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_5)
/* Command IDs */
#define DPNI_CMDID_OPEN DPNI_CMD(0x801)
#define DPNI_CMDID_CLOSE DPNI_CMD(0x800)
-#define DPNI_CMDID_CREATE DPNI_CMD_V3(0x901)
+#define DPNI_CMDID_CREATE DPNI_CMD_V5(0x901)
#define DPNI_CMDID_DESTROY DPNI_CMD(0x981)
#define DPNI_CMDID_GET_API_VERSION DPNI_CMD(0xa01)
@@ -67,7 +71,7 @@
#define DPNI_CMDID_REMOVE_VLAN_ID DPNI_CMD(0x232)
#define DPNI_CMDID_CLR_VLAN_FILTERS DPNI_CMD(0x233)
-#define DPNI_CMDID_SET_RX_TC_DIST DPNI_CMD_V3(0x235)
+#define DPNI_CMDID_SET_RX_TC_DIST DPNI_CMD_V4(0x235)
#define DPNI_CMDID_SET_RX_TC_POLICING DPNI_CMD(0x23E)
@@ -75,7 +79,7 @@
#define DPNI_CMDID_ADD_QOS_ENT DPNI_CMD_V2(0x241)
#define DPNI_CMDID_REMOVE_QOS_ENT DPNI_CMD(0x242)
#define DPNI_CMDID_CLR_QOS_TBL DPNI_CMD(0x243)
-#define DPNI_CMDID_ADD_FS_ENT DPNI_CMD(0x244)
+#define DPNI_CMDID_ADD_FS_ENT DPNI_CMD_V2(0x244)
#define DPNI_CMDID_REMOVE_FS_ENT DPNI_CMD(0x245)
#define DPNI_CMDID_CLR_FS_ENT DPNI_CMD(0x246)
@@ -140,7 +144,9 @@ struct dpni_cmd_create {
uint16_t fs_entries;
uint8_t num_rx_tcs;
uint8_t pad4;
- uint8_t num_cgs;
+ uint8_t num_cgs;
+ uint16_t num_opr;
+ uint8_t dist_key_size;
};
struct dpni_cmd_destroy {
@@ -411,8 +417,6 @@ struct dpni_rsp_get_port_mac_addr {
uint8_t mac_addr[6];
};
-#define DPNI_MAC_SET_QUEUE_ACTION 1
-
struct dpni_cmd_add_mac_addr {
uint8_t flags;
uint8_t pad;
@@ -594,6 +598,7 @@ struct dpni_cmd_add_fs_entry {
uint64_t key_iova;
uint64_t mask_iova;
uint64_t flc;
+ uint16_t redir_token;
};
struct dpni_cmd_remove_fs_entry {
@@ -779,7 +784,7 @@ struct dpni_rsp_get_congestion_notification {
};
struct dpni_cmd_set_opr {
- uint8_t pad0;
+ uint8_t opr_id;
uint8_t tc_id;
uint8_t index;
uint8_t options;
@@ -792,9 +797,10 @@ struct dpni_cmd_set_opr {
};
struct dpni_cmd_get_opr {
- uint8_t pad;
+ uint8_t flags;
uint8_t tc_id;
uint8_t index;
+ uint8_t opr_id;
};
#define DPNI_RIP_SHIFT 0
@@ -911,5 +917,34 @@ struct dpni_sw_sequence_layout_entry {
uint16_t pad;
};
+#define DPNI_PTP_ENABLE_SHIFT 0
+#define DPNI_PTP_ENABLE_SIZE 1
+#define DPNI_PTP_CH_UPDATE_SHIFT 1
+#define DPNI_PTP_CH_UPDATE_SIZE 1
+struct dpni_cmd_single_step_cfg {
+ uint16_t flags;
+ uint16_t offset;
+ uint32_t peer_delay;
+};
+
+struct dpni_rsp_single_step_cfg {
+ uint16_t flags;
+ uint16_t offset;
+ uint32_t peer_delay;
+};
+
+#define DPNI_PORT_LOOPBACK_EN_SHIFT 0
+#define DPNI_PORT_LOOPBACK_EN_SIZE 1
+
+struct dpni_cmd_set_port_cfg {
+ uint32_t flags;
+ uint32_t bit_params;
+};
+
+struct dpni_rsp_get_port_cfg {
+ uint32_t flags;
+ uint32_t bit_params;
+};
+
#pragma pack(pop)
#endif /* _FSL_DPNI_CMD_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dprtc.h b/drivers/net/dpaa2/mc/fsl_dprtc.h
index 49edb5a050..84ab158444 100644
--- a/drivers/net/dpaa2/mc/fsl_dprtc.h
+++ b/drivers/net/dpaa2/mc/fsl_dprtc.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
- * Copyright 2019 NXP
+ * Copyright 2019-2021 NXP
*/
#ifndef __FSL_DPRTC_H
#define __FSL_DPRTC_H
@@ -86,6 +86,23 @@ int dprtc_set_alarm(struct fsl_mc_io *mc_io,
uint16_t token,
uint64_t time);
+struct dprtc_ext_trigger_status {
+ uint64_t timestamp;
+ uint8_t unread_valid_timestamp;
+};
+
+int dprtc_get_ext_trigger_timestamp(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ struct dprtc_ext_trigger_status *status);
+
+int dprtc_set_fiper_loopback(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ uint8_t fiper_as_input);
+
/**
* struct dprtc_attr - Structure representing DPRTC attributes
* @id: DPRTC object ID
diff --git a/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h b/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
index eca12ff5ee..61aaa4daab 100644
--- a/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
- * Copyright 2019 NXP
+ * Copyright 2019-2021 NXP
*/
#include <fsl_mc_sys.h>
#ifndef _FSL_DPRTC_CMD_H
@@ -7,13 +7,15 @@
/* DPRTC Version */
#define DPRTC_VER_MAJOR 2
-#define DPRTC_VER_MINOR 1
+#define DPRTC_VER_MINOR 3
/* Command versioning */
#define DPRTC_CMD_BASE_VERSION 1
+#define DPRTC_CMD_VERSION_2 2
#define DPRTC_CMD_ID_OFFSET 4
#define DPRTC_CMD(id) (((id) << DPRTC_CMD_ID_OFFSET) | DPRTC_CMD_BASE_VERSION)
+#define DPRTC_CMD_V2(id) (((id) << DPRTC_CMD_ID_OFFSET) | DPRTC_CMD_VERSION_2)
/* Command IDs */
#define DPRTC_CMDID_CLOSE DPRTC_CMD(0x800)
@@ -39,6 +41,7 @@
#define DPRTC_CMDID_SET_EXT_TRIGGER DPRTC_CMD(0x1d8)
#define DPRTC_CMDID_CLEAR_EXT_TRIGGER DPRTC_CMD(0x1d9)
#define DPRTC_CMDID_GET_EXT_TRIGGER_TIMESTAMP DPRTC_CMD(0x1dA)
+#define DPRTC_CMDID_SET_FIPER_LOOPBACK DPRTC_CMD(0x1dB)
/* Macros for accessing command fields smaller than 1byte */
#define DPRTC_MASK(field) \
@@ -87,5 +90,23 @@ struct dprtc_rsp_get_api_version {
uint16_t major;
uint16_t minor;
};
+
+struct dprtc_cmd_ext_trigger_timestamp {
+ uint32_t pad;
+ uint8_t id;
+};
+
+struct dprtc_rsp_ext_trigger_timestamp {
+ uint8_t unread_valid_timestamp;
+ uint8_t pad1;
+ uint16_t pad2;
+ uint32_t pad3;
+ uint64_t timestamp;
+};
+
+struct dprtc_ext_trigger_cfg {
+ uint8_t id;
+ uint8_t fiper_as_input;
+};
#pragma pack(pop)
#endif /* _FSL_DPRTC_CMD_H */
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH 02/11] net/dpaa2: support Tx flow redirection action
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 01/11] bus/fslmc: updated MC FW to 10.28 nipun.gupta
@ 2021-09-27 12:26 ` nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 03/11] bus/fslmc: add qbman debug APIs support nipun.gupta
` (11 subsequent siblings)
13 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 12:26 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
TX redirection support by flow action RTE_FLOW_ACTION_TYPE_PHY_PORT
and RTE_FLOW_ACTION_TYPE_PORT_ID
This action is executed by HW to forward packets between ports.
If the ingress packets match the rule, the packets are switched
without software involved and perf is improved as well.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 5 ++
drivers/net/dpaa2/dpaa2_ethdev.h | 1 +
drivers/net/dpaa2/dpaa2_flow.c | 116 +++++++++++++++++++++++++++----
drivers/net/dpaa2/mc/fsl_dpni.h | 23 ++++++
4 files changed, 132 insertions(+), 13 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 560b79151b..9cf55c0f0b 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2822,6 +2822,11 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
return ret;
}
+int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
+{
+ return dev->device->driver == &rte_dpaa2_pmd.driver;
+}
+
static int
rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
struct rte_dpaa2_device *dpaa2_dev)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index b9c729f6cd..3f34d7ecff 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -240,6 +240,7 @@ uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci);
void dpaa2_flow_clean(struct rte_eth_dev *dev);
uint16_t dpaa2_dev_tx_conf(void *queue) __rte_unused;
+int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev);
int dpaa2_timesync_enable(struct rte_eth_dev *dev);
int dpaa2_timesync_disable(struct rte_eth_dev *dev);
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index bfe17c350a..5de886ec5e 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
*/
#include <sys/queue.h>
@@ -30,10 +30,10 @@
int mc_l4_port_identification;
static char *dpaa2_flow_control_log;
-static int dpaa2_flow_miss_flow_id =
+static uint16_t dpaa2_flow_miss_flow_id =
DPNI_FS_MISS_DROP;
-#define FIXED_ENTRY_SIZE 54
+#define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
enum flow_rule_ipaddr_type {
FLOW_NONE_IPADDR,
@@ -83,9 +83,18 @@ static const
enum rte_flow_action_type dpaa2_supported_action_type[] = {
RTE_FLOW_ACTION_TYPE_END,
RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_PHY_PORT,
+ RTE_FLOW_ACTION_TYPE_PORT_ID,
RTE_FLOW_ACTION_TYPE_RSS
};
+static const
+enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
+ RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_PHY_PORT,
+ RTE_FLOW_ACTION_TYPE_PORT_ID
+};
+
/* Max of enum rte_flow_item_type + 1, for both IPv4 and IPv6*/
#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1)
@@ -2937,6 +2946,19 @@ dpaa2_configure_flow_raw(struct rte_flow *flow,
return 0;
}
+static inline int dpaa2_fs_action_supported(
+ enum rte_flow_action_type action)
+{
+ int i;
+
+ for (i = 0; i < (int)(sizeof(dpaa2_supported_fs_action_type) /
+ sizeof(enum rte_flow_action_type)); i++) {
+ if (action == dpaa2_supported_fs_action_type[i])
+ return 1;
+ }
+
+ return 0;
+}
/* The existing QoS/FS entry with IP address(es)
* needs update after
* new extract(s) are inserted before IP
@@ -3115,7 +3137,7 @@ dpaa2_flow_entry_update(
}
}
- if (curr->action != RTE_FLOW_ACTION_TYPE_QUEUE) {
+ if (!dpaa2_fs_action_supported(curr->action)) {
curr = LIST_NEXT(curr, next);
continue;
}
@@ -3253,6 +3275,43 @@ dpaa2_flow_verify_attr(
return 0;
}
+static inline struct rte_eth_dev *
+dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
+ const struct rte_flow_action *action)
+{
+ const struct rte_flow_action_phy_port *phy_port;
+ const struct rte_flow_action_port_id *port_id;
+ int idx = -1;
+ struct rte_eth_dev *dest_dev;
+
+ if (action->type == RTE_FLOW_ACTION_TYPE_PHY_PORT) {
+ phy_port = (const struct rte_flow_action_phy_port *)
+ action->conf;
+ if (!phy_port->original)
+ idx = phy_port->index;
+ } else if (action->type == RTE_FLOW_ACTION_TYPE_PORT_ID) {
+ port_id = (const struct rte_flow_action_port_id *)
+ action->conf;
+ if (!port_id->original)
+ idx = port_id->id;
+ } else {
+ return NULL;
+ }
+
+ if (idx >= 0) {
+ if (!rte_eth_dev_is_valid_port(idx))
+ return NULL;
+ dest_dev = &rte_eth_devices[idx];
+ } else {
+ dest_dev = priv->eth_dev;
+ }
+
+ if (!dpaa2_dev_is_dpaa2(dest_dev))
+ return NULL;
+
+ return dest_dev;
+}
+
static inline int
dpaa2_flow_verify_action(
struct dpaa2_dev_priv *priv,
@@ -3278,6 +3337,14 @@ dpaa2_flow_verify_action(
return -1;
}
break;
+ case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
+ if (!dpaa2_flow_redirect_dev(priv, &actions[j])) {
+ DPAA2_PMD_ERR(
+ "Invalid port id of action");
+ return -ENOTSUP;
+ }
+ break;
case RTE_FLOW_ACTION_TYPE_RSS:
rss_conf = (const struct rte_flow_action_rss *)
(actions[j].conf);
@@ -3330,11 +3397,13 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
struct dpni_qos_tbl_cfg qos_cfg;
struct dpni_fs_action_cfg action;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
- struct dpaa2_queue *rxq;
+ struct dpaa2_queue *dest_q;
struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
size_t param;
struct rte_flow *curr = LIST_FIRST(&priv->flows);
uint16_t qos_index;
+ struct rte_eth_dev *dest_dev;
+ struct dpaa2_dev_priv *dest_priv;
ret = dpaa2_flow_verify_attr(priv, attr);
if (ret)
@@ -3446,12 +3515,32 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
while (!end_of_list) {
switch (actions[j].type) {
case RTE_FLOW_ACTION_TYPE_QUEUE:
- dest_queue =
- (const struct rte_flow_action_queue *)(actions[j].conf);
- rxq = priv->rx_vq[dest_queue->index];
- flow->action = RTE_FLOW_ACTION_TYPE_QUEUE;
+ case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
- action.flow_id = rxq->flow_id;
+ flow->action = actions[j].type;
+
+ if (actions[j].type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+ dest_queue = (const struct rte_flow_action_queue *)
+ (actions[j].conf);
+ dest_q = priv->rx_vq[dest_queue->index];
+ action.flow_id = dest_q->flow_id;
+ } else {
+ dest_dev = dpaa2_flow_redirect_dev(priv,
+ &actions[j]);
+ if (!dest_dev) {
+ DPAA2_PMD_ERR(
+ "Invalid destination device to redirect!");
+ return -1;
+ }
+
+ dest_priv = dest_dev->data->dev_private;
+ dest_q = dest_priv->tx_vq[0];
+ action.options =
+ DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
+ action.redirect_obj_token = dest_priv->token;
+ action.flow_id = dest_q->flow_id;
+ }
/* Configure FS table first*/
if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
@@ -3481,8 +3570,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return -1;
}
tc_cfg.enable = true;
- tc_cfg.fs_miss_flow_id =
- dpaa2_flow_miss_flow_id;
+ tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
priv->token, &tc_cfg);
if (ret < 0) {
@@ -3970,7 +4058,7 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
ret = dpaa2_generic_flow_set(flow, dev, attr, pattern,
actions, error);
if (ret < 0) {
- if (error->type > RTE_FLOW_ERROR_TYPE_ACTION)
+ if (error && error->type > RTE_FLOW_ERROR_TYPE_ACTION)
rte_flow_error_set(error, EPERM,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
attr, "unknown");
@@ -4002,6 +4090,8 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
switch (flow->action) {
case RTE_FLOW_ACTION_TYPE_QUEUE:
+ case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
if (priv->num_rx_tc > 1) {
/* Remove entry from QoS table first */
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index 34c6b20033..bf14765e2b 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1496,12 +1496,35 @@ int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
*/
#define DPNI_FS_OPT_SET_STASH_CONTROL 0x4
+/**
+ * Redirect matching traffic to Rx part of another dpni object. The frame
+ * will be classified according to new qos and flow steering rules from
+ * target dpni object.
+ */
+#define DPNI_FS_OPT_REDIRECT_TO_DPNI_RX 0x08
+
+/**
+ * Redirect matching traffic into Tx queue of another dpni object. The
+ * frame will be transmitted directly
+ */
+#define DPNI_FS_OPT_REDIRECT_TO_DPNI_TX 0x10
+
/**
* struct dpni_fs_action_cfg - Action configuration for table look-up
* @flc: FLC value for traffic matching this rule. Please check the Frame
* Descriptor section in the hardware documentation for more information.
* @flow_id: Identifies the Rx queue used for matching traffic. Supported
* values are in range 0 to num_queue-1.
+ * @redirect_obj_token: token that identifies the object where frame is
+ * redirected when this rule is hit. This paraneter is used only when one of the
+ * flags DPNI_FS_OPT_REDIRECT_TO_DPNI_RX or DPNI_FS_OPT_REDIRECT_TO_DPNI_TX is
+ * set.
+ * The token is obtained using dpni_open() API call. The object must stay
+ * open during the operation to ensure the fact that application has access
+ * on it. If the object is destroyed of closed next actions will take place:
+ * - if DPNI_FS_OPT_DISCARD is set the frame will be discarded by current dpni
+ * - if DPNI_FS_OPT_DISCARD is cleared the frame will be enqueued in queue with
+ * index provided in flow_id parameter.
* @options: Any combination of DPNI_FS_OPT_ values.
*/
struct dpni_fs_action_cfg {
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH 03/11] bus/fslmc: add qbman debug APIs support
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 01/11] bus/fslmc: updated MC FW to 10.28 nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 02/11] net/dpaa2: support Tx flow redirection action nipun.gupta
@ 2021-09-27 12:26 ` nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 04/11] net/dpaa2: support multiple Tx queues enqueue for ordered nipun.gupta
` (10 subsequent siblings)
13 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 12:26 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena,
Youri Querry, Roy Pledge, Nipun Gupta
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Add support for debugging qbman FQs
Signed-off-by: Youri Querry <youri.querry_1@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
.../bus/fslmc/qbman/include/fsl_qbman_debug.h | 203 +++++-
drivers/bus/fslmc/qbman/qbman_debug.c | 623 ++++++++++++++++++
drivers/bus/fslmc/qbman/qbman_portal.c | 6 +
3 files changed, 826 insertions(+), 6 deletions(-)
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 54096e8774..fa02bc928e 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -1,13 +1,118 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2020 NXP
+ * Copyright 2018-2020 NXP
*/
#ifndef _FSL_QBMAN_DEBUG_H
#define _FSL_QBMAN_DEBUG_H
-#include <rte_compat.h>
struct qbman_swp;
+/* Buffer pool query commands */
+struct qbman_bp_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[4];
+ uint8_t bdi;
+ uint8_t state;
+ uint32_t fill;
+ uint32_t hdptr;
+ uint16_t swdet;
+ uint16_t swdxt;
+ uint16_t hwdet;
+ uint16_t hwdxt;
+ uint16_t swset;
+ uint16_t swsxt;
+ uint16_t vbpid;
+ uint16_t icid;
+ uint64_t bpscn_addr;
+ uint64_t bpscn_ctx;
+ uint16_t hw_targ;
+ uint8_t dbe;
+ uint8_t reserved2;
+ uint8_t sdcnt;
+ uint8_t hdcnt;
+ uint8_t sscnt;
+ uint8_t reserved3[9];
+};
+
+int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
+ struct qbman_bp_query_rslt *r);
+int qbman_bp_get_bdi(struct qbman_bp_query_rslt *r);
+int qbman_bp_get_va(struct qbman_bp_query_rslt *r);
+int qbman_bp_get_wae(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swdet(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swdxt(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_hwdet(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_hwdxt(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swset(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swsxt(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_vbpid(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_icid(struct qbman_bp_query_rslt *r);
+int qbman_bp_get_pl(struct qbman_bp_query_rslt *r);
+uint64_t qbman_bp_get_bpscn_addr(struct qbman_bp_query_rslt *r);
+uint64_t qbman_bp_get_bpscn_ctx(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_hw_targ(struct qbman_bp_query_rslt *r);
+int qbman_bp_has_free_bufs(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_num_free_bufs(struct qbman_bp_query_rslt *r);
+int qbman_bp_is_depleted(struct qbman_bp_query_rslt *r);
+int qbman_bp_is_surplus(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_hdptr(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_sdcnt(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_hdcnt(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_sscnt(struct qbman_bp_query_rslt *r);
+
+/* FQ query function for programmable fields */
+struct qbman_fq_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[8];
+ uint16_t cgid;
+ uint16_t dest_wq;
+ uint8_t reserved2;
+ uint8_t fq_ctrl;
+ uint16_t ics_cred;
+ uint16_t td_thresh;
+ uint16_t oal_oac;
+ uint8_t reserved3;
+ uint8_t mctl;
+ uint64_t fqd_ctx;
+ uint16_t icid;
+ uint16_t reserved4;
+ uint32_t vfqid;
+ uint32_t fqid_er;
+ uint16_t opridsz;
+ uint8_t reserved5[18];
+};
+
+int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
+ struct qbman_fq_query_rslt *r);
+uint8_t qbman_fq_attr_get_fqctrl(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_cgrid(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_destwq(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_tdthresh(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_oa_ics(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_oa_cgr(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_oal(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_bdi(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_ff(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_va(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_ps(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_pps(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_icid(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_pl(struct qbman_fq_query_rslt *r);
+uint32_t qbman_fq_attr_get_vfqid(struct qbman_fq_query_rslt *r);
+uint32_t qbman_fq_attr_get_erfqid(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r);
+
+/* FQ query command for non-programmable fields*/
+enum qbman_fq_schedstate_e {
+ qbman_fq_schedstate_oos = 0,
+ qbman_fq_schedstate_retired,
+ qbman_fq_schedstate_tentatively_scheduled,
+ qbman_fq_schedstate_truly_scheduled,
+ qbman_fq_schedstate_parked,
+ qbman_fq_schedstate_held_active,
+};
struct qbman_fq_query_np_rslt {
uint8_t verb;
@@ -29,13 +134,99 @@ uint8_t verb;
uint8_t reserved2[29];
};
-__rte_internal
int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
struct qbman_fq_query_np_rslt *r);
-
-__rte_internal
+uint8_t qbman_fq_state_schedstate(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_force_eligible(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_xoff(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_retirement_pending(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_overflow_error(const struct qbman_fq_query_np_rslt *r);
uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r);
-
uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r);
+/* CGR query */
+struct qbman_cgr_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[6];
+ uint8_t ctl1;
+ uint8_t reserved1;
+ uint16_t oal;
+ uint16_t reserved2;
+ uint8_t mode;
+ uint8_t ctl2;
+ uint8_t iwc;
+ uint8_t tdc;
+ uint16_t cs_thres;
+ uint16_t cs_thres_x;
+ uint16_t td_thres;
+ uint16_t cscn_tdcp;
+ uint16_t cscn_wqid;
+ uint16_t cscn_vcgid;
+ uint16_t cg_icid;
+ uint64_t cg_wr_addr;
+ uint64_t cscn_ctx;
+ uint64_t i_cnt;
+ uint64_t a_cnt;
+};
+
+int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_wq_en_enter(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_wq_en_exit(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_wq_icd(struct qbman_cgr_query_rslt *r);
+uint8_t qbman_cgr_get_mode(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_rej_cnt_mode(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_bdi(struct qbman_cgr_query_rslt *r);
+uint16_t qbman_cgr_attr_get_cs_thres(struct qbman_cgr_query_rslt *r);
+uint16_t qbman_cgr_attr_get_cs_thres_x(struct qbman_cgr_query_rslt *r);
+uint16_t qbman_cgr_attr_get_td_thres(struct qbman_cgr_query_rslt *r);
+
+/* WRED query */
+struct qbman_wred_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[6];
+ uint8_t edp[7];
+ uint8_t reserved1;
+ uint32_t wred_parm_dp[7];
+ uint8_t reserved2[20];
+};
+
+int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_wred_query_rslt *r);
+int qbman_cgr_attr_wred_get_edp(struct qbman_wred_query_rslt *r, uint32_t idx);
+void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
+ uint64_t *maxth, uint8_t *maxp);
+uint32_t qbman_cgr_attr_wred_get_parm_dp(struct qbman_wred_query_rslt *r,
+ uint32_t idx);
+
+/* CGR/CCGR/CQ statistics query */
+int qbman_cgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt);
+int qbman_ccgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt);
+int qbman_cq_dequeue_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt);
+
+/* Query Work Queue Channel */
+struct qbman_wqchan_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint16_t chid;
+ uint8_t reserved;
+ uint8_t ctrl;
+ uint16_t cdan_wqid;
+ uint64_t cdan_ctx;
+ uint32_t reserved2[4];
+ uint32_t wq_len[8];
+};
+
+int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
+ struct qbman_wqchan_query_rslt *r);
+uint32_t qbman_wqchan_attr_get_wqlen(struct qbman_wqchan_query_rslt *r, int wq);
+uint64_t qbman_wqchan_attr_get_cdan_ctx(struct qbman_wqchan_query_rslt *r);
+uint16_t qbman_wqchan_attr_get_cdan_wqid(struct qbman_wqchan_query_rslt *r);
+uint8_t qbman_wqchan_attr_get_ctrl(struct qbman_wqchan_query_rslt *r);
+uint16_t qbman_wqchan_attr_get_chanid(struct qbman_wqchan_query_rslt *r);
#endif /* !_FSL_QBMAN_DEBUG_H */
diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 34374ae4b6..c0d288faa8 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2015 Freescale Semiconductor, Inc.
+ * Copyright 2018-2020 NXP
*/
#include "compat.h"
@@ -16,6 +17,179 @@
#define QBMAN_CGR_STAT_QUERY 0x55
#define QBMAN_CGR_STAT_QUERY_CLR 0x56
+struct qbman_bp_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t bpid;
+ uint8_t reserved2[60];
+};
+
+#define QB_BP_STATE_SHIFT 24
+#define QB_BP_VA_SHIFT 1
+#define QB_BP_VA_MASK 0x2
+#define QB_BP_WAE_SHIFT 2
+#define QB_BP_WAE_MASK 0x4
+#define QB_BP_PL_SHIFT 15
+#define QB_BP_PL_MASK 0x8000
+#define QB_BP_ICID_MASK 0x7FFF
+
+int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
+ struct qbman_bp_query_rslt *r)
+{
+ struct qbman_bp_query_desc *p;
+
+ /* Start the management command */
+ p = (struct qbman_bp_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ /* Encode the caller-provided attributes */
+ p->bpid = bpid;
+
+ /* Complete the management command */
+ *r = *(struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_BP_QUERY);
+ if (!r) {
+ pr_err("qbman: Query BPID %d failed, no response\n",
+ bpid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_BP_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query of BPID 0x%x failed, code=0x%02x\n", bpid,
+ r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int qbman_bp_get_bdi(struct qbman_bp_query_rslt *r)
+{
+ return r->bdi & 1;
+}
+
+int qbman_bp_get_va(struct qbman_bp_query_rslt *r)
+{
+ return (r->bdi & QB_BP_VA_MASK) >> QB_BP_VA_MASK;
+}
+
+int qbman_bp_get_wae(struct qbman_bp_query_rslt *r)
+{
+ return (r->bdi & QB_BP_WAE_MASK) >> QB_BP_WAE_SHIFT;
+}
+
+static uint16_t qbman_bp_thresh_to_value(uint16_t val)
+{
+ return (val & 0xff) << ((val & 0xf00) >> 8);
+}
+
+uint16_t qbman_bp_get_swdet(struct qbman_bp_query_rslt *r)
+{
+
+ return qbman_bp_thresh_to_value(r->swdet);
+}
+
+uint16_t qbman_bp_get_swdxt(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->swdxt);
+}
+
+uint16_t qbman_bp_get_hwdet(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->hwdet);
+}
+
+uint16_t qbman_bp_get_hwdxt(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->hwdxt);
+}
+
+uint16_t qbman_bp_get_swset(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->swset);
+}
+
+uint16_t qbman_bp_get_swsxt(struct qbman_bp_query_rslt *r)
+{
+
+ return qbman_bp_thresh_to_value(r->swsxt);
+}
+
+uint16_t qbman_bp_get_vbpid(struct qbman_bp_query_rslt *r)
+{
+ return r->vbpid;
+}
+
+uint16_t qbman_bp_get_icid(struct qbman_bp_query_rslt *r)
+{
+ return r->icid & QB_BP_ICID_MASK;
+}
+
+int qbman_bp_get_pl(struct qbman_bp_query_rslt *r)
+{
+ return (r->icid & QB_BP_PL_MASK) >> QB_BP_PL_SHIFT;
+}
+
+uint64_t qbman_bp_get_bpscn_addr(struct qbman_bp_query_rslt *r)
+{
+ return r->bpscn_addr;
+}
+
+uint64_t qbman_bp_get_bpscn_ctx(struct qbman_bp_query_rslt *r)
+{
+ return r->bpscn_ctx;
+}
+
+uint16_t qbman_bp_get_hw_targ(struct qbman_bp_query_rslt *r)
+{
+ return r->hw_targ;
+}
+
+int qbman_bp_has_free_bufs(struct qbman_bp_query_rslt *r)
+{
+ return !(int)(r->state & 0x1);
+}
+
+int qbman_bp_is_depleted(struct qbman_bp_query_rslt *r)
+{
+ return (int)((r->state & 0x2) >> 1);
+}
+
+int qbman_bp_is_surplus(struct qbman_bp_query_rslt *r)
+{
+ return (int)((r->state & 0x4) >> 2);
+}
+
+uint32_t qbman_bp_num_free_bufs(struct qbman_bp_query_rslt *r)
+{
+ return r->fill;
+}
+
+uint32_t qbman_bp_get_hdptr(struct qbman_bp_query_rslt *r)
+{
+ return r->hdptr;
+}
+
+uint32_t qbman_bp_get_sdcnt(struct qbman_bp_query_rslt *r)
+{
+ return r->sdcnt;
+}
+
+uint32_t qbman_bp_get_hdcnt(struct qbman_bp_query_rslt *r)
+{
+ return r->hdcnt;
+}
+
+uint32_t qbman_bp_get_sscnt(struct qbman_bp_query_rslt *r)
+{
+ return r->sscnt;
+}
+
struct qbman_fq_query_desc {
uint8_t verb;
uint8_t reserved[3];
@@ -23,6 +197,129 @@ struct qbman_fq_query_desc {
uint8_t reserved2[56];
};
+/* FQ query function for programmable fields */
+int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
+ struct qbman_fq_query_rslt *r)
+{
+ struct qbman_fq_query_desc *p;
+
+ p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->fqid = fqid;
+ *r = *(struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_FQ_QUERY);
+ if (!r) {
+ pr_err("qbman: Query FQID %d failed, no response\n",
+ fqid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query of FQID 0x%x failed, code=0x%02x\n",
+ fqid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+uint8_t qbman_fq_attr_get_fqctrl(struct qbman_fq_query_rslt *r)
+{
+ return r->fq_ctrl;
+}
+
+uint16_t qbman_fq_attr_get_cgrid(struct qbman_fq_query_rslt *r)
+{
+ return r->cgid;
+}
+
+uint16_t qbman_fq_attr_get_destwq(struct qbman_fq_query_rslt *r)
+{
+ return r->dest_wq;
+}
+
+static uint16_t qbman_thresh_to_value(uint16_t val)
+{
+ return ((val & 0x1FE0) >> 5) << (val & 0x1F);
+}
+
+uint16_t qbman_fq_attr_get_tdthresh(struct qbman_fq_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->td_thresh);
+}
+
+int qbman_fq_attr_get_oa_ics(struct qbman_fq_query_rslt *r)
+{
+ return (int)(r->oal_oac >> 14) & 0x1;
+}
+
+int qbman_fq_attr_get_oa_cgr(struct qbman_fq_query_rslt *r)
+{
+ return (int)(r->oal_oac >> 15);
+}
+
+uint16_t qbman_fq_attr_get_oal(struct qbman_fq_query_rslt *r)
+{
+ return (r->oal_oac & 0x0FFF);
+}
+
+int qbman_fq_attr_get_bdi(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x1);
+}
+
+int qbman_fq_attr_get_ff(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x2) >> 1;
+}
+
+int qbman_fq_attr_get_va(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x4) >> 2;
+}
+
+int qbman_fq_attr_get_ps(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x8) >> 3;
+}
+
+int qbman_fq_attr_get_pps(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x30) >> 4;
+}
+
+uint16_t qbman_fq_attr_get_icid(struct qbman_fq_query_rslt *r)
+{
+ return r->icid & 0x7FFF;
+}
+
+int qbman_fq_attr_get_pl(struct qbman_fq_query_rslt *r)
+{
+ return (int)((r->icid & 0x8000) >> 15);
+}
+
+uint32_t qbman_fq_attr_get_vfqid(struct qbman_fq_query_rslt *r)
+{
+ return r->vfqid & 0x00FFFFFF;
+}
+
+uint32_t qbman_fq_attr_get_erfqid(struct qbman_fq_query_rslt *r)
+{
+ return r->fqid_er & 0x00FFFFFF;
+}
+
+uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r)
+{
+ return r->opridsz;
+}
+
+__rte_internal
int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
struct qbman_fq_query_np_rslt *r)
{
@@ -55,6 +352,32 @@ int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
return 0;
}
+uint8_t qbman_fq_state_schedstate(const struct qbman_fq_query_np_rslt *r)
+{
+ return r->st1 & 0x7;
+}
+
+int qbman_fq_state_force_eligible(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x8) >> 3);
+}
+
+int qbman_fq_state_xoff(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x10) >> 4);
+}
+
+int qbman_fq_state_retirement_pending(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x20) >> 5);
+}
+
+int qbman_fq_state_overflow_error(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x40) >> 6);
+}
+
+__rte_internal
uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r)
{
return (r->frm_cnt & 0x00FFFFFF);
@@ -64,3 +387,303 @@ uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r)
{
return r->byte_cnt;
}
+
+/* Query CGR */
+struct qbman_cgr_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t cgid;
+ uint8_t reserved2[60];
+};
+
+int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_cgr_query_rslt *r)
+{
+ struct qbman_cgr_query_desc *p;
+
+ p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->cgid = cgid;
+ *r = *(struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_CGR_QUERY);
+ if (!r) {
+ pr_err("qbman: Query CGID %d failed, no response\n",
+ cgid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_CGR_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query CGID 0x%x failed,code=0x%02x\n", cgid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int qbman_cgr_get_cscn_wq_en_enter(struct qbman_cgr_query_rslt *r)
+{
+ return (int)(r->ctl1 & 0x1);
+}
+
+int qbman_cgr_get_cscn_wq_en_exit(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->ctl1 & 0x2) >> 1);
+}
+
+int qbman_cgr_get_cscn_wq_icd(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->ctl1 & 0x4) >> 2);
+}
+
+uint8_t qbman_cgr_get_mode(struct qbman_cgr_query_rslt *r)
+{
+ return r->mode & 0x3;
+}
+
+int qbman_cgr_get_rej_cnt_mode(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->mode & 0x4) >> 2);
+}
+
+int qbman_cgr_get_cscn_bdi(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->mode & 0x8) >> 3);
+}
+
+uint16_t qbman_cgr_attr_get_cs_thres(struct qbman_cgr_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->cs_thres);
+}
+
+uint16_t qbman_cgr_attr_get_cs_thres_x(struct qbman_cgr_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->cs_thres_x);
+}
+
+uint16_t qbman_cgr_attr_get_td_thres(struct qbman_cgr_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->td_thres);
+}
+
+int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_wred_query_rslt *r)
+{
+ struct qbman_cgr_query_desc *p;
+
+ p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->cgid = cgid;
+ *r = *(struct qbman_wred_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_WRED_QUERY);
+ if (!r) {
+ pr_err("qbman: Query CGID WRED %d failed, no response\n",
+ cgid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WRED_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query CGID WRED 0x%x failed,code=0x%02x\n",
+ cgid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int qbman_cgr_attr_wred_get_edp(struct qbman_wred_query_rslt *r, uint32_t idx)
+{
+ return (int)(r->edp[idx] & 1);
+}
+
+uint32_t qbman_cgr_attr_wred_get_parm_dp(struct qbman_wred_query_rslt *r,
+ uint32_t idx)
+{
+ return r->wred_parm_dp[idx];
+}
+
+void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
+ uint64_t *maxth, uint8_t *maxp)
+{
+ uint8_t ma, mn, step_i, step_s, pn;
+
+ ma = (uint8_t)(dp >> 24);
+ mn = (uint8_t)(dp >> 19) & 0x1f;
+ step_i = (uint8_t)(dp >> 11);
+ step_s = (uint8_t)(dp >> 6) & 0x1f;
+ pn = (uint8_t)dp & 0x3f;
+
+ *maxp = (uint8_t)(((pn<<2) * 100)/256);
+
+ if (mn == 0)
+ *maxth = ma;
+ else
+ *maxth = ((ma+256) * (1<<(mn-1)));
+
+ if (step_s == 0)
+ *minth = *maxth - step_i;
+ else
+ *minth = *maxth - (256 + step_i) * (1<<(step_s - 1));
+}
+
+/* Query CGR/CCGR/CQ statistics */
+struct qbman_cgr_statistics_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t cgid;
+ uint8_t reserved1;
+ uint8_t ct;
+ uint8_t reserved2[58];
+};
+
+struct qbman_cgr_statistics_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[14];
+ uint64_t frm_cnt;
+ uint64_t byte_cnt;
+ uint32_t reserved2[8];
+};
+
+static int qbman_cgr_statistics_query(struct qbman_swp *s, uint32_t cgid,
+ int clear, uint32_t command_type,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ struct qbman_cgr_statistics_query_desc *p;
+ struct qbman_cgr_statistics_query_rslt *r;
+ uint32_t query_verb;
+
+ p = (struct qbman_cgr_statistics_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->cgid = cgid;
+ if (command_type < 2)
+ p->ct = command_type;
+ query_verb = clear ?
+ QBMAN_CGR_STAT_QUERY_CLR : QBMAN_CGR_STAT_QUERY;
+ r = (struct qbman_cgr_statistics_query_rslt *)qbman_swp_mc_complete(s,
+ p, query_verb);
+ if (!r) {
+ pr_err("qbman: Query CGID %d statistics failed, no response\n",
+ cgid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != query_verb);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query statistics of CGID 0x%x failed, code=0x%02x\n",
+ cgid, r->rslt);
+ return -EIO;
+ }
+
+ if (*frame_cnt)
+ *frame_cnt = r->frm_cnt & 0xFFFFFFFFFFllu;
+ if (*byte_cnt)
+ *byte_cnt = r->byte_cnt & 0xFFFFFFFFFFllu;
+
+ return 0;
+}
+
+int qbman_cgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ return qbman_cgr_statistics_query(s, cgid, clear, 0xff,
+ frame_cnt, byte_cnt);
+}
+
+int qbman_ccgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ return qbman_cgr_statistics_query(s, cgid, clear, 1,
+ frame_cnt, byte_cnt);
+}
+
+int qbman_cq_dequeue_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ return qbman_cgr_statistics_query(s, cgid, clear, 0,
+ frame_cnt, byte_cnt);
+}
+
+/* WQ Chan Query */
+struct qbman_wqchan_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t chid;
+ uint8_t reserved2[60];
+};
+
+int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
+ struct qbman_wqchan_query_rslt *r)
+{
+ struct qbman_wqchan_query_desc *p;
+
+ /* Start the management command */
+ p = (struct qbman_wqchan_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ /* Encode the caller-provided attributes */
+ p->chid = chanid;
+
+ /* Complete the management command */
+ *r = *(struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_WQ_QUERY);
+ if (!r) {
+ pr_err("qbman: Query WQ Channel %d failed, no response\n",
+ chanid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WQ_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query of WQCHAN 0x%x failed, code=0x%02x\n",
+ chanid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+uint32_t qbman_wqchan_attr_get_wqlen(struct qbman_wqchan_query_rslt *r, int wq)
+{
+ return r->wq_len[wq] & 0x00FFFFFF;
+}
+
+uint64_t qbman_wqchan_attr_get_cdan_ctx(struct qbman_wqchan_query_rslt *r)
+{
+ return r->cdan_ctx;
+}
+
+uint16_t qbman_wqchan_attr_get_cdan_wqid(struct qbman_wqchan_query_rslt *r)
+{
+ return r->cdan_wqid;
+}
+
+uint8_t qbman_wqchan_attr_get_ctrl(struct qbman_wqchan_query_rslt *r)
+{
+ return r->ctrl;
+}
+
+uint16_t qbman_wqchan_attr_get_chanid(struct qbman_wqchan_query_rslt *r)
+{
+ return r->chid;
+}
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index aedcad9258..3a7579c8a7 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1865,6 +1865,12 @@ void qbman_pull_desc_set_channel(struct qbman_pull_desc *d, uint32_t chid,
d->pull.dq_src = chid;
}
+/**
+ * qbman_pull_desc_set_rad() - Decide whether reschedule the fq after dequeue
+ *
+ * @rad: 1 = Reschedule the FQ after dequeue.
+ * 0 = Allow the FQ to remain active after dequeue.
+ */
void qbman_pull_desc_set_rad(struct qbman_pull_desc *d, int rad)
{
if (d->pull.verb & (1 << QB_VDQCR_VERB_RLS_SHIFT)) {
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH 04/11] net/dpaa2: support multiple Tx queues enqueue for ordered
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (2 preceding siblings ...)
2021-09-27 12:26 ` [dpdk-dev] [PATCH 03/11] bus/fslmc: add qbman debug APIs support nipun.gupta
@ 2021-09-27 12:26 ` nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 05/11] net/dpaa2: add debug print for MTU set for jumbo nipun.gupta
` (9 subsequent siblings)
13 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 12:26 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Support the tx enqueue in order queue mode, where the
queue id for each event may be different.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/event/dpaa2/dpaa2_eventdev.c | 12 ++-
drivers/net/dpaa2/dpaa2_ethdev.h | 3 +
drivers/net/dpaa2/dpaa2_rxtx.c | 142 +++++++++++++++++++++++++++
drivers/net/dpaa2/version.map | 1 +
4 files changed, 154 insertions(+), 4 deletions(-)
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 5ccf22f77f..28f3bbca9a 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017,2019 NXP
+ * Copyright 2017,2019-2021 NXP
*/
#include <assert.h>
@@ -1002,16 +1002,20 @@ dpaa2_eventdev_txa_enqueue(void *port,
struct rte_event ev[],
uint16_t nb_events)
{
- struct rte_mbuf *m = (struct rte_mbuf *)ev[0].mbuf;
+ void *txq[32];
+ struct rte_mbuf *m[32];
uint8_t qid, i;
RTE_SET_USED(port);
for (i = 0; i < nb_events; i++) {
- qid = rte_event_eth_tx_adapter_txq_get(m);
- rte_eth_tx_burst(m->port, qid, &m, 1);
+ m[i] = (struct rte_mbuf *)ev[i].mbuf;
+ qid = rte_event_eth_tx_adapter_txq_get(m[i]);
+ txq[i] = rte_eth_devices[m[i]->port].data->tx_queues[qid];
}
+ dpaa2_dev_tx_multi_txq_ordered(txq, m, nb_events);
+
return nb_events;
}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 3f34d7ecff..07a6811dd2 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -236,6 +236,9 @@ void dpaa2_dev_process_ordered_event(struct qbman_swp *swp,
uint16_t dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
uint16_t dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs,
uint16_t nb_pkts);
+uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
+ struct rte_mbuf **bufs, uint16_t nb_pkts);
+
uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci);
void dpaa2_flow_clean(struct rte_eth_dev *dev);
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3..447063b3c3 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1445,6 +1445,148 @@ dpaa2_set_enqueue_descriptor(struct dpaa2_queue *dpaa2_q,
*dpaa2_seqn(m) = DPAA2_INVALID_MBUF_SEQN;
}
+__rte_internal uint16_t
+dpaa2_dev_tx_multi_txq_ordered(void **queue,
+ struct rte_mbuf **bufs, uint16_t nb_pkts)
+{
+ /* Function to transmit the frames to multiple queues respectively.*/
+ uint32_t loop, retry_count;
+ int32_t ret;
+ struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+ uint32_t frames_to_send;
+ struct rte_mempool *mp;
+ struct qbman_eq_desc eqdesc[MAX_TX_RING_SLOTS];
+ struct dpaa2_queue *dpaa2_q[MAX_TX_RING_SLOTS];
+ struct qbman_swp *swp;
+ uint16_t bpid;
+ struct rte_mbuf *mi;
+ struct rte_eth_dev_data *eth_data;
+ struct dpaa2_dev_priv *priv;
+ struct dpaa2_queue *order_sendq;
+
+ if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+ ret = dpaa2_affine_qbman_swp();
+ if (ret) {
+ DPAA2_PMD_ERR(
+ "Failed to allocate IO portal, tid: %d\n",
+ rte_gettid());
+ return 0;
+ }
+ }
+ swp = DPAA2_PER_LCORE_PORTAL;
+
+ for (loop = 0; loop < nb_pkts; loop++) {
+ dpaa2_q[loop] = (struct dpaa2_queue *)queue[loop];
+ eth_data = dpaa2_q[loop]->eth_data;
+ priv = eth_data->dev_private;
+ qbman_eq_desc_clear(&eqdesc[loop]);
+ if (*dpaa2_seqn(*bufs) && priv->en_ordered) {
+ order_sendq = (struct dpaa2_queue *)priv->tx_vq[0];
+ dpaa2_set_enqueue_descriptor(order_sendq,
+ (*bufs),
+ &eqdesc[loop]);
+ } else {
+ qbman_eq_desc_set_no_orp(&eqdesc[loop],
+ DPAA2_EQ_RESP_ERR_FQ);
+ qbman_eq_desc_set_fq(&eqdesc[loop],
+ dpaa2_q[loop]->fqid);
+ }
+
+ retry_count = 0;
+ while (qbman_result_SCN_state(dpaa2_q[loop]->cscn)) {
+ retry_count++;
+ /* Retry for some time before giving up */
+ if (retry_count > CONG_RETRY_COUNT)
+ goto send_frames;
+ }
+
+ if (likely(RTE_MBUF_DIRECT(*bufs))) {
+ mp = (*bufs)->pool;
+ /* Check the basic scenario and set
+ * the FD appropriately here itself.
+ */
+ if (likely(mp && mp->ops_index ==
+ priv->bp_list->dpaa2_ops_index &&
+ (*bufs)->nb_segs == 1 &&
+ rte_mbuf_refcnt_read((*bufs)) == 1)) {
+ if (unlikely((*bufs)->ol_flags
+ & PKT_TX_VLAN_PKT)) {
+ ret = rte_vlan_insert(bufs);
+ if (ret)
+ goto send_frames;
+ }
+ DPAA2_MBUF_TO_CONTIG_FD((*bufs),
+ &fd_arr[loop],
+ mempool_to_bpid(mp));
+ bufs++;
+ dpaa2_q[loop]++;
+ continue;
+ }
+ } else {
+ mi = rte_mbuf_from_indirect(*bufs);
+ mp = mi->pool;
+ }
+ /* Not a hw_pkt pool allocated frame */
+ if (unlikely(!mp || !priv->bp_list)) {
+ DPAA2_PMD_ERR("Err: No buffer pool attached");
+ goto send_frames;
+ }
+
+ if (mp->ops_index != priv->bp_list->dpaa2_ops_index) {
+ DPAA2_PMD_WARN("Non DPAA2 buffer pool");
+ /* alloc should be from the default buffer pool
+ * attached to this interface
+ */
+ bpid = priv->bp_list->buf_pool.bpid;
+
+ if (unlikely((*bufs)->nb_segs > 1)) {
+ DPAA2_PMD_ERR(
+ "S/G not supp for non hw offload buffer");
+ goto send_frames;
+ }
+ if (eth_copy_mbuf_to_fd(*bufs,
+ &fd_arr[loop], bpid)) {
+ goto send_frames;
+ }
+ /* free the original packet */
+ rte_pktmbuf_free(*bufs);
+ } else {
+ bpid = mempool_to_bpid(mp);
+ if (unlikely((*bufs)->nb_segs > 1)) {
+ if (eth_mbuf_to_sg_fd(*bufs,
+ &fd_arr[loop],
+ mp,
+ bpid))
+ goto send_frames;
+ } else {
+ eth_mbuf_to_fd(*bufs,
+ &fd_arr[loop], bpid);
+ }
+ }
+
+ bufs++;
+ dpaa2_q[loop]++;
+ }
+
+send_frames:
+ frames_to_send = loop;
+ loop = 0;
+ while (loop < frames_to_send) {
+ ret = qbman_swp_enqueue_multiple_desc(swp, &eqdesc[loop],
+ &fd_arr[loop],
+ frames_to_send - loop);
+ if (likely(ret > 0)) {
+ loop += ret;
+ } else {
+ retry_count++;
+ if (retry_count > DPAA2_MAX_TX_RETRY_COUNT)
+ break;
+ }
+ }
+
+ return loop;
+}
+
/* Callback to handle sending ordered packets through WRIOP based interface */
uint16_t
dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 3ab96344c4..f9786af7e4 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
INTERNAL {
global:
+ dpaa2_dev_tx_multi_txq_ordered;
dpaa2_eth_eventq_attach;
dpaa2_eth_eventq_detach;
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH 05/11] net/dpaa2: add debug print for MTU set for jumbo
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (3 preceding siblings ...)
2021-09-27 12:26 ` [dpdk-dev] [PATCH 04/11] net/dpaa2: support multiple Tx queues enqueue for ordered nipun.gupta
@ 2021-09-27 12:26 ` nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 06/11] net/dpaa2: add function to generate HW hash key nipun.gupta
` (8 subsequent siblings)
13 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 12:26 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch adds a debug print for MTU configured on the
device when jumbo frames are enabled.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 9cf55c0f0b..275656fbe4 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -573,6 +573,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
dev->data->dev_conf.rxmode.max_rx_pkt_len -
RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
VLAN_TAG_SIZE;
+ DPAA2_PMD_INFO("MTU configured for the device: %d",
+ dev->data->mtu);
} else {
return -1;
}
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH 06/11] net/dpaa2: add function to generate HW hash key
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (4 preceding siblings ...)
2021-09-27 12:26 ` [dpdk-dev] [PATCH 05/11] net/dpaa2: add debug print for MTU set for jumbo nipun.gupta
@ 2021-09-27 12:26 ` nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 07/11] net/dpaa2: update RSS to support additional distributions nipun.gupta
` (7 subsequent siblings)
13 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 12:26 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch add support to generate the hash key in software
equivalent to WRIOP key generation.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_tlu_hash.c | 149 ++++++++++++++++++++++++
drivers/net/dpaa2/meson.build | 1 +
drivers/net/dpaa2/rte_pmd_dpaa2.h | 19 +++
drivers/net/dpaa2/version.map | 2 +
4 files changed, 171 insertions(+)
create mode 100644 drivers/net/dpaa2/base/dpaa2_tlu_hash.c
diff --git a/drivers/net/dpaa2/base/dpaa2_tlu_hash.c b/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
new file mode 100644
index 0000000000..e92f4f03ed
--- /dev/null
+++ b/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
@@ -0,0 +1,149 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <rte_pmd_dpaa2.h>
+
+static unsigned int sbox(unsigned int x)
+{
+ unsigned int a, b, c, d;
+ unsigned int oa, ob, oc, od;
+
+ a = x & 0x1;
+ b = (x >> 1) & 0x1;
+ c = (x >> 2) & 0x1;
+ d = (x >> 3) & 0x1;
+
+ oa = ((a & ~b & ~c & d) | (~a & b) | (~a & ~c & ~d) | (b & c)) & 0x1;
+ ob = ((a & ~b & d) | (~a & c & ~d) | (b & ~c)) & 0x1;
+ oc = ((a & ~b & c) | (a & ~b & ~d) | (~a & b & ~d) | (~a & c & ~d) |
+ (b & c & d)) & 0x1;
+ od = ((a & ~b & c) | (~a & b & ~c) | (a & b & ~d) | (~a & c & d)) & 0x1;
+
+ return ((od << 3) | (oc << 2) | (ob << 1) | oa);
+}
+
+static unsigned int sbox_tbl[16];
+
+static int pbox_tbl[16] = {5, 9, 0, 13,
+ 7, 2, 11, 14,
+ 1, 4, 12, 8,
+ 3, 15, 6, 10 };
+
+static unsigned int mix_tbl[8][16];
+
+static unsigned int stage(unsigned int input)
+{
+ int sbox_out = 0;
+ int pbox_out = 0;
+
+ // mix
+ input ^= input >> 16; // xor lower
+ input ^= input << 16; // move original lower to upper
+
+ // printf("%08x\n",input);
+
+ for (int i = 0; i < 32; i += 4)// sbox stage
+ sbox_out |= (sbox_tbl[(input >> i) & 0xf]) << i;
+
+ // permutation
+ for (int i = 0; i < 16; i++)
+ pbox_out |= ((sbox_out >> i) & 0x10001) << pbox_tbl[i];
+
+ return pbox_out;
+}
+
+static unsigned int fast_stage(unsigned int input)
+{
+ int pbox_out = 0;
+
+ // mix
+ input ^= input >> 16; // xor lower
+ input ^= input << 16; // move original lower to upper
+
+ for (int i = 0; i < 32; i += 4) // sbox stage
+ pbox_out |= mix_tbl[i >> 2][(input >> i) & 0xf];
+
+ return pbox_out;
+}
+
+static unsigned int fast_hash32(unsigned int x)
+{
+ for (int i = 0; i < 4; i++)
+ x = fast_stage(x);
+ return x;
+}
+
+static unsigned int
+byte_crc32(unsigned char data/* new byte for the crc calculation */,
+ unsigned old_crc/* crc result of the last iteration */)
+{
+ int i;
+ unsigned int crc, polynom = 0xedb88320;
+ /* the polynomial is built on the reversed version of
+ * the CRC polynomial with out the x64 element.
+ */
+
+ crc = old_crc;
+ for (i = 0; i < 8; i++, data >>= 1)
+ crc = (crc >> 1) ^ (((crc ^ data) & 0x1) ? polynom : 0);
+ /* xor with polynomial is lsb of crc^data is 1 */
+
+ return crc;
+}
+
+static unsigned int crc32_table[256];
+
+static void init_crc32_table(void)
+{
+ int i;
+
+ for (i = 0; i < 256; i++)
+ crc32_table[i] = byte_crc32((unsigned char)i, 0LL);
+}
+
+static unsigned int
+crc32_string(unsigned char *data,
+ int size, unsigned int old_crc)
+{
+ unsigned int crc;
+
+ crc = old_crc;
+ for (int i = 0; i < size; i++)
+ crc = (crc >> 8) ^ crc32_table[(crc ^ data[i]) & 0xff];
+
+ return crc;
+}
+
+static void hash_init(void)
+{
+ init_crc32_table();
+
+ for (int i = 0; i < 16; i++)
+ sbox_tbl[i] = sbox(i);
+
+ for (int i = 0; i < 32; i += 4)
+ for (int j = 0; j < 16; j++) {
+ // (a,b)
+ // (b,a^b)=(X,Y)
+ // (X^Y,X)
+ unsigned int input = (0x88888888 ^ (8 << i)) | (j << i);
+
+ input ^= input << 16; // (X^Y,Y)
+ input ^= input >> 16; // (X^Y,X)
+ mix_tbl[i >> 2][j] = stage(input);
+ //printf("aaa %08x\n", stage(input));
+ }
+}
+
+uint32_t rte_pmd_dpaa2_get_tlu_hash(uint8_t *data, int size)
+{
+ static int init;
+
+ if (~init)
+ hash_init();
+ init = 1;
+ return fast_hash32(crc32_string(data, size, 0x0));
+}
diff --git a/drivers/net/dpaa2/meson.build b/drivers/net/dpaa2/meson.build
index 20eaf0b8e4..4a6397d09e 100644
--- a/drivers/net/dpaa2/meson.build
+++ b/drivers/net/dpaa2/meson.build
@@ -20,6 +20,7 @@ sources = files(
'mc/dpkg.c',
'mc/dpdmux.c',
'mc/dpni.c',
+ 'base/dpaa2_tlu_hash.c',
)
includes += include_directories('base', 'mc')
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index a68244c974..8ea42ee130 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -82,4 +82,23 @@ __rte_experimental
void
rte_pmd_dpaa2_thread_init(void);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Generate the DPAA2 WRIOP based hash value
+ *
+ * @param key
+ * Array of key data
+ * @param size
+ * Size of the hash input key in bytes
+ *
+ * @return
+ * - 0 if successful.
+ * - Negative in case of failure.
+ */
+
+__rte_experimental
+uint32_t
+rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
#endif /* _RTE_PMD_DPAA2_H */
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index f9786af7e4..2059fc5ae8 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -10,6 +10,8 @@ DPDK_22 {
EXPERIMENTAL {
global:
+ # added in 21.11
+ rte_pmd_dpaa2_get_tlu_hash;
# added in 21.05
rte_pmd_dpaa2_mux_rx_frame_len;
# added in 21.08
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH 07/11] net/dpaa2: update RSS to support additional distributions
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (5 preceding siblings ...)
2021-09-27 12:26 ` [dpdk-dev] [PATCH 06/11] net/dpaa2: add function to generate HW hash key nipun.gupta
@ 2021-09-27 12:26 ` nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 08/11] net/dpaa: add comments to explain driver behaviour nipun.gupta
` (6 subsequent siblings)
13 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 12:26 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch updates the RSS support to support following additional
distributions:
- VLAN
- ESP
- AH
- PPPOE
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 70 +++++++++++++++++++++++++-
drivers/net/dpaa2/dpaa2_ethdev.h | 7 ++-
2 files changed, 75 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 641e7027f1..08f49af768 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -210,6 +210,10 @@ dpaa2_distset_to_dpkg_profile_cfg(
int l2_configured = 0, l3_configured = 0;
int l4_configured = 0, sctp_configured = 0;
int mpls_configured = 0;
+ int vlan_configured = 0;
+ int esp_configured = 0;
+ int ah_configured = 0;
+ int pppoe_configured = 0;
memset(kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
while (req_dist_set) {
@@ -217,6 +221,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
dist_field = 1ULL << loop;
switch (dist_field) {
case ETH_RSS_L2_PAYLOAD:
+ case ETH_RSS_ETH:
if (l2_configured)
break;
@@ -231,7 +236,70 @@ dpaa2_distset_to_dpkg_profile_cfg(
kg_cfg->extracts[i].extract.from_hdr.type =
DPKG_FULL_FIELD;
i++;
- break;
+ break;
+
+ case ETH_RSS_PPPOE:
+ if (pppoe_configured)
+ break;
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_PPPOE;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_PPPOE_SID;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
+
+ case ETH_RSS_ESP:
+ if (esp_configured)
+ break;
+ esp_configured = 1;
+
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_IPSEC_ESP;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_IPSEC_ESP_SPI;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
+
+ case ETH_RSS_AH:
+ if (ah_configured)
+ break;
+ ah_configured = 1;
+
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_IPSEC_AH;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_IPSEC_AH_SPI;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
+
+ case ETH_RSS_C_VLAN:
+ case ETH_RSS_S_VLAN:
+ if (vlan_configured)
+ break;
+ vlan_configured = 1;
+
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_VLAN;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_VLAN_TCI;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
case ETH_RSS_MPLS:
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 07a6811dd2..e1fe14c8b4 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -70,7 +70,12 @@
ETH_RSS_UDP | \
ETH_RSS_TCP | \
ETH_RSS_SCTP | \
- ETH_RSS_MPLS)
+ ETH_RSS_MPLS | \
+ ETH_RSS_C_VLAN | \
+ ETH_RSS_S_VLAN | \
+ ETH_RSS_ESP | \
+ ETH_RSS_AH | \
+ ETH_RSS_PPPOE)
/* LX2 FRC Parsed values (Little Endian) */
#define DPAA2_PKT_TYPE_ETHER 0x0060
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH 08/11] net/dpaa: add comments to explain driver behaviour
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (6 preceding siblings ...)
2021-09-27 12:26 ` [dpdk-dev] [PATCH 07/11] net/dpaa2: update RSS to support additional distributions nipun.gupta
@ 2021-09-27 12:26 ` nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 09/11] raw/dpaa2_qdma: use correct params for config and queue setup nipun.gupta
` (5 subsequent siblings)
13 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 12:26 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
This patch adds comment to explain how dpaa_port_fmc_ccnode_parse
function is working to get the HW queue from FMC policy file
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
drivers/net/dpaa/dpaa_fmc.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
index 5195053361..f8c9360311 100644
--- a/drivers/net/dpaa/dpaa_fmc.c
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017-2020 NXP
+ * Copyright 2017-2021 NXP
*/
/* System headers */
@@ -338,6 +338,12 @@ static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
fqid = keys_params->key_params[j].cc_next_engine_params
.params.enqueue_params.new_fqid;
+ /* We read DPDK queue from last classification rule present in
+ * FMC policy file. Hence, this check is required here.
+ * Also, the last classification rule in FMC policy file must
+ * have userspace queue so that it can be used by DPDK
+ * application.
+ */
if (keys_params->key_params[j].cc_next_engine_params
.next_engine != e_IOC_FM_PCD_DONE) {
DPAA_PMD_WARN("FMC CC next engine not support");
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH 09/11] raw/dpaa2_qdma: use correct params for config and queue setup
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (7 preceding siblings ...)
2021-09-27 12:26 ` [dpdk-dev] [PATCH 08/11] net/dpaa: add comments to explain driver behaviour nipun.gupta
@ 2021-09-27 12:26 ` nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 10/11] raw/dpaa2_qdma: remove checks for lcore ID nipun.gupta
` (4 subsequent siblings)
13 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 12:26 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
RAW configure and Queue setup APIs support size parameter for
configure. This patch supports the same for DPAA2 QDMA PMD APIs
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 12 +++++++++---
drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h | 8 ++++----
2 files changed, 13 insertions(+), 7 deletions(-)
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c961e18d67..2048c2c514 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
*/
#include <string.h>
@@ -1146,8 +1146,11 @@ dpaa2_qdma_configure(const struct rte_rawdev *rawdev,
DPAA2_QDMA_FUNC_TRACE();
- if (config_size != sizeof(*qdma_config))
+ if (config_size != sizeof(*qdma_config)) {
+ DPAA2_QDMA_ERR("Config size mismatch. Expected %ld, Got: %ld",
+ sizeof(*qdma_config), config_size);
return -EINVAL;
+ }
/* In case QDMA device is not in stopped state, return -EBUSY */
if (qdma_dev->state == 1) {
@@ -1247,8 +1250,11 @@ dpaa2_qdma_queue_setup(struct rte_rawdev *rawdev,
DPAA2_QDMA_FUNC_TRACE();
- if (conf_size != sizeof(*q_config))
+ if (conf_size != sizeof(*q_config)) {
+ DPAA2_QDMA_ERR("Config size mismatch. Expected %ld, Got: %ld",
+ sizeof(*q_config), conf_size);
return -EINVAL;
+ }
rte_spinlock_lock(&qdma_dev->lock);
diff --git a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
index cc1ac25451..1314474271 100644
--- a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
+++ b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
*/
#ifndef __RTE_PMD_DPAA2_QDMA_H__
@@ -177,13 +177,13 @@ struct rte_qdma_queue_config {
#define rte_qdma_info rte_rawdev_info
#define rte_qdma_start(id) rte_rawdev_start(id)
#define rte_qdma_reset(id) rte_rawdev_reset(id)
-#define rte_qdma_configure(id, cf) rte_rawdev_configure(id, cf)
+#define rte_qdma_configure(id, cf, sz) rte_rawdev_configure(id, cf, sz)
#define rte_qdma_dequeue_buffers(id, buf, num, ctxt) \
rte_rawdev_dequeue_buffers(id, buf, num, ctxt)
#define rte_qdma_enqueue_buffers(id, buf, num, ctxt) \
rte_rawdev_enqueue_buffers(id, buf, num, ctxt)
-#define rte_qdma_queue_setup(id, qid, cfg) \
- rte_rawdev_queue_setup(id, qid, cfg)
+#define rte_qdma_queue_setup(id, qid, cfg, sz) \
+ rte_rawdev_queue_setup(id, qid, cfg, sz)
/*TODO introduce per queue stats API in rawdew */
/**
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH 10/11] raw/dpaa2_qdma: remove checks for lcore ID
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (8 preceding siblings ...)
2021-09-27 12:26 ` [dpdk-dev] [PATCH 09/11] raw/dpaa2_qdma: use correct params for config and queue setup nipun.gupta
@ 2021-09-27 12:26 ` nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 11/11] common/dpaax: fix paddr to vaddr invalid conversion nipun.gupta
` (3 subsequent siblings)
13 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 12:26 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
There is no need for preventional check of rte_lcore_id() in
data path. This patch removes the same.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 14 --------------
1 file changed, 14 deletions(-)
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index 2048c2c514..807a3694cd 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1389,13 +1389,6 @@ dpaa2_qdma_enqueue(struct rte_rawdev *rawdev,
&dpdmai_dev->qdma_dev->vqs[e_context->vq_id];
int ret;
- /* Return error in case of wrong lcore_id */
- if (rte_lcore_id() != qdma_vq->lcore_id) {
- DPAA2_QDMA_ERR("QDMA enqueue for vqid %d on wrong core",
- e_context->vq_id);
- return -EINVAL;
- }
-
ret = qdma_vq->enqueue_job(qdma_vq, e_context->job, nb_jobs);
if (ret < 0) {
DPAA2_QDMA_ERR("DPDMAI device enqueue failed: %d", ret);
@@ -1428,13 +1421,6 @@ dpaa2_qdma_dequeue(struct rte_rawdev *rawdev,
return -EINVAL;
}
- /* Return error in case of wrong lcore_id */
- if (rte_lcore_id() != (unsigned int)(qdma_vq->lcore_id)) {
- DPAA2_QDMA_WARN("QDMA dequeue for vqid %d on wrong core",
- context->vq_id);
- return -1;
- }
-
/* Only dequeue when there are pending jobs on VQ */
if (qdma_vq->num_enqueues == qdma_vq->num_dequeues)
return 0;
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH 11/11] common/dpaax: fix paddr to vaddr invalid conversion
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (9 preceding siblings ...)
2021-09-27 12:26 ` [dpdk-dev] [PATCH 10/11] raw/dpaa2_qdma: remove checks for lcore ID nipun.gupta
@ 2021-09-27 12:26 ` nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (2 subsequent siblings)
13 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 12:26 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, stable,
Gagandeep Singh, Nipun Gupta
From: Gagandeep Singh <g.singh@nxp.com>
If some of the VA entries of table are somehow not populated and are
NULL, it can add offset to NULL and return the invalid VA in PA to
VA conversion.
In this patch, adding a check if the VA entry has valid address only
then add offset and return VA.
Fixes: 2f3d633aa593 ("common/dpaax: add library for PA/VA translation table")
Cc: stable@dpdk.org
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/common/dpaax/dpaax_iova_table.h | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/common/dpaax/dpaax_iova_table.h b/drivers/common/dpaax/dpaax_iova_table.h
index 230fba8ba0..d7087ee7ca 100644
--- a/drivers/common/dpaax/dpaax_iova_table.h
+++ b/drivers/common/dpaax/dpaax_iova_table.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*/
#ifndef _DPAAX_IOVA_TABLE_H_
@@ -101,6 +101,12 @@ dpaax_iova_table_get_va(phys_addr_t paddr) {
/* paddr > entry->start && paddr <= entry->(start+len) */
index = (paddr_align - entry[i].start)/DPAAX_MEM_SPLIT;
+ /* paddr is within range, but no vaddr entry ever written
+ * at index
+ */
+ if ((void *)entry[i].pages[index] == NULL)
+ return NULL;
+
vaddr = (void *)((uintptr_t)entry[i].pages[index] + offset);
break;
} while (1);
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (10 preceding siblings ...)
2021-09-27 12:26 ` [dpdk-dev] [PATCH 11/11] common/dpaax: fix paddr to vaddr invalid conversion nipun.gupta
@ 2021-09-27 13:25 ` nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 01/11] bus/fslmc: updated MC FW to 10.28 nipun.gupta
` (10 more replies)
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
13 siblings, 11 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 13:25 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Nipun Gupta <nipun.gupta@nxp.com>
This series adds new functionality related to flow redirection,
multiple ordered tx enqueues, generating HW hash key etc.
It also updates the MC firmware version and includes a fix in
dpaxx library.
Changes in v1:
- Fix checkpatch errors
Gagandeep Singh (1):
common/dpaax: fix paddr to vaddr invalid conversion
Hemant Agrawal (4):
bus/fslmc: updated MC FW to 10.28
bus/fslmc: add qbman debug APIs support
net/dpaa2: add debug print for MTU set for jumbo
net/dpaa2: add function to generate HW hash key
Jun Yang (2):
net/dpaa2: support Tx flow redirection action
net/dpaa2: support multiple Tx queues enqueue for ordered
Nipun Gupta (2):
raw/dpaa2_qdma: use correct params for config and queue setup
raw/dpaa2_qdma: remove checks for lcore ID
Rohit Raj (1):
net/dpaa: add comments to explain driver behaviour
Vanshika Shukla (1):
net/dpaa2: update RSS to support additional distributions
drivers/bus/fslmc/mc/dpdmai.c | 4 +-
drivers/bus/fslmc/mc/fsl_dpdmai.h | 21 +-
drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h | 15 +-
drivers/bus/fslmc/mc/fsl_dpmng.h | 4 +-
drivers/bus/fslmc/mc/fsl_dpopr.h | 7 +-
.../bus/fslmc/qbman/include/fsl_qbman_debug.h | 201 +++++-
drivers/bus/fslmc/qbman/qbman_debug.c | 621 ++++++++++++++++++
drivers/bus/fslmc/qbman/qbman_portal.c | 6 +
drivers/common/dpaax/dpaax_iova_table.h | 8 +-
drivers/event/dpaa2/dpaa2_eventdev.c | 12 +-
drivers/net/dpaa/dpaa_fmc.c | 8 +-
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 70 +-
drivers/net/dpaa2/base/dpaa2_tlu_hash.c | 153 +++++
drivers/net/dpaa2/dpaa2_ethdev.c | 9 +-
drivers/net/dpaa2/dpaa2_ethdev.h | 12 +-
drivers/net/dpaa2/dpaa2_flow.c | 116 +++-
drivers/net/dpaa2/dpaa2_rxtx.c | 140 ++++
drivers/net/dpaa2/mc/dpdmux.c | 43 ++
drivers/net/dpaa2/mc/dpni.c | 48 +-
drivers/net/dpaa2/mc/dprtc.c | 78 ++-
drivers/net/dpaa2/mc/fsl_dpdmux.h | 6 +
drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h | 9 +
drivers/net/dpaa2/mc/fsl_dpkg.h | 6 +-
drivers/net/dpaa2/mc/fsl_dpni.h | 147 ++++-
drivers/net/dpaa2/mc/fsl_dpni_cmd.h | 55 +-
drivers/net/dpaa2/mc/fsl_dprtc.h | 19 +-
drivers/net/dpaa2/mc/fsl_dprtc_cmd.h | 25 +-
drivers/net/dpaa2/meson.build | 1 +
drivers/net/dpaa2/rte_pmd_dpaa2.h | 19 +
drivers/net/dpaa2/version.map | 3 +
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 26 +-
drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h | 8 +-
32 files changed, 1789 insertions(+), 111 deletions(-)
create mode 100644 drivers/net/dpaa2/base/dpaa2_tlu_hash.c
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v1 01/11] bus/fslmc: updated MC FW to 10.28
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
@ 2021-09-27 13:25 ` nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 02/11] net/dpaa2: support Tx flow redirection action nipun.gupta
` (9 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 13:25 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Updating MC firmware support APIs to be latest. It supports
improved DPDMUX (SRIOV equivalent) for traffic split between
dpnis and additional PTP APIs.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/fslmc/mc/dpdmai.c | 4 +-
drivers/bus/fslmc/mc/fsl_dpdmai.h | 21 ++++-
drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h | 15 ++--
drivers/bus/fslmc/mc/fsl_dpmng.h | 4 +-
drivers/bus/fslmc/mc/fsl_dpopr.h | 7 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 2 +-
drivers/net/dpaa2/mc/dpdmux.c | 43 +++++++++
drivers/net/dpaa2/mc/dpni.c | 48 ++++++----
drivers/net/dpaa2/mc/dprtc.c | 78 +++++++++++++++-
drivers/net/dpaa2/mc/fsl_dpdmux.h | 6 ++
drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h | 9 ++
drivers/net/dpaa2/mc/fsl_dpkg.h | 6 +-
drivers/net/dpaa2/mc/fsl_dpni.h | 124 ++++++++++++++++++++++----
drivers/net/dpaa2/mc/fsl_dpni_cmd.h | 55 +++++++++---
drivers/net/dpaa2/mc/fsl_dprtc.h | 19 +++-
drivers/net/dpaa2/mc/fsl_dprtc_cmd.h | 25 +++++-
16 files changed, 401 insertions(+), 65 deletions(-)
diff --git a/drivers/bus/fslmc/mc/dpdmai.c b/drivers/bus/fslmc/mc/dpdmai.c
index dcb9d516a1..9c2f3bf9d5 100644
--- a/drivers/bus/fslmc/mc/dpdmai.c
+++ b/drivers/bus/fslmc/mc/dpdmai.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*/
#include <fsl_mc_sys.h>
@@ -116,6 +116,7 @@ int dpdmai_create(struct fsl_mc_io *mc_io,
cmd_params->num_queues = cfg->num_queues;
cmd_params->priorities[0] = cfg->priorities[0];
cmd_params->priorities[1] = cfg->priorities[1];
+ cmd_params->options = cpu_to_le32(cfg->adv.options);
/* send command to mc*/
err = mc_send_command(mc_io, &cmd);
@@ -299,6 +300,7 @@ int dpdmai_get_attributes(struct fsl_mc_io *mc_io,
attr->id = le32_to_cpu(rsp_params->id);
attr->num_of_priorities = rsp_params->num_of_priorities;
attr->num_of_queues = rsp_params->num_of_queues;
+ attr->options = le32_to_cpu(rsp_params->options);
return 0;
}
diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai.h b/drivers/bus/fslmc/mc/fsl_dpdmai.h
index 19328c00a0..5af8ed48c0 100644
--- a/drivers/bus/fslmc/mc/fsl_dpdmai.h
+++ b/drivers/bus/fslmc/mc/fsl_dpdmai.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*/
#ifndef __FSL_DPDMAI_H
@@ -36,15 +36,32 @@ int dpdmai_close(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token);
+/* DPDMAI options */
+
+/**
+ * Enable individual Congestion Groups usage per each priority queue
+ * If this option is not enabled then only one CG is used for all priority
+ * queues
+ * If this option is enabled then a separate specific CG is used for each
+ * individual priority queue.
+ * In this case the priority queue must be specified via congestion notification
+ * API
+ */
+#define DPDMAI_OPT_CG_PER_PRIORITY 0x00000001
+
/**
* struct dpdmai_cfg - Structure representing DPDMAI configuration
* @priorities: Priorities for the DMA hardware processing; valid priorities are
* configured with values 1-8; the entry following last valid entry
* should be configured with 0
+ * @options: dpdmai options
*/
struct dpdmai_cfg {
uint8_t num_queues;
uint8_t priorities[DPDMAI_PRIO_NUM];
+ struct {
+ uint32_t options;
+ } adv;
};
int dpdmai_create(struct fsl_mc_io *mc_io,
@@ -81,11 +98,13 @@ int dpdmai_reset(struct fsl_mc_io *mc_io,
* struct dpdmai_attr - Structure representing DPDMAI attributes
* @id: DPDMAI object ID
* @num_of_priorities: number of priorities
+ * @options: dpdmai options
*/
struct dpdmai_attr {
int id;
uint8_t num_of_priorities;
uint8_t num_of_queues;
+ uint32_t options;
};
__rte_internal
diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h b/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
index 7e122de4ef..c8f6b990f8 100644
--- a/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
@@ -1,32 +1,33 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2017-2018, 2020-2021 NXP
*/
-
#ifndef _FSL_DPDMAI_CMD_H
#define _FSL_DPDMAI_CMD_H
/* DPDMAI Version */
#define DPDMAI_VER_MAJOR 3
-#define DPDMAI_VER_MINOR 3
+#define DPDMAI_VER_MINOR 4
/* Command versioning */
#define DPDMAI_CMD_BASE_VERSION 1
#define DPDMAI_CMD_VERSION_2 2
+#define DPDMAI_CMD_VERSION_3 3
#define DPDMAI_CMD_ID_OFFSET 4
#define DPDMAI_CMD(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_BASE_VERSION)
#define DPDMAI_CMD_V2(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_VERSION_2)
+#define DPDMAI_CMD_V3(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_VERSION_3)
/* Command IDs */
#define DPDMAI_CMDID_CLOSE DPDMAI_CMD(0x800)
#define DPDMAI_CMDID_OPEN DPDMAI_CMD(0x80E)
-#define DPDMAI_CMDID_CREATE DPDMAI_CMD_V2(0x90E)
+#define DPDMAI_CMDID_CREATE DPDMAI_CMD_V3(0x90E)
#define DPDMAI_CMDID_DESTROY DPDMAI_CMD(0x98E)
#define DPDMAI_CMDID_GET_API_VERSION DPDMAI_CMD(0xa0E)
#define DPDMAI_CMDID_ENABLE DPDMAI_CMD(0x002)
#define DPDMAI_CMDID_DISABLE DPDMAI_CMD(0x003)
-#define DPDMAI_CMDID_GET_ATTR DPDMAI_CMD_V2(0x004)
+#define DPDMAI_CMDID_GET_ATTR DPDMAI_CMD_V3(0x004)
#define DPDMAI_CMDID_RESET DPDMAI_CMD(0x005)
#define DPDMAI_CMDID_IS_ENABLED DPDMAI_CMD(0x006)
@@ -51,6 +52,8 @@ struct dpdmai_cmd_open {
struct dpdmai_cmd_create {
uint8_t num_queues;
uint8_t priorities[2];
+ uint8_t pad;
+ uint32_t options;
};
struct dpdmai_cmd_destroy {
@@ -69,6 +72,8 @@ struct dpdmai_rsp_get_attr {
uint32_t id;
uint8_t num_of_priorities;
uint8_t num_of_queues;
+ uint16_t pad;
+ uint32_t options;
};
#define DPDMAI_DEST_TYPE_SHIFT 0
diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
index 8764ceaed9..7e9bd96429 100644
--- a/drivers/bus/fslmc/mc/fsl_dpmng.h
+++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2021 NXP
*
*/
#ifndef __FSL_DPMNG_H
@@ -20,7 +20,7 @@ struct fsl_mc_io;
* Management Complex firmware version information
*/
#define MC_VER_MAJOR 10
-#define MC_VER_MINOR 18
+#define MC_VER_MINOR 28
/**
* struct mc_version
diff --git a/drivers/bus/fslmc/mc/fsl_dpopr.h b/drivers/bus/fslmc/mc/fsl_dpopr.h
index fd727e011b..74dd32f783 100644
--- a/drivers/bus/fslmc/mc/fsl_dpopr.h
+++ b/drivers/bus/fslmc/mc/fsl_dpopr.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*
*/
#ifndef __FSL_DPOPR_H_
@@ -22,7 +22,10 @@
* Retire an existing Order Point Record option
*/
#define OPR_OPT_RETIRE 0x2
-
+/**
+ * Assign an existing Order Point Record to a queue
+ */
+#define OPR_OPT_ASSIGN 0x4
/**
* struct opr_cfg - Structure representing OPR configuration
* @oprrws: Order point record (OPR) restoration window size (0 to 5)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..560b79151b 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2273,7 +2273,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
ret = dpni_set_opr(dpni, CMD_PRI_LOW, eth_priv->token,
dpaa2_ethq->tc_index, flow_id,
- OPR_OPT_CREATE, &ocfg);
+ OPR_OPT_CREATE, &ocfg, 0);
if (ret) {
DPAA2_PMD_ERR("Error setting opr: ret: %d\n", ret);
return ret;
diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
index 93912ef9d3..edbb01b45b 100644
--- a/drivers/net/dpaa2/mc/dpdmux.c
+++ b/drivers/net/dpaa2/mc/dpdmux.c
@@ -491,6 +491,49 @@ int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
+/**
+ * dpdmux_get_max_frame_length() - Return the maximum frame length for DPDMUX
+ * interface
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPDMUX object
+ * @if_id: Interface id
+ * @max_frame_length: maximum frame length
+ *
+ * When dpdmux object is in VEPA mode this function will ignore if_id parameter
+ * and will return maximum frame length for uplink interface (if_id==0).
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dpdmux_get_max_frame_length(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint16_t if_id,
+ uint16_t *max_frame_length)
+{
+ struct mc_command cmd = { 0 };
+ struct dpdmux_cmd_get_max_frame_len *cmd_params;
+ struct dpdmux_rsp_get_max_frame_len *rsp_params;
+ int err = 0;
+
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_GET_MAX_FRAME_LENGTH,
+ cmd_flags,
+ token);
+ cmd_params = (struct dpdmux_cmd_get_max_frame_len *)cmd.params;
+ cmd_params->if_id = cpu_to_le16(if_id);
+
+ err = mc_send_command(mc_io, &cmd);
+ if (err)
+ return err;
+
+ rsp_params = (struct dpdmux_rsp_get_max_frame_len *)cmd.params;
+ *max_frame_length = le16_to_cpu(rsp_params->max_len);
+
+ /* send command to mc*/
+ return err;
+}
+
/**
* dpdmux_ul_reset_counters() - Function resets the uplink counter
* @mc_io: Pointer to MC portal's I/O object
diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c
index b254931386..60048d6c43 100644
--- a/drivers/net/dpaa2/mc/dpni.c
+++ b/drivers/net/dpaa2/mc/dpni.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
*
*/
#include <fsl_mc_sys.h>
@@ -126,6 +126,8 @@ int dpni_create(struct fsl_mc_io *mc_io,
cmd_params->qos_entries = cfg->qos_entries;
cmd_params->fs_entries = cpu_to_le16(cfg->fs_entries);
cmd_params->num_cgs = cfg->num_cgs;
+ cmd_params->num_opr = cfg->num_opr;
+ cmd_params->dist_key_size = cfg->dist_key_size;
/* send command to mc*/
err = mc_send_command(mc_io, &cmd);
@@ -1829,6 +1831,7 @@ int dpni_add_fs_entry(struct fsl_mc_io *mc_io,
cmd_params->options = cpu_to_le16(action->options);
cmd_params->flow_id = cpu_to_le16(action->flow_id);
cmd_params->flc = cpu_to_le64(action->flc);
+ cmd_params->redir_token = cpu_to_le16(action->redirect_obj_token);
/* send command to mc*/
return mc_send_command(mc_io, &cmd);
@@ -2442,7 +2445,7 @@ int dpni_reset_statistics(struct fsl_mc_io *mc_io,
}
/**
- * dpni_set_taildrop() - Set taildrop per queue or TC
+ * dpni_set_taildrop() - Set taildrop per congestion group
*
* Setting a per-TC taildrop (cg_point = DPNI_CP_GROUP) will reset any current
* congestion notification or early drop (WRED) configuration previously applied
@@ -2451,13 +2454,14 @@ int dpni_reset_statistics(struct fsl_mc_io *mc_io,
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
- * @cg_point: Congestion point, DPNI_CP_QUEUE is only supported in
+ * @cg_point: Congestion group identifier DPNI_CP_QUEUE is only supported in
* combination with DPNI_QUEUE_RX.
* @q_type: Queue type, can be DPNI_QUEUE_RX or DPNI_QUEUE_TX.
* @tc: Traffic class to apply this taildrop to
- * @q_index: Index of the queue if the DPNI supports multiple queues for
+ * @index/cgid: Index of the queue if the DPNI supports multiple queues for
* traffic distribution.
- * Ignored if CONGESTION_POINT is not DPNI_CP_QUEUE.
+ * If CONGESTION_POINT is DPNI_CP_CONGESTION_GROUP then it
+ * represent the cgid of the congestion point
* @taildrop: Taildrop structure
*
* Return: '0' on Success; Error code otherwise.
@@ -2577,7 +2581,8 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
uint8_t options,
- struct opr_cfg *cfg)
+ struct opr_cfg *cfg,
+ uint8_t opr_id)
{
struct dpni_cmd_set_opr *cmd_params;
struct mc_command cmd = { 0 };
@@ -2591,6 +2596,7 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
cmd_params->tc_id = tc;
cmd_params->index = index;
cmd_params->options = options;
+ cmd_params->opr_id = opr_id;
cmd_params->oloe = cfg->oloe;
cmd_params->oeane = cfg->oeane;
cmd_params->olws = cfg->olws;
@@ -2621,7 +2627,9 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
struct opr_cfg *cfg,
- struct opr_qry *qry)
+ struct opr_qry *qry,
+ uint8_t flags,
+ uint8_t opr_id)
{
struct dpni_rsp_get_opr *rsp_params;
struct dpni_cmd_get_opr *cmd_params;
@@ -2635,6 +2643,8 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
cmd_params = (struct dpni_cmd_get_opr *)cmd.params;
cmd_params->index = index;
cmd_params->tc_id = tc;
+ cmd_params->flags = flags;
+ cmd_params->opr_id = opr_id;
/* send command to mc*/
err = mc_send_command(mc_io, &cmd);
@@ -2673,7 +2683,7 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
* If the FS is already enabled with a previous call the classification
* key will be changed but all the table rules are kept. If the
* existing rules do not match the key the results will not be
- * predictable. It is the user responsibility to keep key integrity
+ * predictable. It is the user responsibility to keep keyintegrity.
* If cfg.enable is set to 1 the command will create a flow steering table
* and will classify packets according to this table. The packets
* that miss all the table rules will be classified according to
@@ -2695,7 +2705,7 @@ int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
cmd_params = (struct dpni_cmd_set_rx_fs_dist *)cmd.params;
cmd_params->dist_size = cpu_to_le16(cfg->dist_size);
dpni_set_field(cmd_params->enable, RX_FS_DIST_ENABLE, cfg->enable);
- cmd_params->tc = cfg->tc;
+ cmd_params->tc = cfg->tc;
cmd_params->miss_flow_id = cpu_to_le16(cfg->fs_miss_flow_id);
cmd_params->key_cfg_iova = cpu_to_le64(cfg->key_cfg_iova);
@@ -2710,9 +2720,9 @@ int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
* @token: Token of DPNI object
* @cfg: Distribution configuration
* If cfg.enable is set to 1 the packets will be classified using a hash
- * function based on the key received in cfg.key_cfg_iova parameter
+ * function based on the key received in cfg.key_cfg_iova parameter.
* If cfg.enable is set to 0 the packets will be sent to the queue configured in
- * dpni_set_rx_dist_default_queue() call
+ * dpni_set_rx_dist_default_queue() call
*/
int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
uint16_t token, const struct dpni_rx_dist_cfg *cfg)
@@ -2735,9 +2745,9 @@ int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
}
/**
- * dpni_add_custom_tpid() - Configures a distinct Ethertype value
- * (or TPID value) to indicate VLAN tag in addition to the common
- * TPID values 0x8100 and 0x88A8
+ * dpni_add_custom_tpid() - Configures a distinct Ethertype value (or TPID
+ * value) to indicate VLAN tag in adition to the common TPID values
+ * 0x81000 and 0x88A8
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
@@ -2745,8 +2755,8 @@ int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
*
* Only two custom values are accepted. If the function is called for the third
* time it will return error.
- * To replace an existing value use dpni_remove_custom_tpid() to remove
- * a previous TPID and after that use again the function.
+ * To replace an existing value use dpni_remove_custom_tpid() to remove a
+ * previous TPID and after that use again the function.
*
* Return: '0' on Success; Error code otherwise.
*/
@@ -2769,7 +2779,7 @@ int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
/**
* dpni_remove_custom_tpid() - Removes a distinct Ethertype value added
- * previously with dpni_add_custom_tpid()
+ * previously with dpni_add_custom_tpid()
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
@@ -2798,8 +2808,8 @@ int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
}
/**
- * dpni_get_custom_tpid() - Returns custom TPID (vlan tags) values configured
- * to detect 802.1q frames
+ * dpni_get_custom_tpid() - Returns custom TPID (vlan tags) values configured to
+ * detect 802.1q frames
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
diff --git a/drivers/net/dpaa2/mc/dprtc.c b/drivers/net/dpaa2/mc/dprtc.c
index 42ac89150e..36e62eb0c3 100644
--- a/drivers/net/dpaa2/mc/dprtc.c
+++ b/drivers/net/dpaa2/mc/dprtc.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
- * Copyright 2019 NXP
+ * Copyright 2019-2021 NXP
*/
#include <fsl_mc_sys.h>
#include <fsl_mc_cmd.h>
@@ -521,3 +521,79 @@ int dprtc_get_api_version(struct fsl_mc_io *mc_io,
return 0;
}
+
+/**
+ * dprtc_get_ext_trigger_timestamp - Retrieve the Ext trigger timestamp status
+ * (timestamp + flag for unread timestamp in FIFO).
+ *
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPRTC object
+ * @id: External trigger id.
+ * @status: Returned object's external trigger status
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dprtc_get_ext_trigger_timestamp(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ struct dprtc_ext_trigger_status *status)
+{
+ struct dprtc_rsp_ext_trigger_timestamp *rsp_params;
+ struct dprtc_cmd_ext_trigger_timestamp *cmd_params;
+ struct mc_command cmd = { 0 };
+ int err;
+
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_EXT_TRIGGER_TIMESTAMP,
+ cmd_flags,
+ token);
+
+ cmd_params = (struct dprtc_cmd_ext_trigger_timestamp *)cmd.params;
+ cmd_params->id = id;
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ if (err)
+ return err;
+
+ /* retrieve response parameters */
+ rsp_params = (struct dprtc_rsp_ext_trigger_timestamp *)cmd.params;
+ status->timestamp = le64_to_cpu(rsp_params->timestamp);
+ status->unread_valid_timestamp = rsp_params->unread_valid_timestamp;
+
+ return 0;
+}
+
+/**
+ * dprtc_set_fiper_loopback() - Set the fiper pulse as source of interrupt for
+ * External Trigger stamps
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPRTC object
+ * @id: External trigger id.
+ * @fiper_as_input: Bit used to control interrupt signal source:
+ * 0 = Normal operation, interrupt external source
+ * 1 = Fiper pulse is looped back into Trigger input
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dprtc_set_fiper_loopback(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ uint8_t fiper_as_input)
+{
+ struct dprtc_ext_trigger_cfg *cmd_params;
+ struct mc_command cmd = { 0 };
+
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_FIPER_LOOPBACK,
+ cmd_flags,
+ token);
+
+ cmd_params = (struct dprtc_ext_trigger_cfg *)cmd.params;
+ cmd_params->id = id;
+ cmd_params->fiper_as_input = fiper_as_input;
+
+ return mc_send_command(mc_io, &cmd);
+}
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index f4f9598a29..b01a98eb59 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -196,6 +196,12 @@ int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
uint16_t token,
uint16_t max_frame_length);
+int dpdmux_get_max_frame_length(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint16_t if_id,
+ uint16_t *max_frame_length);
+
/**
* enum dpdmux_counter_type - Counter types
* @DPDMUX_CNT_ING_FRAME: Counts ingress frames
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
index 2ab4d75dfb..f8a1b5b1ae 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
@@ -39,6 +39,7 @@
#define DPDMUX_CMDID_RESET DPDMUX_CMD(0x005)
#define DPDMUX_CMDID_IS_ENABLED DPDMUX_CMD(0x006)
#define DPDMUX_CMDID_SET_MAX_FRAME_LENGTH DPDMUX_CMD(0x0a1)
+#define DPDMUX_CMDID_GET_MAX_FRAME_LENGTH DPDMUX_CMD(0x0a2)
#define DPDMUX_CMDID_UL_RESET_COUNTERS DPDMUX_CMD(0x0a3)
@@ -124,6 +125,14 @@ struct dpdmux_cmd_set_max_frame_length {
uint16_t max_frame_length;
};
+struct dpdmux_cmd_get_max_frame_len {
+ uint16_t if_id;
+};
+
+struct dpdmux_rsp_get_max_frame_len {
+ uint16_t max_len;
+};
+
#define DPDMUX_ACCEPTED_FRAMES_TYPE_SHIFT 0
#define DPDMUX_ACCEPTED_FRAMES_TYPE_SIZE 4
#define DPDMUX_UNACCEPTED_FRAMES_ACTION_SHIFT 4
diff --git a/drivers/net/dpaa2/mc/fsl_dpkg.h b/drivers/net/dpaa2/mc/fsl_dpkg.h
index 02fe8d50e7..70f2339ea5 100644
--- a/drivers/net/dpaa2/mc/fsl_dpkg.h
+++ b/drivers/net/dpaa2/mc/fsl_dpkg.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
* Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2021 NXP
*
*/
#ifndef __FSL_DPKG_H_
@@ -21,7 +21,7 @@
/**
* Number of extractions per key profile
*/
-#define DPKG_MAX_NUM_OF_EXTRACTS 10
+#define DPKG_MAX_NUM_OF_EXTRACTS 20
/**
* enum dpkg_extract_from_hdr_type - Selecting extraction by header types
@@ -177,7 +177,7 @@ struct dpni_ext_set_rx_tc_dist {
uint8_t num_extracts;
uint8_t pad[7];
/* words 1..25 */
- struct dpni_dist_extract extracts[10];
+ struct dpni_dist_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
};
int dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index df42746c9a..34c6b20033 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
*
*/
#ifndef __FSL_DPNI_H
@@ -19,6 +19,11 @@ struct fsl_mc_io;
/** General DPNI macros */
+/**
+ * Maximum size of a key
+ */
+#define DPNI_MAX_KEY_SIZE 56
+
/**
* Maximum number of traffic classes
*/
@@ -95,8 +100,18 @@ struct fsl_mc_io;
* Define a custom number of congestion groups
*/
#define DPNI_OPT_CUSTOM_CG 0x000200
-
-
+/**
+ * Define a custom number of order point records
+ */
+#define DPNI_OPT_CUSTOM_OPR 0x000400
+/**
+ * Hash key is shared between all traffic classes
+ */
+#define DPNI_OPT_SHARED_HASH_KEY 0x000800
+/**
+ * Flow steering table is shared between all traffic classes
+ */
+#define DPNI_OPT_SHARED_FS 0x001000
/**
* Software sequence maximum layout size
*/
@@ -183,6 +198,8 @@ struct dpni_cfg {
uint8_t num_rx_tcs;
uint8_t qos_entries;
uint8_t num_cgs;
+ uint16_t num_opr;
+ uint8_t dist_key_size;
};
int dpni_create(struct fsl_mc_io *mc_io,
@@ -366,28 +383,45 @@ int dpni_get_attributes(struct fsl_mc_io *mc_io,
/**
* Extract out of frame header error
*/
-#define DPNI_ERROR_EOFHE 0x00020000
+#define DPNI_ERROR_MS 0x40000000
+#define DPNI_ERROR_PTP 0x08000000
+/* Ethernet multicast frame */
+#define DPNI_ERROR_MC 0x04000000
+/* Ethernet broadcast frame */
+#define DPNI_ERROR_BC 0x02000000
+#define DPNI_ERROR_KSE 0x00040000
+#define DPNI_ERROR_EOFHE 0x00020000
+#define DPNI_ERROR_MNLE 0x00010000
+#define DPNI_ERROR_TIDE 0x00008000
+#define DPNI_ERROR_PIEE 0x00004000
/**
* Frame length error
*/
-#define DPNI_ERROR_FLE 0x00002000
+#define DPNI_ERROR_FLE 0x00002000
/**
* Frame physical error
*/
-#define DPNI_ERROR_FPE 0x00001000
+#define DPNI_ERROR_FPE 0x00001000
+#define DPNI_ERROR_PTE 0x00000080
+#define DPNI_ERROR_ISP 0x00000040
/**
* Parsing header error
*/
-#define DPNI_ERROR_PHE 0x00000020
+#define DPNI_ERROR_PHE 0x00000020
+
+#define DPNI_ERROR_BLE 0x00000010
/**
* Parser L3 checksum error
*/
-#define DPNI_ERROR_L3CE 0x00000004
+#define DPNI_ERROR_L3CV 0x00000008
+
+#define DPNI_ERROR_L3CE 0x00000004
/**
- * Parser L3 checksum error
+ * Parser L4 checksum error
*/
-#define DPNI_ERROR_L4CE 0x00000001
+#define DPNI_ERROR_L4CV 0x00000002
+#define DPNI_ERROR_L4CE 0x00000001
/**
* enum dpni_error_action - Defines DPNI behavior for errors
* @DPNI_ERROR_ACTION_DISCARD: Discard the frame
@@ -455,6 +489,10 @@ int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
* Select to modify the sw-opaque value setting
*/
#define DPNI_BUF_LAYOUT_OPT_SW_OPAQUE 0x00000080
+/**
+ * Select to disable Scatter Gather and use single buffer
+ */
+#define DPNI_BUF_LAYOUT_OPT_NO_SG 0x00000100
/**
* struct dpni_buffer_layout - Structure representing DPNI buffer layout
@@ -733,7 +771,7 @@ int dpni_get_link_state(struct fsl_mc_io *mc_io,
/**
* struct dpni_tx_shaping - Structure representing DPNI tx shaping configuration
- * @rate_limit: Rate in Mbps
+ * @rate_limit: Rate in Mbits/s
* @max_burst_size: Burst size in bytes (up to 64KB)
*/
struct dpni_tx_shaping_cfg {
@@ -798,6 +836,11 @@ int dpni_get_primary_mac_addr(struct fsl_mc_io *mc_io,
uint16_t token,
uint8_t mac_addr[6]);
+/**
+ * Set mac addr queue action
+ */
+#define DPNI_MAC_SET_QUEUE_ACTION 1
+
int dpni_add_mac_addr(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
@@ -1464,6 +1507,7 @@ int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
struct dpni_fs_action_cfg {
uint64_t flc;
uint16_t flow_id;
+ uint16_t redirect_obj_token;
uint16_t options;
};
@@ -1595,7 +1639,8 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
uint8_t options,
- struct opr_cfg *cfg);
+ struct opr_cfg *cfg,
+ uint8_t opr_id);
int dpni_get_opr(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
@@ -1603,7 +1648,9 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
struct opr_cfg *cfg,
- struct opr_qry *qry);
+ struct opr_qry *qry,
+ uint8_t flags,
+ uint8_t opr_id);
/**
* When used for queue_idx in function dpni_set_rx_dist_default_queue will
@@ -1779,14 +1826,57 @@ int dpni_get_sw_sequence_layout(struct fsl_mc_io *mc_io,
/**
* dpni_extract_sw_sequence_layout() - extract the software sequence layout
- * @layout: software sequence layout
- * @sw_sequence_layout_buf: Zeroed 264 bytes of memory before mapping it
- * to DMA
+ * @layout: software sequence layout
+ * @sw_sequence_layout_buf:Zeroed 264 bytes of memory before mapping it to DMA
*
* This function has to be called after dpni_get_sw_sequence_layout
- *
*/
void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
const uint8_t *sw_sequence_layout_buf);
+/**
+ * struct dpni_ptp_cfg - configure single step PTP (IEEE 1588)
+ * @en: enable single step PTP. When enabled the PTPv1 functionality will
+ * not work. If the field is zero, offset and ch_update parameters
+ * will be ignored
+ * @offset: start offset from the beginning of the frame where timestamp
+ * field is found. The offset must respect all MAC headers, VLAN
+ * tags and other protocol headers
+ * @ch_update: when set UDP checksum will be updated inside packet
+ * @peer_delay: For peer-to-peer transparent clocks add this value to the
+ * correction field in addition to the transient time update. The
+ * value expresses nanoseconds.
+ */
+struct dpni_single_step_cfg {
+ uint8_t en;
+ uint8_t ch_update;
+ uint16_t offset;
+ uint32_t peer_delay;
+};
+
+int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, struct dpni_single_step_cfg *ptp_cfg);
+
+int dpni_get_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, struct dpni_single_step_cfg *ptp_cfg);
+
+/**
+ * loopback_en field is valid when calling function dpni_set_port_cfg
+ */
+#define DPNI_PORT_CFG_LOOPBACK 0x01
+
+/**
+ * struct dpni_port_cfg - custom configuration for dpni physical port
+ * @loopback_en: port loopback enabled
+ */
+struct dpni_port_cfg {
+ int loopback_en;
+};
+
+int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, uint32_t flags, struct dpni_port_cfg *port_cfg);
+
+int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, struct dpni_port_cfg *port_cfg);
+
#endif /* __FSL_DPNI_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
index c40090b8fe..6fbd93bb38 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
*
*/
#ifndef _FSL_DPNI_CMD_H
@@ -9,21 +9,25 @@
/* DPNI Version */
#define DPNI_VER_MAJOR 7
-#define DPNI_VER_MINOR 13
+#define DPNI_VER_MINOR 17
#define DPNI_CMD_BASE_VERSION 1
#define DPNI_CMD_VERSION_2 2
#define DPNI_CMD_VERSION_3 3
+#define DPNI_CMD_VERSION_4 4
+#define DPNI_CMD_VERSION_5 5
#define DPNI_CMD_ID_OFFSET 4
#define DPNI_CMD(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_BASE_VERSION)
#define DPNI_CMD_V2(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_2)
#define DPNI_CMD_V3(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_3)
+#define DPNI_CMD_V4(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_4)
+#define DPNI_CMD_V5(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_5)
/* Command IDs */
#define DPNI_CMDID_OPEN DPNI_CMD(0x801)
#define DPNI_CMDID_CLOSE DPNI_CMD(0x800)
-#define DPNI_CMDID_CREATE DPNI_CMD_V3(0x901)
+#define DPNI_CMDID_CREATE DPNI_CMD_V5(0x901)
#define DPNI_CMDID_DESTROY DPNI_CMD(0x981)
#define DPNI_CMDID_GET_API_VERSION DPNI_CMD(0xa01)
@@ -67,7 +71,7 @@
#define DPNI_CMDID_REMOVE_VLAN_ID DPNI_CMD(0x232)
#define DPNI_CMDID_CLR_VLAN_FILTERS DPNI_CMD(0x233)
-#define DPNI_CMDID_SET_RX_TC_DIST DPNI_CMD_V3(0x235)
+#define DPNI_CMDID_SET_RX_TC_DIST DPNI_CMD_V4(0x235)
#define DPNI_CMDID_SET_RX_TC_POLICING DPNI_CMD(0x23E)
@@ -75,7 +79,7 @@
#define DPNI_CMDID_ADD_QOS_ENT DPNI_CMD_V2(0x241)
#define DPNI_CMDID_REMOVE_QOS_ENT DPNI_CMD(0x242)
#define DPNI_CMDID_CLR_QOS_TBL DPNI_CMD(0x243)
-#define DPNI_CMDID_ADD_FS_ENT DPNI_CMD(0x244)
+#define DPNI_CMDID_ADD_FS_ENT DPNI_CMD_V2(0x244)
#define DPNI_CMDID_REMOVE_FS_ENT DPNI_CMD(0x245)
#define DPNI_CMDID_CLR_FS_ENT DPNI_CMD(0x246)
@@ -140,7 +144,9 @@ struct dpni_cmd_create {
uint16_t fs_entries;
uint8_t num_rx_tcs;
uint8_t pad4;
- uint8_t num_cgs;
+ uint8_t num_cgs;
+ uint16_t num_opr;
+ uint8_t dist_key_size;
};
struct dpni_cmd_destroy {
@@ -411,8 +417,6 @@ struct dpni_rsp_get_port_mac_addr {
uint8_t mac_addr[6];
};
-#define DPNI_MAC_SET_QUEUE_ACTION 1
-
struct dpni_cmd_add_mac_addr {
uint8_t flags;
uint8_t pad;
@@ -594,6 +598,7 @@ struct dpni_cmd_add_fs_entry {
uint64_t key_iova;
uint64_t mask_iova;
uint64_t flc;
+ uint16_t redir_token;
};
struct dpni_cmd_remove_fs_entry {
@@ -779,7 +784,7 @@ struct dpni_rsp_get_congestion_notification {
};
struct dpni_cmd_set_opr {
- uint8_t pad0;
+ uint8_t opr_id;
uint8_t tc_id;
uint8_t index;
uint8_t options;
@@ -792,9 +797,10 @@ struct dpni_cmd_set_opr {
};
struct dpni_cmd_get_opr {
- uint8_t pad;
+ uint8_t flags;
uint8_t tc_id;
uint8_t index;
+ uint8_t opr_id;
};
#define DPNI_RIP_SHIFT 0
@@ -911,5 +917,34 @@ struct dpni_sw_sequence_layout_entry {
uint16_t pad;
};
+#define DPNI_PTP_ENABLE_SHIFT 0
+#define DPNI_PTP_ENABLE_SIZE 1
+#define DPNI_PTP_CH_UPDATE_SHIFT 1
+#define DPNI_PTP_CH_UPDATE_SIZE 1
+struct dpni_cmd_single_step_cfg {
+ uint16_t flags;
+ uint16_t offset;
+ uint32_t peer_delay;
+};
+
+struct dpni_rsp_single_step_cfg {
+ uint16_t flags;
+ uint16_t offset;
+ uint32_t peer_delay;
+};
+
+#define DPNI_PORT_LOOPBACK_EN_SHIFT 0
+#define DPNI_PORT_LOOPBACK_EN_SIZE 1
+
+struct dpni_cmd_set_port_cfg {
+ uint32_t flags;
+ uint32_t bit_params;
+};
+
+struct dpni_rsp_get_port_cfg {
+ uint32_t flags;
+ uint32_t bit_params;
+};
+
#pragma pack(pop)
#endif /* _FSL_DPNI_CMD_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dprtc.h b/drivers/net/dpaa2/mc/fsl_dprtc.h
index 49edb5a050..84ab158444 100644
--- a/drivers/net/dpaa2/mc/fsl_dprtc.h
+++ b/drivers/net/dpaa2/mc/fsl_dprtc.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
- * Copyright 2019 NXP
+ * Copyright 2019-2021 NXP
*/
#ifndef __FSL_DPRTC_H
#define __FSL_DPRTC_H
@@ -86,6 +86,23 @@ int dprtc_set_alarm(struct fsl_mc_io *mc_io,
uint16_t token,
uint64_t time);
+struct dprtc_ext_trigger_status {
+ uint64_t timestamp;
+ uint8_t unread_valid_timestamp;
+};
+
+int dprtc_get_ext_trigger_timestamp(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ struct dprtc_ext_trigger_status *status);
+
+int dprtc_set_fiper_loopback(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ uint8_t fiper_as_input);
+
/**
* struct dprtc_attr - Structure representing DPRTC attributes
* @id: DPRTC object ID
diff --git a/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h b/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
index eca12ff5ee..61aaa4daab 100644
--- a/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
- * Copyright 2019 NXP
+ * Copyright 2019-2021 NXP
*/
#include <fsl_mc_sys.h>
#ifndef _FSL_DPRTC_CMD_H
@@ -7,13 +7,15 @@
/* DPRTC Version */
#define DPRTC_VER_MAJOR 2
-#define DPRTC_VER_MINOR 1
+#define DPRTC_VER_MINOR 3
/* Command versioning */
#define DPRTC_CMD_BASE_VERSION 1
+#define DPRTC_CMD_VERSION_2 2
#define DPRTC_CMD_ID_OFFSET 4
#define DPRTC_CMD(id) (((id) << DPRTC_CMD_ID_OFFSET) | DPRTC_CMD_BASE_VERSION)
+#define DPRTC_CMD_V2(id) (((id) << DPRTC_CMD_ID_OFFSET) | DPRTC_CMD_VERSION_2)
/* Command IDs */
#define DPRTC_CMDID_CLOSE DPRTC_CMD(0x800)
@@ -39,6 +41,7 @@
#define DPRTC_CMDID_SET_EXT_TRIGGER DPRTC_CMD(0x1d8)
#define DPRTC_CMDID_CLEAR_EXT_TRIGGER DPRTC_CMD(0x1d9)
#define DPRTC_CMDID_GET_EXT_TRIGGER_TIMESTAMP DPRTC_CMD(0x1dA)
+#define DPRTC_CMDID_SET_FIPER_LOOPBACK DPRTC_CMD(0x1dB)
/* Macros for accessing command fields smaller than 1byte */
#define DPRTC_MASK(field) \
@@ -87,5 +90,23 @@ struct dprtc_rsp_get_api_version {
uint16_t major;
uint16_t minor;
};
+
+struct dprtc_cmd_ext_trigger_timestamp {
+ uint32_t pad;
+ uint8_t id;
+};
+
+struct dprtc_rsp_ext_trigger_timestamp {
+ uint8_t unread_valid_timestamp;
+ uint8_t pad1;
+ uint16_t pad2;
+ uint32_t pad3;
+ uint64_t timestamp;
+};
+
+struct dprtc_ext_trigger_cfg {
+ uint8_t id;
+ uint8_t fiper_as_input;
+};
#pragma pack(pop)
#endif /* _FSL_DPRTC_CMD_H */
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v1 02/11] net/dpaa2: support Tx flow redirection action
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 01/11] bus/fslmc: updated MC FW to 10.28 nipun.gupta
@ 2021-09-27 13:25 ` nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 03/11] bus/fslmc: add qbman debug APIs support nipun.gupta
` (8 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 13:25 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
TX redirection support by flow action RTE_FLOW_ACTION_TYPE_PHY_PORT
and RTE_FLOW_ACTION_TYPE_PORT_ID
This action is executed by HW to forward packets between ports.
If the ingress packets match the rule, the packets are switched
without software involved and perf is improved as well.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 5 ++
drivers/net/dpaa2/dpaa2_ethdev.h | 1 +
drivers/net/dpaa2/dpaa2_flow.c | 116 +++++++++++++++++++++++++++----
drivers/net/dpaa2/mc/fsl_dpni.h | 23 ++++++
4 files changed, 132 insertions(+), 13 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 560b79151b..9cf55c0f0b 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2822,6 +2822,11 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
return ret;
}
+int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
+{
+ return dev->device->driver == &rte_dpaa2_pmd.driver;
+}
+
static int
rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
struct rte_dpaa2_device *dpaa2_dev)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index b9c729f6cd..3f34d7ecff 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -240,6 +240,7 @@ uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci);
void dpaa2_flow_clean(struct rte_eth_dev *dev);
uint16_t dpaa2_dev_tx_conf(void *queue) __rte_unused;
+int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev);
int dpaa2_timesync_enable(struct rte_eth_dev *dev);
int dpaa2_timesync_disable(struct rte_eth_dev *dev);
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index bfe17c350a..5de886ec5e 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
*/
#include <sys/queue.h>
@@ -30,10 +30,10 @@
int mc_l4_port_identification;
static char *dpaa2_flow_control_log;
-static int dpaa2_flow_miss_flow_id =
+static uint16_t dpaa2_flow_miss_flow_id =
DPNI_FS_MISS_DROP;
-#define FIXED_ENTRY_SIZE 54
+#define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
enum flow_rule_ipaddr_type {
FLOW_NONE_IPADDR,
@@ -83,9 +83,18 @@ static const
enum rte_flow_action_type dpaa2_supported_action_type[] = {
RTE_FLOW_ACTION_TYPE_END,
RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_PHY_PORT,
+ RTE_FLOW_ACTION_TYPE_PORT_ID,
RTE_FLOW_ACTION_TYPE_RSS
};
+static const
+enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
+ RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_PHY_PORT,
+ RTE_FLOW_ACTION_TYPE_PORT_ID
+};
+
/* Max of enum rte_flow_item_type + 1, for both IPv4 and IPv6*/
#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1)
@@ -2937,6 +2946,19 @@ dpaa2_configure_flow_raw(struct rte_flow *flow,
return 0;
}
+static inline int dpaa2_fs_action_supported(
+ enum rte_flow_action_type action)
+{
+ int i;
+
+ for (i = 0; i < (int)(sizeof(dpaa2_supported_fs_action_type) /
+ sizeof(enum rte_flow_action_type)); i++) {
+ if (action == dpaa2_supported_fs_action_type[i])
+ return 1;
+ }
+
+ return 0;
+}
/* The existing QoS/FS entry with IP address(es)
* needs update after
* new extract(s) are inserted before IP
@@ -3115,7 +3137,7 @@ dpaa2_flow_entry_update(
}
}
- if (curr->action != RTE_FLOW_ACTION_TYPE_QUEUE) {
+ if (!dpaa2_fs_action_supported(curr->action)) {
curr = LIST_NEXT(curr, next);
continue;
}
@@ -3253,6 +3275,43 @@ dpaa2_flow_verify_attr(
return 0;
}
+static inline struct rte_eth_dev *
+dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
+ const struct rte_flow_action *action)
+{
+ const struct rte_flow_action_phy_port *phy_port;
+ const struct rte_flow_action_port_id *port_id;
+ int idx = -1;
+ struct rte_eth_dev *dest_dev;
+
+ if (action->type == RTE_FLOW_ACTION_TYPE_PHY_PORT) {
+ phy_port = (const struct rte_flow_action_phy_port *)
+ action->conf;
+ if (!phy_port->original)
+ idx = phy_port->index;
+ } else if (action->type == RTE_FLOW_ACTION_TYPE_PORT_ID) {
+ port_id = (const struct rte_flow_action_port_id *)
+ action->conf;
+ if (!port_id->original)
+ idx = port_id->id;
+ } else {
+ return NULL;
+ }
+
+ if (idx >= 0) {
+ if (!rte_eth_dev_is_valid_port(idx))
+ return NULL;
+ dest_dev = &rte_eth_devices[idx];
+ } else {
+ dest_dev = priv->eth_dev;
+ }
+
+ if (!dpaa2_dev_is_dpaa2(dest_dev))
+ return NULL;
+
+ return dest_dev;
+}
+
static inline int
dpaa2_flow_verify_action(
struct dpaa2_dev_priv *priv,
@@ -3278,6 +3337,14 @@ dpaa2_flow_verify_action(
return -1;
}
break;
+ case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
+ if (!dpaa2_flow_redirect_dev(priv, &actions[j])) {
+ DPAA2_PMD_ERR(
+ "Invalid port id of action");
+ return -ENOTSUP;
+ }
+ break;
case RTE_FLOW_ACTION_TYPE_RSS:
rss_conf = (const struct rte_flow_action_rss *)
(actions[j].conf);
@@ -3330,11 +3397,13 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
struct dpni_qos_tbl_cfg qos_cfg;
struct dpni_fs_action_cfg action;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
- struct dpaa2_queue *rxq;
+ struct dpaa2_queue *dest_q;
struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
size_t param;
struct rte_flow *curr = LIST_FIRST(&priv->flows);
uint16_t qos_index;
+ struct rte_eth_dev *dest_dev;
+ struct dpaa2_dev_priv *dest_priv;
ret = dpaa2_flow_verify_attr(priv, attr);
if (ret)
@@ -3446,12 +3515,32 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
while (!end_of_list) {
switch (actions[j].type) {
case RTE_FLOW_ACTION_TYPE_QUEUE:
- dest_queue =
- (const struct rte_flow_action_queue *)(actions[j].conf);
- rxq = priv->rx_vq[dest_queue->index];
- flow->action = RTE_FLOW_ACTION_TYPE_QUEUE;
+ case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
- action.flow_id = rxq->flow_id;
+ flow->action = actions[j].type;
+
+ if (actions[j].type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+ dest_queue = (const struct rte_flow_action_queue *)
+ (actions[j].conf);
+ dest_q = priv->rx_vq[dest_queue->index];
+ action.flow_id = dest_q->flow_id;
+ } else {
+ dest_dev = dpaa2_flow_redirect_dev(priv,
+ &actions[j]);
+ if (!dest_dev) {
+ DPAA2_PMD_ERR(
+ "Invalid destination device to redirect!");
+ return -1;
+ }
+
+ dest_priv = dest_dev->data->dev_private;
+ dest_q = dest_priv->tx_vq[0];
+ action.options =
+ DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
+ action.redirect_obj_token = dest_priv->token;
+ action.flow_id = dest_q->flow_id;
+ }
/* Configure FS table first*/
if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
@@ -3481,8 +3570,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return -1;
}
tc_cfg.enable = true;
- tc_cfg.fs_miss_flow_id =
- dpaa2_flow_miss_flow_id;
+ tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
priv->token, &tc_cfg);
if (ret < 0) {
@@ -3970,7 +4058,7 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
ret = dpaa2_generic_flow_set(flow, dev, attr, pattern,
actions, error);
if (ret < 0) {
- if (error->type > RTE_FLOW_ERROR_TYPE_ACTION)
+ if (error && error->type > RTE_FLOW_ERROR_TYPE_ACTION)
rte_flow_error_set(error, EPERM,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
attr, "unknown");
@@ -4002,6 +4090,8 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
switch (flow->action) {
case RTE_FLOW_ACTION_TYPE_QUEUE:
+ case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
if (priv->num_rx_tc > 1) {
/* Remove entry from QoS table first */
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index 34c6b20033..469ab9b3d4 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1496,12 +1496,35 @@ int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
*/
#define DPNI_FS_OPT_SET_STASH_CONTROL 0x4
+/**
+ * Redirect matching traffic to Rx part of another dpni object. The frame
+ * will be classified according to new qos and flow steering rules from
+ * target dpni object.
+ */
+#define DPNI_FS_OPT_REDIRECT_TO_DPNI_RX 0x08
+
+/**
+ * Redirect matching traffic into Tx queue of another dpni object. The
+ * frame will be transmitted directly
+ */
+#define DPNI_FS_OPT_REDIRECT_TO_DPNI_TX 0x10
+
/**
* struct dpni_fs_action_cfg - Action configuration for table look-up
* @flc: FLC value for traffic matching this rule. Please check the Frame
* Descriptor section in the hardware documentation for more information.
* @flow_id: Identifies the Rx queue used for matching traffic. Supported
* values are in range 0 to num_queue-1.
+ * @redirect_obj_token: token that identifies the object where frame is
+ * redirected when this rule is hit. This paraneter is used only when one of the
+ * flags DPNI_FS_OPT_REDIRECT_TO_DPNI_RX or DPNI_FS_OPT_REDIRECT_TO_DPNI_TX is
+ * set.
+ * The token is obtained using dpni_open() API call. The object must stay
+ * open during the operation to ensure the fact that application has access
+ * on it. If the object is destroyed of closed next actions will take place:
+ * - if DPNI_FS_OPT_DISCARD is set the frame will be discarded by current dpni
+ * - if DPNI_FS_OPT_DISCARD is cleared the frame will be enqueued in queue with
+ * index provided in flow_id parameter.
* @options: Any combination of DPNI_FS_OPT_ values.
*/
struct dpni_fs_action_cfg {
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v1 03/11] bus/fslmc: add qbman debug APIs support
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 01/11] bus/fslmc: updated MC FW to 10.28 nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 02/11] net/dpaa2: support Tx flow redirection action nipun.gupta
@ 2021-09-27 13:25 ` nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 04/11] net/dpaa2: support multiple Tx queues enqueue for ordered nipun.gupta
` (7 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 13:25 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena,
Youri Querry, Nipun Gupta
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Add support for debugging qbman FQs
Signed-off-by: Youri Querry <youri.querry_1@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
.../bus/fslmc/qbman/include/fsl_qbman_debug.h | 201 +++++-
drivers/bus/fslmc/qbman/qbman_debug.c | 621 ++++++++++++++++++
drivers/bus/fslmc/qbman/qbman_portal.c | 6 +
3 files changed, 824 insertions(+), 4 deletions(-)
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 54096e8774..14c40a74c5 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -1,13 +1,118 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2020 NXP
+ * Copyright 2018-2020 NXP
*/
#ifndef _FSL_QBMAN_DEBUG_H
#define _FSL_QBMAN_DEBUG_H
-#include <rte_compat.h>
struct qbman_swp;
+/* Buffer pool query commands */
+struct qbman_bp_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[4];
+ uint8_t bdi;
+ uint8_t state;
+ uint32_t fill;
+ uint32_t hdptr;
+ uint16_t swdet;
+ uint16_t swdxt;
+ uint16_t hwdet;
+ uint16_t hwdxt;
+ uint16_t swset;
+ uint16_t swsxt;
+ uint16_t vbpid;
+ uint16_t icid;
+ uint64_t bpscn_addr;
+ uint64_t bpscn_ctx;
+ uint16_t hw_targ;
+ uint8_t dbe;
+ uint8_t reserved2;
+ uint8_t sdcnt;
+ uint8_t hdcnt;
+ uint8_t sscnt;
+ uint8_t reserved3[9];
+};
+
+int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
+ struct qbman_bp_query_rslt *r);
+int qbman_bp_get_bdi(struct qbman_bp_query_rslt *r);
+int qbman_bp_get_va(struct qbman_bp_query_rslt *r);
+int qbman_bp_get_wae(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swdet(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swdxt(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_hwdet(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_hwdxt(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swset(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swsxt(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_vbpid(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_icid(struct qbman_bp_query_rslt *r);
+int qbman_bp_get_pl(struct qbman_bp_query_rslt *r);
+uint64_t qbman_bp_get_bpscn_addr(struct qbman_bp_query_rslt *r);
+uint64_t qbman_bp_get_bpscn_ctx(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_hw_targ(struct qbman_bp_query_rslt *r);
+int qbman_bp_has_free_bufs(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_num_free_bufs(struct qbman_bp_query_rslt *r);
+int qbman_bp_is_depleted(struct qbman_bp_query_rslt *r);
+int qbman_bp_is_surplus(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_hdptr(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_sdcnt(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_hdcnt(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_sscnt(struct qbman_bp_query_rslt *r);
+
+/* FQ query function for programmable fields */
+struct qbman_fq_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[8];
+ uint16_t cgid;
+ uint16_t dest_wq;
+ uint8_t reserved2;
+ uint8_t fq_ctrl;
+ uint16_t ics_cred;
+ uint16_t td_thresh;
+ uint16_t oal_oac;
+ uint8_t reserved3;
+ uint8_t mctl;
+ uint64_t fqd_ctx;
+ uint16_t icid;
+ uint16_t reserved4;
+ uint32_t vfqid;
+ uint32_t fqid_er;
+ uint16_t opridsz;
+ uint8_t reserved5[18];
+};
+
+int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
+ struct qbman_fq_query_rslt *r);
+uint8_t qbman_fq_attr_get_fqctrl(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_cgrid(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_destwq(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_tdthresh(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_oa_ics(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_oa_cgr(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_oal(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_bdi(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_ff(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_va(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_ps(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_pps(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_icid(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_pl(struct qbman_fq_query_rslt *r);
+uint32_t qbman_fq_attr_get_vfqid(struct qbman_fq_query_rslt *r);
+uint32_t qbman_fq_attr_get_erfqid(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r);
+
+/* FQ query command for non-programmable fields*/
+enum qbman_fq_schedstate_e {
+ qbman_fq_schedstate_oos = 0,
+ qbman_fq_schedstate_retired,
+ qbman_fq_schedstate_tentatively_scheduled,
+ qbman_fq_schedstate_truly_scheduled,
+ qbman_fq_schedstate_parked,
+ qbman_fq_schedstate_held_active,
+};
struct qbman_fq_query_np_rslt {
uint8_t verb;
@@ -32,10 +137,98 @@ uint8_t verb;
__rte_internal
int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
struct qbman_fq_query_np_rslt *r);
-
+uint8_t qbman_fq_state_schedstate(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_force_eligible(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_xoff(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_retirement_pending(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_overflow_error(const struct qbman_fq_query_np_rslt *r);
__rte_internal
uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r);
-
uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r);
+/* CGR query */
+struct qbman_cgr_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[6];
+ uint8_t ctl1;
+ uint8_t reserved1;
+ uint16_t oal;
+ uint16_t reserved2;
+ uint8_t mode;
+ uint8_t ctl2;
+ uint8_t iwc;
+ uint8_t tdc;
+ uint16_t cs_thres;
+ uint16_t cs_thres_x;
+ uint16_t td_thres;
+ uint16_t cscn_tdcp;
+ uint16_t cscn_wqid;
+ uint16_t cscn_vcgid;
+ uint16_t cg_icid;
+ uint64_t cg_wr_addr;
+ uint64_t cscn_ctx;
+ uint64_t i_cnt;
+ uint64_t a_cnt;
+};
+
+int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_wq_en_enter(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_wq_en_exit(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_wq_icd(struct qbman_cgr_query_rslt *r);
+uint8_t qbman_cgr_get_mode(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_rej_cnt_mode(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_bdi(struct qbman_cgr_query_rslt *r);
+uint16_t qbman_cgr_attr_get_cs_thres(struct qbman_cgr_query_rslt *r);
+uint16_t qbman_cgr_attr_get_cs_thres_x(struct qbman_cgr_query_rslt *r);
+uint16_t qbman_cgr_attr_get_td_thres(struct qbman_cgr_query_rslt *r);
+
+/* WRED query */
+struct qbman_wred_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[6];
+ uint8_t edp[7];
+ uint8_t reserved1;
+ uint32_t wred_parm_dp[7];
+ uint8_t reserved2[20];
+};
+
+int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_wred_query_rslt *r);
+int qbman_cgr_attr_wred_get_edp(struct qbman_wred_query_rslt *r, uint32_t idx);
+void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
+ uint64_t *maxth, uint8_t *maxp);
+uint32_t qbman_cgr_attr_wred_get_parm_dp(struct qbman_wred_query_rslt *r,
+ uint32_t idx);
+
+/* CGR/CCGR/CQ statistics query */
+int qbman_cgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt);
+int qbman_ccgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt);
+int qbman_cq_dequeue_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt);
+
+/* Query Work Queue Channel */
+struct qbman_wqchan_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint16_t chid;
+ uint8_t reserved;
+ uint8_t ctrl;
+ uint16_t cdan_wqid;
+ uint64_t cdan_ctx;
+ uint32_t reserved2[4];
+ uint32_t wq_len[8];
+};
+
+int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
+ struct qbman_wqchan_query_rslt *r);
+uint32_t qbman_wqchan_attr_get_wqlen(struct qbman_wqchan_query_rslt *r, int wq);
+uint64_t qbman_wqchan_attr_get_cdan_ctx(struct qbman_wqchan_query_rslt *r);
+uint16_t qbman_wqchan_attr_get_cdan_wqid(struct qbman_wqchan_query_rslt *r);
+uint8_t qbman_wqchan_attr_get_ctrl(struct qbman_wqchan_query_rslt *r);
+uint16_t qbman_wqchan_attr_get_chanid(struct qbman_wqchan_query_rslt *r);
#endif /* !_FSL_QBMAN_DEBUG_H */
diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 34374ae4b6..eea06988ff 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2015 Freescale Semiconductor, Inc.
+ * Copyright 2018-2020 NXP
*/
#include "compat.h"
@@ -16,6 +17,179 @@
#define QBMAN_CGR_STAT_QUERY 0x55
#define QBMAN_CGR_STAT_QUERY_CLR 0x56
+struct qbman_bp_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t bpid;
+ uint8_t reserved2[60];
+};
+
+#define QB_BP_STATE_SHIFT 24
+#define QB_BP_VA_SHIFT 1
+#define QB_BP_VA_MASK 0x2
+#define QB_BP_WAE_SHIFT 2
+#define QB_BP_WAE_MASK 0x4
+#define QB_BP_PL_SHIFT 15
+#define QB_BP_PL_MASK 0x8000
+#define QB_BP_ICID_MASK 0x7FFF
+
+int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
+ struct qbman_bp_query_rslt *r)
+{
+ struct qbman_bp_query_desc *p;
+
+ /* Start the management command */
+ p = (struct qbman_bp_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ /* Encode the caller-provided attributes */
+ p->bpid = bpid;
+
+ /* Complete the management command */
+ *r = *(struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_BP_QUERY);
+ if (!r) {
+ pr_err("qbman: Query BPID %d failed, no response\n",
+ bpid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_BP_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query of BPID 0x%x failed, code=0x%02x\n", bpid,
+ r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int qbman_bp_get_bdi(struct qbman_bp_query_rslt *r)
+{
+ return r->bdi & 1;
+}
+
+int qbman_bp_get_va(struct qbman_bp_query_rslt *r)
+{
+ return (r->bdi & QB_BP_VA_MASK) >> QB_BP_VA_MASK;
+}
+
+int qbman_bp_get_wae(struct qbman_bp_query_rslt *r)
+{
+ return (r->bdi & QB_BP_WAE_MASK) >> QB_BP_WAE_SHIFT;
+}
+
+static uint16_t qbman_bp_thresh_to_value(uint16_t val)
+{
+ return (val & 0xff) << ((val & 0xf00) >> 8);
+}
+
+uint16_t qbman_bp_get_swdet(struct qbman_bp_query_rslt *r)
+{
+
+ return qbman_bp_thresh_to_value(r->swdet);
+}
+
+uint16_t qbman_bp_get_swdxt(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->swdxt);
+}
+
+uint16_t qbman_bp_get_hwdet(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->hwdet);
+}
+
+uint16_t qbman_bp_get_hwdxt(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->hwdxt);
+}
+
+uint16_t qbman_bp_get_swset(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->swset);
+}
+
+uint16_t qbman_bp_get_swsxt(struct qbman_bp_query_rslt *r)
+{
+
+ return qbman_bp_thresh_to_value(r->swsxt);
+}
+
+uint16_t qbman_bp_get_vbpid(struct qbman_bp_query_rslt *r)
+{
+ return r->vbpid;
+}
+
+uint16_t qbman_bp_get_icid(struct qbman_bp_query_rslt *r)
+{
+ return r->icid & QB_BP_ICID_MASK;
+}
+
+int qbman_bp_get_pl(struct qbman_bp_query_rslt *r)
+{
+ return (r->icid & QB_BP_PL_MASK) >> QB_BP_PL_SHIFT;
+}
+
+uint64_t qbman_bp_get_bpscn_addr(struct qbman_bp_query_rslt *r)
+{
+ return r->bpscn_addr;
+}
+
+uint64_t qbman_bp_get_bpscn_ctx(struct qbman_bp_query_rslt *r)
+{
+ return r->bpscn_ctx;
+}
+
+uint16_t qbman_bp_get_hw_targ(struct qbman_bp_query_rslt *r)
+{
+ return r->hw_targ;
+}
+
+int qbman_bp_has_free_bufs(struct qbman_bp_query_rslt *r)
+{
+ return !(int)(r->state & 0x1);
+}
+
+int qbman_bp_is_depleted(struct qbman_bp_query_rslt *r)
+{
+ return (int)((r->state & 0x2) >> 1);
+}
+
+int qbman_bp_is_surplus(struct qbman_bp_query_rslt *r)
+{
+ return (int)((r->state & 0x4) >> 2);
+}
+
+uint32_t qbman_bp_num_free_bufs(struct qbman_bp_query_rslt *r)
+{
+ return r->fill;
+}
+
+uint32_t qbman_bp_get_hdptr(struct qbman_bp_query_rslt *r)
+{
+ return r->hdptr;
+}
+
+uint32_t qbman_bp_get_sdcnt(struct qbman_bp_query_rslt *r)
+{
+ return r->sdcnt;
+}
+
+uint32_t qbman_bp_get_hdcnt(struct qbman_bp_query_rslt *r)
+{
+ return r->hdcnt;
+}
+
+uint32_t qbman_bp_get_sscnt(struct qbman_bp_query_rslt *r)
+{
+ return r->sscnt;
+}
+
struct qbman_fq_query_desc {
uint8_t verb;
uint8_t reserved[3];
@@ -23,6 +197,128 @@ struct qbman_fq_query_desc {
uint8_t reserved2[56];
};
+/* FQ query function for programmable fields */
+int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
+ struct qbman_fq_query_rslt *r)
+{
+ struct qbman_fq_query_desc *p;
+
+ p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->fqid = fqid;
+ *r = *(struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_FQ_QUERY);
+ if (!r) {
+ pr_err("qbman: Query FQID %d failed, no response\n",
+ fqid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query of FQID 0x%x failed, code=0x%02x\n",
+ fqid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+uint8_t qbman_fq_attr_get_fqctrl(struct qbman_fq_query_rslt *r)
+{
+ return r->fq_ctrl;
+}
+
+uint16_t qbman_fq_attr_get_cgrid(struct qbman_fq_query_rslt *r)
+{
+ return r->cgid;
+}
+
+uint16_t qbman_fq_attr_get_destwq(struct qbman_fq_query_rslt *r)
+{
+ return r->dest_wq;
+}
+
+static uint16_t qbman_thresh_to_value(uint16_t val)
+{
+ return ((val & 0x1FE0) >> 5) << (val & 0x1F);
+}
+
+uint16_t qbman_fq_attr_get_tdthresh(struct qbman_fq_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->td_thresh);
+}
+
+int qbman_fq_attr_get_oa_ics(struct qbman_fq_query_rslt *r)
+{
+ return (int)(r->oal_oac >> 14) & 0x1;
+}
+
+int qbman_fq_attr_get_oa_cgr(struct qbman_fq_query_rslt *r)
+{
+ return (int)(r->oal_oac >> 15);
+}
+
+uint16_t qbman_fq_attr_get_oal(struct qbman_fq_query_rslt *r)
+{
+ return (r->oal_oac & 0x0FFF);
+}
+
+int qbman_fq_attr_get_bdi(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x1);
+}
+
+int qbman_fq_attr_get_ff(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x2) >> 1;
+}
+
+int qbman_fq_attr_get_va(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x4) >> 2;
+}
+
+int qbman_fq_attr_get_ps(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x8) >> 3;
+}
+
+int qbman_fq_attr_get_pps(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x30) >> 4;
+}
+
+uint16_t qbman_fq_attr_get_icid(struct qbman_fq_query_rslt *r)
+{
+ return r->icid & 0x7FFF;
+}
+
+int qbman_fq_attr_get_pl(struct qbman_fq_query_rslt *r)
+{
+ return (int)((r->icid & 0x8000) >> 15);
+}
+
+uint32_t qbman_fq_attr_get_vfqid(struct qbman_fq_query_rslt *r)
+{
+ return r->vfqid & 0x00FFFFFF;
+}
+
+uint32_t qbman_fq_attr_get_erfqid(struct qbman_fq_query_rslt *r)
+{
+ return r->fqid_er & 0x00FFFFFF;
+}
+
+uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r)
+{
+ return r->opridsz;
+}
+
int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
struct qbman_fq_query_np_rslt *r)
{
@@ -55,6 +351,31 @@ int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
return 0;
}
+uint8_t qbman_fq_state_schedstate(const struct qbman_fq_query_np_rslt *r)
+{
+ return r->st1 & 0x7;
+}
+
+int qbman_fq_state_force_eligible(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x8) >> 3);
+}
+
+int qbman_fq_state_xoff(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x10) >> 4);
+}
+
+int qbman_fq_state_retirement_pending(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x20) >> 5);
+}
+
+int qbman_fq_state_overflow_error(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x40) >> 6);
+}
+
uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r)
{
return (r->frm_cnt & 0x00FFFFFF);
@@ -64,3 +385,303 @@ uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r)
{
return r->byte_cnt;
}
+
+/* Query CGR */
+struct qbman_cgr_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t cgid;
+ uint8_t reserved2[60];
+};
+
+int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_cgr_query_rslt *r)
+{
+ struct qbman_cgr_query_desc *p;
+
+ p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->cgid = cgid;
+ *r = *(struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_CGR_QUERY);
+ if (!r) {
+ pr_err("qbman: Query CGID %d failed, no response\n",
+ cgid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_CGR_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query CGID 0x%x failed,code=0x%02x\n", cgid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int qbman_cgr_get_cscn_wq_en_enter(struct qbman_cgr_query_rslt *r)
+{
+ return (int)(r->ctl1 & 0x1);
+}
+
+int qbman_cgr_get_cscn_wq_en_exit(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->ctl1 & 0x2) >> 1);
+}
+
+int qbman_cgr_get_cscn_wq_icd(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->ctl1 & 0x4) >> 2);
+}
+
+uint8_t qbman_cgr_get_mode(struct qbman_cgr_query_rslt *r)
+{
+ return r->mode & 0x3;
+}
+
+int qbman_cgr_get_rej_cnt_mode(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->mode & 0x4) >> 2);
+}
+
+int qbman_cgr_get_cscn_bdi(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->mode & 0x8) >> 3);
+}
+
+uint16_t qbman_cgr_attr_get_cs_thres(struct qbman_cgr_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->cs_thres);
+}
+
+uint16_t qbman_cgr_attr_get_cs_thres_x(struct qbman_cgr_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->cs_thres_x);
+}
+
+uint16_t qbman_cgr_attr_get_td_thres(struct qbman_cgr_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->td_thres);
+}
+
+int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_wred_query_rslt *r)
+{
+ struct qbman_cgr_query_desc *p;
+
+ p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->cgid = cgid;
+ *r = *(struct qbman_wred_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_WRED_QUERY);
+ if (!r) {
+ pr_err("qbman: Query CGID WRED %d failed, no response\n",
+ cgid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WRED_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query CGID WRED 0x%x failed,code=0x%02x\n",
+ cgid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int qbman_cgr_attr_wred_get_edp(struct qbman_wred_query_rslt *r, uint32_t idx)
+{
+ return (int)(r->edp[idx] & 1);
+}
+
+uint32_t qbman_cgr_attr_wred_get_parm_dp(struct qbman_wred_query_rslt *r,
+ uint32_t idx)
+{
+ return r->wred_parm_dp[idx];
+}
+
+void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
+ uint64_t *maxth, uint8_t *maxp)
+{
+ uint8_t ma, mn, step_i, step_s, pn;
+
+ ma = (uint8_t)(dp >> 24);
+ mn = (uint8_t)(dp >> 19) & 0x1f;
+ step_i = (uint8_t)(dp >> 11);
+ step_s = (uint8_t)(dp >> 6) & 0x1f;
+ pn = (uint8_t)dp & 0x3f;
+
+ *maxp = (uint8_t)(((pn<<2) * 100)/256);
+
+ if (mn == 0)
+ *maxth = ma;
+ else
+ *maxth = ((ma+256) * (1<<(mn-1)));
+
+ if (step_s == 0)
+ *minth = *maxth - step_i;
+ else
+ *minth = *maxth - (256 + step_i) * (1<<(step_s - 1));
+}
+
+/* Query CGR/CCGR/CQ statistics */
+struct qbman_cgr_statistics_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t cgid;
+ uint8_t reserved1;
+ uint8_t ct;
+ uint8_t reserved2[58];
+};
+
+struct qbman_cgr_statistics_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[14];
+ uint64_t frm_cnt;
+ uint64_t byte_cnt;
+ uint32_t reserved2[8];
+};
+
+static int qbman_cgr_statistics_query(struct qbman_swp *s, uint32_t cgid,
+ int clear, uint32_t command_type,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ struct qbman_cgr_statistics_query_desc *p;
+ struct qbman_cgr_statistics_query_rslt *r;
+ uint32_t query_verb;
+
+ p = (struct qbman_cgr_statistics_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->cgid = cgid;
+ if (command_type < 2)
+ p->ct = command_type;
+ query_verb = clear ?
+ QBMAN_CGR_STAT_QUERY_CLR : QBMAN_CGR_STAT_QUERY;
+ r = (struct qbman_cgr_statistics_query_rslt *)qbman_swp_mc_complete(s,
+ p, query_verb);
+ if (!r) {
+ pr_err("qbman: Query CGID %d statistics failed, no response\n",
+ cgid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != query_verb);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query statistics of CGID 0x%x failed, code=0x%02x\n",
+ cgid, r->rslt);
+ return -EIO;
+ }
+
+ if (*frame_cnt)
+ *frame_cnt = r->frm_cnt & 0xFFFFFFFFFFllu;
+ if (*byte_cnt)
+ *byte_cnt = r->byte_cnt & 0xFFFFFFFFFFllu;
+
+ return 0;
+}
+
+int qbman_cgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ return qbman_cgr_statistics_query(s, cgid, clear, 0xff,
+ frame_cnt, byte_cnt);
+}
+
+int qbman_ccgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ return qbman_cgr_statistics_query(s, cgid, clear, 1,
+ frame_cnt, byte_cnt);
+}
+
+int qbman_cq_dequeue_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ return qbman_cgr_statistics_query(s, cgid, clear, 0,
+ frame_cnt, byte_cnt);
+}
+
+/* WQ Chan Query */
+struct qbman_wqchan_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t chid;
+ uint8_t reserved2[60];
+};
+
+int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
+ struct qbman_wqchan_query_rslt *r)
+{
+ struct qbman_wqchan_query_desc *p;
+
+ /* Start the management command */
+ p = (struct qbman_wqchan_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ /* Encode the caller-provided attributes */
+ p->chid = chanid;
+
+ /* Complete the management command */
+ *r = *(struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_WQ_QUERY);
+ if (!r) {
+ pr_err("qbman: Query WQ Channel %d failed, no response\n",
+ chanid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WQ_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query of WQCHAN 0x%x failed, code=0x%02x\n",
+ chanid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+uint32_t qbman_wqchan_attr_get_wqlen(struct qbman_wqchan_query_rslt *r, int wq)
+{
+ return r->wq_len[wq] & 0x00FFFFFF;
+}
+
+uint64_t qbman_wqchan_attr_get_cdan_ctx(struct qbman_wqchan_query_rslt *r)
+{
+ return r->cdan_ctx;
+}
+
+uint16_t qbman_wqchan_attr_get_cdan_wqid(struct qbman_wqchan_query_rslt *r)
+{
+ return r->cdan_wqid;
+}
+
+uint8_t qbman_wqchan_attr_get_ctrl(struct qbman_wqchan_query_rslt *r)
+{
+ return r->ctrl;
+}
+
+uint16_t qbman_wqchan_attr_get_chanid(struct qbman_wqchan_query_rslt *r)
+{
+ return r->chid;
+}
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index aedcad9258..3a7579c8a7 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1865,6 +1865,12 @@ void qbman_pull_desc_set_channel(struct qbman_pull_desc *d, uint32_t chid,
d->pull.dq_src = chid;
}
+/**
+ * qbman_pull_desc_set_rad() - Decide whether reschedule the fq after dequeue
+ *
+ * @rad: 1 = Reschedule the FQ after dequeue.
+ * 0 = Allow the FQ to remain active after dequeue.
+ */
void qbman_pull_desc_set_rad(struct qbman_pull_desc *d, int rad)
{
if (d->pull.verb & (1 << QB_VDQCR_VERB_RLS_SHIFT)) {
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v1 04/11] net/dpaa2: support multiple Tx queues enqueue for ordered
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (2 preceding siblings ...)
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 03/11] bus/fslmc: add qbman debug APIs support nipun.gupta
@ 2021-09-27 13:25 ` nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 05/11] net/dpaa2: add debug print for MTU set for jumbo nipun.gupta
` (6 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 13:25 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
Support the tx enqueue in order queue mode, where the
queue id for each event may be different.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/event/dpaa2/dpaa2_eventdev.c | 12 ++-
drivers/net/dpaa2/dpaa2_ethdev.h | 4 +
drivers/net/dpaa2/dpaa2_rxtx.c | 140 +++++++++++++++++++++++++++
drivers/net/dpaa2/version.map | 1 +
4 files changed, 153 insertions(+), 4 deletions(-)
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 5ccf22f77f..28f3bbca9a 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017,2019 NXP
+ * Copyright 2017,2019-2021 NXP
*/
#include <assert.h>
@@ -1002,16 +1002,20 @@ dpaa2_eventdev_txa_enqueue(void *port,
struct rte_event ev[],
uint16_t nb_events)
{
- struct rte_mbuf *m = (struct rte_mbuf *)ev[0].mbuf;
+ void *txq[32];
+ struct rte_mbuf *m[32];
uint8_t qid, i;
RTE_SET_USED(port);
for (i = 0; i < nb_events; i++) {
- qid = rte_event_eth_tx_adapter_txq_get(m);
- rte_eth_tx_burst(m->port, qid, &m, 1);
+ m[i] = (struct rte_mbuf *)ev[i].mbuf;
+ qid = rte_event_eth_tx_adapter_txq_get(m[i]);
+ txq[i] = rte_eth_devices[m[i]->port].data->tx_queues[qid];
}
+ dpaa2_dev_tx_multi_txq_ordered(txq, m, nb_events);
+
return nb_events;
}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 3f34d7ecff..5cdd2f6418 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -236,6 +236,10 @@ void dpaa2_dev_process_ordered_event(struct qbman_swp *swp,
uint16_t dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
uint16_t dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs,
uint16_t nb_pkts);
+__rte_internal
+uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
+ struct rte_mbuf **bufs, uint16_t nb_pkts);
+
uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci);
void dpaa2_flow_clean(struct rte_eth_dev *dev);
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3..546888dc9c 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1445,6 +1445,146 @@ dpaa2_set_enqueue_descriptor(struct dpaa2_queue *dpaa2_q,
*dpaa2_seqn(m) = DPAA2_INVALID_MBUF_SEQN;
}
+uint16_t
+dpaa2_dev_tx_multi_txq_ordered(void **queue,
+ struct rte_mbuf **bufs, uint16_t nb_pkts)
+{
+ /* Function to transmit the frames to multiple queues respectively.*/
+ uint32_t loop, retry_count;
+ int32_t ret;
+ struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+ uint32_t frames_to_send;
+ struct rte_mempool *mp;
+ struct qbman_eq_desc eqdesc[MAX_TX_RING_SLOTS];
+ struct dpaa2_queue *dpaa2_q[MAX_TX_RING_SLOTS];
+ struct qbman_swp *swp;
+ uint16_t bpid;
+ struct rte_mbuf *mi;
+ struct rte_eth_dev_data *eth_data;
+ struct dpaa2_dev_priv *priv;
+ struct dpaa2_queue *order_sendq;
+
+ if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+ ret = dpaa2_affine_qbman_swp();
+ if (ret) {
+ DPAA2_PMD_ERR("Failed to allocate IO portal, tid: %d\n",
+ rte_gettid());
+ return 0;
+ }
+ }
+ swp = DPAA2_PER_LCORE_PORTAL;
+
+ for (loop = 0; loop < nb_pkts; loop++) {
+ dpaa2_q[loop] = (struct dpaa2_queue *)queue[loop];
+ eth_data = dpaa2_q[loop]->eth_data;
+ priv = eth_data->dev_private;
+ qbman_eq_desc_clear(&eqdesc[loop]);
+ if (*dpaa2_seqn(*bufs) && priv->en_ordered) {
+ order_sendq = (struct dpaa2_queue *)priv->tx_vq[0];
+ dpaa2_set_enqueue_descriptor(order_sendq,
+ (*bufs),
+ &eqdesc[loop]);
+ } else {
+ qbman_eq_desc_set_no_orp(&eqdesc[loop],
+ DPAA2_EQ_RESP_ERR_FQ);
+ qbman_eq_desc_set_fq(&eqdesc[loop],
+ dpaa2_q[loop]->fqid);
+ }
+
+ retry_count = 0;
+ while (qbman_result_SCN_state(dpaa2_q[loop]->cscn)) {
+ retry_count++;
+ /* Retry for some time before giving up */
+ if (retry_count > CONG_RETRY_COUNT)
+ goto send_frames;
+ }
+
+ if (likely(RTE_MBUF_DIRECT(*bufs))) {
+ mp = (*bufs)->pool;
+ /* Check the basic scenario and set
+ * the FD appropriately here itself.
+ */
+ if (likely(mp && mp->ops_index ==
+ priv->bp_list->dpaa2_ops_index &&
+ (*bufs)->nb_segs == 1 &&
+ rte_mbuf_refcnt_read((*bufs)) == 1)) {
+ if (unlikely((*bufs)->ol_flags
+ & PKT_TX_VLAN_PKT)) {
+ ret = rte_vlan_insert(bufs);
+ if (ret)
+ goto send_frames;
+ }
+ DPAA2_MBUF_TO_CONTIG_FD((*bufs),
+ &fd_arr[loop],
+ mempool_to_bpid(mp));
+ bufs++;
+ dpaa2_q[loop]++;
+ continue;
+ }
+ } else {
+ mi = rte_mbuf_from_indirect(*bufs);
+ mp = mi->pool;
+ }
+ /* Not a hw_pkt pool allocated frame */
+ if (unlikely(!mp || !priv->bp_list)) {
+ DPAA2_PMD_ERR("Err: No buffer pool attached");
+ goto send_frames;
+ }
+
+ if (mp->ops_index != priv->bp_list->dpaa2_ops_index) {
+ DPAA2_PMD_WARN("Non DPAA2 buffer pool");
+ /* alloc should be from the default buffer pool
+ * attached to this interface
+ */
+ bpid = priv->bp_list->buf_pool.bpid;
+
+ if (unlikely((*bufs)->nb_segs > 1)) {
+ DPAA2_PMD_ERR("S/G not supp for non hw offload buffer");
+ goto send_frames;
+ }
+ if (eth_copy_mbuf_to_fd(*bufs,
+ &fd_arr[loop], bpid)) {
+ goto send_frames;
+ }
+ /* free the original packet */
+ rte_pktmbuf_free(*bufs);
+ } else {
+ bpid = mempool_to_bpid(mp);
+ if (unlikely((*bufs)->nb_segs > 1)) {
+ if (eth_mbuf_to_sg_fd(*bufs,
+ &fd_arr[loop],
+ mp,
+ bpid))
+ goto send_frames;
+ } else {
+ eth_mbuf_to_fd(*bufs,
+ &fd_arr[loop], bpid);
+ }
+ }
+
+ bufs++;
+ dpaa2_q[loop]++;
+ }
+
+send_frames:
+ frames_to_send = loop;
+ loop = 0;
+ while (loop < frames_to_send) {
+ ret = qbman_swp_enqueue_multiple_desc(swp, &eqdesc[loop],
+ &fd_arr[loop],
+ frames_to_send - loop);
+ if (likely(ret > 0)) {
+ loop += ret;
+ } else {
+ retry_count++;
+ if (retry_count > DPAA2_MAX_TX_RETRY_COUNT)
+ break;
+ }
+ }
+
+ return loop;
+}
+
/* Callback to handle sending ordered packets through WRIOP based interface */
uint16_t
dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 3ab96344c4..f9786af7e4 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
INTERNAL {
global:
+ dpaa2_dev_tx_multi_txq_ordered;
dpaa2_eth_eventq_attach;
dpaa2_eth_eventq_detach;
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v1 05/11] net/dpaa2: add debug print for MTU set for jumbo
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (3 preceding siblings ...)
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 04/11] net/dpaa2: support multiple Tx queues enqueue for ordered nipun.gupta
@ 2021-09-27 13:25 ` nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 06/11] net/dpaa2: add function to generate HW hash key nipun.gupta
` (5 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 13:25 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch adds a debug print for MTU configured on the
device when jumbo frames are enabled.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 9cf55c0f0b..275656fbe4 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -573,6 +573,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
dev->data->dev_conf.rxmode.max_rx_pkt_len -
RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
VLAN_TAG_SIZE;
+ DPAA2_PMD_INFO("MTU configured for the device: %d",
+ dev->data->mtu);
} else {
return -1;
}
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v1 06/11] net/dpaa2: add function to generate HW hash key
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (4 preceding siblings ...)
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 05/11] net/dpaa2: add debug print for MTU set for jumbo nipun.gupta
@ 2021-09-27 13:25 ` nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 07/11] net/dpaa2: update RSS to support additional distributions nipun.gupta
` (4 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 13:25 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch add support to generate the hash key in software
equivalent to WRIOP key generation.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_tlu_hash.c | 153 ++++++++++++++++++++++++
drivers/net/dpaa2/meson.build | 1 +
drivers/net/dpaa2/rte_pmd_dpaa2.h | 19 +++
drivers/net/dpaa2/version.map | 2 +
4 files changed, 175 insertions(+)
create mode 100644 drivers/net/dpaa2/base/dpaa2_tlu_hash.c
diff --git a/drivers/net/dpaa2/base/dpaa2_tlu_hash.c b/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
new file mode 100644
index 0000000000..9eb127c07c
--- /dev/null
+++ b/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
@@ -0,0 +1,153 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <rte_pmd_dpaa2.h>
+
+static unsigned int sbox(unsigned int x)
+{
+ unsigned int a, b, c, d;
+ unsigned int oa, ob, oc, od;
+
+ a = x & 0x1;
+ b = (x >> 1) & 0x1;
+ c = (x >> 2) & 0x1;
+ d = (x >> 3) & 0x1;
+
+ oa = ((a & ~b & ~c & d) | (~a & b) | (~a & ~c & ~d) | (b & c)) & 0x1;
+ ob = ((a & ~b & d) | (~a & c & ~d) | (b & ~c)) & 0x1;
+ oc = ((a & ~b & c) | (a & ~b & ~d) | (~a & b & ~d) | (~a & c & ~d) |
+ (b & c & d)) & 0x1;
+ od = ((a & ~b & c) | (~a & b & ~c) | (a & b & ~d) | (~a & c & d)) & 0x1;
+
+ return ((od << 3) | (oc << 2) | (ob << 1) | oa);
+}
+
+static unsigned int sbox_tbl[16];
+
+static int pbox_tbl[16] = {5, 9, 0, 13,
+ 7, 2, 11, 14,
+ 1, 4, 12, 8,
+ 3, 15, 6, 10 };
+
+static unsigned int mix_tbl[8][16];
+
+static unsigned int stage(unsigned int input)
+{
+ int sbox_out = 0;
+ int pbox_out = 0;
+ int i;
+
+ /* mix */
+ input ^= input >> 16; /* xor lower */
+ input ^= input << 16; /* move original lower to upper */
+
+ for (i = 0; i < 32; i += 4) /* sbox stage */
+ sbox_out |= (sbox_tbl[(input >> i) & 0xf]) << i;
+
+ /* permutation */
+ for (i = 0; i < 16; i++)
+ pbox_out |= ((sbox_out >> i) & 0x10001) << pbox_tbl[i];
+
+ return pbox_out;
+}
+
+static unsigned int fast_stage(unsigned int input)
+{
+ int pbox_out = 0;
+ int i;
+
+ /* mix */
+ input ^= input >> 16; /* xor lower */
+ input ^= input << 16; /* move original lower to upper */
+
+ for (i = 0; i < 32; i += 4) /* sbox stage */
+ pbox_out |= mix_tbl[i >> 2][(input >> i) & 0xf];
+
+ return pbox_out;
+}
+
+static unsigned int fast_hash32(unsigned int x)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ x = fast_stage(x);
+ return x;
+}
+
+static unsigned int
+byte_crc32(unsigned char data /* new byte for the crc calculation */,
+ unsigned old_crc /* crc result of the last iteration */)
+{
+ int i;
+ unsigned int crc, polynom = 0xedb88320;
+ /* the polynomial is built on the reversed version of
+ * the CRC polynomial with out the x64 element.
+ */
+
+ crc = old_crc;
+ for (i = 0; i < 8; i++, data >>= 1)
+ crc = (crc >> 1) ^ (((crc ^ data) & 0x1) ? polynom : 0);
+ /* xor with polynomial is lsb of crc^data is 1 */
+
+ return crc;
+}
+
+static unsigned int crc32_table[256];
+
+static void init_crc32_table(void)
+{
+ int i;
+
+ for (i = 0; i < 256; i++)
+ crc32_table[i] = byte_crc32((unsigned char)i, 0LL);
+}
+
+static unsigned int
+crc32_string(unsigned char *data,
+ int size, unsigned int old_crc)
+{
+ unsigned int crc;
+ int i;
+
+ crc = old_crc;
+ for (i = 0; i < size; i++)
+ crc = (crc >> 8) ^ crc32_table[(crc ^ data[i]) & 0xff];
+
+ return crc;
+}
+
+static void hash_init(void)
+{
+ init_crc32_table();
+ int i, j;
+
+ for (i = 0; i < 16; i++)
+ sbox_tbl[i] = sbox(i);
+
+ for (i = 0; i < 32; i += 4)
+ for (j = 0; j < 16; j++) {
+ /* (a,b)
+ * (b,a^b)=(X,Y)
+ * (X^Y,X)
+ */
+ unsigned int input = (0x88888888 ^ (8 << i)) | (j << i);
+
+ input ^= input << 16; /* (X^Y,Y) */
+ input ^= input >> 16; /* (X^Y,X) */
+ mix_tbl[i >> 2][j] = stage(input);
+ }
+}
+
+uint32_t rte_pmd_dpaa2_get_tlu_hash(uint8_t *data, int size)
+{
+ static int init;
+
+ if (~init)
+ hash_init();
+ init = 1;
+ return fast_hash32(crc32_string(data, size, 0x0));
+}
diff --git a/drivers/net/dpaa2/meson.build b/drivers/net/dpaa2/meson.build
index 20eaf0b8e4..4a6397d09e 100644
--- a/drivers/net/dpaa2/meson.build
+++ b/drivers/net/dpaa2/meson.build
@@ -20,6 +20,7 @@ sources = files(
'mc/dpkg.c',
'mc/dpdmux.c',
'mc/dpni.c',
+ 'base/dpaa2_tlu_hash.c',
)
includes += include_directories('base', 'mc')
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index a68244c974..8ea42ee130 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -82,4 +82,23 @@ __rte_experimental
void
rte_pmd_dpaa2_thread_init(void);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Generate the DPAA2 WRIOP based hash value
+ *
+ * @param key
+ * Array of key data
+ * @param size
+ * Size of the hash input key in bytes
+ *
+ * @return
+ * - 0 if successful.
+ * - Negative in case of failure.
+ */
+
+__rte_experimental
+uint32_t
+rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
#endif /* _RTE_PMD_DPAA2_H */
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index f9786af7e4..2059fc5ae8 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -10,6 +10,8 @@ DPDK_22 {
EXPERIMENTAL {
global:
+ # added in 21.11
+ rte_pmd_dpaa2_get_tlu_hash;
# added in 21.05
rte_pmd_dpaa2_mux_rx_frame_len;
# added in 21.08
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v1 07/11] net/dpaa2: update RSS to support additional distributions
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (5 preceding siblings ...)
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 06/11] net/dpaa2: add function to generate HW hash key nipun.gupta
@ 2021-09-27 13:25 ` nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 08/11] net/dpaa: add comments to explain driver behaviour nipun.gupta
` (3 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 13:25 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch updates the RSS support to support following additional
distributions:
- VLAN
- ESP
- AH
- PPPOE
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 70 +++++++++++++++++++++++++-
drivers/net/dpaa2/dpaa2_ethdev.h | 7 ++-
2 files changed, 75 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 641e7027f1..08f49af768 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -210,6 +210,10 @@ dpaa2_distset_to_dpkg_profile_cfg(
int l2_configured = 0, l3_configured = 0;
int l4_configured = 0, sctp_configured = 0;
int mpls_configured = 0;
+ int vlan_configured = 0;
+ int esp_configured = 0;
+ int ah_configured = 0;
+ int pppoe_configured = 0;
memset(kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
while (req_dist_set) {
@@ -217,6 +221,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
dist_field = 1ULL << loop;
switch (dist_field) {
case ETH_RSS_L2_PAYLOAD:
+ case ETH_RSS_ETH:
if (l2_configured)
break;
@@ -231,7 +236,70 @@ dpaa2_distset_to_dpkg_profile_cfg(
kg_cfg->extracts[i].extract.from_hdr.type =
DPKG_FULL_FIELD;
i++;
- break;
+ break;
+
+ case ETH_RSS_PPPOE:
+ if (pppoe_configured)
+ break;
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_PPPOE;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_PPPOE_SID;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
+
+ case ETH_RSS_ESP:
+ if (esp_configured)
+ break;
+ esp_configured = 1;
+
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_IPSEC_ESP;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_IPSEC_ESP_SPI;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
+
+ case ETH_RSS_AH:
+ if (ah_configured)
+ break;
+ ah_configured = 1;
+
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_IPSEC_AH;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_IPSEC_AH_SPI;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
+
+ case ETH_RSS_C_VLAN:
+ case ETH_RSS_S_VLAN:
+ if (vlan_configured)
+ break;
+ vlan_configured = 1;
+
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_VLAN;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_VLAN_TCI;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
case ETH_RSS_MPLS:
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 5cdd2f6418..734ef17a9a 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -70,7 +70,12 @@
ETH_RSS_UDP | \
ETH_RSS_TCP | \
ETH_RSS_SCTP | \
- ETH_RSS_MPLS)
+ ETH_RSS_MPLS | \
+ ETH_RSS_C_VLAN | \
+ ETH_RSS_S_VLAN | \
+ ETH_RSS_ESP | \
+ ETH_RSS_AH | \
+ ETH_RSS_PPPOE)
/* LX2 FRC Parsed values (Little Endian) */
#define DPAA2_PKT_TYPE_ETHER 0x0060
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v1 08/11] net/dpaa: add comments to explain driver behaviour
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (6 preceding siblings ...)
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 07/11] net/dpaa2: update RSS to support additional distributions nipun.gupta
@ 2021-09-27 13:25 ` nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 09/11] raw/dpaa2_qdma: use correct params for config and queue setup nipun.gupta
` (2 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 13:25 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
This patch adds comment to explain how dpaa_port_fmc_ccnode_parse
function is working to get the HW queue from FMC policy file
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
drivers/net/dpaa/dpaa_fmc.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
index 5195053361..f8c9360311 100644
--- a/drivers/net/dpaa/dpaa_fmc.c
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017-2020 NXP
+ * Copyright 2017-2021 NXP
*/
/* System headers */
@@ -338,6 +338,12 @@ static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
fqid = keys_params->key_params[j].cc_next_engine_params
.params.enqueue_params.new_fqid;
+ /* We read DPDK queue from last classification rule present in
+ * FMC policy file. Hence, this check is required here.
+ * Also, the last classification rule in FMC policy file must
+ * have userspace queue so that it can be used by DPDK
+ * application.
+ */
if (keys_params->key_params[j].cc_next_engine_params
.next_engine != e_IOC_FM_PCD_DONE) {
DPAA_PMD_WARN("FMC CC next engine not support");
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v1 09/11] raw/dpaa2_qdma: use correct params for config and queue setup
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (7 preceding siblings ...)
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 08/11] net/dpaa: add comments to explain driver behaviour nipun.gupta
@ 2021-09-27 13:25 ` nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 10/11] raw/dpaa2_qdma: remove checks for lcore ID nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 11/11] common/dpaax: fix paddr to vaddr invalid conversion nipun.gupta
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 13:25 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
RAW configure and Queue setup APIs support size parameter for
configure. This patch supports the same for DPAA2 QDMA PMD APIs
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 12 +++++++++---
drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h | 8 ++++----
2 files changed, 13 insertions(+), 7 deletions(-)
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c961e18d67..6b7b51747e 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
*/
#include <string.h>
@@ -1146,8 +1146,11 @@ dpaa2_qdma_configure(const struct rte_rawdev *rawdev,
DPAA2_QDMA_FUNC_TRACE();
- if (config_size != sizeof(*qdma_config))
+ if (config_size != sizeof(*qdma_config)) {
+ DPAA2_QDMA_ERR("Config size mismatch. Expected %" PRIu64
+ ", Got: %" PRIu64, sizeof(*qdma_config), config_size);
return -EINVAL;
+ }
/* In case QDMA device is not in stopped state, return -EBUSY */
if (qdma_dev->state == 1) {
@@ -1247,8 +1250,11 @@ dpaa2_qdma_queue_setup(struct rte_rawdev *rawdev,
DPAA2_QDMA_FUNC_TRACE();
- if (conf_size != sizeof(*q_config))
+ if (conf_size != sizeof(*q_config)) {
+ DPAA2_QDMA_ERR("Config size mismatch. Expected %" PRIu64
+ ", Got: %" PRIu64, sizeof(*q_config), conf_size);
return -EINVAL;
+ }
rte_spinlock_lock(&qdma_dev->lock);
diff --git a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
index cc1ac25451..1314474271 100644
--- a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
+++ b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
*/
#ifndef __RTE_PMD_DPAA2_QDMA_H__
@@ -177,13 +177,13 @@ struct rte_qdma_queue_config {
#define rte_qdma_info rte_rawdev_info
#define rte_qdma_start(id) rte_rawdev_start(id)
#define rte_qdma_reset(id) rte_rawdev_reset(id)
-#define rte_qdma_configure(id, cf) rte_rawdev_configure(id, cf)
+#define rte_qdma_configure(id, cf, sz) rte_rawdev_configure(id, cf, sz)
#define rte_qdma_dequeue_buffers(id, buf, num, ctxt) \
rte_rawdev_dequeue_buffers(id, buf, num, ctxt)
#define rte_qdma_enqueue_buffers(id, buf, num, ctxt) \
rte_rawdev_enqueue_buffers(id, buf, num, ctxt)
-#define rte_qdma_queue_setup(id, qid, cfg) \
- rte_rawdev_queue_setup(id, qid, cfg)
+#define rte_qdma_queue_setup(id, qid, cfg, sz) \
+ rte_rawdev_queue_setup(id, qid, cfg, sz)
/*TODO introduce per queue stats API in rawdew */
/**
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v1 10/11] raw/dpaa2_qdma: remove checks for lcore ID
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (8 preceding siblings ...)
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 09/11] raw/dpaa2_qdma: use correct params for config and queue setup nipun.gupta
@ 2021-09-27 13:25 ` nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 11/11] common/dpaax: fix paddr to vaddr invalid conversion nipun.gupta
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 13:25 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
There is no need for preventional check of rte_lcore_id() in
data path. This patch removes the same.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 14 --------------
1 file changed, 14 deletions(-)
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index 6b7b51747e..d8a3583dd9 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1389,13 +1389,6 @@ dpaa2_qdma_enqueue(struct rte_rawdev *rawdev,
&dpdmai_dev->qdma_dev->vqs[e_context->vq_id];
int ret;
- /* Return error in case of wrong lcore_id */
- if (rte_lcore_id() != qdma_vq->lcore_id) {
- DPAA2_QDMA_ERR("QDMA enqueue for vqid %d on wrong core",
- e_context->vq_id);
- return -EINVAL;
- }
-
ret = qdma_vq->enqueue_job(qdma_vq, e_context->job, nb_jobs);
if (ret < 0) {
DPAA2_QDMA_ERR("DPDMAI device enqueue failed: %d", ret);
@@ -1428,13 +1421,6 @@ dpaa2_qdma_dequeue(struct rte_rawdev *rawdev,
return -EINVAL;
}
- /* Return error in case of wrong lcore_id */
- if (rte_lcore_id() != (unsigned int)(qdma_vq->lcore_id)) {
- DPAA2_QDMA_WARN("QDMA dequeue for vqid %d on wrong core",
- context->vq_id);
- return -1;
- }
-
/* Only dequeue when there are pending jobs on VQ */
if (qdma_vq->num_enqueues == qdma_vq->num_dequeues)
return 0;
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v1 11/11] common/dpaax: fix paddr to vaddr invalid conversion
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (9 preceding siblings ...)
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 10/11] raw/dpaa2_qdma: remove checks for lcore ID nipun.gupta
@ 2021-09-27 13:25 ` nipun.gupta
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-09-27 13:25 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, stable,
Gagandeep Singh, Nipun Gupta
From: Gagandeep Singh <g.singh@nxp.com>
If some of the VA entries of table are somehow not populated and are
NULL, it can add offset to NULL and return the invalid VA in PA to
VA conversion.
In this patch, adding a check if the VA entry has valid address only
then add offset and return VA.
Fixes: 2f3d633aa593 ("common/dpaax: add library for PA/VA translation table")
Cc: stable@dpdk.org
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/common/dpaax/dpaax_iova_table.h | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/common/dpaax/dpaax_iova_table.h b/drivers/common/dpaax/dpaax_iova_table.h
index 230fba8ba0..d7087ee7ca 100644
--- a/drivers/common/dpaax/dpaax_iova_table.h
+++ b/drivers/common/dpaax/dpaax_iova_table.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*/
#ifndef _DPAAX_IOVA_TABLE_H_
@@ -101,6 +101,12 @@ dpaax_iova_table_get_va(phys_addr_t paddr) {
/* paddr > entry->start && paddr <= entry->(start+len) */
index = (paddr_align - entry[i].start)/DPAAX_MEM_SPLIT;
+ /* paddr is within range, but no vaddr entry ever written
+ * at index
+ */
+ if ((void *)entry[i].pages[index] == NULL)
+ return NULL;
+
vaddr = (void *)((uintptr_t)entry[i].pages[index] + offset);
break;
} while (1);
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (11 preceding siblings ...)
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
@ 2021-10-06 12:10 ` nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 01/10] bus/fslmc: updated MC FW to 10.28 nipun.gupta
` (9 more replies)
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
13 siblings, 10 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 12:10 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Nipun Gupta <nipun.gupta@nxp.com>
This series adds new functionality related to flow redirection,
generating HW hash key etc.
It also updates the MC firmware version and includes a fix in
dpaxx library.
Changes in v1:
- Fix checkpatch errors
Changes in v2:
- remove unrequired multi-tx ordered patch
Gagandeep Singh (1):
common/dpaax: fix paddr to vaddr invalid conversion
Hemant Agrawal (4):
bus/fslmc: updated MC FW to 10.28
bus/fslmc: add qbman debug APIs support
net/dpaa2: add debug print for MTU set for jumbo
net/dpaa2: add function to generate HW hash key
Jun Yang (1):
net/dpaa2: support Tx flow redirection action
Nipun Gupta (2):
raw/dpaa2_qdma: use correct params for config and queue setup
raw/dpaa2_qdma: remove checks for lcore ID
Rohit Raj (1):
net/dpaa: add comments to explain driver behaviour
Vanshika Shukla (1):
net/dpaa2: update RSS to support additional distributions
drivers/bus/fslmc/mc/dpdmai.c | 4 +-
drivers/bus/fslmc/mc/fsl_dpdmai.h | 21 +-
drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h | 15 +-
drivers/bus/fslmc/mc/fsl_dpmng.h | 4 +-
drivers/bus/fslmc/mc/fsl_dpopr.h | 7 +-
.../bus/fslmc/qbman/include/fsl_qbman_debug.h | 201 +++++-
drivers/bus/fslmc/qbman/qbman_debug.c | 621 ++++++++++++++++++
drivers/bus/fslmc/qbman/qbman_portal.c | 6 +
drivers/common/dpaax/dpaax_iova_table.h | 8 +-
drivers/net/dpaa/dpaa_fmc.c | 8 +-
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 70 +-
drivers/net/dpaa2/base/dpaa2_tlu_hash.c | 153 +++++
drivers/net/dpaa2/dpaa2_ethdev.c | 9 +-
drivers/net/dpaa2/dpaa2_ethdev.h | 8 +-
drivers/net/dpaa2/dpaa2_flow.c | 116 +++-
drivers/net/dpaa2/mc/dpdmux.c | 43 ++
drivers/net/dpaa2/mc/dpni.c | 48 +-
drivers/net/dpaa2/mc/dprtc.c | 78 ++-
drivers/net/dpaa2/mc/fsl_dpdmux.h | 6 +
drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h | 9 +
drivers/net/dpaa2/mc/fsl_dpkg.h | 6 +-
drivers/net/dpaa2/mc/fsl_dpni.h | 147 ++++-
drivers/net/dpaa2/mc/fsl_dpni_cmd.h | 55 +-
drivers/net/dpaa2/mc/fsl_dprtc.h | 19 +-
drivers/net/dpaa2/mc/fsl_dprtc_cmd.h | 25 +-
drivers/net/dpaa2/meson.build | 1 +
drivers/net/dpaa2/rte_pmd_dpaa2.h | 19 +
drivers/net/dpaa2/version.map | 2 +
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 26 +-
drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h | 8 +-
30 files changed, 1636 insertions(+), 107 deletions(-)
create mode 100644 drivers/net/dpaa2/base/dpaa2_tlu_hash.c
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v2 01/10] bus/fslmc: updated MC FW to 10.28
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
@ 2021-10-06 12:10 ` nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 02/10] net/dpaa2: support Tx flow redirection action nipun.gupta
` (8 subsequent siblings)
9 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 12:10 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Updating MC firmware support APIs to be latest. It supports
improved DPDMUX (SRIOV equivalent) for traffic split between
dpnis and additional PTP APIs.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/fslmc/mc/dpdmai.c | 4 +-
drivers/bus/fslmc/mc/fsl_dpdmai.h | 21 ++++-
drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h | 15 ++--
drivers/bus/fslmc/mc/fsl_dpmng.h | 4 +-
drivers/bus/fslmc/mc/fsl_dpopr.h | 7 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 2 +-
drivers/net/dpaa2/mc/dpdmux.c | 43 +++++++++
drivers/net/dpaa2/mc/dpni.c | 48 ++++++----
drivers/net/dpaa2/mc/dprtc.c | 78 +++++++++++++++-
drivers/net/dpaa2/mc/fsl_dpdmux.h | 6 ++
drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h | 9 ++
drivers/net/dpaa2/mc/fsl_dpkg.h | 6 +-
drivers/net/dpaa2/mc/fsl_dpni.h | 124 ++++++++++++++++++++++----
drivers/net/dpaa2/mc/fsl_dpni_cmd.h | 55 +++++++++---
drivers/net/dpaa2/mc/fsl_dprtc.h | 19 +++-
drivers/net/dpaa2/mc/fsl_dprtc_cmd.h | 25 +++++-
16 files changed, 401 insertions(+), 65 deletions(-)
diff --git a/drivers/bus/fslmc/mc/dpdmai.c b/drivers/bus/fslmc/mc/dpdmai.c
index dcb9d516a1..9c2f3bf9d5 100644
--- a/drivers/bus/fslmc/mc/dpdmai.c
+++ b/drivers/bus/fslmc/mc/dpdmai.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*/
#include <fsl_mc_sys.h>
@@ -116,6 +116,7 @@ int dpdmai_create(struct fsl_mc_io *mc_io,
cmd_params->num_queues = cfg->num_queues;
cmd_params->priorities[0] = cfg->priorities[0];
cmd_params->priorities[1] = cfg->priorities[1];
+ cmd_params->options = cpu_to_le32(cfg->adv.options);
/* send command to mc*/
err = mc_send_command(mc_io, &cmd);
@@ -299,6 +300,7 @@ int dpdmai_get_attributes(struct fsl_mc_io *mc_io,
attr->id = le32_to_cpu(rsp_params->id);
attr->num_of_priorities = rsp_params->num_of_priorities;
attr->num_of_queues = rsp_params->num_of_queues;
+ attr->options = le32_to_cpu(rsp_params->options);
return 0;
}
diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai.h b/drivers/bus/fslmc/mc/fsl_dpdmai.h
index 19328c00a0..5af8ed48c0 100644
--- a/drivers/bus/fslmc/mc/fsl_dpdmai.h
+++ b/drivers/bus/fslmc/mc/fsl_dpdmai.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*/
#ifndef __FSL_DPDMAI_H
@@ -36,15 +36,32 @@ int dpdmai_close(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token);
+/* DPDMAI options */
+
+/**
+ * Enable individual Congestion Groups usage per each priority queue
+ * If this option is not enabled then only one CG is used for all priority
+ * queues
+ * If this option is enabled then a separate specific CG is used for each
+ * individual priority queue.
+ * In this case the priority queue must be specified via congestion notification
+ * API
+ */
+#define DPDMAI_OPT_CG_PER_PRIORITY 0x00000001
+
/**
* struct dpdmai_cfg - Structure representing DPDMAI configuration
* @priorities: Priorities for the DMA hardware processing; valid priorities are
* configured with values 1-8; the entry following last valid entry
* should be configured with 0
+ * @options: dpdmai options
*/
struct dpdmai_cfg {
uint8_t num_queues;
uint8_t priorities[DPDMAI_PRIO_NUM];
+ struct {
+ uint32_t options;
+ } adv;
};
int dpdmai_create(struct fsl_mc_io *mc_io,
@@ -81,11 +98,13 @@ int dpdmai_reset(struct fsl_mc_io *mc_io,
* struct dpdmai_attr - Structure representing DPDMAI attributes
* @id: DPDMAI object ID
* @num_of_priorities: number of priorities
+ * @options: dpdmai options
*/
struct dpdmai_attr {
int id;
uint8_t num_of_priorities;
uint8_t num_of_queues;
+ uint32_t options;
};
__rte_internal
diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h b/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
index 7e122de4ef..c8f6b990f8 100644
--- a/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
@@ -1,32 +1,33 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2017-2018, 2020-2021 NXP
*/
-
#ifndef _FSL_DPDMAI_CMD_H
#define _FSL_DPDMAI_CMD_H
/* DPDMAI Version */
#define DPDMAI_VER_MAJOR 3
-#define DPDMAI_VER_MINOR 3
+#define DPDMAI_VER_MINOR 4
/* Command versioning */
#define DPDMAI_CMD_BASE_VERSION 1
#define DPDMAI_CMD_VERSION_2 2
+#define DPDMAI_CMD_VERSION_3 3
#define DPDMAI_CMD_ID_OFFSET 4
#define DPDMAI_CMD(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_BASE_VERSION)
#define DPDMAI_CMD_V2(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_VERSION_2)
+#define DPDMAI_CMD_V3(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_VERSION_3)
/* Command IDs */
#define DPDMAI_CMDID_CLOSE DPDMAI_CMD(0x800)
#define DPDMAI_CMDID_OPEN DPDMAI_CMD(0x80E)
-#define DPDMAI_CMDID_CREATE DPDMAI_CMD_V2(0x90E)
+#define DPDMAI_CMDID_CREATE DPDMAI_CMD_V3(0x90E)
#define DPDMAI_CMDID_DESTROY DPDMAI_CMD(0x98E)
#define DPDMAI_CMDID_GET_API_VERSION DPDMAI_CMD(0xa0E)
#define DPDMAI_CMDID_ENABLE DPDMAI_CMD(0x002)
#define DPDMAI_CMDID_DISABLE DPDMAI_CMD(0x003)
-#define DPDMAI_CMDID_GET_ATTR DPDMAI_CMD_V2(0x004)
+#define DPDMAI_CMDID_GET_ATTR DPDMAI_CMD_V3(0x004)
#define DPDMAI_CMDID_RESET DPDMAI_CMD(0x005)
#define DPDMAI_CMDID_IS_ENABLED DPDMAI_CMD(0x006)
@@ -51,6 +52,8 @@ struct dpdmai_cmd_open {
struct dpdmai_cmd_create {
uint8_t num_queues;
uint8_t priorities[2];
+ uint8_t pad;
+ uint32_t options;
};
struct dpdmai_cmd_destroy {
@@ -69,6 +72,8 @@ struct dpdmai_rsp_get_attr {
uint32_t id;
uint8_t num_of_priorities;
uint8_t num_of_queues;
+ uint16_t pad;
+ uint32_t options;
};
#define DPDMAI_DEST_TYPE_SHIFT 0
diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
index 8764ceaed9..7e9bd96429 100644
--- a/drivers/bus/fslmc/mc/fsl_dpmng.h
+++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2021 NXP
*
*/
#ifndef __FSL_DPMNG_H
@@ -20,7 +20,7 @@ struct fsl_mc_io;
* Management Complex firmware version information
*/
#define MC_VER_MAJOR 10
-#define MC_VER_MINOR 18
+#define MC_VER_MINOR 28
/**
* struct mc_version
diff --git a/drivers/bus/fslmc/mc/fsl_dpopr.h b/drivers/bus/fslmc/mc/fsl_dpopr.h
index fd727e011b..74dd32f783 100644
--- a/drivers/bus/fslmc/mc/fsl_dpopr.h
+++ b/drivers/bus/fslmc/mc/fsl_dpopr.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*
*/
#ifndef __FSL_DPOPR_H_
@@ -22,7 +22,10 @@
* Retire an existing Order Point Record option
*/
#define OPR_OPT_RETIRE 0x2
-
+/**
+ * Assign an existing Order Point Record to a queue
+ */
+#define OPR_OPT_ASSIGN 0x4
/**
* struct opr_cfg - Structure representing OPR configuration
* @oprrws: Order point record (OPR) restoration window size (0 to 5)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..560b79151b 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2273,7 +2273,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
ret = dpni_set_opr(dpni, CMD_PRI_LOW, eth_priv->token,
dpaa2_ethq->tc_index, flow_id,
- OPR_OPT_CREATE, &ocfg);
+ OPR_OPT_CREATE, &ocfg, 0);
if (ret) {
DPAA2_PMD_ERR("Error setting opr: ret: %d\n", ret);
return ret;
diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
index 93912ef9d3..edbb01b45b 100644
--- a/drivers/net/dpaa2/mc/dpdmux.c
+++ b/drivers/net/dpaa2/mc/dpdmux.c
@@ -491,6 +491,49 @@ int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
+/**
+ * dpdmux_get_max_frame_length() - Return the maximum frame length for DPDMUX
+ * interface
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPDMUX object
+ * @if_id: Interface id
+ * @max_frame_length: maximum frame length
+ *
+ * When dpdmux object is in VEPA mode this function will ignore if_id parameter
+ * and will return maximum frame length for uplink interface (if_id==0).
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dpdmux_get_max_frame_length(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint16_t if_id,
+ uint16_t *max_frame_length)
+{
+ struct mc_command cmd = { 0 };
+ struct dpdmux_cmd_get_max_frame_len *cmd_params;
+ struct dpdmux_rsp_get_max_frame_len *rsp_params;
+ int err = 0;
+
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_GET_MAX_FRAME_LENGTH,
+ cmd_flags,
+ token);
+ cmd_params = (struct dpdmux_cmd_get_max_frame_len *)cmd.params;
+ cmd_params->if_id = cpu_to_le16(if_id);
+
+ err = mc_send_command(mc_io, &cmd);
+ if (err)
+ return err;
+
+ rsp_params = (struct dpdmux_rsp_get_max_frame_len *)cmd.params;
+ *max_frame_length = le16_to_cpu(rsp_params->max_len);
+
+ /* send command to mc*/
+ return err;
+}
+
/**
* dpdmux_ul_reset_counters() - Function resets the uplink counter
* @mc_io: Pointer to MC portal's I/O object
diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c
index b254931386..60048d6c43 100644
--- a/drivers/net/dpaa2/mc/dpni.c
+++ b/drivers/net/dpaa2/mc/dpni.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
*
*/
#include <fsl_mc_sys.h>
@@ -126,6 +126,8 @@ int dpni_create(struct fsl_mc_io *mc_io,
cmd_params->qos_entries = cfg->qos_entries;
cmd_params->fs_entries = cpu_to_le16(cfg->fs_entries);
cmd_params->num_cgs = cfg->num_cgs;
+ cmd_params->num_opr = cfg->num_opr;
+ cmd_params->dist_key_size = cfg->dist_key_size;
/* send command to mc*/
err = mc_send_command(mc_io, &cmd);
@@ -1829,6 +1831,7 @@ int dpni_add_fs_entry(struct fsl_mc_io *mc_io,
cmd_params->options = cpu_to_le16(action->options);
cmd_params->flow_id = cpu_to_le16(action->flow_id);
cmd_params->flc = cpu_to_le64(action->flc);
+ cmd_params->redir_token = cpu_to_le16(action->redirect_obj_token);
/* send command to mc*/
return mc_send_command(mc_io, &cmd);
@@ -2442,7 +2445,7 @@ int dpni_reset_statistics(struct fsl_mc_io *mc_io,
}
/**
- * dpni_set_taildrop() - Set taildrop per queue or TC
+ * dpni_set_taildrop() - Set taildrop per congestion group
*
* Setting a per-TC taildrop (cg_point = DPNI_CP_GROUP) will reset any current
* congestion notification or early drop (WRED) configuration previously applied
@@ -2451,13 +2454,14 @@ int dpni_reset_statistics(struct fsl_mc_io *mc_io,
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
- * @cg_point: Congestion point, DPNI_CP_QUEUE is only supported in
+ * @cg_point: Congestion group identifier DPNI_CP_QUEUE is only supported in
* combination with DPNI_QUEUE_RX.
* @q_type: Queue type, can be DPNI_QUEUE_RX or DPNI_QUEUE_TX.
* @tc: Traffic class to apply this taildrop to
- * @q_index: Index of the queue if the DPNI supports multiple queues for
+ * @index/cgid: Index of the queue if the DPNI supports multiple queues for
* traffic distribution.
- * Ignored if CONGESTION_POINT is not DPNI_CP_QUEUE.
+ * If CONGESTION_POINT is DPNI_CP_CONGESTION_GROUP then it
+ * represent the cgid of the congestion point
* @taildrop: Taildrop structure
*
* Return: '0' on Success; Error code otherwise.
@@ -2577,7 +2581,8 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
uint8_t options,
- struct opr_cfg *cfg)
+ struct opr_cfg *cfg,
+ uint8_t opr_id)
{
struct dpni_cmd_set_opr *cmd_params;
struct mc_command cmd = { 0 };
@@ -2591,6 +2596,7 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
cmd_params->tc_id = tc;
cmd_params->index = index;
cmd_params->options = options;
+ cmd_params->opr_id = opr_id;
cmd_params->oloe = cfg->oloe;
cmd_params->oeane = cfg->oeane;
cmd_params->olws = cfg->olws;
@@ -2621,7 +2627,9 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
struct opr_cfg *cfg,
- struct opr_qry *qry)
+ struct opr_qry *qry,
+ uint8_t flags,
+ uint8_t opr_id)
{
struct dpni_rsp_get_opr *rsp_params;
struct dpni_cmd_get_opr *cmd_params;
@@ -2635,6 +2643,8 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
cmd_params = (struct dpni_cmd_get_opr *)cmd.params;
cmd_params->index = index;
cmd_params->tc_id = tc;
+ cmd_params->flags = flags;
+ cmd_params->opr_id = opr_id;
/* send command to mc*/
err = mc_send_command(mc_io, &cmd);
@@ -2673,7 +2683,7 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
* If the FS is already enabled with a previous call the classification
* key will be changed but all the table rules are kept. If the
* existing rules do not match the key the results will not be
- * predictable. It is the user responsibility to keep key integrity
+ * predictable. It is the user responsibility to keep keyintegrity.
* If cfg.enable is set to 1 the command will create a flow steering table
* and will classify packets according to this table. The packets
* that miss all the table rules will be classified according to
@@ -2695,7 +2705,7 @@ int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
cmd_params = (struct dpni_cmd_set_rx_fs_dist *)cmd.params;
cmd_params->dist_size = cpu_to_le16(cfg->dist_size);
dpni_set_field(cmd_params->enable, RX_FS_DIST_ENABLE, cfg->enable);
- cmd_params->tc = cfg->tc;
+ cmd_params->tc = cfg->tc;
cmd_params->miss_flow_id = cpu_to_le16(cfg->fs_miss_flow_id);
cmd_params->key_cfg_iova = cpu_to_le64(cfg->key_cfg_iova);
@@ -2710,9 +2720,9 @@ int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
* @token: Token of DPNI object
* @cfg: Distribution configuration
* If cfg.enable is set to 1 the packets will be classified using a hash
- * function based on the key received in cfg.key_cfg_iova parameter
+ * function based on the key received in cfg.key_cfg_iova parameter.
* If cfg.enable is set to 0 the packets will be sent to the queue configured in
- * dpni_set_rx_dist_default_queue() call
+ * dpni_set_rx_dist_default_queue() call
*/
int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
uint16_t token, const struct dpni_rx_dist_cfg *cfg)
@@ -2735,9 +2745,9 @@ int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
}
/**
- * dpni_add_custom_tpid() - Configures a distinct Ethertype value
- * (or TPID value) to indicate VLAN tag in addition to the common
- * TPID values 0x8100 and 0x88A8
+ * dpni_add_custom_tpid() - Configures a distinct Ethertype value (or TPID
+ * value) to indicate VLAN tag in adition to the common TPID values
+ * 0x81000 and 0x88A8
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
@@ -2745,8 +2755,8 @@ int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
*
* Only two custom values are accepted. If the function is called for the third
* time it will return error.
- * To replace an existing value use dpni_remove_custom_tpid() to remove
- * a previous TPID and after that use again the function.
+ * To replace an existing value use dpni_remove_custom_tpid() to remove a
+ * previous TPID and after that use again the function.
*
* Return: '0' on Success; Error code otherwise.
*/
@@ -2769,7 +2779,7 @@ int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
/**
* dpni_remove_custom_tpid() - Removes a distinct Ethertype value added
- * previously with dpni_add_custom_tpid()
+ * previously with dpni_add_custom_tpid()
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
@@ -2798,8 +2808,8 @@ int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
}
/**
- * dpni_get_custom_tpid() - Returns custom TPID (vlan tags) values configured
- * to detect 802.1q frames
+ * dpni_get_custom_tpid() - Returns custom TPID (vlan tags) values configured to
+ * detect 802.1q frames
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
diff --git a/drivers/net/dpaa2/mc/dprtc.c b/drivers/net/dpaa2/mc/dprtc.c
index 42ac89150e..36e62eb0c3 100644
--- a/drivers/net/dpaa2/mc/dprtc.c
+++ b/drivers/net/dpaa2/mc/dprtc.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
- * Copyright 2019 NXP
+ * Copyright 2019-2021 NXP
*/
#include <fsl_mc_sys.h>
#include <fsl_mc_cmd.h>
@@ -521,3 +521,79 @@ int dprtc_get_api_version(struct fsl_mc_io *mc_io,
return 0;
}
+
+/**
+ * dprtc_get_ext_trigger_timestamp - Retrieve the Ext trigger timestamp status
+ * (timestamp + flag for unread timestamp in FIFO).
+ *
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPRTC object
+ * @id: External trigger id.
+ * @status: Returned object's external trigger status
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dprtc_get_ext_trigger_timestamp(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ struct dprtc_ext_trigger_status *status)
+{
+ struct dprtc_rsp_ext_trigger_timestamp *rsp_params;
+ struct dprtc_cmd_ext_trigger_timestamp *cmd_params;
+ struct mc_command cmd = { 0 };
+ int err;
+
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_EXT_TRIGGER_TIMESTAMP,
+ cmd_flags,
+ token);
+
+ cmd_params = (struct dprtc_cmd_ext_trigger_timestamp *)cmd.params;
+ cmd_params->id = id;
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ if (err)
+ return err;
+
+ /* retrieve response parameters */
+ rsp_params = (struct dprtc_rsp_ext_trigger_timestamp *)cmd.params;
+ status->timestamp = le64_to_cpu(rsp_params->timestamp);
+ status->unread_valid_timestamp = rsp_params->unread_valid_timestamp;
+
+ return 0;
+}
+
+/**
+ * dprtc_set_fiper_loopback() - Set the fiper pulse as source of interrupt for
+ * External Trigger stamps
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPRTC object
+ * @id: External trigger id.
+ * @fiper_as_input: Bit used to control interrupt signal source:
+ * 0 = Normal operation, interrupt external source
+ * 1 = Fiper pulse is looped back into Trigger input
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dprtc_set_fiper_loopback(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ uint8_t fiper_as_input)
+{
+ struct dprtc_ext_trigger_cfg *cmd_params;
+ struct mc_command cmd = { 0 };
+
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_FIPER_LOOPBACK,
+ cmd_flags,
+ token);
+
+ cmd_params = (struct dprtc_ext_trigger_cfg *)cmd.params;
+ cmd_params->id = id;
+ cmd_params->fiper_as_input = fiper_as_input;
+
+ return mc_send_command(mc_io, &cmd);
+}
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index f4f9598a29..b01a98eb59 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -196,6 +196,12 @@ int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
uint16_t token,
uint16_t max_frame_length);
+int dpdmux_get_max_frame_length(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint16_t if_id,
+ uint16_t *max_frame_length);
+
/**
* enum dpdmux_counter_type - Counter types
* @DPDMUX_CNT_ING_FRAME: Counts ingress frames
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
index 2ab4d75dfb..f8a1b5b1ae 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
@@ -39,6 +39,7 @@
#define DPDMUX_CMDID_RESET DPDMUX_CMD(0x005)
#define DPDMUX_CMDID_IS_ENABLED DPDMUX_CMD(0x006)
#define DPDMUX_CMDID_SET_MAX_FRAME_LENGTH DPDMUX_CMD(0x0a1)
+#define DPDMUX_CMDID_GET_MAX_FRAME_LENGTH DPDMUX_CMD(0x0a2)
#define DPDMUX_CMDID_UL_RESET_COUNTERS DPDMUX_CMD(0x0a3)
@@ -124,6 +125,14 @@ struct dpdmux_cmd_set_max_frame_length {
uint16_t max_frame_length;
};
+struct dpdmux_cmd_get_max_frame_len {
+ uint16_t if_id;
+};
+
+struct dpdmux_rsp_get_max_frame_len {
+ uint16_t max_len;
+};
+
#define DPDMUX_ACCEPTED_FRAMES_TYPE_SHIFT 0
#define DPDMUX_ACCEPTED_FRAMES_TYPE_SIZE 4
#define DPDMUX_UNACCEPTED_FRAMES_ACTION_SHIFT 4
diff --git a/drivers/net/dpaa2/mc/fsl_dpkg.h b/drivers/net/dpaa2/mc/fsl_dpkg.h
index 02fe8d50e7..70f2339ea5 100644
--- a/drivers/net/dpaa2/mc/fsl_dpkg.h
+++ b/drivers/net/dpaa2/mc/fsl_dpkg.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
* Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2021 NXP
*
*/
#ifndef __FSL_DPKG_H_
@@ -21,7 +21,7 @@
/**
* Number of extractions per key profile
*/
-#define DPKG_MAX_NUM_OF_EXTRACTS 10
+#define DPKG_MAX_NUM_OF_EXTRACTS 20
/**
* enum dpkg_extract_from_hdr_type - Selecting extraction by header types
@@ -177,7 +177,7 @@ struct dpni_ext_set_rx_tc_dist {
uint8_t num_extracts;
uint8_t pad[7];
/* words 1..25 */
- struct dpni_dist_extract extracts[10];
+ struct dpni_dist_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
};
int dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index df42746c9a..34c6b20033 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
*
*/
#ifndef __FSL_DPNI_H
@@ -19,6 +19,11 @@ struct fsl_mc_io;
/** General DPNI macros */
+/**
+ * Maximum size of a key
+ */
+#define DPNI_MAX_KEY_SIZE 56
+
/**
* Maximum number of traffic classes
*/
@@ -95,8 +100,18 @@ struct fsl_mc_io;
* Define a custom number of congestion groups
*/
#define DPNI_OPT_CUSTOM_CG 0x000200
-
-
+/**
+ * Define a custom number of order point records
+ */
+#define DPNI_OPT_CUSTOM_OPR 0x000400
+/**
+ * Hash key is shared between all traffic classes
+ */
+#define DPNI_OPT_SHARED_HASH_KEY 0x000800
+/**
+ * Flow steering table is shared between all traffic classes
+ */
+#define DPNI_OPT_SHARED_FS 0x001000
/**
* Software sequence maximum layout size
*/
@@ -183,6 +198,8 @@ struct dpni_cfg {
uint8_t num_rx_tcs;
uint8_t qos_entries;
uint8_t num_cgs;
+ uint16_t num_opr;
+ uint8_t dist_key_size;
};
int dpni_create(struct fsl_mc_io *mc_io,
@@ -366,28 +383,45 @@ int dpni_get_attributes(struct fsl_mc_io *mc_io,
/**
* Extract out of frame header error
*/
-#define DPNI_ERROR_EOFHE 0x00020000
+#define DPNI_ERROR_MS 0x40000000
+#define DPNI_ERROR_PTP 0x08000000
+/* Ethernet multicast frame */
+#define DPNI_ERROR_MC 0x04000000
+/* Ethernet broadcast frame */
+#define DPNI_ERROR_BC 0x02000000
+#define DPNI_ERROR_KSE 0x00040000
+#define DPNI_ERROR_EOFHE 0x00020000
+#define DPNI_ERROR_MNLE 0x00010000
+#define DPNI_ERROR_TIDE 0x00008000
+#define DPNI_ERROR_PIEE 0x00004000
/**
* Frame length error
*/
-#define DPNI_ERROR_FLE 0x00002000
+#define DPNI_ERROR_FLE 0x00002000
/**
* Frame physical error
*/
-#define DPNI_ERROR_FPE 0x00001000
+#define DPNI_ERROR_FPE 0x00001000
+#define DPNI_ERROR_PTE 0x00000080
+#define DPNI_ERROR_ISP 0x00000040
/**
* Parsing header error
*/
-#define DPNI_ERROR_PHE 0x00000020
+#define DPNI_ERROR_PHE 0x00000020
+
+#define DPNI_ERROR_BLE 0x00000010
/**
* Parser L3 checksum error
*/
-#define DPNI_ERROR_L3CE 0x00000004
+#define DPNI_ERROR_L3CV 0x00000008
+
+#define DPNI_ERROR_L3CE 0x00000004
/**
- * Parser L3 checksum error
+ * Parser L4 checksum error
*/
-#define DPNI_ERROR_L4CE 0x00000001
+#define DPNI_ERROR_L4CV 0x00000002
+#define DPNI_ERROR_L4CE 0x00000001
/**
* enum dpni_error_action - Defines DPNI behavior for errors
* @DPNI_ERROR_ACTION_DISCARD: Discard the frame
@@ -455,6 +489,10 @@ int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
* Select to modify the sw-opaque value setting
*/
#define DPNI_BUF_LAYOUT_OPT_SW_OPAQUE 0x00000080
+/**
+ * Select to disable Scatter Gather and use single buffer
+ */
+#define DPNI_BUF_LAYOUT_OPT_NO_SG 0x00000100
/**
* struct dpni_buffer_layout - Structure representing DPNI buffer layout
@@ -733,7 +771,7 @@ int dpni_get_link_state(struct fsl_mc_io *mc_io,
/**
* struct dpni_tx_shaping - Structure representing DPNI tx shaping configuration
- * @rate_limit: Rate in Mbps
+ * @rate_limit: Rate in Mbits/s
* @max_burst_size: Burst size in bytes (up to 64KB)
*/
struct dpni_tx_shaping_cfg {
@@ -798,6 +836,11 @@ int dpni_get_primary_mac_addr(struct fsl_mc_io *mc_io,
uint16_t token,
uint8_t mac_addr[6]);
+/**
+ * Set mac addr queue action
+ */
+#define DPNI_MAC_SET_QUEUE_ACTION 1
+
int dpni_add_mac_addr(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
@@ -1464,6 +1507,7 @@ int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
struct dpni_fs_action_cfg {
uint64_t flc;
uint16_t flow_id;
+ uint16_t redirect_obj_token;
uint16_t options;
};
@@ -1595,7 +1639,8 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
uint8_t options,
- struct opr_cfg *cfg);
+ struct opr_cfg *cfg,
+ uint8_t opr_id);
int dpni_get_opr(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
@@ -1603,7 +1648,9 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
struct opr_cfg *cfg,
- struct opr_qry *qry);
+ struct opr_qry *qry,
+ uint8_t flags,
+ uint8_t opr_id);
/**
* When used for queue_idx in function dpni_set_rx_dist_default_queue will
@@ -1779,14 +1826,57 @@ int dpni_get_sw_sequence_layout(struct fsl_mc_io *mc_io,
/**
* dpni_extract_sw_sequence_layout() - extract the software sequence layout
- * @layout: software sequence layout
- * @sw_sequence_layout_buf: Zeroed 264 bytes of memory before mapping it
- * to DMA
+ * @layout: software sequence layout
+ * @sw_sequence_layout_buf:Zeroed 264 bytes of memory before mapping it to DMA
*
* This function has to be called after dpni_get_sw_sequence_layout
- *
*/
void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
const uint8_t *sw_sequence_layout_buf);
+/**
+ * struct dpni_ptp_cfg - configure single step PTP (IEEE 1588)
+ * @en: enable single step PTP. When enabled the PTPv1 functionality will
+ * not work. If the field is zero, offset and ch_update parameters
+ * will be ignored
+ * @offset: start offset from the beginning of the frame where timestamp
+ * field is found. The offset must respect all MAC headers, VLAN
+ * tags and other protocol headers
+ * @ch_update: when set UDP checksum will be updated inside packet
+ * @peer_delay: For peer-to-peer transparent clocks add this value to the
+ * correction field in addition to the transient time update. The
+ * value expresses nanoseconds.
+ */
+struct dpni_single_step_cfg {
+ uint8_t en;
+ uint8_t ch_update;
+ uint16_t offset;
+ uint32_t peer_delay;
+};
+
+int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, struct dpni_single_step_cfg *ptp_cfg);
+
+int dpni_get_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, struct dpni_single_step_cfg *ptp_cfg);
+
+/**
+ * loopback_en field is valid when calling function dpni_set_port_cfg
+ */
+#define DPNI_PORT_CFG_LOOPBACK 0x01
+
+/**
+ * struct dpni_port_cfg - custom configuration for dpni physical port
+ * @loopback_en: port loopback enabled
+ */
+struct dpni_port_cfg {
+ int loopback_en;
+};
+
+int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, uint32_t flags, struct dpni_port_cfg *port_cfg);
+
+int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, struct dpni_port_cfg *port_cfg);
+
#endif /* __FSL_DPNI_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
index c40090b8fe..6fbd93bb38 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
*
*/
#ifndef _FSL_DPNI_CMD_H
@@ -9,21 +9,25 @@
/* DPNI Version */
#define DPNI_VER_MAJOR 7
-#define DPNI_VER_MINOR 13
+#define DPNI_VER_MINOR 17
#define DPNI_CMD_BASE_VERSION 1
#define DPNI_CMD_VERSION_2 2
#define DPNI_CMD_VERSION_3 3
+#define DPNI_CMD_VERSION_4 4
+#define DPNI_CMD_VERSION_5 5
#define DPNI_CMD_ID_OFFSET 4
#define DPNI_CMD(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_BASE_VERSION)
#define DPNI_CMD_V2(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_2)
#define DPNI_CMD_V3(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_3)
+#define DPNI_CMD_V4(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_4)
+#define DPNI_CMD_V5(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_5)
/* Command IDs */
#define DPNI_CMDID_OPEN DPNI_CMD(0x801)
#define DPNI_CMDID_CLOSE DPNI_CMD(0x800)
-#define DPNI_CMDID_CREATE DPNI_CMD_V3(0x901)
+#define DPNI_CMDID_CREATE DPNI_CMD_V5(0x901)
#define DPNI_CMDID_DESTROY DPNI_CMD(0x981)
#define DPNI_CMDID_GET_API_VERSION DPNI_CMD(0xa01)
@@ -67,7 +71,7 @@
#define DPNI_CMDID_REMOVE_VLAN_ID DPNI_CMD(0x232)
#define DPNI_CMDID_CLR_VLAN_FILTERS DPNI_CMD(0x233)
-#define DPNI_CMDID_SET_RX_TC_DIST DPNI_CMD_V3(0x235)
+#define DPNI_CMDID_SET_RX_TC_DIST DPNI_CMD_V4(0x235)
#define DPNI_CMDID_SET_RX_TC_POLICING DPNI_CMD(0x23E)
@@ -75,7 +79,7 @@
#define DPNI_CMDID_ADD_QOS_ENT DPNI_CMD_V2(0x241)
#define DPNI_CMDID_REMOVE_QOS_ENT DPNI_CMD(0x242)
#define DPNI_CMDID_CLR_QOS_TBL DPNI_CMD(0x243)
-#define DPNI_CMDID_ADD_FS_ENT DPNI_CMD(0x244)
+#define DPNI_CMDID_ADD_FS_ENT DPNI_CMD_V2(0x244)
#define DPNI_CMDID_REMOVE_FS_ENT DPNI_CMD(0x245)
#define DPNI_CMDID_CLR_FS_ENT DPNI_CMD(0x246)
@@ -140,7 +144,9 @@ struct dpni_cmd_create {
uint16_t fs_entries;
uint8_t num_rx_tcs;
uint8_t pad4;
- uint8_t num_cgs;
+ uint8_t num_cgs;
+ uint16_t num_opr;
+ uint8_t dist_key_size;
};
struct dpni_cmd_destroy {
@@ -411,8 +417,6 @@ struct dpni_rsp_get_port_mac_addr {
uint8_t mac_addr[6];
};
-#define DPNI_MAC_SET_QUEUE_ACTION 1
-
struct dpni_cmd_add_mac_addr {
uint8_t flags;
uint8_t pad;
@@ -594,6 +598,7 @@ struct dpni_cmd_add_fs_entry {
uint64_t key_iova;
uint64_t mask_iova;
uint64_t flc;
+ uint16_t redir_token;
};
struct dpni_cmd_remove_fs_entry {
@@ -779,7 +784,7 @@ struct dpni_rsp_get_congestion_notification {
};
struct dpni_cmd_set_opr {
- uint8_t pad0;
+ uint8_t opr_id;
uint8_t tc_id;
uint8_t index;
uint8_t options;
@@ -792,9 +797,10 @@ struct dpni_cmd_set_opr {
};
struct dpni_cmd_get_opr {
- uint8_t pad;
+ uint8_t flags;
uint8_t tc_id;
uint8_t index;
+ uint8_t opr_id;
};
#define DPNI_RIP_SHIFT 0
@@ -911,5 +917,34 @@ struct dpni_sw_sequence_layout_entry {
uint16_t pad;
};
+#define DPNI_PTP_ENABLE_SHIFT 0
+#define DPNI_PTP_ENABLE_SIZE 1
+#define DPNI_PTP_CH_UPDATE_SHIFT 1
+#define DPNI_PTP_CH_UPDATE_SIZE 1
+struct dpni_cmd_single_step_cfg {
+ uint16_t flags;
+ uint16_t offset;
+ uint32_t peer_delay;
+};
+
+struct dpni_rsp_single_step_cfg {
+ uint16_t flags;
+ uint16_t offset;
+ uint32_t peer_delay;
+};
+
+#define DPNI_PORT_LOOPBACK_EN_SHIFT 0
+#define DPNI_PORT_LOOPBACK_EN_SIZE 1
+
+struct dpni_cmd_set_port_cfg {
+ uint32_t flags;
+ uint32_t bit_params;
+};
+
+struct dpni_rsp_get_port_cfg {
+ uint32_t flags;
+ uint32_t bit_params;
+};
+
#pragma pack(pop)
#endif /* _FSL_DPNI_CMD_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dprtc.h b/drivers/net/dpaa2/mc/fsl_dprtc.h
index 49edb5a050..84ab158444 100644
--- a/drivers/net/dpaa2/mc/fsl_dprtc.h
+++ b/drivers/net/dpaa2/mc/fsl_dprtc.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
- * Copyright 2019 NXP
+ * Copyright 2019-2021 NXP
*/
#ifndef __FSL_DPRTC_H
#define __FSL_DPRTC_H
@@ -86,6 +86,23 @@ int dprtc_set_alarm(struct fsl_mc_io *mc_io,
uint16_t token,
uint64_t time);
+struct dprtc_ext_trigger_status {
+ uint64_t timestamp;
+ uint8_t unread_valid_timestamp;
+};
+
+int dprtc_get_ext_trigger_timestamp(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ struct dprtc_ext_trigger_status *status);
+
+int dprtc_set_fiper_loopback(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ uint8_t fiper_as_input);
+
/**
* struct dprtc_attr - Structure representing DPRTC attributes
* @id: DPRTC object ID
diff --git a/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h b/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
index eca12ff5ee..61aaa4daab 100644
--- a/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
- * Copyright 2019 NXP
+ * Copyright 2019-2021 NXP
*/
#include <fsl_mc_sys.h>
#ifndef _FSL_DPRTC_CMD_H
@@ -7,13 +7,15 @@
/* DPRTC Version */
#define DPRTC_VER_MAJOR 2
-#define DPRTC_VER_MINOR 1
+#define DPRTC_VER_MINOR 3
/* Command versioning */
#define DPRTC_CMD_BASE_VERSION 1
+#define DPRTC_CMD_VERSION_2 2
#define DPRTC_CMD_ID_OFFSET 4
#define DPRTC_CMD(id) (((id) << DPRTC_CMD_ID_OFFSET) | DPRTC_CMD_BASE_VERSION)
+#define DPRTC_CMD_V2(id) (((id) << DPRTC_CMD_ID_OFFSET) | DPRTC_CMD_VERSION_2)
/* Command IDs */
#define DPRTC_CMDID_CLOSE DPRTC_CMD(0x800)
@@ -39,6 +41,7 @@
#define DPRTC_CMDID_SET_EXT_TRIGGER DPRTC_CMD(0x1d8)
#define DPRTC_CMDID_CLEAR_EXT_TRIGGER DPRTC_CMD(0x1d9)
#define DPRTC_CMDID_GET_EXT_TRIGGER_TIMESTAMP DPRTC_CMD(0x1dA)
+#define DPRTC_CMDID_SET_FIPER_LOOPBACK DPRTC_CMD(0x1dB)
/* Macros for accessing command fields smaller than 1byte */
#define DPRTC_MASK(field) \
@@ -87,5 +90,23 @@ struct dprtc_rsp_get_api_version {
uint16_t major;
uint16_t minor;
};
+
+struct dprtc_cmd_ext_trigger_timestamp {
+ uint32_t pad;
+ uint8_t id;
+};
+
+struct dprtc_rsp_ext_trigger_timestamp {
+ uint8_t unread_valid_timestamp;
+ uint8_t pad1;
+ uint16_t pad2;
+ uint32_t pad3;
+ uint64_t timestamp;
+};
+
+struct dprtc_ext_trigger_cfg {
+ uint8_t id;
+ uint8_t fiper_as_input;
+};
#pragma pack(pop)
#endif /* _FSL_DPRTC_CMD_H */
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v2 02/10] net/dpaa2: support Tx flow redirection action
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 01/10] bus/fslmc: updated MC FW to 10.28 nipun.gupta
@ 2021-10-06 12:10 ` nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 03/10] bus/fslmc: add qbman debug APIs support nipun.gupta
` (7 subsequent siblings)
9 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 12:10 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
TX redirection support by flow action RTE_FLOW_ACTION_TYPE_PHY_PORT
and RTE_FLOW_ACTION_TYPE_PORT_ID
This action is executed by HW to forward packets between ports.
If the ingress packets match the rule, the packets are switched
without software involved and perf is improved as well.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 5 ++
drivers/net/dpaa2/dpaa2_ethdev.h | 1 +
drivers/net/dpaa2/dpaa2_flow.c | 116 +++++++++++++++++++++++++++----
drivers/net/dpaa2/mc/fsl_dpni.h | 23 ++++++
4 files changed, 132 insertions(+), 13 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 560b79151b..9cf55c0f0b 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2822,6 +2822,11 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
return ret;
}
+int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
+{
+ return dev->device->driver == &rte_dpaa2_pmd.driver;
+}
+
static int
rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
struct rte_dpaa2_device *dpaa2_dev)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index b9c729f6cd..3f34d7ecff 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -240,6 +240,7 @@ uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci);
void dpaa2_flow_clean(struct rte_eth_dev *dev);
uint16_t dpaa2_dev_tx_conf(void *queue) __rte_unused;
+int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev);
int dpaa2_timesync_enable(struct rte_eth_dev *dev);
int dpaa2_timesync_disable(struct rte_eth_dev *dev);
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index bfe17c350a..5de886ec5e 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
*/
#include <sys/queue.h>
@@ -30,10 +30,10 @@
int mc_l4_port_identification;
static char *dpaa2_flow_control_log;
-static int dpaa2_flow_miss_flow_id =
+static uint16_t dpaa2_flow_miss_flow_id =
DPNI_FS_MISS_DROP;
-#define FIXED_ENTRY_SIZE 54
+#define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
enum flow_rule_ipaddr_type {
FLOW_NONE_IPADDR,
@@ -83,9 +83,18 @@ static const
enum rte_flow_action_type dpaa2_supported_action_type[] = {
RTE_FLOW_ACTION_TYPE_END,
RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_PHY_PORT,
+ RTE_FLOW_ACTION_TYPE_PORT_ID,
RTE_FLOW_ACTION_TYPE_RSS
};
+static const
+enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
+ RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_PHY_PORT,
+ RTE_FLOW_ACTION_TYPE_PORT_ID
+};
+
/* Max of enum rte_flow_item_type + 1, for both IPv4 and IPv6*/
#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1)
@@ -2937,6 +2946,19 @@ dpaa2_configure_flow_raw(struct rte_flow *flow,
return 0;
}
+static inline int dpaa2_fs_action_supported(
+ enum rte_flow_action_type action)
+{
+ int i;
+
+ for (i = 0; i < (int)(sizeof(dpaa2_supported_fs_action_type) /
+ sizeof(enum rte_flow_action_type)); i++) {
+ if (action == dpaa2_supported_fs_action_type[i])
+ return 1;
+ }
+
+ return 0;
+}
/* The existing QoS/FS entry with IP address(es)
* needs update after
* new extract(s) are inserted before IP
@@ -3115,7 +3137,7 @@ dpaa2_flow_entry_update(
}
}
- if (curr->action != RTE_FLOW_ACTION_TYPE_QUEUE) {
+ if (!dpaa2_fs_action_supported(curr->action)) {
curr = LIST_NEXT(curr, next);
continue;
}
@@ -3253,6 +3275,43 @@ dpaa2_flow_verify_attr(
return 0;
}
+static inline struct rte_eth_dev *
+dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
+ const struct rte_flow_action *action)
+{
+ const struct rte_flow_action_phy_port *phy_port;
+ const struct rte_flow_action_port_id *port_id;
+ int idx = -1;
+ struct rte_eth_dev *dest_dev;
+
+ if (action->type == RTE_FLOW_ACTION_TYPE_PHY_PORT) {
+ phy_port = (const struct rte_flow_action_phy_port *)
+ action->conf;
+ if (!phy_port->original)
+ idx = phy_port->index;
+ } else if (action->type == RTE_FLOW_ACTION_TYPE_PORT_ID) {
+ port_id = (const struct rte_flow_action_port_id *)
+ action->conf;
+ if (!port_id->original)
+ idx = port_id->id;
+ } else {
+ return NULL;
+ }
+
+ if (idx >= 0) {
+ if (!rte_eth_dev_is_valid_port(idx))
+ return NULL;
+ dest_dev = &rte_eth_devices[idx];
+ } else {
+ dest_dev = priv->eth_dev;
+ }
+
+ if (!dpaa2_dev_is_dpaa2(dest_dev))
+ return NULL;
+
+ return dest_dev;
+}
+
static inline int
dpaa2_flow_verify_action(
struct dpaa2_dev_priv *priv,
@@ -3278,6 +3337,14 @@ dpaa2_flow_verify_action(
return -1;
}
break;
+ case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
+ if (!dpaa2_flow_redirect_dev(priv, &actions[j])) {
+ DPAA2_PMD_ERR(
+ "Invalid port id of action");
+ return -ENOTSUP;
+ }
+ break;
case RTE_FLOW_ACTION_TYPE_RSS:
rss_conf = (const struct rte_flow_action_rss *)
(actions[j].conf);
@@ -3330,11 +3397,13 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
struct dpni_qos_tbl_cfg qos_cfg;
struct dpni_fs_action_cfg action;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
- struct dpaa2_queue *rxq;
+ struct dpaa2_queue *dest_q;
struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
size_t param;
struct rte_flow *curr = LIST_FIRST(&priv->flows);
uint16_t qos_index;
+ struct rte_eth_dev *dest_dev;
+ struct dpaa2_dev_priv *dest_priv;
ret = dpaa2_flow_verify_attr(priv, attr);
if (ret)
@@ -3446,12 +3515,32 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
while (!end_of_list) {
switch (actions[j].type) {
case RTE_FLOW_ACTION_TYPE_QUEUE:
- dest_queue =
- (const struct rte_flow_action_queue *)(actions[j].conf);
- rxq = priv->rx_vq[dest_queue->index];
- flow->action = RTE_FLOW_ACTION_TYPE_QUEUE;
+ case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
- action.flow_id = rxq->flow_id;
+ flow->action = actions[j].type;
+
+ if (actions[j].type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+ dest_queue = (const struct rte_flow_action_queue *)
+ (actions[j].conf);
+ dest_q = priv->rx_vq[dest_queue->index];
+ action.flow_id = dest_q->flow_id;
+ } else {
+ dest_dev = dpaa2_flow_redirect_dev(priv,
+ &actions[j]);
+ if (!dest_dev) {
+ DPAA2_PMD_ERR(
+ "Invalid destination device to redirect!");
+ return -1;
+ }
+
+ dest_priv = dest_dev->data->dev_private;
+ dest_q = dest_priv->tx_vq[0];
+ action.options =
+ DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
+ action.redirect_obj_token = dest_priv->token;
+ action.flow_id = dest_q->flow_id;
+ }
/* Configure FS table first*/
if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
@@ -3481,8 +3570,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return -1;
}
tc_cfg.enable = true;
- tc_cfg.fs_miss_flow_id =
- dpaa2_flow_miss_flow_id;
+ tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
priv->token, &tc_cfg);
if (ret < 0) {
@@ -3970,7 +4058,7 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
ret = dpaa2_generic_flow_set(flow, dev, attr, pattern,
actions, error);
if (ret < 0) {
- if (error->type > RTE_FLOW_ERROR_TYPE_ACTION)
+ if (error && error->type > RTE_FLOW_ERROR_TYPE_ACTION)
rte_flow_error_set(error, EPERM,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
attr, "unknown");
@@ -4002,6 +4090,8 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
switch (flow->action) {
case RTE_FLOW_ACTION_TYPE_QUEUE:
+ case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
if (priv->num_rx_tc > 1) {
/* Remove entry from QoS table first */
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index 34c6b20033..469ab9b3d4 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1496,12 +1496,35 @@ int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
*/
#define DPNI_FS_OPT_SET_STASH_CONTROL 0x4
+/**
+ * Redirect matching traffic to Rx part of another dpni object. The frame
+ * will be classified according to new qos and flow steering rules from
+ * target dpni object.
+ */
+#define DPNI_FS_OPT_REDIRECT_TO_DPNI_RX 0x08
+
+/**
+ * Redirect matching traffic into Tx queue of another dpni object. The
+ * frame will be transmitted directly
+ */
+#define DPNI_FS_OPT_REDIRECT_TO_DPNI_TX 0x10
+
/**
* struct dpni_fs_action_cfg - Action configuration for table look-up
* @flc: FLC value for traffic matching this rule. Please check the Frame
* Descriptor section in the hardware documentation for more information.
* @flow_id: Identifies the Rx queue used for matching traffic. Supported
* values are in range 0 to num_queue-1.
+ * @redirect_obj_token: token that identifies the object where frame is
+ * redirected when this rule is hit. This paraneter is used only when one of the
+ * flags DPNI_FS_OPT_REDIRECT_TO_DPNI_RX or DPNI_FS_OPT_REDIRECT_TO_DPNI_TX is
+ * set.
+ * The token is obtained using dpni_open() API call. The object must stay
+ * open during the operation to ensure the fact that application has access
+ * on it. If the object is destroyed of closed next actions will take place:
+ * - if DPNI_FS_OPT_DISCARD is set the frame will be discarded by current dpni
+ * - if DPNI_FS_OPT_DISCARD is cleared the frame will be enqueued in queue with
+ * index provided in flow_id parameter.
* @options: Any combination of DPNI_FS_OPT_ values.
*/
struct dpni_fs_action_cfg {
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v2 03/10] bus/fslmc: add qbman debug APIs support
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 01/10] bus/fslmc: updated MC FW to 10.28 nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 02/10] net/dpaa2: support Tx flow redirection action nipun.gupta
@ 2021-10-06 12:10 ` nipun.gupta
2021-10-06 13:31 ` Thomas Monjalon
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 04/10] net/dpaa2: add debug print for MTU set for jumbo nipun.gupta
` (6 subsequent siblings)
9 siblings, 1 reply; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 12:10 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena,
Youri Querry, Roy Pledge, Nipun Gupta
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Add support for debugging qbman FQs
Signed-off-by: Youri Querry <youri.querry_1@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
.../bus/fslmc/qbman/include/fsl_qbman_debug.h | 201 +++++-
drivers/bus/fslmc/qbman/qbman_debug.c | 621 ++++++++++++++++++
drivers/bus/fslmc/qbman/qbman_portal.c | 6 +
3 files changed, 824 insertions(+), 4 deletions(-)
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 54096e8774..14c40a74c5 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -1,13 +1,118 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2020 NXP
+ * Copyright 2018-2020 NXP
*/
#ifndef _FSL_QBMAN_DEBUG_H
#define _FSL_QBMAN_DEBUG_H
-#include <rte_compat.h>
struct qbman_swp;
+/* Buffer pool query commands */
+struct qbman_bp_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[4];
+ uint8_t bdi;
+ uint8_t state;
+ uint32_t fill;
+ uint32_t hdptr;
+ uint16_t swdet;
+ uint16_t swdxt;
+ uint16_t hwdet;
+ uint16_t hwdxt;
+ uint16_t swset;
+ uint16_t swsxt;
+ uint16_t vbpid;
+ uint16_t icid;
+ uint64_t bpscn_addr;
+ uint64_t bpscn_ctx;
+ uint16_t hw_targ;
+ uint8_t dbe;
+ uint8_t reserved2;
+ uint8_t sdcnt;
+ uint8_t hdcnt;
+ uint8_t sscnt;
+ uint8_t reserved3[9];
+};
+
+int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
+ struct qbman_bp_query_rslt *r);
+int qbman_bp_get_bdi(struct qbman_bp_query_rslt *r);
+int qbman_bp_get_va(struct qbman_bp_query_rslt *r);
+int qbman_bp_get_wae(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swdet(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swdxt(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_hwdet(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_hwdxt(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swset(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swsxt(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_vbpid(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_icid(struct qbman_bp_query_rslt *r);
+int qbman_bp_get_pl(struct qbman_bp_query_rslt *r);
+uint64_t qbman_bp_get_bpscn_addr(struct qbman_bp_query_rslt *r);
+uint64_t qbman_bp_get_bpscn_ctx(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_hw_targ(struct qbman_bp_query_rslt *r);
+int qbman_bp_has_free_bufs(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_num_free_bufs(struct qbman_bp_query_rslt *r);
+int qbman_bp_is_depleted(struct qbman_bp_query_rslt *r);
+int qbman_bp_is_surplus(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_hdptr(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_sdcnt(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_hdcnt(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_sscnt(struct qbman_bp_query_rslt *r);
+
+/* FQ query function for programmable fields */
+struct qbman_fq_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[8];
+ uint16_t cgid;
+ uint16_t dest_wq;
+ uint8_t reserved2;
+ uint8_t fq_ctrl;
+ uint16_t ics_cred;
+ uint16_t td_thresh;
+ uint16_t oal_oac;
+ uint8_t reserved3;
+ uint8_t mctl;
+ uint64_t fqd_ctx;
+ uint16_t icid;
+ uint16_t reserved4;
+ uint32_t vfqid;
+ uint32_t fqid_er;
+ uint16_t opridsz;
+ uint8_t reserved5[18];
+};
+
+int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
+ struct qbman_fq_query_rslt *r);
+uint8_t qbman_fq_attr_get_fqctrl(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_cgrid(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_destwq(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_tdthresh(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_oa_ics(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_oa_cgr(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_oal(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_bdi(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_ff(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_va(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_ps(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_pps(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_icid(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_pl(struct qbman_fq_query_rslt *r);
+uint32_t qbman_fq_attr_get_vfqid(struct qbman_fq_query_rslt *r);
+uint32_t qbman_fq_attr_get_erfqid(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r);
+
+/* FQ query command for non-programmable fields*/
+enum qbman_fq_schedstate_e {
+ qbman_fq_schedstate_oos = 0,
+ qbman_fq_schedstate_retired,
+ qbman_fq_schedstate_tentatively_scheduled,
+ qbman_fq_schedstate_truly_scheduled,
+ qbman_fq_schedstate_parked,
+ qbman_fq_schedstate_held_active,
+};
struct qbman_fq_query_np_rslt {
uint8_t verb;
@@ -32,10 +137,98 @@ uint8_t verb;
__rte_internal
int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
struct qbman_fq_query_np_rslt *r);
-
+uint8_t qbman_fq_state_schedstate(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_force_eligible(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_xoff(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_retirement_pending(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_overflow_error(const struct qbman_fq_query_np_rslt *r);
__rte_internal
uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r);
-
uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r);
+/* CGR query */
+struct qbman_cgr_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[6];
+ uint8_t ctl1;
+ uint8_t reserved1;
+ uint16_t oal;
+ uint16_t reserved2;
+ uint8_t mode;
+ uint8_t ctl2;
+ uint8_t iwc;
+ uint8_t tdc;
+ uint16_t cs_thres;
+ uint16_t cs_thres_x;
+ uint16_t td_thres;
+ uint16_t cscn_tdcp;
+ uint16_t cscn_wqid;
+ uint16_t cscn_vcgid;
+ uint16_t cg_icid;
+ uint64_t cg_wr_addr;
+ uint64_t cscn_ctx;
+ uint64_t i_cnt;
+ uint64_t a_cnt;
+};
+
+int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_wq_en_enter(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_wq_en_exit(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_wq_icd(struct qbman_cgr_query_rslt *r);
+uint8_t qbman_cgr_get_mode(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_rej_cnt_mode(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_bdi(struct qbman_cgr_query_rslt *r);
+uint16_t qbman_cgr_attr_get_cs_thres(struct qbman_cgr_query_rslt *r);
+uint16_t qbman_cgr_attr_get_cs_thres_x(struct qbman_cgr_query_rslt *r);
+uint16_t qbman_cgr_attr_get_td_thres(struct qbman_cgr_query_rslt *r);
+
+/* WRED query */
+struct qbman_wred_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[6];
+ uint8_t edp[7];
+ uint8_t reserved1;
+ uint32_t wred_parm_dp[7];
+ uint8_t reserved2[20];
+};
+
+int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_wred_query_rslt *r);
+int qbman_cgr_attr_wred_get_edp(struct qbman_wred_query_rslt *r, uint32_t idx);
+void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
+ uint64_t *maxth, uint8_t *maxp);
+uint32_t qbman_cgr_attr_wred_get_parm_dp(struct qbman_wred_query_rslt *r,
+ uint32_t idx);
+
+/* CGR/CCGR/CQ statistics query */
+int qbman_cgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt);
+int qbman_ccgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt);
+int qbman_cq_dequeue_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt);
+
+/* Query Work Queue Channel */
+struct qbman_wqchan_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint16_t chid;
+ uint8_t reserved;
+ uint8_t ctrl;
+ uint16_t cdan_wqid;
+ uint64_t cdan_ctx;
+ uint32_t reserved2[4];
+ uint32_t wq_len[8];
+};
+
+int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
+ struct qbman_wqchan_query_rslt *r);
+uint32_t qbman_wqchan_attr_get_wqlen(struct qbman_wqchan_query_rslt *r, int wq);
+uint64_t qbman_wqchan_attr_get_cdan_ctx(struct qbman_wqchan_query_rslt *r);
+uint16_t qbman_wqchan_attr_get_cdan_wqid(struct qbman_wqchan_query_rslt *r);
+uint8_t qbman_wqchan_attr_get_ctrl(struct qbman_wqchan_query_rslt *r);
+uint16_t qbman_wqchan_attr_get_chanid(struct qbman_wqchan_query_rslt *r);
#endif /* !_FSL_QBMAN_DEBUG_H */
diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 34374ae4b6..eea06988ff 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2015 Freescale Semiconductor, Inc.
+ * Copyright 2018-2020 NXP
*/
#include "compat.h"
@@ -16,6 +17,179 @@
#define QBMAN_CGR_STAT_QUERY 0x55
#define QBMAN_CGR_STAT_QUERY_CLR 0x56
+struct qbman_bp_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t bpid;
+ uint8_t reserved2[60];
+};
+
+#define QB_BP_STATE_SHIFT 24
+#define QB_BP_VA_SHIFT 1
+#define QB_BP_VA_MASK 0x2
+#define QB_BP_WAE_SHIFT 2
+#define QB_BP_WAE_MASK 0x4
+#define QB_BP_PL_SHIFT 15
+#define QB_BP_PL_MASK 0x8000
+#define QB_BP_ICID_MASK 0x7FFF
+
+int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
+ struct qbman_bp_query_rslt *r)
+{
+ struct qbman_bp_query_desc *p;
+
+ /* Start the management command */
+ p = (struct qbman_bp_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ /* Encode the caller-provided attributes */
+ p->bpid = bpid;
+
+ /* Complete the management command */
+ *r = *(struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_BP_QUERY);
+ if (!r) {
+ pr_err("qbman: Query BPID %d failed, no response\n",
+ bpid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_BP_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query of BPID 0x%x failed, code=0x%02x\n", bpid,
+ r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int qbman_bp_get_bdi(struct qbman_bp_query_rslt *r)
+{
+ return r->bdi & 1;
+}
+
+int qbman_bp_get_va(struct qbman_bp_query_rslt *r)
+{
+ return (r->bdi & QB_BP_VA_MASK) >> QB_BP_VA_MASK;
+}
+
+int qbman_bp_get_wae(struct qbman_bp_query_rslt *r)
+{
+ return (r->bdi & QB_BP_WAE_MASK) >> QB_BP_WAE_SHIFT;
+}
+
+static uint16_t qbman_bp_thresh_to_value(uint16_t val)
+{
+ return (val & 0xff) << ((val & 0xf00) >> 8);
+}
+
+uint16_t qbman_bp_get_swdet(struct qbman_bp_query_rslt *r)
+{
+
+ return qbman_bp_thresh_to_value(r->swdet);
+}
+
+uint16_t qbman_bp_get_swdxt(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->swdxt);
+}
+
+uint16_t qbman_bp_get_hwdet(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->hwdet);
+}
+
+uint16_t qbman_bp_get_hwdxt(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->hwdxt);
+}
+
+uint16_t qbman_bp_get_swset(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->swset);
+}
+
+uint16_t qbman_bp_get_swsxt(struct qbman_bp_query_rslt *r)
+{
+
+ return qbman_bp_thresh_to_value(r->swsxt);
+}
+
+uint16_t qbman_bp_get_vbpid(struct qbman_bp_query_rslt *r)
+{
+ return r->vbpid;
+}
+
+uint16_t qbman_bp_get_icid(struct qbman_bp_query_rslt *r)
+{
+ return r->icid & QB_BP_ICID_MASK;
+}
+
+int qbman_bp_get_pl(struct qbman_bp_query_rslt *r)
+{
+ return (r->icid & QB_BP_PL_MASK) >> QB_BP_PL_SHIFT;
+}
+
+uint64_t qbman_bp_get_bpscn_addr(struct qbman_bp_query_rslt *r)
+{
+ return r->bpscn_addr;
+}
+
+uint64_t qbman_bp_get_bpscn_ctx(struct qbman_bp_query_rslt *r)
+{
+ return r->bpscn_ctx;
+}
+
+uint16_t qbman_bp_get_hw_targ(struct qbman_bp_query_rslt *r)
+{
+ return r->hw_targ;
+}
+
+int qbman_bp_has_free_bufs(struct qbman_bp_query_rslt *r)
+{
+ return !(int)(r->state & 0x1);
+}
+
+int qbman_bp_is_depleted(struct qbman_bp_query_rslt *r)
+{
+ return (int)((r->state & 0x2) >> 1);
+}
+
+int qbman_bp_is_surplus(struct qbman_bp_query_rslt *r)
+{
+ return (int)((r->state & 0x4) >> 2);
+}
+
+uint32_t qbman_bp_num_free_bufs(struct qbman_bp_query_rslt *r)
+{
+ return r->fill;
+}
+
+uint32_t qbman_bp_get_hdptr(struct qbman_bp_query_rslt *r)
+{
+ return r->hdptr;
+}
+
+uint32_t qbman_bp_get_sdcnt(struct qbman_bp_query_rslt *r)
+{
+ return r->sdcnt;
+}
+
+uint32_t qbman_bp_get_hdcnt(struct qbman_bp_query_rslt *r)
+{
+ return r->hdcnt;
+}
+
+uint32_t qbman_bp_get_sscnt(struct qbman_bp_query_rslt *r)
+{
+ return r->sscnt;
+}
+
struct qbman_fq_query_desc {
uint8_t verb;
uint8_t reserved[3];
@@ -23,6 +197,128 @@ struct qbman_fq_query_desc {
uint8_t reserved2[56];
};
+/* FQ query function for programmable fields */
+int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
+ struct qbman_fq_query_rslt *r)
+{
+ struct qbman_fq_query_desc *p;
+
+ p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->fqid = fqid;
+ *r = *(struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_FQ_QUERY);
+ if (!r) {
+ pr_err("qbman: Query FQID %d failed, no response\n",
+ fqid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query of FQID 0x%x failed, code=0x%02x\n",
+ fqid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+uint8_t qbman_fq_attr_get_fqctrl(struct qbman_fq_query_rslt *r)
+{
+ return r->fq_ctrl;
+}
+
+uint16_t qbman_fq_attr_get_cgrid(struct qbman_fq_query_rslt *r)
+{
+ return r->cgid;
+}
+
+uint16_t qbman_fq_attr_get_destwq(struct qbman_fq_query_rslt *r)
+{
+ return r->dest_wq;
+}
+
+static uint16_t qbman_thresh_to_value(uint16_t val)
+{
+ return ((val & 0x1FE0) >> 5) << (val & 0x1F);
+}
+
+uint16_t qbman_fq_attr_get_tdthresh(struct qbman_fq_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->td_thresh);
+}
+
+int qbman_fq_attr_get_oa_ics(struct qbman_fq_query_rslt *r)
+{
+ return (int)(r->oal_oac >> 14) & 0x1;
+}
+
+int qbman_fq_attr_get_oa_cgr(struct qbman_fq_query_rslt *r)
+{
+ return (int)(r->oal_oac >> 15);
+}
+
+uint16_t qbman_fq_attr_get_oal(struct qbman_fq_query_rslt *r)
+{
+ return (r->oal_oac & 0x0FFF);
+}
+
+int qbman_fq_attr_get_bdi(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x1);
+}
+
+int qbman_fq_attr_get_ff(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x2) >> 1;
+}
+
+int qbman_fq_attr_get_va(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x4) >> 2;
+}
+
+int qbman_fq_attr_get_ps(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x8) >> 3;
+}
+
+int qbman_fq_attr_get_pps(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x30) >> 4;
+}
+
+uint16_t qbman_fq_attr_get_icid(struct qbman_fq_query_rslt *r)
+{
+ return r->icid & 0x7FFF;
+}
+
+int qbman_fq_attr_get_pl(struct qbman_fq_query_rslt *r)
+{
+ return (int)((r->icid & 0x8000) >> 15);
+}
+
+uint32_t qbman_fq_attr_get_vfqid(struct qbman_fq_query_rslt *r)
+{
+ return r->vfqid & 0x00FFFFFF;
+}
+
+uint32_t qbman_fq_attr_get_erfqid(struct qbman_fq_query_rslt *r)
+{
+ return r->fqid_er & 0x00FFFFFF;
+}
+
+uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r)
+{
+ return r->opridsz;
+}
+
int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
struct qbman_fq_query_np_rslt *r)
{
@@ -55,6 +351,31 @@ int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
return 0;
}
+uint8_t qbman_fq_state_schedstate(const struct qbman_fq_query_np_rslt *r)
+{
+ return r->st1 & 0x7;
+}
+
+int qbman_fq_state_force_eligible(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x8) >> 3);
+}
+
+int qbman_fq_state_xoff(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x10) >> 4);
+}
+
+int qbman_fq_state_retirement_pending(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x20) >> 5);
+}
+
+int qbman_fq_state_overflow_error(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x40) >> 6);
+}
+
uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r)
{
return (r->frm_cnt & 0x00FFFFFF);
@@ -64,3 +385,303 @@ uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r)
{
return r->byte_cnt;
}
+
+/* Query CGR */
+struct qbman_cgr_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t cgid;
+ uint8_t reserved2[60];
+};
+
+int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_cgr_query_rslt *r)
+{
+ struct qbman_cgr_query_desc *p;
+
+ p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->cgid = cgid;
+ *r = *(struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_CGR_QUERY);
+ if (!r) {
+ pr_err("qbman: Query CGID %d failed, no response\n",
+ cgid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_CGR_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query CGID 0x%x failed,code=0x%02x\n", cgid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int qbman_cgr_get_cscn_wq_en_enter(struct qbman_cgr_query_rslt *r)
+{
+ return (int)(r->ctl1 & 0x1);
+}
+
+int qbman_cgr_get_cscn_wq_en_exit(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->ctl1 & 0x2) >> 1);
+}
+
+int qbman_cgr_get_cscn_wq_icd(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->ctl1 & 0x4) >> 2);
+}
+
+uint8_t qbman_cgr_get_mode(struct qbman_cgr_query_rslt *r)
+{
+ return r->mode & 0x3;
+}
+
+int qbman_cgr_get_rej_cnt_mode(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->mode & 0x4) >> 2);
+}
+
+int qbman_cgr_get_cscn_bdi(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->mode & 0x8) >> 3);
+}
+
+uint16_t qbman_cgr_attr_get_cs_thres(struct qbman_cgr_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->cs_thres);
+}
+
+uint16_t qbman_cgr_attr_get_cs_thres_x(struct qbman_cgr_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->cs_thres_x);
+}
+
+uint16_t qbman_cgr_attr_get_td_thres(struct qbman_cgr_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->td_thres);
+}
+
+int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_wred_query_rslt *r)
+{
+ struct qbman_cgr_query_desc *p;
+
+ p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->cgid = cgid;
+ *r = *(struct qbman_wred_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_WRED_QUERY);
+ if (!r) {
+ pr_err("qbman: Query CGID WRED %d failed, no response\n",
+ cgid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WRED_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query CGID WRED 0x%x failed,code=0x%02x\n",
+ cgid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int qbman_cgr_attr_wred_get_edp(struct qbman_wred_query_rslt *r, uint32_t idx)
+{
+ return (int)(r->edp[idx] & 1);
+}
+
+uint32_t qbman_cgr_attr_wred_get_parm_dp(struct qbman_wred_query_rslt *r,
+ uint32_t idx)
+{
+ return r->wred_parm_dp[idx];
+}
+
+void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
+ uint64_t *maxth, uint8_t *maxp)
+{
+ uint8_t ma, mn, step_i, step_s, pn;
+
+ ma = (uint8_t)(dp >> 24);
+ mn = (uint8_t)(dp >> 19) & 0x1f;
+ step_i = (uint8_t)(dp >> 11);
+ step_s = (uint8_t)(dp >> 6) & 0x1f;
+ pn = (uint8_t)dp & 0x3f;
+
+ *maxp = (uint8_t)(((pn<<2) * 100)/256);
+
+ if (mn == 0)
+ *maxth = ma;
+ else
+ *maxth = ((ma+256) * (1<<(mn-1)));
+
+ if (step_s == 0)
+ *minth = *maxth - step_i;
+ else
+ *minth = *maxth - (256 + step_i) * (1<<(step_s - 1));
+}
+
+/* Query CGR/CCGR/CQ statistics */
+struct qbman_cgr_statistics_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t cgid;
+ uint8_t reserved1;
+ uint8_t ct;
+ uint8_t reserved2[58];
+};
+
+struct qbman_cgr_statistics_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[14];
+ uint64_t frm_cnt;
+ uint64_t byte_cnt;
+ uint32_t reserved2[8];
+};
+
+static int qbman_cgr_statistics_query(struct qbman_swp *s, uint32_t cgid,
+ int clear, uint32_t command_type,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ struct qbman_cgr_statistics_query_desc *p;
+ struct qbman_cgr_statistics_query_rslt *r;
+ uint32_t query_verb;
+
+ p = (struct qbman_cgr_statistics_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->cgid = cgid;
+ if (command_type < 2)
+ p->ct = command_type;
+ query_verb = clear ?
+ QBMAN_CGR_STAT_QUERY_CLR : QBMAN_CGR_STAT_QUERY;
+ r = (struct qbman_cgr_statistics_query_rslt *)qbman_swp_mc_complete(s,
+ p, query_verb);
+ if (!r) {
+ pr_err("qbman: Query CGID %d statistics failed, no response\n",
+ cgid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != query_verb);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query statistics of CGID 0x%x failed, code=0x%02x\n",
+ cgid, r->rslt);
+ return -EIO;
+ }
+
+ if (*frame_cnt)
+ *frame_cnt = r->frm_cnt & 0xFFFFFFFFFFllu;
+ if (*byte_cnt)
+ *byte_cnt = r->byte_cnt & 0xFFFFFFFFFFllu;
+
+ return 0;
+}
+
+int qbman_cgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ return qbman_cgr_statistics_query(s, cgid, clear, 0xff,
+ frame_cnt, byte_cnt);
+}
+
+int qbman_ccgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ return qbman_cgr_statistics_query(s, cgid, clear, 1,
+ frame_cnt, byte_cnt);
+}
+
+int qbman_cq_dequeue_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ return qbman_cgr_statistics_query(s, cgid, clear, 0,
+ frame_cnt, byte_cnt);
+}
+
+/* WQ Chan Query */
+struct qbman_wqchan_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t chid;
+ uint8_t reserved2[60];
+};
+
+int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
+ struct qbman_wqchan_query_rslt *r)
+{
+ struct qbman_wqchan_query_desc *p;
+
+ /* Start the management command */
+ p = (struct qbman_wqchan_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ /* Encode the caller-provided attributes */
+ p->chid = chanid;
+
+ /* Complete the management command */
+ *r = *(struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_WQ_QUERY);
+ if (!r) {
+ pr_err("qbman: Query WQ Channel %d failed, no response\n",
+ chanid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WQ_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query of WQCHAN 0x%x failed, code=0x%02x\n",
+ chanid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+uint32_t qbman_wqchan_attr_get_wqlen(struct qbman_wqchan_query_rslt *r, int wq)
+{
+ return r->wq_len[wq] & 0x00FFFFFF;
+}
+
+uint64_t qbman_wqchan_attr_get_cdan_ctx(struct qbman_wqchan_query_rslt *r)
+{
+ return r->cdan_ctx;
+}
+
+uint16_t qbman_wqchan_attr_get_cdan_wqid(struct qbman_wqchan_query_rslt *r)
+{
+ return r->cdan_wqid;
+}
+
+uint8_t qbman_wqchan_attr_get_ctrl(struct qbman_wqchan_query_rslt *r)
+{
+ return r->ctrl;
+}
+
+uint16_t qbman_wqchan_attr_get_chanid(struct qbman_wqchan_query_rslt *r)
+{
+ return r->chid;
+}
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index aedcad9258..3a7579c8a7 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1865,6 +1865,12 @@ void qbman_pull_desc_set_channel(struct qbman_pull_desc *d, uint32_t chid,
d->pull.dq_src = chid;
}
+/**
+ * qbman_pull_desc_set_rad() - Decide whether reschedule the fq after dequeue
+ *
+ * @rad: 1 = Reschedule the FQ after dequeue.
+ * 0 = Allow the FQ to remain active after dequeue.
+ */
void qbman_pull_desc_set_rad(struct qbman_pull_desc *d, int rad)
{
if (d->pull.verb & (1 << QB_VDQCR_VERB_RLS_SHIFT)) {
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v2 04/10] net/dpaa2: add debug print for MTU set for jumbo
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (2 preceding siblings ...)
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 03/10] bus/fslmc: add qbman debug APIs support nipun.gupta
@ 2021-10-06 12:10 ` nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 05/10] net/dpaa2: add function to generate HW hash key nipun.gupta
` (5 subsequent siblings)
9 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 12:10 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch adds a debug print for MTU configured on the
device when jumbo frames are enabled.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 9cf55c0f0b..275656fbe4 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -573,6 +573,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
dev->data->dev_conf.rxmode.max_rx_pkt_len -
RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
VLAN_TAG_SIZE;
+ DPAA2_PMD_INFO("MTU configured for the device: %d",
+ dev->data->mtu);
} else {
return -1;
}
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v2 05/10] net/dpaa2: add function to generate HW hash key
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (3 preceding siblings ...)
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 04/10] net/dpaa2: add debug print for MTU set for jumbo nipun.gupta
@ 2021-10-06 12:10 ` nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 06/10] net/dpaa2: update RSS to support additional distributions nipun.gupta
` (4 subsequent siblings)
9 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 12:10 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch add support to generate the hash key in software
equivalent to WRIOP key generation.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_tlu_hash.c | 153 ++++++++++++++++++++++++
drivers/net/dpaa2/meson.build | 1 +
drivers/net/dpaa2/rte_pmd_dpaa2.h | 19 +++
drivers/net/dpaa2/version.map | 2 +
4 files changed, 175 insertions(+)
create mode 100644 drivers/net/dpaa2/base/dpaa2_tlu_hash.c
diff --git a/drivers/net/dpaa2/base/dpaa2_tlu_hash.c b/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
new file mode 100644
index 0000000000..9eb127c07c
--- /dev/null
+++ b/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
@@ -0,0 +1,153 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <rte_pmd_dpaa2.h>
+
+static unsigned int sbox(unsigned int x)
+{
+ unsigned int a, b, c, d;
+ unsigned int oa, ob, oc, od;
+
+ a = x & 0x1;
+ b = (x >> 1) & 0x1;
+ c = (x >> 2) & 0x1;
+ d = (x >> 3) & 0x1;
+
+ oa = ((a & ~b & ~c & d) | (~a & b) | (~a & ~c & ~d) | (b & c)) & 0x1;
+ ob = ((a & ~b & d) | (~a & c & ~d) | (b & ~c)) & 0x1;
+ oc = ((a & ~b & c) | (a & ~b & ~d) | (~a & b & ~d) | (~a & c & ~d) |
+ (b & c & d)) & 0x1;
+ od = ((a & ~b & c) | (~a & b & ~c) | (a & b & ~d) | (~a & c & d)) & 0x1;
+
+ return ((od << 3) | (oc << 2) | (ob << 1) | oa);
+}
+
+static unsigned int sbox_tbl[16];
+
+static int pbox_tbl[16] = {5, 9, 0, 13,
+ 7, 2, 11, 14,
+ 1, 4, 12, 8,
+ 3, 15, 6, 10 };
+
+static unsigned int mix_tbl[8][16];
+
+static unsigned int stage(unsigned int input)
+{
+ int sbox_out = 0;
+ int pbox_out = 0;
+ int i;
+
+ /* mix */
+ input ^= input >> 16; /* xor lower */
+ input ^= input << 16; /* move original lower to upper */
+
+ for (i = 0; i < 32; i += 4) /* sbox stage */
+ sbox_out |= (sbox_tbl[(input >> i) & 0xf]) << i;
+
+ /* permutation */
+ for (i = 0; i < 16; i++)
+ pbox_out |= ((sbox_out >> i) & 0x10001) << pbox_tbl[i];
+
+ return pbox_out;
+}
+
+static unsigned int fast_stage(unsigned int input)
+{
+ int pbox_out = 0;
+ int i;
+
+ /* mix */
+ input ^= input >> 16; /* xor lower */
+ input ^= input << 16; /* move original lower to upper */
+
+ for (i = 0; i < 32; i += 4) /* sbox stage */
+ pbox_out |= mix_tbl[i >> 2][(input >> i) & 0xf];
+
+ return pbox_out;
+}
+
+static unsigned int fast_hash32(unsigned int x)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ x = fast_stage(x);
+ return x;
+}
+
+static unsigned int
+byte_crc32(unsigned char data /* new byte for the crc calculation */,
+ unsigned old_crc /* crc result of the last iteration */)
+{
+ int i;
+ unsigned int crc, polynom = 0xedb88320;
+ /* the polynomial is built on the reversed version of
+ * the CRC polynomial with out the x64 element.
+ */
+
+ crc = old_crc;
+ for (i = 0; i < 8; i++, data >>= 1)
+ crc = (crc >> 1) ^ (((crc ^ data) & 0x1) ? polynom : 0);
+ /* xor with polynomial is lsb of crc^data is 1 */
+
+ return crc;
+}
+
+static unsigned int crc32_table[256];
+
+static void init_crc32_table(void)
+{
+ int i;
+
+ for (i = 0; i < 256; i++)
+ crc32_table[i] = byte_crc32((unsigned char)i, 0LL);
+}
+
+static unsigned int
+crc32_string(unsigned char *data,
+ int size, unsigned int old_crc)
+{
+ unsigned int crc;
+ int i;
+
+ crc = old_crc;
+ for (i = 0; i < size; i++)
+ crc = (crc >> 8) ^ crc32_table[(crc ^ data[i]) & 0xff];
+
+ return crc;
+}
+
+static void hash_init(void)
+{
+ init_crc32_table();
+ int i, j;
+
+ for (i = 0; i < 16; i++)
+ sbox_tbl[i] = sbox(i);
+
+ for (i = 0; i < 32; i += 4)
+ for (j = 0; j < 16; j++) {
+ /* (a,b)
+ * (b,a^b)=(X,Y)
+ * (X^Y,X)
+ */
+ unsigned int input = (0x88888888 ^ (8 << i)) | (j << i);
+
+ input ^= input << 16; /* (X^Y,Y) */
+ input ^= input >> 16; /* (X^Y,X) */
+ mix_tbl[i >> 2][j] = stage(input);
+ }
+}
+
+uint32_t rte_pmd_dpaa2_get_tlu_hash(uint8_t *data, int size)
+{
+ static int init;
+
+ if (~init)
+ hash_init();
+ init = 1;
+ return fast_hash32(crc32_string(data, size, 0x0));
+}
diff --git a/drivers/net/dpaa2/meson.build b/drivers/net/dpaa2/meson.build
index 20eaf0b8e4..4a6397d09e 100644
--- a/drivers/net/dpaa2/meson.build
+++ b/drivers/net/dpaa2/meson.build
@@ -20,6 +20,7 @@ sources = files(
'mc/dpkg.c',
'mc/dpdmux.c',
'mc/dpni.c',
+ 'base/dpaa2_tlu_hash.c',
)
includes += include_directories('base', 'mc')
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index a68244c974..8ea42ee130 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -82,4 +82,23 @@ __rte_experimental
void
rte_pmd_dpaa2_thread_init(void);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Generate the DPAA2 WRIOP based hash value
+ *
+ * @param key
+ * Array of key data
+ * @param size
+ * Size of the hash input key in bytes
+ *
+ * @return
+ * - 0 if successful.
+ * - Negative in case of failure.
+ */
+
+__rte_experimental
+uint32_t
+rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
#endif /* _RTE_PMD_DPAA2_H */
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 3ab96344c4..24b2a6382d 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -10,6 +10,8 @@ DPDK_22 {
EXPERIMENTAL {
global:
+ # added in 21.11
+ rte_pmd_dpaa2_get_tlu_hash;
# added in 21.05
rte_pmd_dpaa2_mux_rx_frame_len;
# added in 21.08
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v2 06/10] net/dpaa2: update RSS to support additional distributions
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (4 preceding siblings ...)
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 05/10] net/dpaa2: add function to generate HW hash key nipun.gupta
@ 2021-10-06 12:10 ` nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 07/10] net/dpaa: add comments to explain driver behaviour nipun.gupta
` (3 subsequent siblings)
9 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 12:10 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch updates the RSS support to support following additional
distributions:
- VLAN
- ESP
- AH
- PPPOE
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 70 +++++++++++++++++++++++++-
drivers/net/dpaa2/dpaa2_ethdev.h | 7 ++-
2 files changed, 75 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 641e7027f1..08f49af768 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -210,6 +210,10 @@ dpaa2_distset_to_dpkg_profile_cfg(
int l2_configured = 0, l3_configured = 0;
int l4_configured = 0, sctp_configured = 0;
int mpls_configured = 0;
+ int vlan_configured = 0;
+ int esp_configured = 0;
+ int ah_configured = 0;
+ int pppoe_configured = 0;
memset(kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
while (req_dist_set) {
@@ -217,6 +221,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
dist_field = 1ULL << loop;
switch (dist_field) {
case ETH_RSS_L2_PAYLOAD:
+ case ETH_RSS_ETH:
if (l2_configured)
break;
@@ -231,7 +236,70 @@ dpaa2_distset_to_dpkg_profile_cfg(
kg_cfg->extracts[i].extract.from_hdr.type =
DPKG_FULL_FIELD;
i++;
- break;
+ break;
+
+ case ETH_RSS_PPPOE:
+ if (pppoe_configured)
+ break;
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_PPPOE;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_PPPOE_SID;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
+
+ case ETH_RSS_ESP:
+ if (esp_configured)
+ break;
+ esp_configured = 1;
+
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_IPSEC_ESP;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_IPSEC_ESP_SPI;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
+
+ case ETH_RSS_AH:
+ if (ah_configured)
+ break;
+ ah_configured = 1;
+
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_IPSEC_AH;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_IPSEC_AH_SPI;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
+
+ case ETH_RSS_C_VLAN:
+ case ETH_RSS_S_VLAN:
+ if (vlan_configured)
+ break;
+ vlan_configured = 1;
+
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_VLAN;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_VLAN_TCI;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
case ETH_RSS_MPLS:
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 3f34d7ecff..fdc62ec30d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -70,7 +70,12 @@
ETH_RSS_UDP | \
ETH_RSS_TCP | \
ETH_RSS_SCTP | \
- ETH_RSS_MPLS)
+ ETH_RSS_MPLS | \
+ ETH_RSS_C_VLAN | \
+ ETH_RSS_S_VLAN | \
+ ETH_RSS_ESP | \
+ ETH_RSS_AH | \
+ ETH_RSS_PPPOE)
/* LX2 FRC Parsed values (Little Endian) */
#define DPAA2_PKT_TYPE_ETHER 0x0060
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v2 07/10] net/dpaa: add comments to explain driver behaviour
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (5 preceding siblings ...)
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 06/10] net/dpaa2: update RSS to support additional distributions nipun.gupta
@ 2021-10-06 12:10 ` nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 08/10] raw/dpaa2_qdma: use correct params for config and queue setup nipun.gupta
` (2 subsequent siblings)
9 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 12:10 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
This patch adds comment to explain how dpaa_port_fmc_ccnode_parse
function is working to get the HW queue from FMC policy file
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
drivers/net/dpaa/dpaa_fmc.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
index 5195053361..f8c9360311 100644
--- a/drivers/net/dpaa/dpaa_fmc.c
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017-2020 NXP
+ * Copyright 2017-2021 NXP
*/
/* System headers */
@@ -338,6 +338,12 @@ static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
fqid = keys_params->key_params[j].cc_next_engine_params
.params.enqueue_params.new_fqid;
+ /* We read DPDK queue from last classification rule present in
+ * FMC policy file. Hence, this check is required here.
+ * Also, the last classification rule in FMC policy file must
+ * have userspace queue so that it can be used by DPDK
+ * application.
+ */
if (keys_params->key_params[j].cc_next_engine_params
.next_engine != e_IOC_FM_PCD_DONE) {
DPAA_PMD_WARN("FMC CC next engine not support");
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v2 08/10] raw/dpaa2_qdma: use correct params for config and queue setup
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (6 preceding siblings ...)
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 07/10] net/dpaa: add comments to explain driver behaviour nipun.gupta
@ 2021-10-06 12:10 ` nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 09/10] raw/dpaa2_qdma: remove checks for lcore ID nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 10/10] common/dpaax: fix paddr to vaddr invalid conversion nipun.gupta
9 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 12:10 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
RAW configure and Queue setup APIs support size parameter for
configure. This patch supports the same for DPAA2 QDMA PMD APIs
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 12 +++++++++---
drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h | 8 ++++----
2 files changed, 13 insertions(+), 7 deletions(-)
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index 7b80370b36..5d82e132b7 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
*/
#include <string.h>
@@ -1146,8 +1146,11 @@ dpaa2_qdma_configure(const struct rte_rawdev *rawdev,
DPAA2_QDMA_FUNC_TRACE();
- if (config_size != sizeof(*qdma_config))
+ if (config_size != sizeof(*qdma_config)) {
+ DPAA2_QDMA_ERR("Config size mismatch. Expected %" PRIu64
+ ", Got: %" PRIu64, sizeof(*qdma_config), config_size);
return -EINVAL;
+ }
/* In case QDMA device is not in stopped state, return -EBUSY */
if (qdma_dev->state == 1) {
@@ -1247,8 +1250,11 @@ dpaa2_qdma_queue_setup(struct rte_rawdev *rawdev,
DPAA2_QDMA_FUNC_TRACE();
- if (conf_size != sizeof(*q_config))
+ if (conf_size != sizeof(*q_config)) {
+ DPAA2_QDMA_ERR("Config size mismatch. Expected %" PRIu64
+ ", Got: %" PRIu64, sizeof(*q_config), conf_size);
return -EINVAL;
+ }
rte_spinlock_lock(&qdma_dev->lock);
diff --git a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
index cc1ac25451..1314474271 100644
--- a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
+++ b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
*/
#ifndef __RTE_PMD_DPAA2_QDMA_H__
@@ -177,13 +177,13 @@ struct rte_qdma_queue_config {
#define rte_qdma_info rte_rawdev_info
#define rte_qdma_start(id) rte_rawdev_start(id)
#define rte_qdma_reset(id) rte_rawdev_reset(id)
-#define rte_qdma_configure(id, cf) rte_rawdev_configure(id, cf)
+#define rte_qdma_configure(id, cf, sz) rte_rawdev_configure(id, cf, sz)
#define rte_qdma_dequeue_buffers(id, buf, num, ctxt) \
rte_rawdev_dequeue_buffers(id, buf, num, ctxt)
#define rte_qdma_enqueue_buffers(id, buf, num, ctxt) \
rte_rawdev_enqueue_buffers(id, buf, num, ctxt)
-#define rte_qdma_queue_setup(id, qid, cfg) \
- rte_rawdev_queue_setup(id, qid, cfg)
+#define rte_qdma_queue_setup(id, qid, cfg, sz) \
+ rte_rawdev_queue_setup(id, qid, cfg, sz)
/*TODO introduce per queue stats API in rawdew */
/**
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v2 09/10] raw/dpaa2_qdma: remove checks for lcore ID
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (7 preceding siblings ...)
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 08/10] raw/dpaa2_qdma: use correct params for config and queue setup nipun.gupta
@ 2021-10-06 12:10 ` nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 10/10] common/dpaax: fix paddr to vaddr invalid conversion nipun.gupta
9 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 12:10 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
There is no need for preventional check of rte_lcore_id() in
data path. This patch removes the same.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 14 --------------
1 file changed, 14 deletions(-)
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index 5d82e132b7..23201c0b63 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1389,13 +1389,6 @@ dpaa2_qdma_enqueue(struct rte_rawdev *rawdev,
&dpdmai_dev->qdma_dev->vqs[e_context->vq_id];
int ret;
- /* Return error in case of wrong lcore_id */
- if (rte_lcore_id() != qdma_vq->lcore_id) {
- DPAA2_QDMA_ERR("QDMA enqueue for vqid %d on wrong core",
- e_context->vq_id);
- return -EINVAL;
- }
-
ret = qdma_vq->enqueue_job(qdma_vq, e_context->job, nb_jobs);
if (ret < 0) {
DPAA2_QDMA_ERR("DPDMAI device enqueue failed: %d", ret);
@@ -1428,13 +1421,6 @@ dpaa2_qdma_dequeue(struct rte_rawdev *rawdev,
return -EINVAL;
}
- /* Return error in case of wrong lcore_id */
- if (rte_lcore_id() != (unsigned int)(qdma_vq->lcore_id)) {
- DPAA2_QDMA_WARN("QDMA dequeue for vqid %d on wrong core",
- context->vq_id);
- return -1;
- }
-
/* Only dequeue when there are pending jobs on VQ */
if (qdma_vq->num_enqueues == qdma_vq->num_dequeues)
return 0;
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v2 10/10] common/dpaax: fix paddr to vaddr invalid conversion
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (8 preceding siblings ...)
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 09/10] raw/dpaa2_qdma: remove checks for lcore ID nipun.gupta
@ 2021-10-06 12:10 ` nipun.gupta
9 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 12:10 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, stable,
Gagandeep Singh, Nipun Gupta
From: Gagandeep Singh <g.singh@nxp.com>
If some of the VA entries of table are somehow not populated and are
NULL, it can add offset to NULL and return the invalid VA in PA to
VA conversion.
In this patch, adding a check if the VA entry has valid address only
then add offset and return VA.
Fixes: 2f3d633aa593 ("common/dpaax: add library for PA/VA translation table")
Cc: stable@dpdk.org
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
drivers/common/dpaax/dpaax_iova_table.h | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/common/dpaax/dpaax_iova_table.h b/drivers/common/dpaax/dpaax_iova_table.h
index 230fba8ba0..b1f2300c52 100644
--- a/drivers/common/dpaax/dpaax_iova_table.h
+++ b/drivers/common/dpaax/dpaax_iova_table.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*/
#ifndef _DPAAX_IOVA_TABLE_H_
@@ -101,6 +101,12 @@ dpaax_iova_table_get_va(phys_addr_t paddr) {
/* paddr > entry->start && paddr <= entry->(start+len) */
index = (paddr_align - entry[i].start)/DPAAX_MEM_SPLIT;
+ /* paddr is within range, but no vaddr entry ever written
+ * at index
+ */
+ if ((void *)(uintptr_t)entry[i].pages[index] == NULL)
+ return NULL;
+
vaddr = (void *)((uintptr_t)entry[i].pages[index] + offset);
break;
} while (1);
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [dpdk-dev] [PATCH 01/11] bus/fslmc: updated MC FW to 10.28
2021-09-27 12:26 ` [dpdk-dev] [PATCH 01/11] bus/fslmc: updated MC FW to 10.28 nipun.gupta
@ 2021-10-06 13:28 ` Hemant Agrawal
0 siblings, 0 replies; 52+ messages in thread
From: Hemant Agrawal @ 2021-10-06 13:28 UTC (permalink / raw)
To: nipun.gupta, dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
Series-
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
On 9/27/2021 5:56 PM, nipun.gupta@nxp.com wrote:
> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>
> Updating MC firmware support APIs to be latest. It supports
> improved DPDMUX (SRIOV equivalent) for traffic split between
> dpnis and additional PTP APIs.
>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
> drivers/bus/fslmc/mc/dpdmai.c | 4 +-
> drivers/bus/fslmc/mc/fsl_dpdmai.h | 21 ++++-
> drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h | 15 ++--
> drivers/bus/fslmc/mc/fsl_dpmng.h | 4 +-
> drivers/bus/fslmc/mc/fsl_dpopr.h | 7 +-
> drivers/net/dpaa2/dpaa2_ethdev.c | 2 +-
> drivers/net/dpaa2/mc/dpdmux.c | 43 +++++++++
> drivers/net/dpaa2/mc/dpni.c | 48 ++++++----
> drivers/net/dpaa2/mc/dprtc.c | 78 +++++++++++++++-
> drivers/net/dpaa2/mc/fsl_dpdmux.h | 6 ++
> drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h | 9 ++
> drivers/net/dpaa2/mc/fsl_dpkg.h | 6 +-
> drivers/net/dpaa2/mc/fsl_dpni.h | 124 ++++++++++++++++++++++----
> drivers/net/dpaa2/mc/fsl_dpni_cmd.h | 55 +++++++++---
> drivers/net/dpaa2/mc/fsl_dprtc.h | 19 +++-
> drivers/net/dpaa2/mc/fsl_dprtc_cmd.h | 25 +++++-
> 16 files changed, 401 insertions(+), 65 deletions(-)
>
> diff --git a/drivers/bus/fslmc/mc/dpdmai.c b/drivers/bus/fslmc/mc/dpdmai.c
> index dcb9d516a1..9c2f3bf9d5 100644
> --- a/drivers/bus/fslmc/mc/dpdmai.c
> +++ b/drivers/bus/fslmc/mc/dpdmai.c
> @@ -1,5 +1,5 @@
> /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright 2018 NXP
> + * Copyright 2018-2021 NXP
> */
>
> #include <fsl_mc_sys.h>
> @@ -116,6 +116,7 @@ int dpdmai_create(struct fsl_mc_io *mc_io,
> cmd_params->num_queues = cfg->num_queues;
> cmd_params->priorities[0] = cfg->priorities[0];
> cmd_params->priorities[1] = cfg->priorities[1];
> + cmd_params->options = cpu_to_le32(cfg->adv.options);
>
> /* send command to mc*/
> err = mc_send_command(mc_io, &cmd);
> @@ -299,6 +300,7 @@ int dpdmai_get_attributes(struct fsl_mc_io *mc_io,
> attr->id = le32_to_cpu(rsp_params->id);
> attr->num_of_priorities = rsp_params->num_of_priorities;
> attr->num_of_queues = rsp_params->num_of_queues;
> + attr->options = le32_to_cpu(rsp_params->options);
>
> return 0;
> }
> diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai.h b/drivers/bus/fslmc/mc/fsl_dpdmai.h
> index 19328c00a0..5af8ed48c0 100644
> --- a/drivers/bus/fslmc/mc/fsl_dpdmai.h
> +++ b/drivers/bus/fslmc/mc/fsl_dpdmai.h
> @@ -1,5 +1,5 @@
> /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright 2018 NXP
> + * Copyright 2018-2021 NXP
> */
>
> #ifndef __FSL_DPDMAI_H
> @@ -36,15 +36,32 @@ int dpdmai_close(struct fsl_mc_io *mc_io,
> uint32_t cmd_flags,
> uint16_t token);
>
> +/* DPDMAI options */
> +
> +/**
> + * Enable individual Congestion Groups usage per each priority queue
> + * If this option is not enabled then only one CG is used for all priority
> + * queues
> + * If this option is enabled then a separate specific CG is used for each
> + * individual priority queue.
> + * In this case the priority queue must be specified via congestion notification
> + * API
> + */
> +#define DPDMAI_OPT_CG_PER_PRIORITY 0x00000001
> +
> /**
> * struct dpdmai_cfg - Structure representing DPDMAI configuration
> * @priorities: Priorities for the DMA hardware processing; valid priorities are
> * configured with values 1-8; the entry following last valid entry
> * should be configured with 0
> + * @options: dpdmai options
> */
> struct dpdmai_cfg {
> uint8_t num_queues;
> uint8_t priorities[DPDMAI_PRIO_NUM];
> + struct {
> + uint32_t options;
> + } adv;
> };
>
> int dpdmai_create(struct fsl_mc_io *mc_io,
> @@ -81,11 +98,13 @@ int dpdmai_reset(struct fsl_mc_io *mc_io,
> * struct dpdmai_attr - Structure representing DPDMAI attributes
> * @id: DPDMAI object ID
> * @num_of_priorities: number of priorities
> + * @options: dpdmai options
> */
> struct dpdmai_attr {
> int id;
> uint8_t num_of_priorities;
> uint8_t num_of_queues;
> + uint32_t options;
> };
>
> __rte_internal
> diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h b/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
> index 7e122de4ef..c8f6b990f8 100644
> --- a/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
> +++ b/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
> @@ -1,32 +1,33 @@
> /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright 2018 NXP
> + * Copyright 2017-2018, 2020-2021 NXP
> */
> -
> #ifndef _FSL_DPDMAI_CMD_H
> #define _FSL_DPDMAI_CMD_H
>
> /* DPDMAI Version */
> #define DPDMAI_VER_MAJOR 3
> -#define DPDMAI_VER_MINOR 3
> +#define DPDMAI_VER_MINOR 4
>
> /* Command versioning */
> #define DPDMAI_CMD_BASE_VERSION 1
> #define DPDMAI_CMD_VERSION_2 2
> +#define DPDMAI_CMD_VERSION_3 3
> #define DPDMAI_CMD_ID_OFFSET 4
>
> #define DPDMAI_CMD(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_BASE_VERSION)
> #define DPDMAI_CMD_V2(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_VERSION_2)
> +#define DPDMAI_CMD_V3(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_VERSION_3)
>
> /* Command IDs */
> #define DPDMAI_CMDID_CLOSE DPDMAI_CMD(0x800)
> #define DPDMAI_CMDID_OPEN DPDMAI_CMD(0x80E)
> -#define DPDMAI_CMDID_CREATE DPDMAI_CMD_V2(0x90E)
> +#define DPDMAI_CMDID_CREATE DPDMAI_CMD_V3(0x90E)
> #define DPDMAI_CMDID_DESTROY DPDMAI_CMD(0x98E)
> #define DPDMAI_CMDID_GET_API_VERSION DPDMAI_CMD(0xa0E)
>
> #define DPDMAI_CMDID_ENABLE DPDMAI_CMD(0x002)
> #define DPDMAI_CMDID_DISABLE DPDMAI_CMD(0x003)
> -#define DPDMAI_CMDID_GET_ATTR DPDMAI_CMD_V2(0x004)
> +#define DPDMAI_CMDID_GET_ATTR DPDMAI_CMD_V3(0x004)
> #define DPDMAI_CMDID_RESET DPDMAI_CMD(0x005)
> #define DPDMAI_CMDID_IS_ENABLED DPDMAI_CMD(0x006)
>
> @@ -51,6 +52,8 @@ struct dpdmai_cmd_open {
> struct dpdmai_cmd_create {
> uint8_t num_queues;
> uint8_t priorities[2];
> + uint8_t pad;
> + uint32_t options;
> };
>
> struct dpdmai_cmd_destroy {
> @@ -69,6 +72,8 @@ struct dpdmai_rsp_get_attr {
> uint32_t id;
> uint8_t num_of_priorities;
> uint8_t num_of_queues;
> + uint16_t pad;
> + uint32_t options;
> };
>
> #define DPDMAI_DEST_TYPE_SHIFT 0
> diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
> index 8764ceaed9..7e9bd96429 100644
> --- a/drivers/bus/fslmc/mc/fsl_dpmng.h
> +++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
> @@ -1,7 +1,7 @@
> /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
> *
> * Copyright 2013-2015 Freescale Semiconductor Inc.
> - * Copyright 2017-2019 NXP
> + * Copyright 2017-2021 NXP
> *
> */
> #ifndef __FSL_DPMNG_H
> @@ -20,7 +20,7 @@ struct fsl_mc_io;
> * Management Complex firmware version information
> */
> #define MC_VER_MAJOR 10
> -#define MC_VER_MINOR 18
> +#define MC_VER_MINOR 28
>
> /**
> * struct mc_version
> diff --git a/drivers/bus/fslmc/mc/fsl_dpopr.h b/drivers/bus/fslmc/mc/fsl_dpopr.h
> index fd727e011b..74dd32f783 100644
> --- a/drivers/bus/fslmc/mc/fsl_dpopr.h
> +++ b/drivers/bus/fslmc/mc/fsl_dpopr.h
> @@ -1,7 +1,7 @@
> /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
> *
> * Copyright 2013-2015 Freescale Semiconductor Inc.
> - * Copyright 2018 NXP
> + * Copyright 2018-2021 NXP
> *
> */
> #ifndef __FSL_DPOPR_H_
> @@ -22,7 +22,10 @@
> * Retire an existing Order Point Record option
> */
> #define OPR_OPT_RETIRE 0x2
> -
> +/**
> + * Assign an existing Order Point Record to a queue
> + */
> +#define OPR_OPT_ASSIGN 0x4
> /**
> * struct opr_cfg - Structure representing OPR configuration
> * @oprrws: Order point record (OPR) restoration window size (0 to 5)
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index c12169578e..560b79151b 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -2273,7 +2273,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
>
> ret = dpni_set_opr(dpni, CMD_PRI_LOW, eth_priv->token,
> dpaa2_ethq->tc_index, flow_id,
> - OPR_OPT_CREATE, &ocfg);
> + OPR_OPT_CREATE, &ocfg, 0);
> if (ret) {
> DPAA2_PMD_ERR("Error setting opr: ret: %d\n", ret);
> return ret;
> diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
> index 93912ef9d3..edbb01b45b 100644
> --- a/drivers/net/dpaa2/mc/dpdmux.c
> +++ b/drivers/net/dpaa2/mc/dpdmux.c
> @@ -491,6 +491,49 @@ int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
> return mc_send_command(mc_io, &cmd);
> }
>
> +/**
> + * dpdmux_get_max_frame_length() - Return the maximum frame length for DPDMUX
> + * interface
> + * @mc_io: Pointer to MC portal's I/O object
> + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
> + * @token: Token of DPDMUX object
> + * @if_id: Interface id
> + * @max_frame_length: maximum frame length
> + *
> + * When dpdmux object is in VEPA mode this function will ignore if_id parameter
> + * and will return maximum frame length for uplink interface (if_id==0).
> + *
> + * Return: '0' on Success; Error code otherwise.
> + */
> +int dpdmux_get_max_frame_length(struct fsl_mc_io *mc_io,
> + uint32_t cmd_flags,
> + uint16_t token,
> + uint16_t if_id,
> + uint16_t *max_frame_length)
> +{
> + struct mc_command cmd = { 0 };
> + struct dpdmux_cmd_get_max_frame_len *cmd_params;
> + struct dpdmux_rsp_get_max_frame_len *rsp_params;
> + int err = 0;
> +
> + /* prepare command */
> + cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_GET_MAX_FRAME_LENGTH,
> + cmd_flags,
> + token);
> + cmd_params = (struct dpdmux_cmd_get_max_frame_len *)cmd.params;
> + cmd_params->if_id = cpu_to_le16(if_id);
> +
> + err = mc_send_command(mc_io, &cmd);
> + if (err)
> + return err;
> +
> + rsp_params = (struct dpdmux_rsp_get_max_frame_len *)cmd.params;
> + *max_frame_length = le16_to_cpu(rsp_params->max_len);
> +
> + /* send command to mc*/
> + return err;
> +}
> +
> /**
> * dpdmux_ul_reset_counters() - Function resets the uplink counter
> * @mc_io: Pointer to MC portal's I/O object
> diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c
> index b254931386..60048d6c43 100644
> --- a/drivers/net/dpaa2/mc/dpni.c
> +++ b/drivers/net/dpaa2/mc/dpni.c
> @@ -1,7 +1,7 @@
> /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
> *
> * Copyright 2013-2016 Freescale Semiconductor Inc.
> - * Copyright 2016-2020 NXP
> + * Copyright 2016-2021 NXP
> *
> */
> #include <fsl_mc_sys.h>
> @@ -126,6 +126,8 @@ int dpni_create(struct fsl_mc_io *mc_io,
> cmd_params->qos_entries = cfg->qos_entries;
> cmd_params->fs_entries = cpu_to_le16(cfg->fs_entries);
> cmd_params->num_cgs = cfg->num_cgs;
> + cmd_params->num_opr = cfg->num_opr;
> + cmd_params->dist_key_size = cfg->dist_key_size;
>
> /* send command to mc*/
> err = mc_send_command(mc_io, &cmd);
> @@ -1829,6 +1831,7 @@ int dpni_add_fs_entry(struct fsl_mc_io *mc_io,
> cmd_params->options = cpu_to_le16(action->options);
> cmd_params->flow_id = cpu_to_le16(action->flow_id);
> cmd_params->flc = cpu_to_le64(action->flc);
> + cmd_params->redir_token = cpu_to_le16(action->redirect_obj_token);
>
> /* send command to mc*/
> return mc_send_command(mc_io, &cmd);
> @@ -2442,7 +2445,7 @@ int dpni_reset_statistics(struct fsl_mc_io *mc_io,
> }
>
> /**
> - * dpni_set_taildrop() - Set taildrop per queue or TC
> + * dpni_set_taildrop() - Set taildrop per congestion group
> *
> * Setting a per-TC taildrop (cg_point = DPNI_CP_GROUP) will reset any current
> * congestion notification or early drop (WRED) configuration previously applied
> @@ -2451,13 +2454,14 @@ int dpni_reset_statistics(struct fsl_mc_io *mc_io,
> * @mc_io: Pointer to MC portal's I/O object
> * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
> * @token: Token of DPNI object
> - * @cg_point: Congestion point, DPNI_CP_QUEUE is only supported in
> + * @cg_point: Congestion group identifier DPNI_CP_QUEUE is only supported in
> * combination with DPNI_QUEUE_RX.
> * @q_type: Queue type, can be DPNI_QUEUE_RX or DPNI_QUEUE_TX.
> * @tc: Traffic class to apply this taildrop to
> - * @q_index: Index of the queue if the DPNI supports multiple queues for
> + * @index/cgid: Index of the queue if the DPNI supports multiple queues for
> * traffic distribution.
> - * Ignored if CONGESTION_POINT is not DPNI_CP_QUEUE.
> + * If CONGESTION_POINT is DPNI_CP_CONGESTION_GROUP then it
> + * represent the cgid of the congestion point
> * @taildrop: Taildrop structure
> *
> * Return: '0' on Success; Error code otherwise.
> @@ -2577,7 +2581,8 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
> uint8_t tc,
> uint8_t index,
> uint8_t options,
> - struct opr_cfg *cfg)
> + struct opr_cfg *cfg,
> + uint8_t opr_id)
> {
> struct dpni_cmd_set_opr *cmd_params;
> struct mc_command cmd = { 0 };
> @@ -2591,6 +2596,7 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
> cmd_params->tc_id = tc;
> cmd_params->index = index;
> cmd_params->options = options;
> + cmd_params->opr_id = opr_id;
> cmd_params->oloe = cfg->oloe;
> cmd_params->oeane = cfg->oeane;
> cmd_params->olws = cfg->olws;
> @@ -2621,7 +2627,9 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
> uint8_t tc,
> uint8_t index,
> struct opr_cfg *cfg,
> - struct opr_qry *qry)
> + struct opr_qry *qry,
> + uint8_t flags,
> + uint8_t opr_id)
> {
> struct dpni_rsp_get_opr *rsp_params;
> struct dpni_cmd_get_opr *cmd_params;
> @@ -2635,6 +2643,8 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
> cmd_params = (struct dpni_cmd_get_opr *)cmd.params;
> cmd_params->index = index;
> cmd_params->tc_id = tc;
> + cmd_params->flags = flags;
> + cmd_params->opr_id = opr_id;
>
> /* send command to mc*/
> err = mc_send_command(mc_io, &cmd);
> @@ -2673,7 +2683,7 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
> * If the FS is already enabled with a previous call the classification
> * key will be changed but all the table rules are kept. If the
> * existing rules do not match the key the results will not be
> - * predictable. It is the user responsibility to keep key integrity
> + * predictable. It is the user responsibility to keep keyintegrity.
> * If cfg.enable is set to 1 the command will create a flow steering table
> * and will classify packets according to this table. The packets
> * that miss all the table rules will be classified according to
> @@ -2695,7 +2705,7 @@ int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
> cmd_params = (struct dpni_cmd_set_rx_fs_dist *)cmd.params;
> cmd_params->dist_size = cpu_to_le16(cfg->dist_size);
> dpni_set_field(cmd_params->enable, RX_FS_DIST_ENABLE, cfg->enable);
> - cmd_params->tc = cfg->tc;
> + cmd_params->tc = cfg->tc;
> cmd_params->miss_flow_id = cpu_to_le16(cfg->fs_miss_flow_id);
> cmd_params->key_cfg_iova = cpu_to_le64(cfg->key_cfg_iova);
>
> @@ -2710,9 +2720,9 @@ int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
> * @token: Token of DPNI object
> * @cfg: Distribution configuration
> * If cfg.enable is set to 1 the packets will be classified using a hash
> - * function based on the key received in cfg.key_cfg_iova parameter
> + * function based on the key received in cfg.key_cfg_iova parameter.
> * If cfg.enable is set to 0 the packets will be sent to the queue configured in
> - * dpni_set_rx_dist_default_queue() call
> + * dpni_set_rx_dist_default_queue() call
> */
> int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
> uint16_t token, const struct dpni_rx_dist_cfg *cfg)
> @@ -2735,9 +2745,9 @@ int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
> }
>
> /**
> - * dpni_add_custom_tpid() - Configures a distinct Ethertype value
> - * (or TPID value) to indicate VLAN tag in addition to the common
> - * TPID values 0x8100 and 0x88A8
> + * dpni_add_custom_tpid() - Configures a distinct Ethertype value (or TPID
> + * value) to indicate VLAN tag in adition to the common TPID values
> + * 0x81000 and 0x88A8
> * @mc_io: Pointer to MC portal's I/O object
> * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
> * @token: Token of DPNI object
> @@ -2745,8 +2755,8 @@ int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
> *
> * Only two custom values are accepted. If the function is called for the third
> * time it will return error.
> - * To replace an existing value use dpni_remove_custom_tpid() to remove
> - * a previous TPID and after that use again the function.
> + * To replace an existing value use dpni_remove_custom_tpid() to remove a
> + * previous TPID and after that use again the function.
> *
> * Return: '0' on Success; Error code otherwise.
> */
> @@ -2769,7 +2779,7 @@ int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
>
> /**
> * dpni_remove_custom_tpid() - Removes a distinct Ethertype value added
> - * previously with dpni_add_custom_tpid()
> + * previously with dpni_add_custom_tpid()
> * @mc_io: Pointer to MC portal's I/O object
> * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
> * @token: Token of DPNI object
> @@ -2798,8 +2808,8 @@ int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
> }
>
> /**
> - * dpni_get_custom_tpid() - Returns custom TPID (vlan tags) values configured
> - * to detect 802.1q frames
> + * dpni_get_custom_tpid() - Returns custom TPID (vlan tags) values configured to
> + * detect 802.1q frames
> * @mc_io: Pointer to MC portal's I/O object
> * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
> * @token: Token of DPNI object
> diff --git a/drivers/net/dpaa2/mc/dprtc.c b/drivers/net/dpaa2/mc/dprtc.c
> index 42ac89150e..36e62eb0c3 100644
> --- a/drivers/net/dpaa2/mc/dprtc.c
> +++ b/drivers/net/dpaa2/mc/dprtc.c
> @@ -1,5 +1,5 @@
> /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
> - * Copyright 2019 NXP
> + * Copyright 2019-2021 NXP
> */
> #include <fsl_mc_sys.h>
> #include <fsl_mc_cmd.h>
> @@ -521,3 +521,79 @@ int dprtc_get_api_version(struct fsl_mc_io *mc_io,
>
> return 0;
> }
> +
> +/**
> + * dprtc_get_ext_trigger_timestamp - Retrieve the Ext trigger timestamp status
> + * (timestamp + flag for unread timestamp in FIFO).
> + *
> + * @mc_io: Pointer to MC portal's I/O object
> + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
> + * @token: Token of DPRTC object
> + * @id: External trigger id.
> + * @status: Returned object's external trigger status
> + *
> + * Return: '0' on Success; Error code otherwise.
> + */
> +int dprtc_get_ext_trigger_timestamp(struct fsl_mc_io *mc_io,
> + uint32_t cmd_flags,
> + uint16_t token,
> + uint8_t id,
> + struct dprtc_ext_trigger_status *status)
> +{
> + struct dprtc_rsp_ext_trigger_timestamp *rsp_params;
> + struct dprtc_cmd_ext_trigger_timestamp *cmd_params;
> + struct mc_command cmd = { 0 };
> + int err;
> +
> + /* prepare command */
> + cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_EXT_TRIGGER_TIMESTAMP,
> + cmd_flags,
> + token);
> +
> + cmd_params = (struct dprtc_cmd_ext_trigger_timestamp *)cmd.params;
> + cmd_params->id = id;
> + /* send command to mc*/
> + err = mc_send_command(mc_io, &cmd);
> + if (err)
> + return err;
> +
> + /* retrieve response parameters */
> + rsp_params = (struct dprtc_rsp_ext_trigger_timestamp *)cmd.params;
> + status->timestamp = le64_to_cpu(rsp_params->timestamp);
> + status->unread_valid_timestamp = rsp_params->unread_valid_timestamp;
> +
> + return 0;
> +}
> +
> +/**
> + * dprtc_set_fiper_loopback() - Set the fiper pulse as source of interrupt for
> + * External Trigger stamps
> + * @mc_io: Pointer to MC portal's I/O object
> + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
> + * @token: Token of DPRTC object
> + * @id: External trigger id.
> + * @fiper_as_input: Bit used to control interrupt signal source:
> + * 0 = Normal operation, interrupt external source
> + * 1 = Fiper pulse is looped back into Trigger input
> + *
> + * Return: '0' on Success; Error code otherwise.
> + */
> +int dprtc_set_fiper_loopback(struct fsl_mc_io *mc_io,
> + uint32_t cmd_flags,
> + uint16_t token,
> + uint8_t id,
> + uint8_t fiper_as_input)
> +{
> + struct dprtc_ext_trigger_cfg *cmd_params;
> + struct mc_command cmd = { 0 };
> +
> + cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_FIPER_LOOPBACK,
> + cmd_flags,
> + token);
> +
> + cmd_params = (struct dprtc_ext_trigger_cfg *)cmd.params;
> + cmd_params->id = id;
> + cmd_params->fiper_as_input = fiper_as_input;
> +
> + return mc_send_command(mc_io, &cmd);
> +}
> diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
> index f4f9598a29..b01a98eb59 100644
> --- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
> +++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
> @@ -196,6 +196,12 @@ int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
> uint16_t token,
> uint16_t max_frame_length);
>
> +int dpdmux_get_max_frame_length(struct fsl_mc_io *mc_io,
> + uint32_t cmd_flags,
> + uint16_t token,
> + uint16_t if_id,
> + uint16_t *max_frame_length);
> +
> /**
> * enum dpdmux_counter_type - Counter types
> * @DPDMUX_CNT_ING_FRAME: Counts ingress frames
> diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
> index 2ab4d75dfb..f8a1b5b1ae 100644
> --- a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
> +++ b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
> @@ -39,6 +39,7 @@
> #define DPDMUX_CMDID_RESET DPDMUX_CMD(0x005)
> #define DPDMUX_CMDID_IS_ENABLED DPDMUX_CMD(0x006)
> #define DPDMUX_CMDID_SET_MAX_FRAME_LENGTH DPDMUX_CMD(0x0a1)
> +#define DPDMUX_CMDID_GET_MAX_FRAME_LENGTH DPDMUX_CMD(0x0a2)
>
> #define DPDMUX_CMDID_UL_RESET_COUNTERS DPDMUX_CMD(0x0a3)
>
> @@ -124,6 +125,14 @@ struct dpdmux_cmd_set_max_frame_length {
> uint16_t max_frame_length;
> };
>
> +struct dpdmux_cmd_get_max_frame_len {
> + uint16_t if_id;
> +};
> +
> +struct dpdmux_rsp_get_max_frame_len {
> + uint16_t max_len;
> +};
> +
> #define DPDMUX_ACCEPTED_FRAMES_TYPE_SHIFT 0
> #define DPDMUX_ACCEPTED_FRAMES_TYPE_SIZE 4
> #define DPDMUX_UNACCEPTED_FRAMES_ACTION_SHIFT 4
> diff --git a/drivers/net/dpaa2/mc/fsl_dpkg.h b/drivers/net/dpaa2/mc/fsl_dpkg.h
> index 02fe8d50e7..70f2339ea5 100644
> --- a/drivers/net/dpaa2/mc/fsl_dpkg.h
> +++ b/drivers/net/dpaa2/mc/fsl_dpkg.h
> @@ -1,6 +1,6 @@
> /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
> * Copyright 2013-2015 Freescale Semiconductor Inc.
> - * Copyright 2016-2017 NXP
> + * Copyright 2016-2021 NXP
> *
> */
> #ifndef __FSL_DPKG_H_
> @@ -21,7 +21,7 @@
> /**
> * Number of extractions per key profile
> */
> -#define DPKG_MAX_NUM_OF_EXTRACTS 10
> +#define DPKG_MAX_NUM_OF_EXTRACTS 20
>
> /**
> * enum dpkg_extract_from_hdr_type - Selecting extraction by header types
> @@ -177,7 +177,7 @@ struct dpni_ext_set_rx_tc_dist {
> uint8_t num_extracts;
> uint8_t pad[7];
> /* words 1..25 */
> - struct dpni_dist_extract extracts[10];
> + struct dpni_dist_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
> };
>
> int dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
> diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
> index df42746c9a..34c6b20033 100644
> --- a/drivers/net/dpaa2/mc/fsl_dpni.h
> +++ b/drivers/net/dpaa2/mc/fsl_dpni.h
> @@ -1,7 +1,7 @@
> /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
> *
> * Copyright 2013-2016 Freescale Semiconductor Inc.
> - * Copyright 2016-2020 NXP
> + * Copyright 2016-2021 NXP
> *
> */
> #ifndef __FSL_DPNI_H
> @@ -19,6 +19,11 @@ struct fsl_mc_io;
>
> /** General DPNI macros */
>
> +/**
> + * Maximum size of a key
> + */
> +#define DPNI_MAX_KEY_SIZE 56
> +
> /**
> * Maximum number of traffic classes
> */
> @@ -95,8 +100,18 @@ struct fsl_mc_io;
> * Define a custom number of congestion groups
> */
> #define DPNI_OPT_CUSTOM_CG 0x000200
> -
> -
> +/**
> + * Define a custom number of order point records
> + */
> +#define DPNI_OPT_CUSTOM_OPR 0x000400
> +/**
> + * Hash key is shared between all traffic classes
> + */
> +#define DPNI_OPT_SHARED_HASH_KEY 0x000800
> +/**
> + * Flow steering table is shared between all traffic classes
> + */
> +#define DPNI_OPT_SHARED_FS 0x001000
> /**
> * Software sequence maximum layout size
> */
> @@ -183,6 +198,8 @@ struct dpni_cfg {
> uint8_t num_rx_tcs;
> uint8_t qos_entries;
> uint8_t num_cgs;
> + uint16_t num_opr;
> + uint8_t dist_key_size;
> };
>
> int dpni_create(struct fsl_mc_io *mc_io,
> @@ -366,28 +383,45 @@ int dpni_get_attributes(struct fsl_mc_io *mc_io,
> /**
> * Extract out of frame header error
> */
> -#define DPNI_ERROR_EOFHE 0x00020000
> +#define DPNI_ERROR_MS 0x40000000
> +#define DPNI_ERROR_PTP 0x08000000
> +/* Ethernet multicast frame */
> +#define DPNI_ERROR_MC 0x04000000
> +/* Ethernet broadcast frame */
> +#define DPNI_ERROR_BC 0x02000000
> +#define DPNI_ERROR_KSE 0x00040000
> +#define DPNI_ERROR_EOFHE 0x00020000
> +#define DPNI_ERROR_MNLE 0x00010000
> +#define DPNI_ERROR_TIDE 0x00008000
> +#define DPNI_ERROR_PIEE 0x00004000
> /**
> * Frame length error
> */
> -#define DPNI_ERROR_FLE 0x00002000
> +#define DPNI_ERROR_FLE 0x00002000
> /**
> * Frame physical error
> */
> -#define DPNI_ERROR_FPE 0x00001000
> +#define DPNI_ERROR_FPE 0x00001000
> +#define DPNI_ERROR_PTE 0x00000080
> +#define DPNI_ERROR_ISP 0x00000040
> /**
> * Parsing header error
> */
> -#define DPNI_ERROR_PHE 0x00000020
> +#define DPNI_ERROR_PHE 0x00000020
> +
> +#define DPNI_ERROR_BLE 0x00000010
> /**
> * Parser L3 checksum error
> */
> -#define DPNI_ERROR_L3CE 0x00000004
> +#define DPNI_ERROR_L3CV 0x00000008
> +
> +#define DPNI_ERROR_L3CE 0x00000004
> /**
> - * Parser L3 checksum error
> + * Parser L4 checksum error
> */
> -#define DPNI_ERROR_L4CE 0x00000001
> +#define DPNI_ERROR_L4CV 0x00000002
>
> +#define DPNI_ERROR_L4CE 0x00000001
> /**
> * enum dpni_error_action - Defines DPNI behavior for errors
> * @DPNI_ERROR_ACTION_DISCARD: Discard the frame
> @@ -455,6 +489,10 @@ int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
> * Select to modify the sw-opaque value setting
> */
> #define DPNI_BUF_LAYOUT_OPT_SW_OPAQUE 0x00000080
> +/**
> + * Select to disable Scatter Gather and use single buffer
> + */
> +#define DPNI_BUF_LAYOUT_OPT_NO_SG 0x00000100
>
> /**
> * struct dpni_buffer_layout - Structure representing DPNI buffer layout
> @@ -733,7 +771,7 @@ int dpni_get_link_state(struct fsl_mc_io *mc_io,
>
> /**
> * struct dpni_tx_shaping - Structure representing DPNI tx shaping configuration
> - * @rate_limit: Rate in Mbps
> + * @rate_limit: Rate in Mbits/s
> * @max_burst_size: Burst size in bytes (up to 64KB)
> */
> struct dpni_tx_shaping_cfg {
> @@ -798,6 +836,11 @@ int dpni_get_primary_mac_addr(struct fsl_mc_io *mc_io,
> uint16_t token,
> uint8_t mac_addr[6]);
>
> +/**
> + * Set mac addr queue action
> + */
> +#define DPNI_MAC_SET_QUEUE_ACTION 1
> +
> int dpni_add_mac_addr(struct fsl_mc_io *mc_io,
> uint32_t cmd_flags,
> uint16_t token,
> @@ -1464,6 +1507,7 @@ int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
> struct dpni_fs_action_cfg {
> uint64_t flc;
> uint16_t flow_id;
> + uint16_t redirect_obj_token;
> uint16_t options;
> };
>
> @@ -1595,7 +1639,8 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
> uint8_t tc,
> uint8_t index,
> uint8_t options,
> - struct opr_cfg *cfg);
> + struct opr_cfg *cfg,
> + uint8_t opr_id);
>
> int dpni_get_opr(struct fsl_mc_io *mc_io,
> uint32_t cmd_flags,
> @@ -1603,7 +1648,9 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
> uint8_t tc,
> uint8_t index,
> struct opr_cfg *cfg,
> - struct opr_qry *qry);
> + struct opr_qry *qry,
> + uint8_t flags,
> + uint8_t opr_id);
>
> /**
> * When used for queue_idx in function dpni_set_rx_dist_default_queue will
> @@ -1779,14 +1826,57 @@ int dpni_get_sw_sequence_layout(struct fsl_mc_io *mc_io,
>
> /**
> * dpni_extract_sw_sequence_layout() - extract the software sequence layout
> - * @layout: software sequence layout
> - * @sw_sequence_layout_buf: Zeroed 264 bytes of memory before mapping it
> - * to DMA
> + * @layout: software sequence layout
> + * @sw_sequence_layout_buf:Zeroed 264 bytes of memory before mapping it to DMA
> *
> * This function has to be called after dpni_get_sw_sequence_layout
> - *
> */
> void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
> const uint8_t *sw_sequence_layout_buf);
>
> +/**
> + * struct dpni_ptp_cfg - configure single step PTP (IEEE 1588)
> + * @en: enable single step PTP. When enabled the PTPv1 functionality will
> + * not work. If the field is zero, offset and ch_update parameters
> + * will be ignored
> + * @offset: start offset from the beginning of the frame where timestamp
> + * field is found. The offset must respect all MAC headers, VLAN
> + * tags and other protocol headers
> + * @ch_update: when set UDP checksum will be updated inside packet
> + * @peer_delay: For peer-to-peer transparent clocks add this value to the
> + * correction field in addition to the transient time update. The
> + * value expresses nanoseconds.
> + */
> +struct dpni_single_step_cfg {
> + uint8_t en;
> + uint8_t ch_update;
> + uint16_t offset;
> + uint32_t peer_delay;
> +};
> +
> +int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
> + uint16_t token, struct dpni_single_step_cfg *ptp_cfg);
> +
> +int dpni_get_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
> + uint16_t token, struct dpni_single_step_cfg *ptp_cfg);
> +
> +/**
> + * loopback_en field is valid when calling function dpni_set_port_cfg
> + */
> +#define DPNI_PORT_CFG_LOOPBACK 0x01
> +
> +/**
> + * struct dpni_port_cfg - custom configuration for dpni physical port
> + * @loopback_en: port loopback enabled
> + */
> +struct dpni_port_cfg {
> + int loopback_en;
> +};
> +
> +int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
> + uint16_t token, uint32_t flags, struct dpni_port_cfg *port_cfg);
> +
> +int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
> + uint16_t token, struct dpni_port_cfg *port_cfg);
> +
> #endif /* __FSL_DPNI_H */
> diff --git a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
> index c40090b8fe..6fbd93bb38 100644
> --- a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
> +++ b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
> @@ -1,7 +1,7 @@
> /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
> *
> * Copyright 2013-2016 Freescale Semiconductor Inc.
> - * Copyright 2016-2020 NXP
> + * Copyright 2016-2021 NXP
> *
> */
> #ifndef _FSL_DPNI_CMD_H
> @@ -9,21 +9,25 @@
>
> /* DPNI Version */
> #define DPNI_VER_MAJOR 7
> -#define DPNI_VER_MINOR 13
> +#define DPNI_VER_MINOR 17
>
> #define DPNI_CMD_BASE_VERSION 1
> #define DPNI_CMD_VERSION_2 2
> #define DPNI_CMD_VERSION_3 3
> +#define DPNI_CMD_VERSION_4 4
> +#define DPNI_CMD_VERSION_5 5
> #define DPNI_CMD_ID_OFFSET 4
>
> #define DPNI_CMD(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_BASE_VERSION)
> #define DPNI_CMD_V2(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_2)
> #define DPNI_CMD_V3(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_3)
> +#define DPNI_CMD_V4(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_4)
> +#define DPNI_CMD_V5(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_5)
>
> /* Command IDs */
> #define DPNI_CMDID_OPEN DPNI_CMD(0x801)
> #define DPNI_CMDID_CLOSE DPNI_CMD(0x800)
> -#define DPNI_CMDID_CREATE DPNI_CMD_V3(0x901)
> +#define DPNI_CMDID_CREATE DPNI_CMD_V5(0x901)
> #define DPNI_CMDID_DESTROY DPNI_CMD(0x981)
> #define DPNI_CMDID_GET_API_VERSION DPNI_CMD(0xa01)
>
> @@ -67,7 +71,7 @@
> #define DPNI_CMDID_REMOVE_VLAN_ID DPNI_CMD(0x232)
> #define DPNI_CMDID_CLR_VLAN_FILTERS DPNI_CMD(0x233)
>
> -#define DPNI_CMDID_SET_RX_TC_DIST DPNI_CMD_V3(0x235)
> +#define DPNI_CMDID_SET_RX_TC_DIST DPNI_CMD_V4(0x235)
>
> #define DPNI_CMDID_SET_RX_TC_POLICING DPNI_CMD(0x23E)
>
> @@ -75,7 +79,7 @@
> #define DPNI_CMDID_ADD_QOS_ENT DPNI_CMD_V2(0x241)
> #define DPNI_CMDID_REMOVE_QOS_ENT DPNI_CMD(0x242)
> #define DPNI_CMDID_CLR_QOS_TBL DPNI_CMD(0x243)
> -#define DPNI_CMDID_ADD_FS_ENT DPNI_CMD(0x244)
> +#define DPNI_CMDID_ADD_FS_ENT DPNI_CMD_V2(0x244)
> #define DPNI_CMDID_REMOVE_FS_ENT DPNI_CMD(0x245)
> #define DPNI_CMDID_CLR_FS_ENT DPNI_CMD(0x246)
>
> @@ -140,7 +144,9 @@ struct dpni_cmd_create {
> uint16_t fs_entries;
> uint8_t num_rx_tcs;
> uint8_t pad4;
> - uint8_t num_cgs;
> + uint8_t num_cgs;
> + uint16_t num_opr;
> + uint8_t dist_key_size;
> };
>
> struct dpni_cmd_destroy {
> @@ -411,8 +417,6 @@ struct dpni_rsp_get_port_mac_addr {
> uint8_t mac_addr[6];
> };
>
> -#define DPNI_MAC_SET_QUEUE_ACTION 1
> -
> struct dpni_cmd_add_mac_addr {
> uint8_t flags;
> uint8_t pad;
> @@ -594,6 +598,7 @@ struct dpni_cmd_add_fs_entry {
> uint64_t key_iova;
> uint64_t mask_iova;
> uint64_t flc;
> + uint16_t redir_token;
> };
>
> struct dpni_cmd_remove_fs_entry {
> @@ -779,7 +784,7 @@ struct dpni_rsp_get_congestion_notification {
> };
>
> struct dpni_cmd_set_opr {
> - uint8_t pad0;
> + uint8_t opr_id;
> uint8_t tc_id;
> uint8_t index;
> uint8_t options;
> @@ -792,9 +797,10 @@ struct dpni_cmd_set_opr {
> };
>
> struct dpni_cmd_get_opr {
> - uint8_t pad;
> + uint8_t flags;
> uint8_t tc_id;
> uint8_t index;
> + uint8_t opr_id;
> };
>
> #define DPNI_RIP_SHIFT 0
> @@ -911,5 +917,34 @@ struct dpni_sw_sequence_layout_entry {
> uint16_t pad;
> };
>
> +#define DPNI_PTP_ENABLE_SHIFT 0
> +#define DPNI_PTP_ENABLE_SIZE 1
> +#define DPNI_PTP_CH_UPDATE_SHIFT 1
> +#define DPNI_PTP_CH_UPDATE_SIZE 1
> +struct dpni_cmd_single_step_cfg {
> + uint16_t flags;
> + uint16_t offset;
> + uint32_t peer_delay;
> +};
> +
> +struct dpni_rsp_single_step_cfg {
> + uint16_t flags;
> + uint16_t offset;
> + uint32_t peer_delay;
> +};
> +
> +#define DPNI_PORT_LOOPBACK_EN_SHIFT 0
> +#define DPNI_PORT_LOOPBACK_EN_SIZE 1
> +
> +struct dpni_cmd_set_port_cfg {
> + uint32_t flags;
> + uint32_t bit_params;
> +};
> +
> +struct dpni_rsp_get_port_cfg {
> + uint32_t flags;
> + uint32_t bit_params;
> +};
> +
> #pragma pack(pop)
> #endif /* _FSL_DPNI_CMD_H */
> diff --git a/drivers/net/dpaa2/mc/fsl_dprtc.h b/drivers/net/dpaa2/mc/fsl_dprtc.h
> index 49edb5a050..84ab158444 100644
> --- a/drivers/net/dpaa2/mc/fsl_dprtc.h
> +++ b/drivers/net/dpaa2/mc/fsl_dprtc.h
> @@ -1,5 +1,5 @@
> /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
> - * Copyright 2019 NXP
> + * Copyright 2019-2021 NXP
> */
> #ifndef __FSL_DPRTC_H
> #define __FSL_DPRTC_H
> @@ -86,6 +86,23 @@ int dprtc_set_alarm(struct fsl_mc_io *mc_io,
> uint16_t token,
> uint64_t time);
>
> +struct dprtc_ext_trigger_status {
> + uint64_t timestamp;
> + uint8_t unread_valid_timestamp;
> +};
> +
> +int dprtc_get_ext_trigger_timestamp(struct fsl_mc_io *mc_io,
> + uint32_t cmd_flags,
> + uint16_t token,
> + uint8_t id,
> + struct dprtc_ext_trigger_status *status);
> +
> +int dprtc_set_fiper_loopback(struct fsl_mc_io *mc_io,
> + uint32_t cmd_flags,
> + uint16_t token,
> + uint8_t id,
> + uint8_t fiper_as_input);
> +
> /**
> * struct dprtc_attr - Structure representing DPRTC attributes
> * @id: DPRTC object ID
> diff --git a/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h b/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
> index eca12ff5ee..61aaa4daab 100644
> --- a/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
> +++ b/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
> @@ -1,5 +1,5 @@
> /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
> - * Copyright 2019 NXP
> + * Copyright 2019-2021 NXP
> */
> #include <fsl_mc_sys.h>
> #ifndef _FSL_DPRTC_CMD_H
> @@ -7,13 +7,15 @@
>
> /* DPRTC Version */
> #define DPRTC_VER_MAJOR 2
> -#define DPRTC_VER_MINOR 1
> +#define DPRTC_VER_MINOR 3
>
> /* Command versioning */
> #define DPRTC_CMD_BASE_VERSION 1
> +#define DPRTC_CMD_VERSION_2 2
> #define DPRTC_CMD_ID_OFFSET 4
>
> #define DPRTC_CMD(id) (((id) << DPRTC_CMD_ID_OFFSET) | DPRTC_CMD_BASE_VERSION)
> +#define DPRTC_CMD_V2(id) (((id) << DPRTC_CMD_ID_OFFSET) | DPRTC_CMD_VERSION_2)
>
> /* Command IDs */
> #define DPRTC_CMDID_CLOSE DPRTC_CMD(0x800)
> @@ -39,6 +41,7 @@
> #define DPRTC_CMDID_SET_EXT_TRIGGER DPRTC_CMD(0x1d8)
> #define DPRTC_CMDID_CLEAR_EXT_TRIGGER DPRTC_CMD(0x1d9)
> #define DPRTC_CMDID_GET_EXT_TRIGGER_TIMESTAMP DPRTC_CMD(0x1dA)
> +#define DPRTC_CMDID_SET_FIPER_LOOPBACK DPRTC_CMD(0x1dB)
>
> /* Macros for accessing command fields smaller than 1byte */
> #define DPRTC_MASK(field) \
> @@ -87,5 +90,23 @@ struct dprtc_rsp_get_api_version {
> uint16_t major;
> uint16_t minor;
> };
> +
> +struct dprtc_cmd_ext_trigger_timestamp {
> + uint32_t pad;
> + uint8_t id;
> +};
> +
> +struct dprtc_rsp_ext_trigger_timestamp {
> + uint8_t unread_valid_timestamp;
> + uint8_t pad1;
> + uint16_t pad2;
> + uint32_t pad3;
> + uint64_t timestamp;
> +};
> +
> +struct dprtc_ext_trigger_cfg {
> + uint8_t id;
> + uint8_t fiper_as_input;
> +};
> #pragma pack(pop)
> #endif /* _FSL_DPRTC_CMD_H */
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [dpdk-dev] [PATCH v2 03/10] bus/fslmc: add qbman debug APIs support
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 03/10] bus/fslmc: add qbman debug APIs support nipun.gupta
@ 2021-10-06 13:31 ` Thomas Monjalon
2021-10-06 17:02 ` Nipun Gupta
0 siblings, 1 reply; 52+ messages in thread
From: Thomas Monjalon @ 2021-10-06 13:31 UTC (permalink / raw)
To: hemant.agrawal, Nipun Gupta
Cc: dev, ferruh.yigit, sachin.saxena, Youri Querry, Roy Pledge
06/10/2021 14:10, nipun.gupta@nxp.com:
> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>
> Add support for debugging qbman FQs
>
> Signed-off-by: Youri Querry <youri.querry_1@nxp.com>
> Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
I see this error message:
fsl_qbman_debug.h:137:15: error: expected ‘;’ before ‘int’
137 | __rte_internal
| ^
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
` (12 preceding siblings ...)
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
@ 2021-10-06 17:01 ` nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 01/10] bus/fslmc: updated MC FW to 10.28 nipun.gupta
` (10 more replies)
13 siblings, 11 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 17:01 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Nipun Gupta <nipun.gupta@nxp.com>
This series adds new functionality related to flow redirection,
generating HW hash key etc.
It also updates the MC firmware version and includes a fix in
dpaxx library.
Changes in v1:
- Fix checkpatch errors
Changes in v2:
- remove unrequired multi-tx ordered patch
Changes in v3:
- fix 32 bit (i386) compilation
Gagandeep Singh (1):
common/dpaax: fix paddr to vaddr invalid conversion
Hemant Agrawal (4):
bus/fslmc: updated MC FW to 10.28
bus/fslmc: add qbman debug APIs support
net/dpaa2: add debug print for MTU set for jumbo
net/dpaa2: add function to generate HW hash key
Jun Yang (1):
net/dpaa2: support Tx flow redirection action
Nipun Gupta (2):
raw/dpaa2_qdma: use correct params for config and queue setup
raw/dpaa2_qdma: remove checks for lcore ID
Rohit Raj (1):
net/dpaa: add comments to explain driver behaviour
Vanshika Shukla (1):
net/dpaa2: update RSS to support additional distributions
drivers/bus/fslmc/mc/dpdmai.c | 4 +-
drivers/bus/fslmc/mc/fsl_dpdmai.h | 21 +-
drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h | 15 +-
drivers/bus/fslmc/mc/fsl_dpmng.h | 4 +-
drivers/bus/fslmc/mc/fsl_dpopr.h | 7 +-
.../bus/fslmc/qbman/include/fsl_qbman_debug.h | 200 +++++-
drivers/bus/fslmc/qbman/qbman_debug.c | 621 ++++++++++++++++++
drivers/bus/fslmc/qbman/qbman_portal.c | 6 +
drivers/common/dpaax/dpaax_iova_table.h | 8 +-
drivers/net/dpaa/dpaa_fmc.c | 8 +-
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 70 +-
drivers/net/dpaa2/base/dpaa2_tlu_hash.c | 153 +++++
drivers/net/dpaa2/dpaa2_ethdev.c | 9 +-
drivers/net/dpaa2/dpaa2_ethdev.h | 8 +-
drivers/net/dpaa2/dpaa2_flow.c | 114 +++-
drivers/net/dpaa2/mc/dpdmux.c | 43 ++
drivers/net/dpaa2/mc/dpni.c | 48 +-
drivers/net/dpaa2/mc/dprtc.c | 78 ++-
drivers/net/dpaa2/mc/fsl_dpdmux.h | 6 +
drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h | 9 +
drivers/net/dpaa2/mc/fsl_dpkg.h | 6 +-
drivers/net/dpaa2/mc/fsl_dpni.h | 147 ++++-
drivers/net/dpaa2/mc/fsl_dpni_cmd.h | 55 +-
drivers/net/dpaa2/mc/fsl_dprtc.h | 19 +-
drivers/net/dpaa2/mc/fsl_dprtc_cmd.h | 25 +-
drivers/net/dpaa2/meson.build | 1 +
drivers/net/dpaa2/rte_pmd_dpaa2.h | 19 +
drivers/net/dpaa2/version.map | 2 +
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 28 +-
drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h | 8 +-
30 files changed, 1636 insertions(+), 106 deletions(-)
create mode 100644 drivers/net/dpaa2/base/dpaa2_tlu_hash.c
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v3 01/10] bus/fslmc: updated MC FW to 10.28
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
@ 2021-10-06 17:01 ` nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 02/10] net/dpaa2: support Tx flow redirection action nipun.gupta
` (9 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 17:01 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Updating MC firmware support APIs to be latest. It supports
improved DPDMUX (SRIOV equivalent) for traffic split between
dpnis and additional PTP APIs.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/fslmc/mc/dpdmai.c | 4 +-
drivers/bus/fslmc/mc/fsl_dpdmai.h | 21 ++++-
drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h | 15 ++--
drivers/bus/fslmc/mc/fsl_dpmng.h | 4 +-
drivers/bus/fslmc/mc/fsl_dpopr.h | 7 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 2 +-
drivers/net/dpaa2/mc/dpdmux.c | 43 +++++++++
drivers/net/dpaa2/mc/dpni.c | 48 ++++++----
drivers/net/dpaa2/mc/dprtc.c | 78 +++++++++++++++-
drivers/net/dpaa2/mc/fsl_dpdmux.h | 6 ++
drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h | 9 ++
drivers/net/dpaa2/mc/fsl_dpkg.h | 6 +-
drivers/net/dpaa2/mc/fsl_dpni.h | 124 ++++++++++++++++++++++----
drivers/net/dpaa2/mc/fsl_dpni_cmd.h | 55 +++++++++---
drivers/net/dpaa2/mc/fsl_dprtc.h | 19 +++-
drivers/net/dpaa2/mc/fsl_dprtc_cmd.h | 25 +++++-
16 files changed, 401 insertions(+), 65 deletions(-)
diff --git a/drivers/bus/fslmc/mc/dpdmai.c b/drivers/bus/fslmc/mc/dpdmai.c
index dcb9d516a1..9c2f3bf9d5 100644
--- a/drivers/bus/fslmc/mc/dpdmai.c
+++ b/drivers/bus/fslmc/mc/dpdmai.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*/
#include <fsl_mc_sys.h>
@@ -116,6 +116,7 @@ int dpdmai_create(struct fsl_mc_io *mc_io,
cmd_params->num_queues = cfg->num_queues;
cmd_params->priorities[0] = cfg->priorities[0];
cmd_params->priorities[1] = cfg->priorities[1];
+ cmd_params->options = cpu_to_le32(cfg->adv.options);
/* send command to mc*/
err = mc_send_command(mc_io, &cmd);
@@ -299,6 +300,7 @@ int dpdmai_get_attributes(struct fsl_mc_io *mc_io,
attr->id = le32_to_cpu(rsp_params->id);
attr->num_of_priorities = rsp_params->num_of_priorities;
attr->num_of_queues = rsp_params->num_of_queues;
+ attr->options = le32_to_cpu(rsp_params->options);
return 0;
}
diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai.h b/drivers/bus/fslmc/mc/fsl_dpdmai.h
index 19328c00a0..5af8ed48c0 100644
--- a/drivers/bus/fslmc/mc/fsl_dpdmai.h
+++ b/drivers/bus/fslmc/mc/fsl_dpdmai.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*/
#ifndef __FSL_DPDMAI_H
@@ -36,15 +36,32 @@ int dpdmai_close(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token);
+/* DPDMAI options */
+
+/**
+ * Enable individual Congestion Groups usage per each priority queue
+ * If this option is not enabled then only one CG is used for all priority
+ * queues
+ * If this option is enabled then a separate specific CG is used for each
+ * individual priority queue.
+ * In this case the priority queue must be specified via congestion notification
+ * API
+ */
+#define DPDMAI_OPT_CG_PER_PRIORITY 0x00000001
+
/**
* struct dpdmai_cfg - Structure representing DPDMAI configuration
* @priorities: Priorities for the DMA hardware processing; valid priorities are
* configured with values 1-8; the entry following last valid entry
* should be configured with 0
+ * @options: dpdmai options
*/
struct dpdmai_cfg {
uint8_t num_queues;
uint8_t priorities[DPDMAI_PRIO_NUM];
+ struct {
+ uint32_t options;
+ } adv;
};
int dpdmai_create(struct fsl_mc_io *mc_io,
@@ -81,11 +98,13 @@ int dpdmai_reset(struct fsl_mc_io *mc_io,
* struct dpdmai_attr - Structure representing DPDMAI attributes
* @id: DPDMAI object ID
* @num_of_priorities: number of priorities
+ * @options: dpdmai options
*/
struct dpdmai_attr {
int id;
uint8_t num_of_priorities;
uint8_t num_of_queues;
+ uint32_t options;
};
__rte_internal
diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h b/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
index 7e122de4ef..c8f6b990f8 100644
--- a/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_dpdmai_cmd.h
@@ -1,32 +1,33 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2017-2018, 2020-2021 NXP
*/
-
#ifndef _FSL_DPDMAI_CMD_H
#define _FSL_DPDMAI_CMD_H
/* DPDMAI Version */
#define DPDMAI_VER_MAJOR 3
-#define DPDMAI_VER_MINOR 3
+#define DPDMAI_VER_MINOR 4
/* Command versioning */
#define DPDMAI_CMD_BASE_VERSION 1
#define DPDMAI_CMD_VERSION_2 2
+#define DPDMAI_CMD_VERSION_3 3
#define DPDMAI_CMD_ID_OFFSET 4
#define DPDMAI_CMD(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_BASE_VERSION)
#define DPDMAI_CMD_V2(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_VERSION_2)
+#define DPDMAI_CMD_V3(id) ((id << DPDMAI_CMD_ID_OFFSET) | DPDMAI_CMD_VERSION_3)
/* Command IDs */
#define DPDMAI_CMDID_CLOSE DPDMAI_CMD(0x800)
#define DPDMAI_CMDID_OPEN DPDMAI_CMD(0x80E)
-#define DPDMAI_CMDID_CREATE DPDMAI_CMD_V2(0x90E)
+#define DPDMAI_CMDID_CREATE DPDMAI_CMD_V3(0x90E)
#define DPDMAI_CMDID_DESTROY DPDMAI_CMD(0x98E)
#define DPDMAI_CMDID_GET_API_VERSION DPDMAI_CMD(0xa0E)
#define DPDMAI_CMDID_ENABLE DPDMAI_CMD(0x002)
#define DPDMAI_CMDID_DISABLE DPDMAI_CMD(0x003)
-#define DPDMAI_CMDID_GET_ATTR DPDMAI_CMD_V2(0x004)
+#define DPDMAI_CMDID_GET_ATTR DPDMAI_CMD_V3(0x004)
#define DPDMAI_CMDID_RESET DPDMAI_CMD(0x005)
#define DPDMAI_CMDID_IS_ENABLED DPDMAI_CMD(0x006)
@@ -51,6 +52,8 @@ struct dpdmai_cmd_open {
struct dpdmai_cmd_create {
uint8_t num_queues;
uint8_t priorities[2];
+ uint8_t pad;
+ uint32_t options;
};
struct dpdmai_cmd_destroy {
@@ -69,6 +72,8 @@ struct dpdmai_rsp_get_attr {
uint32_t id;
uint8_t num_of_priorities;
uint8_t num_of_queues;
+ uint16_t pad;
+ uint32_t options;
};
#define DPDMAI_DEST_TYPE_SHIFT 0
diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
index 8764ceaed9..7e9bd96429 100644
--- a/drivers/bus/fslmc/mc/fsl_dpmng.h
+++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2021 NXP
*
*/
#ifndef __FSL_DPMNG_H
@@ -20,7 +20,7 @@ struct fsl_mc_io;
* Management Complex firmware version information
*/
#define MC_VER_MAJOR 10
-#define MC_VER_MINOR 18
+#define MC_VER_MINOR 28
/**
* struct mc_version
diff --git a/drivers/bus/fslmc/mc/fsl_dpopr.h b/drivers/bus/fslmc/mc/fsl_dpopr.h
index fd727e011b..74dd32f783 100644
--- a/drivers/bus/fslmc/mc/fsl_dpopr.h
+++ b/drivers/bus/fslmc/mc/fsl_dpopr.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*
*/
#ifndef __FSL_DPOPR_H_
@@ -22,7 +22,10 @@
* Retire an existing Order Point Record option
*/
#define OPR_OPT_RETIRE 0x2
-
+/**
+ * Assign an existing Order Point Record to a queue
+ */
+#define OPR_OPT_ASSIGN 0x4
/**
* struct opr_cfg - Structure representing OPR configuration
* @oprrws: Order point record (OPR) restoration window size (0 to 5)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..560b79151b 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2273,7 +2273,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
ret = dpni_set_opr(dpni, CMD_PRI_LOW, eth_priv->token,
dpaa2_ethq->tc_index, flow_id,
- OPR_OPT_CREATE, &ocfg);
+ OPR_OPT_CREATE, &ocfg, 0);
if (ret) {
DPAA2_PMD_ERR("Error setting opr: ret: %d\n", ret);
return ret;
diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
index 93912ef9d3..edbb01b45b 100644
--- a/drivers/net/dpaa2/mc/dpdmux.c
+++ b/drivers/net/dpaa2/mc/dpdmux.c
@@ -491,6 +491,49 @@ int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
return mc_send_command(mc_io, &cmd);
}
+/**
+ * dpdmux_get_max_frame_length() - Return the maximum frame length for DPDMUX
+ * interface
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPDMUX object
+ * @if_id: Interface id
+ * @max_frame_length: maximum frame length
+ *
+ * When dpdmux object is in VEPA mode this function will ignore if_id parameter
+ * and will return maximum frame length for uplink interface (if_id==0).
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dpdmux_get_max_frame_length(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint16_t if_id,
+ uint16_t *max_frame_length)
+{
+ struct mc_command cmd = { 0 };
+ struct dpdmux_cmd_get_max_frame_len *cmd_params;
+ struct dpdmux_rsp_get_max_frame_len *rsp_params;
+ int err = 0;
+
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_GET_MAX_FRAME_LENGTH,
+ cmd_flags,
+ token);
+ cmd_params = (struct dpdmux_cmd_get_max_frame_len *)cmd.params;
+ cmd_params->if_id = cpu_to_le16(if_id);
+
+ err = mc_send_command(mc_io, &cmd);
+ if (err)
+ return err;
+
+ rsp_params = (struct dpdmux_rsp_get_max_frame_len *)cmd.params;
+ *max_frame_length = le16_to_cpu(rsp_params->max_len);
+
+ /* send command to mc*/
+ return err;
+}
+
/**
* dpdmux_ul_reset_counters() - Function resets the uplink counter
* @mc_io: Pointer to MC portal's I/O object
diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c
index b254931386..60048d6c43 100644
--- a/drivers/net/dpaa2/mc/dpni.c
+++ b/drivers/net/dpaa2/mc/dpni.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
*
*/
#include <fsl_mc_sys.h>
@@ -126,6 +126,8 @@ int dpni_create(struct fsl_mc_io *mc_io,
cmd_params->qos_entries = cfg->qos_entries;
cmd_params->fs_entries = cpu_to_le16(cfg->fs_entries);
cmd_params->num_cgs = cfg->num_cgs;
+ cmd_params->num_opr = cfg->num_opr;
+ cmd_params->dist_key_size = cfg->dist_key_size;
/* send command to mc*/
err = mc_send_command(mc_io, &cmd);
@@ -1829,6 +1831,7 @@ int dpni_add_fs_entry(struct fsl_mc_io *mc_io,
cmd_params->options = cpu_to_le16(action->options);
cmd_params->flow_id = cpu_to_le16(action->flow_id);
cmd_params->flc = cpu_to_le64(action->flc);
+ cmd_params->redir_token = cpu_to_le16(action->redirect_obj_token);
/* send command to mc*/
return mc_send_command(mc_io, &cmd);
@@ -2442,7 +2445,7 @@ int dpni_reset_statistics(struct fsl_mc_io *mc_io,
}
/**
- * dpni_set_taildrop() - Set taildrop per queue or TC
+ * dpni_set_taildrop() - Set taildrop per congestion group
*
* Setting a per-TC taildrop (cg_point = DPNI_CP_GROUP) will reset any current
* congestion notification or early drop (WRED) configuration previously applied
@@ -2451,13 +2454,14 @@ int dpni_reset_statistics(struct fsl_mc_io *mc_io,
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
- * @cg_point: Congestion point, DPNI_CP_QUEUE is only supported in
+ * @cg_point: Congestion group identifier DPNI_CP_QUEUE is only supported in
* combination with DPNI_QUEUE_RX.
* @q_type: Queue type, can be DPNI_QUEUE_RX or DPNI_QUEUE_TX.
* @tc: Traffic class to apply this taildrop to
- * @q_index: Index of the queue if the DPNI supports multiple queues for
+ * @index/cgid: Index of the queue if the DPNI supports multiple queues for
* traffic distribution.
- * Ignored if CONGESTION_POINT is not DPNI_CP_QUEUE.
+ * If CONGESTION_POINT is DPNI_CP_CONGESTION_GROUP then it
+ * represent the cgid of the congestion point
* @taildrop: Taildrop structure
*
* Return: '0' on Success; Error code otherwise.
@@ -2577,7 +2581,8 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
uint8_t options,
- struct opr_cfg *cfg)
+ struct opr_cfg *cfg,
+ uint8_t opr_id)
{
struct dpni_cmd_set_opr *cmd_params;
struct mc_command cmd = { 0 };
@@ -2591,6 +2596,7 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
cmd_params->tc_id = tc;
cmd_params->index = index;
cmd_params->options = options;
+ cmd_params->opr_id = opr_id;
cmd_params->oloe = cfg->oloe;
cmd_params->oeane = cfg->oeane;
cmd_params->olws = cfg->olws;
@@ -2621,7 +2627,9 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
struct opr_cfg *cfg,
- struct opr_qry *qry)
+ struct opr_qry *qry,
+ uint8_t flags,
+ uint8_t opr_id)
{
struct dpni_rsp_get_opr *rsp_params;
struct dpni_cmd_get_opr *cmd_params;
@@ -2635,6 +2643,8 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
cmd_params = (struct dpni_cmd_get_opr *)cmd.params;
cmd_params->index = index;
cmd_params->tc_id = tc;
+ cmd_params->flags = flags;
+ cmd_params->opr_id = opr_id;
/* send command to mc*/
err = mc_send_command(mc_io, &cmd);
@@ -2673,7 +2683,7 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
* If the FS is already enabled with a previous call the classification
* key will be changed but all the table rules are kept. If the
* existing rules do not match the key the results will not be
- * predictable. It is the user responsibility to keep key integrity
+ * predictable. It is the user responsibility to keep keyintegrity.
* If cfg.enable is set to 1 the command will create a flow steering table
* and will classify packets according to this table. The packets
* that miss all the table rules will be classified according to
@@ -2695,7 +2705,7 @@ int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
cmd_params = (struct dpni_cmd_set_rx_fs_dist *)cmd.params;
cmd_params->dist_size = cpu_to_le16(cfg->dist_size);
dpni_set_field(cmd_params->enable, RX_FS_DIST_ENABLE, cfg->enable);
- cmd_params->tc = cfg->tc;
+ cmd_params->tc = cfg->tc;
cmd_params->miss_flow_id = cpu_to_le16(cfg->fs_miss_flow_id);
cmd_params->key_cfg_iova = cpu_to_le64(cfg->key_cfg_iova);
@@ -2710,9 +2720,9 @@ int dpni_set_rx_fs_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
* @token: Token of DPNI object
* @cfg: Distribution configuration
* If cfg.enable is set to 1 the packets will be classified using a hash
- * function based on the key received in cfg.key_cfg_iova parameter
+ * function based on the key received in cfg.key_cfg_iova parameter.
* If cfg.enable is set to 0 the packets will be sent to the queue configured in
- * dpni_set_rx_dist_default_queue() call
+ * dpni_set_rx_dist_default_queue() call
*/
int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
uint16_t token, const struct dpni_rx_dist_cfg *cfg)
@@ -2735,9 +2745,9 @@ int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
}
/**
- * dpni_add_custom_tpid() - Configures a distinct Ethertype value
- * (or TPID value) to indicate VLAN tag in addition to the common
- * TPID values 0x8100 and 0x88A8
+ * dpni_add_custom_tpid() - Configures a distinct Ethertype value (or TPID
+ * value) to indicate VLAN tag in adition to the common TPID values
+ * 0x81000 and 0x88A8
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
@@ -2745,8 +2755,8 @@ int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
*
* Only two custom values are accepted. If the function is called for the third
* time it will return error.
- * To replace an existing value use dpni_remove_custom_tpid() to remove
- * a previous TPID and after that use again the function.
+ * To replace an existing value use dpni_remove_custom_tpid() to remove a
+ * previous TPID and after that use again the function.
*
* Return: '0' on Success; Error code otherwise.
*/
@@ -2769,7 +2779,7 @@ int dpni_add_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
/**
* dpni_remove_custom_tpid() - Removes a distinct Ethertype value added
- * previously with dpni_add_custom_tpid()
+ * previously with dpni_add_custom_tpid()
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
@@ -2798,8 +2808,8 @@ int dpni_remove_custom_tpid(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
}
/**
- * dpni_get_custom_tpid() - Returns custom TPID (vlan tags) values configured
- * to detect 802.1q frames
+ * dpni_get_custom_tpid() - Returns custom TPID (vlan tags) values configured to
+ * detect 802.1q frames
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPNI object
diff --git a/drivers/net/dpaa2/mc/dprtc.c b/drivers/net/dpaa2/mc/dprtc.c
index 42ac89150e..36e62eb0c3 100644
--- a/drivers/net/dpaa2/mc/dprtc.c
+++ b/drivers/net/dpaa2/mc/dprtc.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
- * Copyright 2019 NXP
+ * Copyright 2019-2021 NXP
*/
#include <fsl_mc_sys.h>
#include <fsl_mc_cmd.h>
@@ -521,3 +521,79 @@ int dprtc_get_api_version(struct fsl_mc_io *mc_io,
return 0;
}
+
+/**
+ * dprtc_get_ext_trigger_timestamp - Retrieve the Ext trigger timestamp status
+ * (timestamp + flag for unread timestamp in FIFO).
+ *
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPRTC object
+ * @id: External trigger id.
+ * @status: Returned object's external trigger status
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dprtc_get_ext_trigger_timestamp(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ struct dprtc_ext_trigger_status *status)
+{
+ struct dprtc_rsp_ext_trigger_timestamp *rsp_params;
+ struct dprtc_cmd_ext_trigger_timestamp *cmd_params;
+ struct mc_command cmd = { 0 };
+ int err;
+
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_EXT_TRIGGER_TIMESTAMP,
+ cmd_flags,
+ token);
+
+ cmd_params = (struct dprtc_cmd_ext_trigger_timestamp *)cmd.params;
+ cmd_params->id = id;
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ if (err)
+ return err;
+
+ /* retrieve response parameters */
+ rsp_params = (struct dprtc_rsp_ext_trigger_timestamp *)cmd.params;
+ status->timestamp = le64_to_cpu(rsp_params->timestamp);
+ status->unread_valid_timestamp = rsp_params->unread_valid_timestamp;
+
+ return 0;
+}
+
+/**
+ * dprtc_set_fiper_loopback() - Set the fiper pulse as source of interrupt for
+ * External Trigger stamps
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPRTC object
+ * @id: External trigger id.
+ * @fiper_as_input: Bit used to control interrupt signal source:
+ * 0 = Normal operation, interrupt external source
+ * 1 = Fiper pulse is looped back into Trigger input
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dprtc_set_fiper_loopback(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ uint8_t fiper_as_input)
+{
+ struct dprtc_ext_trigger_cfg *cmd_params;
+ struct mc_command cmd = { 0 };
+
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_FIPER_LOOPBACK,
+ cmd_flags,
+ token);
+
+ cmd_params = (struct dprtc_ext_trigger_cfg *)cmd.params;
+ cmd_params->id = id;
+ cmd_params->fiper_as_input = fiper_as_input;
+
+ return mc_send_command(mc_io, &cmd);
+}
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index f4f9598a29..b01a98eb59 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -196,6 +196,12 @@ int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io,
uint16_t token,
uint16_t max_frame_length);
+int dpdmux_get_max_frame_length(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint16_t if_id,
+ uint16_t *max_frame_length);
+
/**
* enum dpdmux_counter_type - Counter types
* @DPDMUX_CNT_ING_FRAME: Counts ingress frames
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
index 2ab4d75dfb..f8a1b5b1ae 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h
@@ -39,6 +39,7 @@
#define DPDMUX_CMDID_RESET DPDMUX_CMD(0x005)
#define DPDMUX_CMDID_IS_ENABLED DPDMUX_CMD(0x006)
#define DPDMUX_CMDID_SET_MAX_FRAME_LENGTH DPDMUX_CMD(0x0a1)
+#define DPDMUX_CMDID_GET_MAX_FRAME_LENGTH DPDMUX_CMD(0x0a2)
#define DPDMUX_CMDID_UL_RESET_COUNTERS DPDMUX_CMD(0x0a3)
@@ -124,6 +125,14 @@ struct dpdmux_cmd_set_max_frame_length {
uint16_t max_frame_length;
};
+struct dpdmux_cmd_get_max_frame_len {
+ uint16_t if_id;
+};
+
+struct dpdmux_rsp_get_max_frame_len {
+ uint16_t max_len;
+};
+
#define DPDMUX_ACCEPTED_FRAMES_TYPE_SHIFT 0
#define DPDMUX_ACCEPTED_FRAMES_TYPE_SIZE 4
#define DPDMUX_UNACCEPTED_FRAMES_ACTION_SHIFT 4
diff --git a/drivers/net/dpaa2/mc/fsl_dpkg.h b/drivers/net/dpaa2/mc/fsl_dpkg.h
index 02fe8d50e7..70f2339ea5 100644
--- a/drivers/net/dpaa2/mc/fsl_dpkg.h
+++ b/drivers/net/dpaa2/mc/fsl_dpkg.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
* Copyright 2013-2015 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
+ * Copyright 2016-2021 NXP
*
*/
#ifndef __FSL_DPKG_H_
@@ -21,7 +21,7 @@
/**
* Number of extractions per key profile
*/
-#define DPKG_MAX_NUM_OF_EXTRACTS 10
+#define DPKG_MAX_NUM_OF_EXTRACTS 20
/**
* enum dpkg_extract_from_hdr_type - Selecting extraction by header types
@@ -177,7 +177,7 @@ struct dpni_ext_set_rx_tc_dist {
uint8_t num_extracts;
uint8_t pad[7];
/* words 1..25 */
- struct dpni_dist_extract extracts[10];
+ struct dpni_dist_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
};
int dpkg_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index df42746c9a..34c6b20033 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
*
*/
#ifndef __FSL_DPNI_H
@@ -19,6 +19,11 @@ struct fsl_mc_io;
/** General DPNI macros */
+/**
+ * Maximum size of a key
+ */
+#define DPNI_MAX_KEY_SIZE 56
+
/**
* Maximum number of traffic classes
*/
@@ -95,8 +100,18 @@ struct fsl_mc_io;
* Define a custom number of congestion groups
*/
#define DPNI_OPT_CUSTOM_CG 0x000200
-
-
+/**
+ * Define a custom number of order point records
+ */
+#define DPNI_OPT_CUSTOM_OPR 0x000400
+/**
+ * Hash key is shared between all traffic classes
+ */
+#define DPNI_OPT_SHARED_HASH_KEY 0x000800
+/**
+ * Flow steering table is shared between all traffic classes
+ */
+#define DPNI_OPT_SHARED_FS 0x001000
/**
* Software sequence maximum layout size
*/
@@ -183,6 +198,8 @@ struct dpni_cfg {
uint8_t num_rx_tcs;
uint8_t qos_entries;
uint8_t num_cgs;
+ uint16_t num_opr;
+ uint8_t dist_key_size;
};
int dpni_create(struct fsl_mc_io *mc_io,
@@ -366,28 +383,45 @@ int dpni_get_attributes(struct fsl_mc_io *mc_io,
/**
* Extract out of frame header error
*/
-#define DPNI_ERROR_EOFHE 0x00020000
+#define DPNI_ERROR_MS 0x40000000
+#define DPNI_ERROR_PTP 0x08000000
+/* Ethernet multicast frame */
+#define DPNI_ERROR_MC 0x04000000
+/* Ethernet broadcast frame */
+#define DPNI_ERROR_BC 0x02000000
+#define DPNI_ERROR_KSE 0x00040000
+#define DPNI_ERROR_EOFHE 0x00020000
+#define DPNI_ERROR_MNLE 0x00010000
+#define DPNI_ERROR_TIDE 0x00008000
+#define DPNI_ERROR_PIEE 0x00004000
/**
* Frame length error
*/
-#define DPNI_ERROR_FLE 0x00002000
+#define DPNI_ERROR_FLE 0x00002000
/**
* Frame physical error
*/
-#define DPNI_ERROR_FPE 0x00001000
+#define DPNI_ERROR_FPE 0x00001000
+#define DPNI_ERROR_PTE 0x00000080
+#define DPNI_ERROR_ISP 0x00000040
/**
* Parsing header error
*/
-#define DPNI_ERROR_PHE 0x00000020
+#define DPNI_ERROR_PHE 0x00000020
+
+#define DPNI_ERROR_BLE 0x00000010
/**
* Parser L3 checksum error
*/
-#define DPNI_ERROR_L3CE 0x00000004
+#define DPNI_ERROR_L3CV 0x00000008
+
+#define DPNI_ERROR_L3CE 0x00000004
/**
- * Parser L3 checksum error
+ * Parser L4 checksum error
*/
-#define DPNI_ERROR_L4CE 0x00000001
+#define DPNI_ERROR_L4CV 0x00000002
+#define DPNI_ERROR_L4CE 0x00000001
/**
* enum dpni_error_action - Defines DPNI behavior for errors
* @DPNI_ERROR_ACTION_DISCARD: Discard the frame
@@ -455,6 +489,10 @@ int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
* Select to modify the sw-opaque value setting
*/
#define DPNI_BUF_LAYOUT_OPT_SW_OPAQUE 0x00000080
+/**
+ * Select to disable Scatter Gather and use single buffer
+ */
+#define DPNI_BUF_LAYOUT_OPT_NO_SG 0x00000100
/**
* struct dpni_buffer_layout - Structure representing DPNI buffer layout
@@ -733,7 +771,7 @@ int dpni_get_link_state(struct fsl_mc_io *mc_io,
/**
* struct dpni_tx_shaping - Structure representing DPNI tx shaping configuration
- * @rate_limit: Rate in Mbps
+ * @rate_limit: Rate in Mbits/s
* @max_burst_size: Burst size in bytes (up to 64KB)
*/
struct dpni_tx_shaping_cfg {
@@ -798,6 +836,11 @@ int dpni_get_primary_mac_addr(struct fsl_mc_io *mc_io,
uint16_t token,
uint8_t mac_addr[6]);
+/**
+ * Set mac addr queue action
+ */
+#define DPNI_MAC_SET_QUEUE_ACTION 1
+
int dpni_add_mac_addr(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
@@ -1464,6 +1507,7 @@ int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
struct dpni_fs_action_cfg {
uint64_t flc;
uint16_t flow_id;
+ uint16_t redirect_obj_token;
uint16_t options;
};
@@ -1595,7 +1639,8 @@ int dpni_set_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
uint8_t options,
- struct opr_cfg *cfg);
+ struct opr_cfg *cfg,
+ uint8_t opr_id);
int dpni_get_opr(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
@@ -1603,7 +1648,9 @@ int dpni_get_opr(struct fsl_mc_io *mc_io,
uint8_t tc,
uint8_t index,
struct opr_cfg *cfg,
- struct opr_qry *qry);
+ struct opr_qry *qry,
+ uint8_t flags,
+ uint8_t opr_id);
/**
* When used for queue_idx in function dpni_set_rx_dist_default_queue will
@@ -1779,14 +1826,57 @@ int dpni_get_sw_sequence_layout(struct fsl_mc_io *mc_io,
/**
* dpni_extract_sw_sequence_layout() - extract the software sequence layout
- * @layout: software sequence layout
- * @sw_sequence_layout_buf: Zeroed 264 bytes of memory before mapping it
- * to DMA
+ * @layout: software sequence layout
+ * @sw_sequence_layout_buf:Zeroed 264 bytes of memory before mapping it to DMA
*
* This function has to be called after dpni_get_sw_sequence_layout
- *
*/
void dpni_extract_sw_sequence_layout(struct dpni_sw_sequence_layout *layout,
const uint8_t *sw_sequence_layout_buf);
+/**
+ * struct dpni_ptp_cfg - configure single step PTP (IEEE 1588)
+ * @en: enable single step PTP. When enabled the PTPv1 functionality will
+ * not work. If the field is zero, offset and ch_update parameters
+ * will be ignored
+ * @offset: start offset from the beginning of the frame where timestamp
+ * field is found. The offset must respect all MAC headers, VLAN
+ * tags and other protocol headers
+ * @ch_update: when set UDP checksum will be updated inside packet
+ * @peer_delay: For peer-to-peer transparent clocks add this value to the
+ * correction field in addition to the transient time update. The
+ * value expresses nanoseconds.
+ */
+struct dpni_single_step_cfg {
+ uint8_t en;
+ uint8_t ch_update;
+ uint16_t offset;
+ uint32_t peer_delay;
+};
+
+int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, struct dpni_single_step_cfg *ptp_cfg);
+
+int dpni_get_single_step_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, struct dpni_single_step_cfg *ptp_cfg);
+
+/**
+ * loopback_en field is valid when calling function dpni_set_port_cfg
+ */
+#define DPNI_PORT_CFG_LOOPBACK 0x01
+
+/**
+ * struct dpni_port_cfg - custom configuration for dpni physical port
+ * @loopback_en: port loopback enabled
+ */
+struct dpni_port_cfg {
+ int loopback_en;
+};
+
+int dpni_set_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, uint32_t flags, struct dpni_port_cfg *port_cfg);
+
+int dpni_get_port_cfg(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
+ uint16_t token, struct dpni_port_cfg *port_cfg);
+
#endif /* __FSL_DPNI_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
index c40090b8fe..6fbd93bb38 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
*
*/
#ifndef _FSL_DPNI_CMD_H
@@ -9,21 +9,25 @@
/* DPNI Version */
#define DPNI_VER_MAJOR 7
-#define DPNI_VER_MINOR 13
+#define DPNI_VER_MINOR 17
#define DPNI_CMD_BASE_VERSION 1
#define DPNI_CMD_VERSION_2 2
#define DPNI_CMD_VERSION_3 3
+#define DPNI_CMD_VERSION_4 4
+#define DPNI_CMD_VERSION_5 5
#define DPNI_CMD_ID_OFFSET 4
#define DPNI_CMD(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_BASE_VERSION)
#define DPNI_CMD_V2(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_2)
#define DPNI_CMD_V3(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_3)
+#define DPNI_CMD_V4(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_4)
+#define DPNI_CMD_V5(id) (((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_VERSION_5)
/* Command IDs */
#define DPNI_CMDID_OPEN DPNI_CMD(0x801)
#define DPNI_CMDID_CLOSE DPNI_CMD(0x800)
-#define DPNI_CMDID_CREATE DPNI_CMD_V3(0x901)
+#define DPNI_CMDID_CREATE DPNI_CMD_V5(0x901)
#define DPNI_CMDID_DESTROY DPNI_CMD(0x981)
#define DPNI_CMDID_GET_API_VERSION DPNI_CMD(0xa01)
@@ -67,7 +71,7 @@
#define DPNI_CMDID_REMOVE_VLAN_ID DPNI_CMD(0x232)
#define DPNI_CMDID_CLR_VLAN_FILTERS DPNI_CMD(0x233)
-#define DPNI_CMDID_SET_RX_TC_DIST DPNI_CMD_V3(0x235)
+#define DPNI_CMDID_SET_RX_TC_DIST DPNI_CMD_V4(0x235)
#define DPNI_CMDID_SET_RX_TC_POLICING DPNI_CMD(0x23E)
@@ -75,7 +79,7 @@
#define DPNI_CMDID_ADD_QOS_ENT DPNI_CMD_V2(0x241)
#define DPNI_CMDID_REMOVE_QOS_ENT DPNI_CMD(0x242)
#define DPNI_CMDID_CLR_QOS_TBL DPNI_CMD(0x243)
-#define DPNI_CMDID_ADD_FS_ENT DPNI_CMD(0x244)
+#define DPNI_CMDID_ADD_FS_ENT DPNI_CMD_V2(0x244)
#define DPNI_CMDID_REMOVE_FS_ENT DPNI_CMD(0x245)
#define DPNI_CMDID_CLR_FS_ENT DPNI_CMD(0x246)
@@ -140,7 +144,9 @@ struct dpni_cmd_create {
uint16_t fs_entries;
uint8_t num_rx_tcs;
uint8_t pad4;
- uint8_t num_cgs;
+ uint8_t num_cgs;
+ uint16_t num_opr;
+ uint8_t dist_key_size;
};
struct dpni_cmd_destroy {
@@ -411,8 +417,6 @@ struct dpni_rsp_get_port_mac_addr {
uint8_t mac_addr[6];
};
-#define DPNI_MAC_SET_QUEUE_ACTION 1
-
struct dpni_cmd_add_mac_addr {
uint8_t flags;
uint8_t pad;
@@ -594,6 +598,7 @@ struct dpni_cmd_add_fs_entry {
uint64_t key_iova;
uint64_t mask_iova;
uint64_t flc;
+ uint16_t redir_token;
};
struct dpni_cmd_remove_fs_entry {
@@ -779,7 +784,7 @@ struct dpni_rsp_get_congestion_notification {
};
struct dpni_cmd_set_opr {
- uint8_t pad0;
+ uint8_t opr_id;
uint8_t tc_id;
uint8_t index;
uint8_t options;
@@ -792,9 +797,10 @@ struct dpni_cmd_set_opr {
};
struct dpni_cmd_get_opr {
- uint8_t pad;
+ uint8_t flags;
uint8_t tc_id;
uint8_t index;
+ uint8_t opr_id;
};
#define DPNI_RIP_SHIFT 0
@@ -911,5 +917,34 @@ struct dpni_sw_sequence_layout_entry {
uint16_t pad;
};
+#define DPNI_PTP_ENABLE_SHIFT 0
+#define DPNI_PTP_ENABLE_SIZE 1
+#define DPNI_PTP_CH_UPDATE_SHIFT 1
+#define DPNI_PTP_CH_UPDATE_SIZE 1
+struct dpni_cmd_single_step_cfg {
+ uint16_t flags;
+ uint16_t offset;
+ uint32_t peer_delay;
+};
+
+struct dpni_rsp_single_step_cfg {
+ uint16_t flags;
+ uint16_t offset;
+ uint32_t peer_delay;
+};
+
+#define DPNI_PORT_LOOPBACK_EN_SHIFT 0
+#define DPNI_PORT_LOOPBACK_EN_SIZE 1
+
+struct dpni_cmd_set_port_cfg {
+ uint32_t flags;
+ uint32_t bit_params;
+};
+
+struct dpni_rsp_get_port_cfg {
+ uint32_t flags;
+ uint32_t bit_params;
+};
+
#pragma pack(pop)
#endif /* _FSL_DPNI_CMD_H */
diff --git a/drivers/net/dpaa2/mc/fsl_dprtc.h b/drivers/net/dpaa2/mc/fsl_dprtc.h
index 49edb5a050..84ab158444 100644
--- a/drivers/net/dpaa2/mc/fsl_dprtc.h
+++ b/drivers/net/dpaa2/mc/fsl_dprtc.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
- * Copyright 2019 NXP
+ * Copyright 2019-2021 NXP
*/
#ifndef __FSL_DPRTC_H
#define __FSL_DPRTC_H
@@ -86,6 +86,23 @@ int dprtc_set_alarm(struct fsl_mc_io *mc_io,
uint16_t token,
uint64_t time);
+struct dprtc_ext_trigger_status {
+ uint64_t timestamp;
+ uint8_t unread_valid_timestamp;
+};
+
+int dprtc_get_ext_trigger_timestamp(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ struct dprtc_ext_trigger_status *status);
+
+int dprtc_set_fiper_loopback(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t token,
+ uint8_t id,
+ uint8_t fiper_as_input);
+
/**
* struct dprtc_attr - Structure representing DPRTC attributes
* @id: DPRTC object ID
diff --git a/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h b/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
index eca12ff5ee..61aaa4daab 100644
--- a/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dprtc_cmd.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
- * Copyright 2019 NXP
+ * Copyright 2019-2021 NXP
*/
#include <fsl_mc_sys.h>
#ifndef _FSL_DPRTC_CMD_H
@@ -7,13 +7,15 @@
/* DPRTC Version */
#define DPRTC_VER_MAJOR 2
-#define DPRTC_VER_MINOR 1
+#define DPRTC_VER_MINOR 3
/* Command versioning */
#define DPRTC_CMD_BASE_VERSION 1
+#define DPRTC_CMD_VERSION_2 2
#define DPRTC_CMD_ID_OFFSET 4
#define DPRTC_CMD(id) (((id) << DPRTC_CMD_ID_OFFSET) | DPRTC_CMD_BASE_VERSION)
+#define DPRTC_CMD_V2(id) (((id) << DPRTC_CMD_ID_OFFSET) | DPRTC_CMD_VERSION_2)
/* Command IDs */
#define DPRTC_CMDID_CLOSE DPRTC_CMD(0x800)
@@ -39,6 +41,7 @@
#define DPRTC_CMDID_SET_EXT_TRIGGER DPRTC_CMD(0x1d8)
#define DPRTC_CMDID_CLEAR_EXT_TRIGGER DPRTC_CMD(0x1d9)
#define DPRTC_CMDID_GET_EXT_TRIGGER_TIMESTAMP DPRTC_CMD(0x1dA)
+#define DPRTC_CMDID_SET_FIPER_LOOPBACK DPRTC_CMD(0x1dB)
/* Macros for accessing command fields smaller than 1byte */
#define DPRTC_MASK(field) \
@@ -87,5 +90,23 @@ struct dprtc_rsp_get_api_version {
uint16_t major;
uint16_t minor;
};
+
+struct dprtc_cmd_ext_trigger_timestamp {
+ uint32_t pad;
+ uint8_t id;
+};
+
+struct dprtc_rsp_ext_trigger_timestamp {
+ uint8_t unread_valid_timestamp;
+ uint8_t pad1;
+ uint16_t pad2;
+ uint32_t pad3;
+ uint64_t timestamp;
+};
+
+struct dprtc_ext_trigger_cfg {
+ uint8_t id;
+ uint8_t fiper_as_input;
+};
#pragma pack(pop)
#endif /* _FSL_DPRTC_CMD_H */
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v3 02/10] net/dpaa2: support Tx flow redirection action
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 01/10] bus/fslmc: updated MC FW to 10.28 nipun.gupta
@ 2021-10-06 17:01 ` nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 03/10] bus/fslmc: add qbman debug APIs support nipun.gupta
` (8 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 17:01 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
TX redirection support by flow action RTE_FLOW_ACTION_TYPE_PHY_PORT
and RTE_FLOW_ACTION_TYPE_PORT_ID
This action is executed by HW to forward packets between ports.
If the ingress packets match the rule, the packets are switched
without software involved and perf is improved as well.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 5 ++
drivers/net/dpaa2/dpaa2_ethdev.h | 1 +
drivers/net/dpaa2/dpaa2_flow.c | 114 +++++++++++++++++++++++++++----
drivers/net/dpaa2/mc/fsl_dpni.h | 23 +++++++
4 files changed, 130 insertions(+), 13 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 560b79151b..9cf55c0f0b 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2822,6 +2822,11 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
return ret;
}
+int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev)
+{
+ return dev->device->driver == &rte_dpaa2_pmd.driver;
+}
+
static int
rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
struct rte_dpaa2_device *dpaa2_dev)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index b9c729f6cd..3f34d7ecff 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -240,6 +240,7 @@ uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci);
void dpaa2_flow_clean(struct rte_eth_dev *dev);
uint16_t dpaa2_dev_tx_conf(void *queue) __rte_unused;
+int dpaa2_dev_is_dpaa2(struct rte_eth_dev *dev);
int dpaa2_timesync_enable(struct rte_eth_dev *dev);
int dpaa2_timesync_disable(struct rte_eth_dev *dev);
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index bfe17c350a..84fe37a7c0 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
*/
#include <sys/queue.h>
@@ -30,10 +30,10 @@
int mc_l4_port_identification;
static char *dpaa2_flow_control_log;
-static int dpaa2_flow_miss_flow_id =
+static uint16_t dpaa2_flow_miss_flow_id =
DPNI_FS_MISS_DROP;
-#define FIXED_ENTRY_SIZE 54
+#define FIXED_ENTRY_SIZE DPNI_MAX_KEY_SIZE
enum flow_rule_ipaddr_type {
FLOW_NONE_IPADDR,
@@ -83,9 +83,18 @@ static const
enum rte_flow_action_type dpaa2_supported_action_type[] = {
RTE_FLOW_ACTION_TYPE_END,
RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_PHY_PORT,
+ RTE_FLOW_ACTION_TYPE_PORT_ID,
RTE_FLOW_ACTION_TYPE_RSS
};
+static const
+enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
+ RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_PHY_PORT,
+ RTE_FLOW_ACTION_TYPE_PORT_ID
+};
+
/* Max of enum rte_flow_item_type + 1, for both IPv4 and IPv6*/
#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1)
@@ -2937,6 +2946,19 @@ dpaa2_configure_flow_raw(struct rte_flow *flow,
return 0;
}
+static inline int
+dpaa2_fs_action_supported(enum rte_flow_action_type action)
+{
+ int i;
+
+ for (i = 0; i < (int)(sizeof(dpaa2_supported_fs_action_type) /
+ sizeof(enum rte_flow_action_type)); i++) {
+ if (action == dpaa2_supported_fs_action_type[i])
+ return 1;
+ }
+
+ return 0;
+}
/* The existing QoS/FS entry with IP address(es)
* needs update after
* new extract(s) are inserted before IP
@@ -3115,7 +3137,7 @@ dpaa2_flow_entry_update(
}
}
- if (curr->action != RTE_FLOW_ACTION_TYPE_QUEUE) {
+ if (!dpaa2_fs_action_supported(curr->action)) {
curr = LIST_NEXT(curr, next);
continue;
}
@@ -3253,6 +3275,43 @@ dpaa2_flow_verify_attr(
return 0;
}
+static inline struct rte_eth_dev *
+dpaa2_flow_redirect_dev(struct dpaa2_dev_priv *priv,
+ const struct rte_flow_action *action)
+{
+ const struct rte_flow_action_phy_port *phy_port;
+ const struct rte_flow_action_port_id *port_id;
+ int idx = -1;
+ struct rte_eth_dev *dest_dev;
+
+ if (action->type == RTE_FLOW_ACTION_TYPE_PHY_PORT) {
+ phy_port = (const struct rte_flow_action_phy_port *)
+ action->conf;
+ if (!phy_port->original)
+ idx = phy_port->index;
+ } else if (action->type == RTE_FLOW_ACTION_TYPE_PORT_ID) {
+ port_id = (const struct rte_flow_action_port_id *)
+ action->conf;
+ if (!port_id->original)
+ idx = port_id->id;
+ } else {
+ return NULL;
+ }
+
+ if (idx >= 0) {
+ if (!rte_eth_dev_is_valid_port(idx))
+ return NULL;
+ dest_dev = &rte_eth_devices[idx];
+ } else {
+ dest_dev = priv->eth_dev;
+ }
+
+ if (!dpaa2_dev_is_dpaa2(dest_dev))
+ return NULL;
+
+ return dest_dev;
+}
+
static inline int
dpaa2_flow_verify_action(
struct dpaa2_dev_priv *priv,
@@ -3278,6 +3337,13 @@ dpaa2_flow_verify_action(
return -1;
}
break;
+ case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
+ if (!dpaa2_flow_redirect_dev(priv, &actions[j])) {
+ DPAA2_PMD_ERR("Invalid port id of action");
+ return -ENOTSUP;
+ }
+ break;
case RTE_FLOW_ACTION_TYPE_RSS:
rss_conf = (const struct rte_flow_action_rss *)
(actions[j].conf);
@@ -3330,11 +3396,13 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
struct dpni_qos_tbl_cfg qos_cfg;
struct dpni_fs_action_cfg action;
struct dpaa2_dev_priv *priv = dev->data->dev_private;
- struct dpaa2_queue *rxq;
+ struct dpaa2_queue *dest_q;
struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw;
size_t param;
struct rte_flow *curr = LIST_FIRST(&priv->flows);
uint16_t qos_index;
+ struct rte_eth_dev *dest_dev;
+ struct dpaa2_dev_priv *dest_priv;
ret = dpaa2_flow_verify_attr(priv, attr);
if (ret)
@@ -3446,12 +3514,31 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
while (!end_of_list) {
switch (actions[j].type) {
case RTE_FLOW_ACTION_TYPE_QUEUE:
- dest_queue =
- (const struct rte_flow_action_queue *)(actions[j].conf);
- rxq = priv->rx_vq[dest_queue->index];
- flow->action = RTE_FLOW_ACTION_TYPE_QUEUE;
+ case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
- action.flow_id = rxq->flow_id;
+ flow->action = actions[j].type;
+
+ if (actions[j].type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+ dest_queue = (const struct rte_flow_action_queue *)
+ (actions[j].conf);
+ dest_q = priv->rx_vq[dest_queue->index];
+ action.flow_id = dest_q->flow_id;
+ } else {
+ dest_dev = dpaa2_flow_redirect_dev(priv,
+ &actions[j]);
+ if (!dest_dev) {
+ DPAA2_PMD_ERR("Invalid destination device to redirect!");
+ return -1;
+ }
+
+ dest_priv = dest_dev->data->dev_private;
+ dest_q = dest_priv->tx_vq[0];
+ action.options =
+ DPNI_FS_OPT_REDIRECT_TO_DPNI_TX;
+ action.redirect_obj_token = dest_priv->token;
+ action.flow_id = dest_q->flow_id;
+ }
/* Configure FS table first*/
if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
@@ -3481,8 +3568,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
return -1;
}
tc_cfg.enable = true;
- tc_cfg.fs_miss_flow_id =
- dpaa2_flow_miss_flow_id;
+ tc_cfg.fs_miss_flow_id = dpaa2_flow_miss_flow_id;
ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
priv->token, &tc_cfg);
if (ret < 0) {
@@ -3970,7 +4056,7 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
ret = dpaa2_generic_flow_set(flow, dev, attr, pattern,
actions, error);
if (ret < 0) {
- if (error->type > RTE_FLOW_ERROR_TYPE_ACTION)
+ if (error && error->type > RTE_FLOW_ERROR_TYPE_ACTION)
rte_flow_error_set(error, EPERM,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
attr, "unknown");
@@ -4002,6 +4088,8 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
switch (flow->action) {
case RTE_FLOW_ACTION_TYPE_QUEUE:
+ case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
if (priv->num_rx_tc > 1) {
/* Remove entry from QoS table first */
ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index 34c6b20033..469ab9b3d4 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1496,12 +1496,35 @@ int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
*/
#define DPNI_FS_OPT_SET_STASH_CONTROL 0x4
+/**
+ * Redirect matching traffic to Rx part of another dpni object. The frame
+ * will be classified according to new qos and flow steering rules from
+ * target dpni object.
+ */
+#define DPNI_FS_OPT_REDIRECT_TO_DPNI_RX 0x08
+
+/**
+ * Redirect matching traffic into Tx queue of another dpni object. The
+ * frame will be transmitted directly
+ */
+#define DPNI_FS_OPT_REDIRECT_TO_DPNI_TX 0x10
+
/**
* struct dpni_fs_action_cfg - Action configuration for table look-up
* @flc: FLC value for traffic matching this rule. Please check the Frame
* Descriptor section in the hardware documentation for more information.
* @flow_id: Identifies the Rx queue used for matching traffic. Supported
* values are in range 0 to num_queue-1.
+ * @redirect_obj_token: token that identifies the object where frame is
+ * redirected when this rule is hit. This paraneter is used only when one of the
+ * flags DPNI_FS_OPT_REDIRECT_TO_DPNI_RX or DPNI_FS_OPT_REDIRECT_TO_DPNI_TX is
+ * set.
+ * The token is obtained using dpni_open() API call. The object must stay
+ * open during the operation to ensure the fact that application has access
+ * on it. If the object is destroyed of closed next actions will take place:
+ * - if DPNI_FS_OPT_DISCARD is set the frame will be discarded by current dpni
+ * - if DPNI_FS_OPT_DISCARD is cleared the frame will be enqueued in queue with
+ * index provided in flow_id parameter.
* @options: Any combination of DPNI_FS_OPT_ values.
*/
struct dpni_fs_action_cfg {
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v3 03/10] bus/fslmc: add qbman debug APIs support
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 01/10] bus/fslmc: updated MC FW to 10.28 nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 02/10] net/dpaa2: support Tx flow redirection action nipun.gupta
@ 2021-10-06 17:01 ` nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 04/10] net/dpaa2: add debug print for MTU set for jumbo nipun.gupta
` (7 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 17:01 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena,
Youri Querry, Roy Pledge, Nipun Gupta
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Add support for debugging qbman FQs
Signed-off-by: Youri Querry <youri.querry_1@nxp.com>
Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
.../bus/fslmc/qbman/include/fsl_qbman_debug.h | 200 +++++-
drivers/bus/fslmc/qbman/qbman_debug.c | 621 ++++++++++++++++++
drivers/bus/fslmc/qbman/qbman_portal.c | 6 +
3 files changed, 824 insertions(+), 3 deletions(-)
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 54096e8774..18b6a3c2e4 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2020 NXP
+ * Copyright 2018-2020 NXP
*/
#ifndef _FSL_QBMAN_DEBUG_H
#define _FSL_QBMAN_DEBUG_H
@@ -8,6 +8,112 @@
#include <rte_compat.h>
struct qbman_swp;
+/* Buffer pool query commands */
+struct qbman_bp_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[4];
+ uint8_t bdi;
+ uint8_t state;
+ uint32_t fill;
+ uint32_t hdptr;
+ uint16_t swdet;
+ uint16_t swdxt;
+ uint16_t hwdet;
+ uint16_t hwdxt;
+ uint16_t swset;
+ uint16_t swsxt;
+ uint16_t vbpid;
+ uint16_t icid;
+ uint64_t bpscn_addr;
+ uint64_t bpscn_ctx;
+ uint16_t hw_targ;
+ uint8_t dbe;
+ uint8_t reserved2;
+ uint8_t sdcnt;
+ uint8_t hdcnt;
+ uint8_t sscnt;
+ uint8_t reserved3[9];
+};
+
+int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
+ struct qbman_bp_query_rslt *r);
+int qbman_bp_get_bdi(struct qbman_bp_query_rslt *r);
+int qbman_bp_get_va(struct qbman_bp_query_rslt *r);
+int qbman_bp_get_wae(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swdet(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swdxt(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_hwdet(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_hwdxt(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swset(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_swsxt(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_vbpid(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_icid(struct qbman_bp_query_rslt *r);
+int qbman_bp_get_pl(struct qbman_bp_query_rslt *r);
+uint64_t qbman_bp_get_bpscn_addr(struct qbman_bp_query_rslt *r);
+uint64_t qbman_bp_get_bpscn_ctx(struct qbman_bp_query_rslt *r);
+uint16_t qbman_bp_get_hw_targ(struct qbman_bp_query_rslt *r);
+int qbman_bp_has_free_bufs(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_num_free_bufs(struct qbman_bp_query_rslt *r);
+int qbman_bp_is_depleted(struct qbman_bp_query_rslt *r);
+int qbman_bp_is_surplus(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_hdptr(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_sdcnt(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_hdcnt(struct qbman_bp_query_rslt *r);
+uint32_t qbman_bp_get_sscnt(struct qbman_bp_query_rslt *r);
+
+/* FQ query function for programmable fields */
+struct qbman_fq_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[8];
+ uint16_t cgid;
+ uint16_t dest_wq;
+ uint8_t reserved2;
+ uint8_t fq_ctrl;
+ uint16_t ics_cred;
+ uint16_t td_thresh;
+ uint16_t oal_oac;
+ uint8_t reserved3;
+ uint8_t mctl;
+ uint64_t fqd_ctx;
+ uint16_t icid;
+ uint16_t reserved4;
+ uint32_t vfqid;
+ uint32_t fqid_er;
+ uint16_t opridsz;
+ uint8_t reserved5[18];
+};
+
+int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
+ struct qbman_fq_query_rslt *r);
+uint8_t qbman_fq_attr_get_fqctrl(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_cgrid(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_destwq(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_tdthresh(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_oa_ics(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_oa_cgr(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_oal(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_bdi(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_ff(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_va(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_ps(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_pps(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_icid(struct qbman_fq_query_rslt *r);
+int qbman_fq_attr_get_pl(struct qbman_fq_query_rslt *r);
+uint32_t qbman_fq_attr_get_vfqid(struct qbman_fq_query_rslt *r);
+uint32_t qbman_fq_attr_get_erfqid(struct qbman_fq_query_rslt *r);
+uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r);
+
+/* FQ query command for non-programmable fields*/
+enum qbman_fq_schedstate_e {
+ qbman_fq_schedstate_oos = 0,
+ qbman_fq_schedstate_retired,
+ qbman_fq_schedstate_tentatively_scheduled,
+ qbman_fq_schedstate_truly_scheduled,
+ qbman_fq_schedstate_parked,
+ qbman_fq_schedstate_held_active,
+};
struct qbman_fq_query_np_rslt {
uint8_t verb;
@@ -32,10 +138,98 @@ uint8_t verb;
__rte_internal
int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
struct qbman_fq_query_np_rslt *r);
-
+uint8_t qbman_fq_state_schedstate(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_force_eligible(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_xoff(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_retirement_pending(const struct qbman_fq_query_np_rslt *r);
+int qbman_fq_state_overflow_error(const struct qbman_fq_query_np_rslt *r);
__rte_internal
uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r);
-
uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r);
+/* CGR query */
+struct qbman_cgr_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[6];
+ uint8_t ctl1;
+ uint8_t reserved1;
+ uint16_t oal;
+ uint16_t reserved2;
+ uint8_t mode;
+ uint8_t ctl2;
+ uint8_t iwc;
+ uint8_t tdc;
+ uint16_t cs_thres;
+ uint16_t cs_thres_x;
+ uint16_t td_thres;
+ uint16_t cscn_tdcp;
+ uint16_t cscn_wqid;
+ uint16_t cscn_vcgid;
+ uint16_t cg_icid;
+ uint64_t cg_wr_addr;
+ uint64_t cscn_ctx;
+ uint64_t i_cnt;
+ uint64_t a_cnt;
+};
+
+int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_wq_en_enter(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_wq_en_exit(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_wq_icd(struct qbman_cgr_query_rslt *r);
+uint8_t qbman_cgr_get_mode(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_rej_cnt_mode(struct qbman_cgr_query_rslt *r);
+int qbman_cgr_get_cscn_bdi(struct qbman_cgr_query_rslt *r);
+uint16_t qbman_cgr_attr_get_cs_thres(struct qbman_cgr_query_rslt *r);
+uint16_t qbman_cgr_attr_get_cs_thres_x(struct qbman_cgr_query_rslt *r);
+uint16_t qbman_cgr_attr_get_td_thres(struct qbman_cgr_query_rslt *r);
+
+/* WRED query */
+struct qbman_wred_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[6];
+ uint8_t edp[7];
+ uint8_t reserved1;
+ uint32_t wred_parm_dp[7];
+ uint8_t reserved2[20];
+};
+
+int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_wred_query_rslt *r);
+int qbman_cgr_attr_wred_get_edp(struct qbman_wred_query_rslt *r, uint32_t idx);
+void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
+ uint64_t *maxth, uint8_t *maxp);
+uint32_t qbman_cgr_attr_wred_get_parm_dp(struct qbman_wred_query_rslt *r,
+ uint32_t idx);
+
+/* CGR/CCGR/CQ statistics query */
+int qbman_cgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt);
+int qbman_ccgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt);
+int qbman_cq_dequeue_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt);
+
+/* Query Work Queue Channel */
+struct qbman_wqchan_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint16_t chid;
+ uint8_t reserved;
+ uint8_t ctrl;
+ uint16_t cdan_wqid;
+ uint64_t cdan_ctx;
+ uint32_t reserved2[4];
+ uint32_t wq_len[8];
+};
+
+int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
+ struct qbman_wqchan_query_rslt *r);
+uint32_t qbman_wqchan_attr_get_wqlen(struct qbman_wqchan_query_rslt *r, int wq);
+uint64_t qbman_wqchan_attr_get_cdan_ctx(struct qbman_wqchan_query_rslt *r);
+uint16_t qbman_wqchan_attr_get_cdan_wqid(struct qbman_wqchan_query_rslt *r);
+uint8_t qbman_wqchan_attr_get_ctrl(struct qbman_wqchan_query_rslt *r);
+uint16_t qbman_wqchan_attr_get_chanid(struct qbman_wqchan_query_rslt *r);
#endif /* !_FSL_QBMAN_DEBUG_H */
diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 34374ae4b6..eea06988ff 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2015 Freescale Semiconductor, Inc.
+ * Copyright 2018-2020 NXP
*/
#include "compat.h"
@@ -16,6 +17,179 @@
#define QBMAN_CGR_STAT_QUERY 0x55
#define QBMAN_CGR_STAT_QUERY_CLR 0x56
+struct qbman_bp_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t bpid;
+ uint8_t reserved2[60];
+};
+
+#define QB_BP_STATE_SHIFT 24
+#define QB_BP_VA_SHIFT 1
+#define QB_BP_VA_MASK 0x2
+#define QB_BP_WAE_SHIFT 2
+#define QB_BP_WAE_MASK 0x4
+#define QB_BP_PL_SHIFT 15
+#define QB_BP_PL_MASK 0x8000
+#define QB_BP_ICID_MASK 0x7FFF
+
+int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
+ struct qbman_bp_query_rslt *r)
+{
+ struct qbman_bp_query_desc *p;
+
+ /* Start the management command */
+ p = (struct qbman_bp_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ /* Encode the caller-provided attributes */
+ p->bpid = bpid;
+
+ /* Complete the management command */
+ *r = *(struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_BP_QUERY);
+ if (!r) {
+ pr_err("qbman: Query BPID %d failed, no response\n",
+ bpid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_BP_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query of BPID 0x%x failed, code=0x%02x\n", bpid,
+ r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int qbman_bp_get_bdi(struct qbman_bp_query_rslt *r)
+{
+ return r->bdi & 1;
+}
+
+int qbman_bp_get_va(struct qbman_bp_query_rslt *r)
+{
+ return (r->bdi & QB_BP_VA_MASK) >> QB_BP_VA_MASK;
+}
+
+int qbman_bp_get_wae(struct qbman_bp_query_rslt *r)
+{
+ return (r->bdi & QB_BP_WAE_MASK) >> QB_BP_WAE_SHIFT;
+}
+
+static uint16_t qbman_bp_thresh_to_value(uint16_t val)
+{
+ return (val & 0xff) << ((val & 0xf00) >> 8);
+}
+
+uint16_t qbman_bp_get_swdet(struct qbman_bp_query_rslt *r)
+{
+
+ return qbman_bp_thresh_to_value(r->swdet);
+}
+
+uint16_t qbman_bp_get_swdxt(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->swdxt);
+}
+
+uint16_t qbman_bp_get_hwdet(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->hwdet);
+}
+
+uint16_t qbman_bp_get_hwdxt(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->hwdxt);
+}
+
+uint16_t qbman_bp_get_swset(struct qbman_bp_query_rslt *r)
+{
+ return qbman_bp_thresh_to_value(r->swset);
+}
+
+uint16_t qbman_bp_get_swsxt(struct qbman_bp_query_rslt *r)
+{
+
+ return qbman_bp_thresh_to_value(r->swsxt);
+}
+
+uint16_t qbman_bp_get_vbpid(struct qbman_bp_query_rslt *r)
+{
+ return r->vbpid;
+}
+
+uint16_t qbman_bp_get_icid(struct qbman_bp_query_rslt *r)
+{
+ return r->icid & QB_BP_ICID_MASK;
+}
+
+int qbman_bp_get_pl(struct qbman_bp_query_rslt *r)
+{
+ return (r->icid & QB_BP_PL_MASK) >> QB_BP_PL_SHIFT;
+}
+
+uint64_t qbman_bp_get_bpscn_addr(struct qbman_bp_query_rslt *r)
+{
+ return r->bpscn_addr;
+}
+
+uint64_t qbman_bp_get_bpscn_ctx(struct qbman_bp_query_rslt *r)
+{
+ return r->bpscn_ctx;
+}
+
+uint16_t qbman_bp_get_hw_targ(struct qbman_bp_query_rslt *r)
+{
+ return r->hw_targ;
+}
+
+int qbman_bp_has_free_bufs(struct qbman_bp_query_rslt *r)
+{
+ return !(int)(r->state & 0x1);
+}
+
+int qbman_bp_is_depleted(struct qbman_bp_query_rslt *r)
+{
+ return (int)((r->state & 0x2) >> 1);
+}
+
+int qbman_bp_is_surplus(struct qbman_bp_query_rslt *r)
+{
+ return (int)((r->state & 0x4) >> 2);
+}
+
+uint32_t qbman_bp_num_free_bufs(struct qbman_bp_query_rslt *r)
+{
+ return r->fill;
+}
+
+uint32_t qbman_bp_get_hdptr(struct qbman_bp_query_rslt *r)
+{
+ return r->hdptr;
+}
+
+uint32_t qbman_bp_get_sdcnt(struct qbman_bp_query_rslt *r)
+{
+ return r->sdcnt;
+}
+
+uint32_t qbman_bp_get_hdcnt(struct qbman_bp_query_rslt *r)
+{
+ return r->hdcnt;
+}
+
+uint32_t qbman_bp_get_sscnt(struct qbman_bp_query_rslt *r)
+{
+ return r->sscnt;
+}
+
struct qbman_fq_query_desc {
uint8_t verb;
uint8_t reserved[3];
@@ -23,6 +197,128 @@ struct qbman_fq_query_desc {
uint8_t reserved2[56];
};
+/* FQ query function for programmable fields */
+int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
+ struct qbman_fq_query_rslt *r)
+{
+ struct qbman_fq_query_desc *p;
+
+ p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->fqid = fqid;
+ *r = *(struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_FQ_QUERY);
+ if (!r) {
+ pr_err("qbman: Query FQID %d failed, no response\n",
+ fqid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query of FQID 0x%x failed, code=0x%02x\n",
+ fqid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+uint8_t qbman_fq_attr_get_fqctrl(struct qbman_fq_query_rslt *r)
+{
+ return r->fq_ctrl;
+}
+
+uint16_t qbman_fq_attr_get_cgrid(struct qbman_fq_query_rslt *r)
+{
+ return r->cgid;
+}
+
+uint16_t qbman_fq_attr_get_destwq(struct qbman_fq_query_rslt *r)
+{
+ return r->dest_wq;
+}
+
+static uint16_t qbman_thresh_to_value(uint16_t val)
+{
+ return ((val & 0x1FE0) >> 5) << (val & 0x1F);
+}
+
+uint16_t qbman_fq_attr_get_tdthresh(struct qbman_fq_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->td_thresh);
+}
+
+int qbman_fq_attr_get_oa_ics(struct qbman_fq_query_rslt *r)
+{
+ return (int)(r->oal_oac >> 14) & 0x1;
+}
+
+int qbman_fq_attr_get_oa_cgr(struct qbman_fq_query_rslt *r)
+{
+ return (int)(r->oal_oac >> 15);
+}
+
+uint16_t qbman_fq_attr_get_oal(struct qbman_fq_query_rslt *r)
+{
+ return (r->oal_oac & 0x0FFF);
+}
+
+int qbman_fq_attr_get_bdi(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x1);
+}
+
+int qbman_fq_attr_get_ff(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x2) >> 1;
+}
+
+int qbman_fq_attr_get_va(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x4) >> 2;
+}
+
+int qbman_fq_attr_get_ps(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x8) >> 3;
+}
+
+int qbman_fq_attr_get_pps(struct qbman_fq_query_rslt *r)
+{
+ return (r->mctl & 0x30) >> 4;
+}
+
+uint16_t qbman_fq_attr_get_icid(struct qbman_fq_query_rslt *r)
+{
+ return r->icid & 0x7FFF;
+}
+
+int qbman_fq_attr_get_pl(struct qbman_fq_query_rslt *r)
+{
+ return (int)((r->icid & 0x8000) >> 15);
+}
+
+uint32_t qbman_fq_attr_get_vfqid(struct qbman_fq_query_rslt *r)
+{
+ return r->vfqid & 0x00FFFFFF;
+}
+
+uint32_t qbman_fq_attr_get_erfqid(struct qbman_fq_query_rslt *r)
+{
+ return r->fqid_er & 0x00FFFFFF;
+}
+
+uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r)
+{
+ return r->opridsz;
+}
+
int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
struct qbman_fq_query_np_rslt *r)
{
@@ -55,6 +351,31 @@ int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
return 0;
}
+uint8_t qbman_fq_state_schedstate(const struct qbman_fq_query_np_rslt *r)
+{
+ return r->st1 & 0x7;
+}
+
+int qbman_fq_state_force_eligible(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x8) >> 3);
+}
+
+int qbman_fq_state_xoff(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x10) >> 4);
+}
+
+int qbman_fq_state_retirement_pending(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x20) >> 5);
+}
+
+int qbman_fq_state_overflow_error(const struct qbman_fq_query_np_rslt *r)
+{
+ return (int)((r->st1 & 0x40) >> 6);
+}
+
uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r)
{
return (r->frm_cnt & 0x00FFFFFF);
@@ -64,3 +385,303 @@ uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r)
{
return r->byte_cnt;
}
+
+/* Query CGR */
+struct qbman_cgr_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t cgid;
+ uint8_t reserved2[60];
+};
+
+int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_cgr_query_rslt *r)
+{
+ struct qbman_cgr_query_desc *p;
+
+ p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->cgid = cgid;
+ *r = *(struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_CGR_QUERY);
+ if (!r) {
+ pr_err("qbman: Query CGID %d failed, no response\n",
+ cgid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_CGR_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query CGID 0x%x failed,code=0x%02x\n", cgid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int qbman_cgr_get_cscn_wq_en_enter(struct qbman_cgr_query_rslt *r)
+{
+ return (int)(r->ctl1 & 0x1);
+}
+
+int qbman_cgr_get_cscn_wq_en_exit(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->ctl1 & 0x2) >> 1);
+}
+
+int qbman_cgr_get_cscn_wq_icd(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->ctl1 & 0x4) >> 2);
+}
+
+uint8_t qbman_cgr_get_mode(struct qbman_cgr_query_rslt *r)
+{
+ return r->mode & 0x3;
+}
+
+int qbman_cgr_get_rej_cnt_mode(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->mode & 0x4) >> 2);
+}
+
+int qbman_cgr_get_cscn_bdi(struct qbman_cgr_query_rslt *r)
+{
+ return (int)((r->mode & 0x8) >> 3);
+}
+
+uint16_t qbman_cgr_attr_get_cs_thres(struct qbman_cgr_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->cs_thres);
+}
+
+uint16_t qbman_cgr_attr_get_cs_thres_x(struct qbman_cgr_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->cs_thres_x);
+}
+
+uint16_t qbman_cgr_attr_get_td_thres(struct qbman_cgr_query_rslt *r)
+{
+ return qbman_thresh_to_value(r->td_thres);
+}
+
+int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
+ struct qbman_wred_query_rslt *r)
+{
+ struct qbman_cgr_query_desc *p;
+
+ p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->cgid = cgid;
+ *r = *(struct qbman_wred_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_WRED_QUERY);
+ if (!r) {
+ pr_err("qbman: Query CGID WRED %d failed, no response\n",
+ cgid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WRED_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query CGID WRED 0x%x failed,code=0x%02x\n",
+ cgid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int qbman_cgr_attr_wred_get_edp(struct qbman_wred_query_rslt *r, uint32_t idx)
+{
+ return (int)(r->edp[idx] & 1);
+}
+
+uint32_t qbman_cgr_attr_wred_get_parm_dp(struct qbman_wred_query_rslt *r,
+ uint32_t idx)
+{
+ return r->wred_parm_dp[idx];
+}
+
+void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
+ uint64_t *maxth, uint8_t *maxp)
+{
+ uint8_t ma, mn, step_i, step_s, pn;
+
+ ma = (uint8_t)(dp >> 24);
+ mn = (uint8_t)(dp >> 19) & 0x1f;
+ step_i = (uint8_t)(dp >> 11);
+ step_s = (uint8_t)(dp >> 6) & 0x1f;
+ pn = (uint8_t)dp & 0x3f;
+
+ *maxp = (uint8_t)(((pn<<2) * 100)/256);
+
+ if (mn == 0)
+ *maxth = ma;
+ else
+ *maxth = ((ma+256) * (1<<(mn-1)));
+
+ if (step_s == 0)
+ *minth = *maxth - step_i;
+ else
+ *minth = *maxth - (256 + step_i) * (1<<(step_s - 1));
+}
+
+/* Query CGR/CCGR/CQ statistics */
+struct qbman_cgr_statistics_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t cgid;
+ uint8_t reserved1;
+ uint8_t ct;
+ uint8_t reserved2[58];
+};
+
+struct qbman_cgr_statistics_query_rslt {
+ uint8_t verb;
+ uint8_t rslt;
+ uint8_t reserved[14];
+ uint64_t frm_cnt;
+ uint64_t byte_cnt;
+ uint32_t reserved2[8];
+};
+
+static int qbman_cgr_statistics_query(struct qbman_swp *s, uint32_t cgid,
+ int clear, uint32_t command_type,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ struct qbman_cgr_statistics_query_desc *p;
+ struct qbman_cgr_statistics_query_rslt *r;
+ uint32_t query_verb;
+
+ p = (struct qbman_cgr_statistics_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ p->cgid = cgid;
+ if (command_type < 2)
+ p->ct = command_type;
+ query_verb = clear ?
+ QBMAN_CGR_STAT_QUERY_CLR : QBMAN_CGR_STAT_QUERY;
+ r = (struct qbman_cgr_statistics_query_rslt *)qbman_swp_mc_complete(s,
+ p, query_verb);
+ if (!r) {
+ pr_err("qbman: Query CGID %d statistics failed, no response\n",
+ cgid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != query_verb);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query statistics of CGID 0x%x failed, code=0x%02x\n",
+ cgid, r->rslt);
+ return -EIO;
+ }
+
+ if (*frame_cnt)
+ *frame_cnt = r->frm_cnt & 0xFFFFFFFFFFllu;
+ if (*byte_cnt)
+ *byte_cnt = r->byte_cnt & 0xFFFFFFFFFFllu;
+
+ return 0;
+}
+
+int qbman_cgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ return qbman_cgr_statistics_query(s, cgid, clear, 0xff,
+ frame_cnt, byte_cnt);
+}
+
+int qbman_ccgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ return qbman_cgr_statistics_query(s, cgid, clear, 1,
+ frame_cnt, byte_cnt);
+}
+
+int qbman_cq_dequeue_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
+ uint64_t *frame_cnt, uint64_t *byte_cnt)
+{
+ return qbman_cgr_statistics_query(s, cgid, clear, 0,
+ frame_cnt, byte_cnt);
+}
+
+/* WQ Chan Query */
+struct qbman_wqchan_query_desc {
+ uint8_t verb;
+ uint8_t reserved;
+ uint16_t chid;
+ uint8_t reserved2[60];
+};
+
+int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
+ struct qbman_wqchan_query_rslt *r)
+{
+ struct qbman_wqchan_query_desc *p;
+
+ /* Start the management command */
+ p = (struct qbman_wqchan_query_desc *)qbman_swp_mc_start(s);
+ if (!p)
+ return -EBUSY;
+
+ /* Encode the caller-provided attributes */
+ p->chid = chanid;
+
+ /* Complete the management command */
+ *r = *(struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(s, p,
+ QBMAN_WQ_QUERY);
+ if (!r) {
+ pr_err("qbman: Query WQ Channel %d failed, no response\n",
+ chanid);
+ return -EIO;
+ }
+
+ /* Decode the outcome */
+ QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WQ_QUERY);
+
+ /* Determine success or failure */
+ if (r->rslt != QBMAN_MC_RSLT_OK) {
+ pr_err("Query of WQCHAN 0x%x failed, code=0x%02x\n",
+ chanid, r->rslt);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+uint32_t qbman_wqchan_attr_get_wqlen(struct qbman_wqchan_query_rslt *r, int wq)
+{
+ return r->wq_len[wq] & 0x00FFFFFF;
+}
+
+uint64_t qbman_wqchan_attr_get_cdan_ctx(struct qbman_wqchan_query_rslt *r)
+{
+ return r->cdan_ctx;
+}
+
+uint16_t qbman_wqchan_attr_get_cdan_wqid(struct qbman_wqchan_query_rslt *r)
+{
+ return r->cdan_wqid;
+}
+
+uint8_t qbman_wqchan_attr_get_ctrl(struct qbman_wqchan_query_rslt *r)
+{
+ return r->ctrl;
+}
+
+uint16_t qbman_wqchan_attr_get_chanid(struct qbman_wqchan_query_rslt *r)
+{
+ return r->chid;
+}
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index aedcad9258..3a7579c8a7 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1865,6 +1865,12 @@ void qbman_pull_desc_set_channel(struct qbman_pull_desc *d, uint32_t chid,
d->pull.dq_src = chid;
}
+/**
+ * qbman_pull_desc_set_rad() - Decide whether reschedule the fq after dequeue
+ *
+ * @rad: 1 = Reschedule the FQ after dequeue.
+ * 0 = Allow the FQ to remain active after dequeue.
+ */
void qbman_pull_desc_set_rad(struct qbman_pull_desc *d, int rad)
{
if (d->pull.verb & (1 << QB_VDQCR_VERB_RLS_SHIFT)) {
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v3 04/10] net/dpaa2: add debug print for MTU set for jumbo
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (2 preceding siblings ...)
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 03/10] bus/fslmc: add qbman debug APIs support nipun.gupta
@ 2021-10-06 17:01 ` nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 05/10] net/dpaa2: add function to generate HW hash key nipun.gupta
` (6 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 17:01 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch adds a debug print for MTU configured on the
device when jumbo frames are enabled.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 9cf55c0f0b..275656fbe4 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -573,6 +573,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
dev->data->dev_conf.rxmode.max_rx_pkt_len -
RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
VLAN_TAG_SIZE;
+ DPAA2_PMD_INFO("MTU configured for the device: %d",
+ dev->data->mtu);
} else {
return -1;
}
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v3 05/10] net/dpaa2: add function to generate HW hash key
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (3 preceding siblings ...)
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 04/10] net/dpaa2: add debug print for MTU set for jumbo nipun.gupta
@ 2021-10-06 17:01 ` nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 06/10] net/dpaa2: update RSS to support additional distributions nipun.gupta
` (5 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 17:01 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena
From: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch add support to generate the hash key in software
equivalent to WRIOP key generation.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_tlu_hash.c | 153 ++++++++++++++++++++++++
drivers/net/dpaa2/meson.build | 1 +
drivers/net/dpaa2/rte_pmd_dpaa2.h | 19 +++
drivers/net/dpaa2/version.map | 2 +
4 files changed, 175 insertions(+)
create mode 100644 drivers/net/dpaa2/base/dpaa2_tlu_hash.c
diff --git a/drivers/net/dpaa2/base/dpaa2_tlu_hash.c b/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
new file mode 100644
index 0000000000..9eb127c07c
--- /dev/null
+++ b/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
@@ -0,0 +1,153 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <rte_pmd_dpaa2.h>
+
+static unsigned int sbox(unsigned int x)
+{
+ unsigned int a, b, c, d;
+ unsigned int oa, ob, oc, od;
+
+ a = x & 0x1;
+ b = (x >> 1) & 0x1;
+ c = (x >> 2) & 0x1;
+ d = (x >> 3) & 0x1;
+
+ oa = ((a & ~b & ~c & d) | (~a & b) | (~a & ~c & ~d) | (b & c)) & 0x1;
+ ob = ((a & ~b & d) | (~a & c & ~d) | (b & ~c)) & 0x1;
+ oc = ((a & ~b & c) | (a & ~b & ~d) | (~a & b & ~d) | (~a & c & ~d) |
+ (b & c & d)) & 0x1;
+ od = ((a & ~b & c) | (~a & b & ~c) | (a & b & ~d) | (~a & c & d)) & 0x1;
+
+ return ((od << 3) | (oc << 2) | (ob << 1) | oa);
+}
+
+static unsigned int sbox_tbl[16];
+
+static int pbox_tbl[16] = {5, 9, 0, 13,
+ 7, 2, 11, 14,
+ 1, 4, 12, 8,
+ 3, 15, 6, 10 };
+
+static unsigned int mix_tbl[8][16];
+
+static unsigned int stage(unsigned int input)
+{
+ int sbox_out = 0;
+ int pbox_out = 0;
+ int i;
+
+ /* mix */
+ input ^= input >> 16; /* xor lower */
+ input ^= input << 16; /* move original lower to upper */
+
+ for (i = 0; i < 32; i += 4) /* sbox stage */
+ sbox_out |= (sbox_tbl[(input >> i) & 0xf]) << i;
+
+ /* permutation */
+ for (i = 0; i < 16; i++)
+ pbox_out |= ((sbox_out >> i) & 0x10001) << pbox_tbl[i];
+
+ return pbox_out;
+}
+
+static unsigned int fast_stage(unsigned int input)
+{
+ int pbox_out = 0;
+ int i;
+
+ /* mix */
+ input ^= input >> 16; /* xor lower */
+ input ^= input << 16; /* move original lower to upper */
+
+ for (i = 0; i < 32; i += 4) /* sbox stage */
+ pbox_out |= mix_tbl[i >> 2][(input >> i) & 0xf];
+
+ return pbox_out;
+}
+
+static unsigned int fast_hash32(unsigned int x)
+{
+ int i;
+
+ for (i = 0; i < 4; i++)
+ x = fast_stage(x);
+ return x;
+}
+
+static unsigned int
+byte_crc32(unsigned char data /* new byte for the crc calculation */,
+ unsigned old_crc /* crc result of the last iteration */)
+{
+ int i;
+ unsigned int crc, polynom = 0xedb88320;
+ /* the polynomial is built on the reversed version of
+ * the CRC polynomial with out the x64 element.
+ */
+
+ crc = old_crc;
+ for (i = 0; i < 8; i++, data >>= 1)
+ crc = (crc >> 1) ^ (((crc ^ data) & 0x1) ? polynom : 0);
+ /* xor with polynomial is lsb of crc^data is 1 */
+
+ return crc;
+}
+
+static unsigned int crc32_table[256];
+
+static void init_crc32_table(void)
+{
+ int i;
+
+ for (i = 0; i < 256; i++)
+ crc32_table[i] = byte_crc32((unsigned char)i, 0LL);
+}
+
+static unsigned int
+crc32_string(unsigned char *data,
+ int size, unsigned int old_crc)
+{
+ unsigned int crc;
+ int i;
+
+ crc = old_crc;
+ for (i = 0; i < size; i++)
+ crc = (crc >> 8) ^ crc32_table[(crc ^ data[i]) & 0xff];
+
+ return crc;
+}
+
+static void hash_init(void)
+{
+ init_crc32_table();
+ int i, j;
+
+ for (i = 0; i < 16; i++)
+ sbox_tbl[i] = sbox(i);
+
+ for (i = 0; i < 32; i += 4)
+ for (j = 0; j < 16; j++) {
+ /* (a,b)
+ * (b,a^b)=(X,Y)
+ * (X^Y,X)
+ */
+ unsigned int input = (0x88888888 ^ (8 << i)) | (j << i);
+
+ input ^= input << 16; /* (X^Y,Y) */
+ input ^= input >> 16; /* (X^Y,X) */
+ mix_tbl[i >> 2][j] = stage(input);
+ }
+}
+
+uint32_t rte_pmd_dpaa2_get_tlu_hash(uint8_t *data, int size)
+{
+ static int init;
+
+ if (~init)
+ hash_init();
+ init = 1;
+ return fast_hash32(crc32_string(data, size, 0x0));
+}
diff --git a/drivers/net/dpaa2/meson.build b/drivers/net/dpaa2/meson.build
index 20eaf0b8e4..4a6397d09e 100644
--- a/drivers/net/dpaa2/meson.build
+++ b/drivers/net/dpaa2/meson.build
@@ -20,6 +20,7 @@ sources = files(
'mc/dpkg.c',
'mc/dpdmux.c',
'mc/dpni.c',
+ 'base/dpaa2_tlu_hash.c',
)
includes += include_directories('base', 'mc')
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index a68244c974..8ea42ee130 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -82,4 +82,23 @@ __rte_experimental
void
rte_pmd_dpaa2_thread_init(void);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Generate the DPAA2 WRIOP based hash value
+ *
+ * @param key
+ * Array of key data
+ * @param size
+ * Size of the hash input key in bytes
+ *
+ * @return
+ * - 0 if successful.
+ * - Negative in case of failure.
+ */
+
+__rte_experimental
+uint32_t
+rte_pmd_dpaa2_get_tlu_hash(uint8_t *key, int size);
#endif /* _RTE_PMD_DPAA2_H */
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 3ab96344c4..24b2a6382d 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -10,6 +10,8 @@ DPDK_22 {
EXPERIMENTAL {
global:
+ # added in 21.11
+ rte_pmd_dpaa2_get_tlu_hash;
# added in 21.05
rte_pmd_dpaa2_mux_rx_frame_len;
# added in 21.08
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v3 06/10] net/dpaa2: update RSS to support additional distributions
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (4 preceding siblings ...)
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 05/10] net/dpaa2: add function to generate HW hash key nipun.gupta
@ 2021-10-06 17:01 ` nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 07/10] net/dpaa: add comments to explain driver behaviour nipun.gupta
` (4 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 17:01 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch updates the RSS support to support following additional
distributions:
- VLAN
- ESP
- AH
- PPPOE
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 70 +++++++++++++++++++++++++-
drivers/net/dpaa2/dpaa2_ethdev.h | 7 ++-
2 files changed, 75 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 641e7027f1..08f49af768 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -210,6 +210,10 @@ dpaa2_distset_to_dpkg_profile_cfg(
int l2_configured = 0, l3_configured = 0;
int l4_configured = 0, sctp_configured = 0;
int mpls_configured = 0;
+ int vlan_configured = 0;
+ int esp_configured = 0;
+ int ah_configured = 0;
+ int pppoe_configured = 0;
memset(kg_cfg, 0, sizeof(struct dpkg_profile_cfg));
while (req_dist_set) {
@@ -217,6 +221,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
dist_field = 1ULL << loop;
switch (dist_field) {
case ETH_RSS_L2_PAYLOAD:
+ case ETH_RSS_ETH:
if (l2_configured)
break;
@@ -231,7 +236,70 @@ dpaa2_distset_to_dpkg_profile_cfg(
kg_cfg->extracts[i].extract.from_hdr.type =
DPKG_FULL_FIELD;
i++;
- break;
+ break;
+
+ case ETH_RSS_PPPOE:
+ if (pppoe_configured)
+ break;
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_PPPOE;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_PPPOE_SID;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
+
+ case ETH_RSS_ESP:
+ if (esp_configured)
+ break;
+ esp_configured = 1;
+
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_IPSEC_ESP;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_IPSEC_ESP_SPI;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
+
+ case ETH_RSS_AH:
+ if (ah_configured)
+ break;
+ ah_configured = 1;
+
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_IPSEC_AH;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_IPSEC_AH_SPI;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
+
+ case ETH_RSS_C_VLAN:
+ case ETH_RSS_S_VLAN:
+ if (vlan_configured)
+ break;
+ vlan_configured = 1;
+
+ kg_cfg->extracts[i].extract.from_hdr.prot =
+ NET_PROT_VLAN;
+ kg_cfg->extracts[i].extract.from_hdr.field =
+ NH_FLD_VLAN_TCI;
+ kg_cfg->extracts[i].type =
+ DPKG_EXTRACT_FROM_HDR;
+ kg_cfg->extracts[i].extract.from_hdr.type =
+ DPKG_FULL_FIELD;
+ i++;
+ break;
case ETH_RSS_MPLS:
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 3f34d7ecff..fdc62ec30d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -70,7 +70,12 @@
ETH_RSS_UDP | \
ETH_RSS_TCP | \
ETH_RSS_SCTP | \
- ETH_RSS_MPLS)
+ ETH_RSS_MPLS | \
+ ETH_RSS_C_VLAN | \
+ ETH_RSS_S_VLAN | \
+ ETH_RSS_ESP | \
+ ETH_RSS_AH | \
+ ETH_RSS_PPPOE)
/* LX2 FRC Parsed values (Little Endian) */
#define DPAA2_PKT_TYPE_ETHER 0x0060
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v3 07/10] net/dpaa: add comments to explain driver behaviour
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (5 preceding siblings ...)
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 06/10] net/dpaa2: update RSS to support additional distributions nipun.gupta
@ 2021-10-06 17:01 ` nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 08/10] raw/dpaa2_qdma: use correct params for config and queue setup nipun.gupta
` (3 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 17:01 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
This patch adds comment to explain how dpaa_port_fmc_ccnode_parse
function is working to get the HW queue from FMC policy file
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_fmc.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
index 5195053361..f8c9360311 100644
--- a/drivers/net/dpaa/dpaa_fmc.c
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017-2020 NXP
+ * Copyright 2017-2021 NXP
*/
/* System headers */
@@ -338,6 +338,12 @@ static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
fqid = keys_params->key_params[j].cc_next_engine_params
.params.enqueue_params.new_fqid;
+ /* We read DPDK queue from last classification rule present in
+ * FMC policy file. Hence, this check is required here.
+ * Also, the last classification rule in FMC policy file must
+ * have userspace queue so that it can be used by DPDK
+ * application.
+ */
if (keys_params->key_params[j].cc_next_engine_params
.next_engine != e_IOC_FM_PCD_DONE) {
DPAA_PMD_WARN("FMC CC next engine not support");
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v3 08/10] raw/dpaa2_qdma: use correct params for config and queue setup
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (6 preceding siblings ...)
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 07/10] net/dpaa: add comments to explain driver behaviour nipun.gupta
@ 2021-10-06 17:01 ` nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 09/10] raw/dpaa2_qdma: remove checks for lcore ID nipun.gupta
` (2 subsequent siblings)
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 17:01 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
RAW configure and Queue setup APIs support size parameter for
configure. This patch supports the same for DPAA2 QDMA PMD APIs
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 14 +++++++++++---
drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h | 8 ++++----
2 files changed, 15 insertions(+), 7 deletions(-)
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index 7b80370b36..e45412e640 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
*/
#include <string.h>
@@ -1146,8 +1146,12 @@ dpaa2_qdma_configure(const struct rte_rawdev *rawdev,
DPAA2_QDMA_FUNC_TRACE();
- if (config_size != sizeof(*qdma_config))
+ if (config_size != sizeof(*qdma_config)) {
+ DPAA2_QDMA_ERR("Config size mismatch. Expected %" PRIu64
+ ", Got: %" PRIu64, (uint64_t)(sizeof(*qdma_config)),
+ (uint64_t)config_size);
return -EINVAL;
+ }
/* In case QDMA device is not in stopped state, return -EBUSY */
if (qdma_dev->state == 1) {
@@ -1247,8 +1251,12 @@ dpaa2_qdma_queue_setup(struct rte_rawdev *rawdev,
DPAA2_QDMA_FUNC_TRACE();
- if (conf_size != sizeof(*q_config))
+ if (conf_size != sizeof(*q_config)) {
+ DPAA2_QDMA_ERR("Config size mismatch. Expected %" PRIu64
+ ", Got: %" PRIu64, (uint64_t)(sizeof(*q_config)),
+ (uint64_t)conf_size);
return -EINVAL;
+ }
rte_spinlock_lock(&qdma_dev->lock);
diff --git a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
index cc1ac25451..1314474271 100644
--- a/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
+++ b/drivers/raw/dpaa2_qdma/rte_pmd_dpaa2_qdma.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
*/
#ifndef __RTE_PMD_DPAA2_QDMA_H__
@@ -177,13 +177,13 @@ struct rte_qdma_queue_config {
#define rte_qdma_info rte_rawdev_info
#define rte_qdma_start(id) rte_rawdev_start(id)
#define rte_qdma_reset(id) rte_rawdev_reset(id)
-#define rte_qdma_configure(id, cf) rte_rawdev_configure(id, cf)
+#define rte_qdma_configure(id, cf, sz) rte_rawdev_configure(id, cf, sz)
#define rte_qdma_dequeue_buffers(id, buf, num, ctxt) \
rte_rawdev_dequeue_buffers(id, buf, num, ctxt)
#define rte_qdma_enqueue_buffers(id, buf, num, ctxt) \
rte_rawdev_enqueue_buffers(id, buf, num, ctxt)
-#define rte_qdma_queue_setup(id, qid, cfg) \
- rte_rawdev_queue_setup(id, qid, cfg)
+#define rte_qdma_queue_setup(id, qid, cfg, sz) \
+ rte_rawdev_queue_setup(id, qid, cfg, sz)
/*TODO introduce per queue stats API in rawdew */
/**
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v3 09/10] raw/dpaa2_qdma: remove checks for lcore ID
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (7 preceding siblings ...)
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 08/10] raw/dpaa2_qdma: use correct params for config and queue setup nipun.gupta
@ 2021-10-06 17:01 ` nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 10/10] common/dpaax: fix paddr to vaddr invalid conversion nipun.gupta
2021-10-07 7:37 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes Thomas Monjalon
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 17:01 UTC (permalink / raw)
To: dev; +Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
There is no need for preventional check of rte_lcore_id() in
data path. This patch removes the same.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 14 --------------
1 file changed, 14 deletions(-)
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index e45412e640..de26d2aef3 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1391,13 +1391,6 @@ dpaa2_qdma_enqueue(struct rte_rawdev *rawdev,
&dpdmai_dev->qdma_dev->vqs[e_context->vq_id];
int ret;
- /* Return error in case of wrong lcore_id */
- if (rte_lcore_id() != qdma_vq->lcore_id) {
- DPAA2_QDMA_ERR("QDMA enqueue for vqid %d on wrong core",
- e_context->vq_id);
- return -EINVAL;
- }
-
ret = qdma_vq->enqueue_job(qdma_vq, e_context->job, nb_jobs);
if (ret < 0) {
DPAA2_QDMA_ERR("DPDMAI device enqueue failed: %d", ret);
@@ -1430,13 +1423,6 @@ dpaa2_qdma_dequeue(struct rte_rawdev *rawdev,
return -EINVAL;
}
- /* Return error in case of wrong lcore_id */
- if (rte_lcore_id() != (unsigned int)(qdma_vq->lcore_id)) {
- DPAA2_QDMA_WARN("QDMA dequeue for vqid %d on wrong core",
- context->vq_id);
- return -1;
- }
-
/* Only dequeue when there are pending jobs on VQ */
if (qdma_vq->num_enqueues == qdma_vq->num_dequeues)
return 0;
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [dpdk-dev] [PATCH v3 10/10] common/dpaax: fix paddr to vaddr invalid conversion
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (8 preceding siblings ...)
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 09/10] raw/dpaa2_qdma: remove checks for lcore ID nipun.gupta
@ 2021-10-06 17:01 ` nipun.gupta
2021-10-07 7:37 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes Thomas Monjalon
10 siblings, 0 replies; 52+ messages in thread
From: nipun.gupta @ 2021-10-06 17:01 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, hemant.agrawal, sachin.saxena, stable,
Gagandeep Singh, Nipun Gupta
From: Gagandeep Singh <g.singh@nxp.com>
If some of the VA entries of table are somehow not populated and are
NULL, it can add offset to NULL and return the invalid VA in PA to
VA conversion.
In this patch, adding a check if the VA entry has valid address only
then add offset and return VA.
Fixes: 2f3d633aa593 ("common/dpaax: add library for PA/VA translation table")
Cc: stable@dpdk.org
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/common/dpaax/dpaax_iova_table.h | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/common/dpaax/dpaax_iova_table.h b/drivers/common/dpaax/dpaax_iova_table.h
index 230fba8ba0..b1f2300c52 100644
--- a/drivers/common/dpaax/dpaax_iova_table.h
+++ b/drivers/common/dpaax/dpaax_iova_table.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2021 NXP
*/
#ifndef _DPAAX_IOVA_TABLE_H_
@@ -101,6 +101,12 @@ dpaax_iova_table_get_va(phys_addr_t paddr) {
/* paddr > entry->start && paddr <= entry->(start+len) */
index = (paddr_align - entry[i].start)/DPAAX_MEM_SPLIT;
+ /* paddr is within range, but no vaddr entry ever written
+ * at index
+ */
+ if ((void *)(uintptr_t)entry[i].pages[index] == NULL)
+ return NULL;
+
vaddr = (void *)((uintptr_t)entry[i].pages[index] + offset);
break;
} while (1);
--
2.17.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [dpdk-dev] [PATCH v2 03/10] bus/fslmc: add qbman debug APIs support
2021-10-06 13:31 ` Thomas Monjalon
@ 2021-10-06 17:02 ` Nipun Gupta
0 siblings, 0 replies; 52+ messages in thread
From: Nipun Gupta @ 2021-10-06 17:02 UTC (permalink / raw)
To: Thomas Monjalon, Hemant Agrawal
Cc: dev, ferruh.yigit, Sachin Saxena, Youri Querry, Roy Pledge
Fixed and sent a new version.
Thanks,
Nipun
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Wednesday, October 6, 2021 7:02 PM
> To: Hemant Agrawal <hemant.agrawal@nxp.com>; Nipun Gupta
> <nipun.gupta@nxp.com>
> Cc: dev@dpdk.org; ferruh.yigit@intel.com; Sachin Saxena
> <sachin.saxena@nxp.com>; Youri Querry <youri.querry_1@nxp.com>; Roy
> Pledge <roy.pledge@nxp.com>
> Subject: Re: [dpdk-dev] [PATCH v2 03/10] bus/fslmc: add qbman debug APIs
> support
>
> 06/10/2021 14:10, nipun.gupta@nxp.com:
> > From: Hemant Agrawal <hemant.agrawal@nxp.com>
> >
> > Add support for debugging qbman FQs
> >
> > Signed-off-by: Youri Querry <youri.querry_1@nxp.com>
> > Signed-off-by: Roy Pledge <roy.pledge@nxp.com>
> > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
>
> I see this error message:
>
> fsl_qbman_debug.h:137:15: error: expected ‘;’ before ‘int’
> 137 | __rte_internal
> | ^
>
>
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
` (9 preceding siblings ...)
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 10/10] common/dpaax: fix paddr to vaddr invalid conversion nipun.gupta
@ 2021-10-07 7:37 ` Thomas Monjalon
2021-10-07 9:38 ` Thomas Monjalon
10 siblings, 1 reply; 52+ messages in thread
From: Thomas Monjalon @ 2021-10-07 7:37 UTC (permalink / raw)
To: nipun.gupta; +Cc: dev, ferruh.yigit, hemant.agrawal, sachin.saxena
06/10/2021 19:01, nipun.gupta@nxp.com:
> From: Nipun Gupta <nipun.gupta@nxp.com>
>
> This series adds new functionality related to flow redirection,
> generating HW hash key etc.
> It also updates the MC firmware version and includes a fix in
> dpaxx library.
>
> Changes in v1:
> - Fix checkpatch errors
> Changes in v2:
> - remove unrequired multi-tx ordered patch
> Changes in v3:
> - fix 32 bit (i386) compilation
>
> Gagandeep Singh (1):
> common/dpaax: fix paddr to vaddr invalid conversion
>
> Hemant Agrawal (4):
> bus/fslmc: updated MC FW to 10.28
> bus/fslmc: add qbman debug APIs support
> net/dpaa2: add debug print for MTU set for jumbo
> net/dpaa2: add function to generate HW hash key
>
> Jun Yang (1):
> net/dpaa2: support Tx flow redirection action
>
> Nipun Gupta (2):
> raw/dpaa2_qdma: use correct params for config and queue setup
> raw/dpaa2_qdma: remove checks for lcore ID
>
> Rohit Raj (1):
> net/dpaa: add comments to explain driver behaviour
>
> Vanshika Shukla (1):
> net/dpaa2: update RSS to support additional distributions
Applied, thanks
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes
2021-10-07 7:37 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes Thomas Monjalon
@ 2021-10-07 9:38 ` Thomas Monjalon
2021-10-07 9:46 ` Nipun Gupta
0 siblings, 1 reply; 52+ messages in thread
From: Thomas Monjalon @ 2021-10-07 9:38 UTC (permalink / raw)
To: nipun.gupta, hemant.agrawal; +Cc: dev, ferruh.yigit, sachin.saxena
07/10/2021 09:37, Thomas Monjalon:
> 06/10/2021 19:01, nipun.gupta@nxp.com:
> > From: Nipun Gupta <nipun.gupta@nxp.com>
> >
> > This series adds new functionality related to flow redirection,
> > generating HW hash key etc.
> > It also updates the MC firmware version and includes a fix in
> > dpaxx library.
> >
> > Changes in v1:
> > - Fix checkpatch errors
> > Changes in v2:
> > - remove unrequired multi-tx ordered patch
> > Changes in v3:
> > - fix 32 bit (i386) compilation
> >
> > Gagandeep Singh (1):
> > common/dpaax: fix paddr to vaddr invalid conversion
> >
> > Hemant Agrawal (4):
> > bus/fslmc: updated MC FW to 10.28
> > bus/fslmc: add qbman debug APIs support
> > net/dpaa2: add debug print for MTU set for jumbo
> > net/dpaa2: add function to generate HW hash key
> >
> > Jun Yang (1):
> > net/dpaa2: support Tx flow redirection action
> >
> > Nipun Gupta (2):
> > raw/dpaa2_qdma: use correct params for config and queue setup
> > raw/dpaa2_qdma: remove checks for lcore ID
> >
> > Rohit Raj (1):
> > net/dpaa: add comments to explain driver behaviour
> >
> > Vanshika Shukla (1):
> > net/dpaa2: update RSS to support additional distributions
>
> Applied, thanks
Fixing an indent in Meson file, and updating doc for new rte_flow features.
Please pay attention to such details, next time.
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes
2021-10-07 9:38 ` Thomas Monjalon
@ 2021-10-07 9:46 ` Nipun Gupta
0 siblings, 0 replies; 52+ messages in thread
From: Nipun Gupta @ 2021-10-07 9:46 UTC (permalink / raw)
To: Thomas Monjalon, Hemant Agrawal; +Cc: dev, ferruh.yigit, Sachin Saxena
Sure Thomas,
Thanks!!
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Thursday, October 7, 2021 3:08 PM
> To: Nipun Gupta <nipun.gupta@nxp.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>
> Cc: dev@dpdk.org; ferruh.yigit@intel.com; Sachin Saxena
> <sachin.saxena@nxp.com>
> Subject: Re: [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes
>
> 07/10/2021 09:37, Thomas Monjalon:
> > 06/10/2021 19:01, nipun.gupta@nxp.com:
> > > From: Nipun Gupta <nipun.gupta@nxp.com>
> > >
> > > This series adds new functionality related to flow redirection,
> > > generating HW hash key etc.
> > > It also updates the MC firmware version and includes a fix in
> > > dpaxx library.
> > >
> > > Changes in v1:
> > > - Fix checkpatch errors
> > > Changes in v2:
> > > - remove unrequired multi-tx ordered patch
> > > Changes in v3:
> > > - fix 32 bit (i386) compilation
> > >
> > > Gagandeep Singh (1):
> > > common/dpaax: fix paddr to vaddr invalid conversion
> > >
> > > Hemant Agrawal (4):
> > > bus/fslmc: updated MC FW to 10.28
> > > bus/fslmc: add qbman debug APIs support
> > > net/dpaa2: add debug print for MTU set for jumbo
> > > net/dpaa2: add function to generate HW hash key
> > >
> > > Jun Yang (1):
> > > net/dpaa2: support Tx flow redirection action
> > >
> > > Nipun Gupta (2):
> > > raw/dpaa2_qdma: use correct params for config and queue setup
> > > raw/dpaa2_qdma: remove checks for lcore ID
> > >
> > > Rohit Raj (1):
> > > net/dpaa: add comments to explain driver behaviour
> > >
> > > Vanshika Shukla (1):
> > > net/dpaa2: update RSS to support additional distributions
> >
> > Applied, thanks
>
> Fixing an indent in Meson file, and updating doc for new rte_flow features.
> Please pay attention to such details, next time.
>
^ permalink raw reply [flat|nested] 52+ messages in thread
end of thread, other threads:[~2021-10-07 9:46 UTC | newest]
Thread overview: 52+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-27 12:26 [dpdk-dev] [PATCH 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 01/11] bus/fslmc: updated MC FW to 10.28 nipun.gupta
2021-10-06 13:28 ` Hemant Agrawal
2021-09-27 12:26 ` [dpdk-dev] [PATCH 02/11] net/dpaa2: support Tx flow redirection action nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 03/11] bus/fslmc: add qbman debug APIs support nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 04/11] net/dpaa2: support multiple Tx queues enqueue for ordered nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 05/11] net/dpaa2: add debug print for MTU set for jumbo nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 06/11] net/dpaa2: add function to generate HW hash key nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 07/11] net/dpaa2: update RSS to support additional distributions nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 08/11] net/dpaa: add comments to explain driver behaviour nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 09/11] raw/dpaa2_qdma: use correct params for config and queue setup nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 10/11] raw/dpaa2_qdma: remove checks for lcore ID nipun.gupta
2021-09-27 12:26 ` [dpdk-dev] [PATCH 11/11] common/dpaax: fix paddr to vaddr invalid conversion nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 00/11] NXP DPAAx Bus and PMD changes nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 01/11] bus/fslmc: updated MC FW to 10.28 nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 02/11] net/dpaa2: support Tx flow redirection action nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 03/11] bus/fslmc: add qbman debug APIs support nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 04/11] net/dpaa2: support multiple Tx queues enqueue for ordered nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 05/11] net/dpaa2: add debug print for MTU set for jumbo nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 06/11] net/dpaa2: add function to generate HW hash key nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 07/11] net/dpaa2: update RSS to support additional distributions nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 08/11] net/dpaa: add comments to explain driver behaviour nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 09/11] raw/dpaa2_qdma: use correct params for config and queue setup nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 10/11] raw/dpaa2_qdma: remove checks for lcore ID nipun.gupta
2021-09-27 13:25 ` [dpdk-dev] [PATCH v1 11/11] common/dpaax: fix paddr to vaddr invalid conversion nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 01/10] bus/fslmc: updated MC FW to 10.28 nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 02/10] net/dpaa2: support Tx flow redirection action nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 03/10] bus/fslmc: add qbman debug APIs support nipun.gupta
2021-10-06 13:31 ` Thomas Monjalon
2021-10-06 17:02 ` Nipun Gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 04/10] net/dpaa2: add debug print for MTU set for jumbo nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 05/10] net/dpaa2: add function to generate HW hash key nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 06/10] net/dpaa2: update RSS to support additional distributions nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 07/10] net/dpaa: add comments to explain driver behaviour nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 08/10] raw/dpaa2_qdma: use correct params for config and queue setup nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 09/10] raw/dpaa2_qdma: remove checks for lcore ID nipun.gupta
2021-10-06 12:10 ` [dpdk-dev] [PATCH v2 10/10] common/dpaax: fix paddr to vaddr invalid conversion nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 01/10] bus/fslmc: updated MC FW to 10.28 nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 02/10] net/dpaa2: support Tx flow redirection action nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 03/10] bus/fslmc: add qbman debug APIs support nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 04/10] net/dpaa2: add debug print for MTU set for jumbo nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 05/10] net/dpaa2: add function to generate HW hash key nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 06/10] net/dpaa2: update RSS to support additional distributions nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 07/10] net/dpaa: add comments to explain driver behaviour nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 08/10] raw/dpaa2_qdma: use correct params for config and queue setup nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 09/10] raw/dpaa2_qdma: remove checks for lcore ID nipun.gupta
2021-10-06 17:01 ` [dpdk-dev] [PATCH v3 10/10] common/dpaax: fix paddr to vaddr invalid conversion nipun.gupta
2021-10-07 7:37 ` [dpdk-dev] [PATCH v3 00/10] NXP DPAAx Bus and PMD changes Thomas Monjalon
2021-10-07 9:38 ` Thomas Monjalon
2021-10-07 9:46 ` Nipun Gupta
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).