* [dpdk-dev] [PATCH 00/18] DPAA PMD improvements
@ 2017-12-13 12:05 Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 01/18] net/dpaa: fix coverity reported issues Hemant Agrawal
` (19 more replies)
0 siblings, 20 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
This patch series adds various improvements and performance-related
optimizations for the DPAA PMD.
Ashish Jain (2):
net/dpaa: fix the mbuf packet type if zero
net/dpaa: set the correct frame size in device MTU
Hemant Agrawal (11):
net/dpaa: fix coverity reported issues
net/dpaa: fix FW version code
bus/dpaa: update platform soc value register routines
net/dpaa: add frame count based tail drop with CGR
bus/dpaa: add support to create dynamic HW portal
bus/dpaa: query queue frame count support
net/dpaa: add Rx queue count support
net/dpaa: add support for loopback API
app/testpmd: add support for loopback config for dpaa
bus/dpaa: add support for static queues
net/dpaa: integrate the support of push mode in PMD
Nipun Gupta (5):
bus/dpaa: optimize the qman HW stashing settings
bus/dpaa: optimize the endianness conversions
net/dpaa: change Tx HW budget to 7
net/dpaa: optimize the Tx burst
net/dpaa: optimize Rx path
app/test-pmd/Makefile | 4 +
app/test-pmd/cmdline.c | 7 +
doc/guides/nics/dpaa.rst | 11 ++
drivers/bus/dpaa/base/qbman/qman.c | 172 ++++++++++++++++++--
drivers/bus/dpaa/base/qbman/qman.h | 4 +-
drivers/bus/dpaa/base/qbman/qman_driver.c | 153 +++++++++++++++---
drivers/bus/dpaa/base/qbman/qman_priv.h | 6 +-
drivers/bus/dpaa/dpaa_bus.c | 43 ++++-
drivers/bus/dpaa/include/fsl_qman.h | 44 +++--
drivers/bus/dpaa/include/fsl_usd.h | 4 +
drivers/bus/dpaa/include/process.h | 11 +-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 18 +++
drivers/bus/dpaa/rte_dpaa_bus.h | 15 ++
drivers/net/dpaa/Makefile | 3 +
drivers/net/dpaa/dpaa_ethdev.c | 259 ++++++++++++++++++++++++++----
drivers/net/dpaa/dpaa_ethdev.h | 21 ++-
drivers/net/dpaa/dpaa_rxtx.c | 163 +++++++++++++------
drivers/net/dpaa/dpaa_rxtx.h | 7 +-
drivers/net/dpaa/rte_pmd_dpaa.h | 37 +++++
drivers/net/dpaa/rte_pmd_dpaa_version.map | 8 +
20 files changed, 837 insertions(+), 153 deletions(-)
create mode 100644 drivers/net/dpaa/rte_pmd_dpaa.h
--
2.7.4
^ permalink raw reply [flat|nested] 65+ messages in thread
* [dpdk-dev] [PATCH 01/18] net/dpaa: fix coverity reported issues
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2018-01-09 10:46 ` Ferruh Yigit
2017-12-13 12:05 ` [dpdk-dev] [PATCH 02/18] net/dpaa: fix the mbuf packet type if zero Hemant Agrawal
` (18 subsequent siblings)
19 siblings, 1 reply; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, stable
Fixes: 05ba55bc2b1a ("net/dpaa: add packet dump for debugging")
Fixes: 37f9b54bd3cf ("net/dpaa: support Tx and Rx queue setup")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 6 +++---
drivers/net/dpaa/dpaa_rxtx.c | 2 +-
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index cf5a2ec..3023302 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -723,7 +723,7 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
static int dpaa_rx_queue_init(struct qman_fq *fq,
uint32_t fqid)
{
- struct qm_mcc_initfq opts;
+ struct qm_mcc_initfq opts = {0};
int ret;
PMD_INIT_FUNC_TRACE();
@@ -769,7 +769,7 @@ static int dpaa_rx_queue_init(struct qman_fq *fq,
static int dpaa_tx_queue_init(struct qman_fq *fq,
struct fman_if *fman_intf)
{
- struct qm_mcc_initfq opts;
+ struct qm_mcc_initfq opts = {0};
int ret;
PMD_INIT_FUNC_TRACE();
@@ -800,7 +800,7 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
/* Initialise a DEBUG FQ ([rt]x_error, rx_default). */
static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
{
- struct qm_mcc_initfq opts;
+ struct qm_mcc_initfq opts = {0};
int ret;
PMD_INIT_FUNC_TRACE();
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 41e57f2..771e141 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -665,7 +665,7 @@ tx_on_external_pool(struct qman_fq *txq, struct rte_mbuf *mbuf,
return 1;
}
- DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, dpaa_intf->bp_info->bpid);
+ DPAA_MBUF_TO_CONTIG_FD(dmable_mbuf, fd_arr, dpaa_intf->bp_info->bpid);
return 0;
}
--
2.7.4
* [dpdk-dev] [PATCH 02/18] net/dpaa: fix the mbuf packet type if zero
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 01/18] net/dpaa: fix coverity reported issues Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 03/18] net/dpaa: fix FW version code Hemant Agrawal
` (17 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Ashish Jain, stable
From: Ashish Jain <ashish.jain@nxp.com>
Populate the mbuf packet_type field, which is required
for calculating the checksum while transmitting frames.
Fixes: 8cffdcbe85aa ("net/dpaa: support scattered Rx")
Cc: stable@dpdk.org
Signed-off-by: Ashish Jain <ashish.jain@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 771e141..c0cfec9 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -58,6 +58,7 @@
#include <rte_ip.h>
#include <rte_tcp.h>
#include <rte_udp.h>
+#include <rte_net.h>
#include "dpaa_ethdev.h"
#include "dpaa_rxtx.h"
@@ -504,6 +505,15 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
fd->opaque_addr = 0;
if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+ if (!mbuf->packet_type) {
+ struct rte_net_hdr_lens hdr_lens;
+
+ mbuf->packet_type = rte_net_get_ptype(mbuf, &hdr_lens,
+ RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK
+ | RTE_PTYPE_L4_MASK);
+ mbuf->l2_len = hdr_lens.l2_len;
+ mbuf->l3_len = hdr_lens.l3_len;
+ }
if (temp->data_off < DEFAULT_TX_ICEOF
+ sizeof(struct dpaa_eth_parse_results_t))
temp->data_off = DEFAULT_TX_ICEOF
@@ -611,6 +621,15 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
}
if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+ if (!mbuf->packet_type) {
+ struct rte_net_hdr_lens hdr_lens;
+
+ mbuf->packet_type = rte_net_get_ptype(mbuf, &hdr_lens,
+ RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK
+ | RTE_PTYPE_L4_MASK);
+ mbuf->l2_len = hdr_lens.l2_len;
+ mbuf->l3_len = hdr_lens.l3_len;
+ }
if (mbuf->data_off < (DEFAULT_TX_ICEOF +
sizeof(struct dpaa_eth_parse_results_t))) {
DPAA_DP_LOG(DEBUG, "Checksum offload Err: "
--
2.7.4
* [dpdk-dev] [PATCH 03/18] net/dpaa: fix FW version code
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 01/18] net/dpaa: fix coverity reported issues Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 02/18] net/dpaa: fix the mbuf packet type if zero Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 04/18] bus/dpaa: update platform soc value register routines Hemant Agrawal
` (16 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, stable
Fix the SoC id sysfs path and add the missing fclose() call.
Fixes: cf0fab1d2ca5 ("net/dpaa: support firmware version get API")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 14 +++++---------
drivers/net/dpaa/dpaa_ethdev.h | 2 +-
2 files changed, 6 insertions(+), 10 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 3023302..29678c5 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -212,19 +212,15 @@ dpaa_fw_version_get(struct rte_eth_dev *dev __rte_unused,
DPAA_PMD_ERR("Unable to open SoC device");
return -ENOTSUP; /* Not supported on this infra */
}
-
- ret = fscanf(svr_file, "svr:%x", &svr_ver);
- if (ret <= 0) {
+ if (fscanf(svr_file, "svr:%x", &svr_ver) <= 0)
DPAA_PMD_ERR("Unable to read SoC device");
- return -ENOTSUP; /* Not supported on this infra */
- }
- ret = snprintf(fw_version, fw_size,
- "svr:%x-fman-v%x",
- svr_ver,
- fman_ip_rev);
+ fclose(svr_file);
+ ret = snprintf(fw_version, fw_size, "SVR:%x-fman-v%x",
+ svr_ver, fman_ip_rev);
ret += 1; /* add the size of '\0' */
+
if (fw_size < (uint32_t)ret)
return ret;
else
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 5457d61..ec5ae13 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -46,7 +46,7 @@
/* DPAA SoC identifier; If this is not available, it can be concluded
* that board is non-DPAA. Single slot is currently supported.
*/
-#define DPAA_SOC_ID_FILE "sys/devices/soc0/soc_id"
+#define DPAA_SOC_ID_FILE "/sys/devices/soc0/soc_id"
#define DPAA_MBUF_HW_ANNOTATION 64
#define DPAA_FD_PTA_SIZE 64
--
2.7.4
* [dpdk-dev] [PATCH 04/18] bus/dpaa: update platform soc value register routines
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (2 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 03/18] net/dpaa: fix FW version code Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 05/18] net/dpaa: set the correct frame size in device MTU Hemant Agrawal
` (15 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
This patch updates the logic and exposes the SoC value
register so that it can be used by other modules as well.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/dpaa_bus.c | 12 ++++++++++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 8 ++++++++
drivers/bus/dpaa/rte_dpaa_bus.h | 11 +++++++++++
drivers/net/dpaa/dpaa_ethdev.c | 4 +++-
drivers/net/dpaa/dpaa_ethdev.h | 5 -----
5 files changed, 34 insertions(+), 6 deletions(-)
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 1cc8c89..f1bc62a 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -77,6 +77,8 @@ struct netcfg_info *dpaa_netcfg;
/* define a variable to hold the portal_key, once created.*/
pthread_key_t dpaa_portal_key;
+unsigned int dpaa_svr_family;
+
RTE_DEFINE_PER_LCORE(bool, _dpaa_io);
static inline void
@@ -443,6 +445,8 @@ rte_dpaa_bus_probe(void)
int ret = -1;
struct rte_dpaa_device *dev;
struct rte_dpaa_driver *drv;
+ FILE *svr_file = NULL;
+ unsigned int svr_ver;
BUS_INIT_FUNC_TRACE();
@@ -462,6 +466,14 @@ rte_dpaa_bus_probe(void)
break;
}
}
+
+ svr_file = fopen(DPAA_SOC_ID_FILE, "r");
+ if (svr_file) {
+ if (fscanf(svr_file, "svr:%x", &svr_ver) > 0)
+ dpaa_svr_family = svr_ver & SVR_MASK;
+ fclose(svr_file);
+ }
+
return 0;
}
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index fb9d532..eeeb458 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -64,3 +64,11 @@ DPDK_17.11 {
local: *;
};
+
+DPDK_18.02 {
+ global:
+
+ dpaa_svr_family;
+
+ local: *;
+} DPDK_17.11;
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index eafc944..40caf72 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -46,6 +46,17 @@
#define DEV_TO_DPAA_DEVICE(ptr) \
container_of(ptr, struct rte_dpaa_device, device)
+/* DPAA SoC identifier; If this is not available, it can be concluded
+ * that board is non-DPAA. Single slot is currently supported.
+ */
+#define DPAA_SOC_ID_FILE "/sys/devices/soc0/soc_id"
+
+#define SVR_LS1043A_FAMILY 0x87920000
+#define SVR_LS1046A_FAMILY 0x87070000
+#define SVR_MASK 0xffff0000
+
+extern unsigned int dpaa_svr_family;
+
struct rte_dpaa_device;
struct rte_dpaa_driver;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 29678c5..4ad9afc 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -212,7 +212,9 @@ dpaa_fw_version_get(struct rte_eth_dev *dev __rte_unused,
DPAA_PMD_ERR("Unable to open SoC device");
return -ENOTSUP; /* Not supported on this infra */
}
- if (fscanf(svr_file, "svr:%x", &svr_ver) <= 0)
+ if (fscanf(svr_file, "svr:%x", &svr_ver) > 0)
+ dpaa_svr_family = svr_ver & SVR_MASK;
+ else
DPAA_PMD_ERR("Unable to read SoC device");
fclose(svr_file);
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index ec5ae13..3f06d63 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -43,11 +43,6 @@
#include <of.h>
#include <netcfg.h>
-/* DPAA SoC identifier; If this is not available, it can be concluded
- * that board is non-DPAA. Single slot is currently supported.
- */
-#define DPAA_SOC_ID_FILE "/sys/devices/soc0/soc_id"
-
#define DPAA_MBUF_HW_ANNOTATION 64
#define DPAA_FD_PTA_SIZE 64
--
2.7.4
* [dpdk-dev] [PATCH 05/18] net/dpaa: set the correct frame size in device MTU
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (3 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 04/18] bus/dpaa: update platform soc value register routines Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2018-01-09 10:46 ` Ferruh Yigit
2017-12-13 12:05 ` [dpdk-dev] [PATCH 06/18] net/dpaa: add frame count based tail drop with CGR Hemant Agrawal
` (14 subsequent siblings)
19 siblings, 1 reply; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Ashish Jain
From: Ashish Jain <ashish.jain@nxp.com>
Set the correct frame size in the dpaa_dev_mtu_set
API call. Also set the correct maximum frame size in
hardware in dev_configure for jumbo frames.
Signed-off-by: Ashish Jain <ashish.jain@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 20 +++++++++++++-------
drivers/net/dpaa/dpaa_ethdev.h | 4 ++++
2 files changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4ad9afc..adcc219 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -111,19 +111,21 @@ static int
dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ uint32_t frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN
+ + VLAN_TAG_SIZE;
PMD_INIT_FUNC_TRACE();
- if (mtu < ETHER_MIN_MTU)
+ if ((mtu < ETHER_MIN_MTU) || (frame_size > DPAA_MAX_RX_PKT_LEN))
return -EINVAL;
- if (mtu > ETHER_MAX_LEN)
+ if (frame_size > ETHER_MAX_LEN)
dev->data->dev_conf.rxmode.jumbo_frame = 1;
else
dev->data->dev_conf.rxmode.jumbo_frame = 0;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
+ dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- fman_if_set_maxfrm(dpaa_intf->fif, mtu);
+ fman_if_set_maxfrm(dpaa_intf->fif, frame_size);
return 0;
}
@@ -131,15 +133,19 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
static int
dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
PMD_INIT_FUNC_TRACE();
if (dev->data->dev_conf.rxmode.jumbo_frame == 1) {
if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- DPAA_MAX_RX_PKT_LEN)
- return dpaa_mtu_set(dev,
+ DPAA_MAX_RX_PKT_LEN) {
+ fman_if_set_maxfrm(dpaa_intf->fif,
dev->data->dev_conf.rxmode.max_rx_pkt_len);
- else
+ return 0;
+ } else {
return -1;
+ }
}
return 0;
}
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 3f06d63..ef726d3 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -71,6 +71,10 @@
/*Maximum number of slots available in TX ring*/
#define MAX_TX_RING_SLOTS 8
+#ifndef VLAN_TAG_SIZE
+#define VLAN_TAG_SIZE 4 /** < Vlan Header Length */
+#endif
+
/* PCD frame queues */
#define DPAA_PCD_FQID_START 0x400
#define DPAA_PCD_FQID_MULTIPLIER 0x100
--
2.7.4
* [dpdk-dev] [PATCH 06/18] net/dpaa: add frame count based tail drop with CGR
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (4 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 05/18] net/dpaa: set the correct frame size in device MTU Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 07/18] bus/dpaa: optimize the qman HW stashing settings Hemant Agrawal
` (13 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
Replace the byte-based tail queue congestion support
with frame-count-based congestion groups, which map
easily to the number of Rx descriptors for a queue.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/rte_bus_dpaa_version.map | 5 ++
drivers/net/dpaa/dpaa_ethdev.c | 98 +++++++++++++++++++++++++++----
drivers/net/dpaa/dpaa_ethdev.h | 8 +--
3 files changed, 97 insertions(+), 14 deletions(-)
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index eeeb458..f412362 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -69,6 +69,11 @@ DPDK_18.02 {
global:
dpaa_svr_family;
+ qman_alloc_cgrid_range;
+ qman_create_cgr;
+ qman_delete_cgr;
+ qman_modify_cgr;
+ qman_release_cgrid_range;
local: *;
} DPDK_17.11;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index adcc219..6482998 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -73,6 +73,9 @@
/* Keep track of whether QMAN and BMAN have been globally initialized */
static int is_global_init;
+/* Per FQ Taildrop in frame count */
+static unsigned int td_threshold = CGR_RX_PERFQ_THRESH;
+
struct rte_dpaa_xstats_name_off {
char name[RTE_ETH_XSTATS_NAME_SIZE];
uint32_t offset;
@@ -447,12 +450,13 @@ static void dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
static
int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
- uint16_t nb_desc __rte_unused,
+ uint16_t nb_desc,
unsigned int socket_id __rte_unused,
const struct rte_eth_rxconf *rx_conf __rte_unused,
struct rte_mempool *mp)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
PMD_INIT_FUNC_TRACE();
@@ -488,7 +492,23 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->name, fd_offset,
fman_if_get_fdoff(dpaa_intf->fif));
}
- dev->data->rx_queues[queue_idx] = &dpaa_intf->rx_queues[queue_idx];
+
+ dev->data->rx_queues[queue_idx] = rxq;
+
+ /* configure the CGR size as per the desc size */
+ if (dpaa_intf->cgr_rx) {
+ struct qm_mcc_initcgr cgr_opts = {0};
+ int ret;
+
+ /* Enable tail drop with cgr on this queue */
+ qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres, nb_desc, 0);
+ ret = qman_modify_cgr(dpaa_intf->cgr_rx, 0, &cgr_opts);
+ if (ret) {
+ DPAA_PMD_WARN(
+ "rx taildrop modify fail on fqid %d (ret=%d)",
+ rxq->fqid, ret);
+ }
+ }
return 0;
}
@@ -724,11 +744,21 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
}
/* Initialise an Rx FQ */
-static int dpaa_rx_queue_init(struct qman_fq *fq,
+static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
uint32_t fqid)
{
struct qm_mcc_initfq opts = {0};
int ret;
+ u32 flags = 0;
+ struct qm_mcc_initcgr cgr_opts = {
+ .we_mask = QM_CGR_WE_CS_THRES |
+ QM_CGR_WE_CSTD_EN |
+ QM_CGR_WE_MODE,
+ .cgr = {
+ .cstd_en = QM_CGR_EN,
+ .mode = QMAN_CGR_MODE_FRAME
+ }
+ };
PMD_INIT_FUNC_TRACE();
@@ -758,12 +788,24 @@ static int dpaa_rx_queue_init(struct qman_fq *fq,
opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
opts.fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
- /*Enable tail drop */
- opts.we_mask = opts.we_mask | QM_INITFQ_WE_TDTHRESH;
- opts.fqd.fq_ctrl = opts.fqd.fq_ctrl | QM_FQCTRL_TDE;
- qm_fqd_taildrop_set(&opts.fqd.td, CONG_THRESHOLD_RX_Q, 1);
-
- ret = qman_init_fq(fq, 0, &opts);
+ if (cgr_rx) {
+ /* Enable tail drop with cgr on this queue */
+ qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres, td_threshold, 0);
+ cgr_rx->cb = NULL;
+ ret = qman_create_cgr(cgr_rx, QMAN_CGR_FLAG_USE_INIT,
+ &cgr_opts);
+ if (ret) {
+ DPAA_PMD_WARN(
+ "rx taildrop init fail on rx fqid %d (ret=%d)",
+ fqid, ret);
+ goto without_cgr;
+ }
+ opts.we_mask |= QM_INITFQ_WE_CGID;
+ opts.fqd.cgid = cgr_rx->cgrid;
+ opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+ }
+without_cgr:
+ ret = qman_init_fq(fq, flags, &opts);
if (ret)
DPAA_PMD_ERR("init rx fqid %d failed with ret: %d", fqid, ret);
return ret;
@@ -845,6 +887,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
struct fm_eth_port_cfg *cfg;
struct fman_if *fman_intf;
struct fman_if_bpool *bp, *tmp_bp;
+ uint32_t cgrid[DPAA_MAX_NUM_PCD_QUEUES];
PMD_INIT_FUNC_TRACE();
@@ -881,10 +924,31 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
dpaa_intf->rx_queues = rte_zmalloc(NULL,
sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
+
+ /* If congestion control is enabled globally*/
+ if (td_threshold) {
+ dpaa_intf->cgr_rx = rte_zmalloc(NULL,
+ sizeof(struct qman_cgr) * num_rx_fqs, MAX_CACHELINE);
+
+ ret = qman_alloc_cgrid_range(&cgrid[0], num_rx_fqs, 1, 0);
+ if (ret != num_rx_fqs) {
+ DPAA_PMD_WARN("insufficient CGRIDs available");
+ return -EINVAL;
+ }
+ } else {
+ dpaa_intf->cgr_rx = NULL;
+ }
+
for (loop = 0; loop < num_rx_fqs; loop++) {
fqid = DPAA_PCD_FQID_START + dpaa_intf->ifid *
DPAA_PCD_FQID_MULTIPLIER + loop;
- ret = dpaa_rx_queue_init(&dpaa_intf->rx_queues[loop], fqid);
+
+ if (dpaa_intf->cgr_rx)
+ dpaa_intf->cgr_rx[loop].cgrid = cgrid[loop];
+
+ ret = dpaa_rx_queue_init(&dpaa_intf->rx_queues[loop],
+ dpaa_intf->cgr_rx ? &dpaa_intf->cgr_rx[loop] : NULL,
+ fqid);
if (ret)
return ret;
dpaa_intf->rx_queues[loop].dpaa_intf = dpaa_intf;
@@ -939,6 +1003,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_ERR("Failed to allocate %d bytes needed to "
"store MAC addresses",
ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
+ rte_free(dpaa_intf->cgr_rx);
rte_free(dpaa_intf->rx_queues);
rte_free(dpaa_intf->tx_queues);
dpaa_intf->rx_queues = NULL;
@@ -977,6 +1042,7 @@ static int
dpaa_dev_uninit(struct rte_eth_dev *dev)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ int loop;
PMD_INIT_FUNC_TRACE();
@@ -994,6 +1060,18 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
if (dpaa_intf->fc_conf)
rte_free(dpaa_intf->fc_conf);
+ /* Release RX congestion Groups */
+ if (dpaa_intf->cgr_rx) {
+ for (loop = 0; loop < dpaa_intf->nb_rx_queues; loop++)
+ qman_delete_cgr(&dpaa_intf->cgr_rx[loop]);
+
+ qman_release_cgrid_range(dpaa_intf->cgr_rx[loop].cgrid,
+ dpaa_intf->nb_rx_queues);
+ }
+
+ rte_free(dpaa_intf->cgr_rx);
+ dpaa_intf->cgr_rx = NULL;
+
rte_free(dpaa_intf->rx_queues);
dpaa_intf->rx_queues = NULL;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index ef726d3..b26e411 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -60,10 +60,8 @@
#define DPAA_MIN_RX_BUF_SIZE 512
#define DPAA_MAX_RX_PKT_LEN 10240
-/* RX queue tail drop threshold
- * currently considering 32 KB packets.
- */
-#define CONG_THRESHOLD_RX_Q (32 * 1024)
+/* RX queue tail drop threshold (CGR Based) in frame count */
+#define CGR_RX_PERFQ_THRESH 256
/*max mac filter for memac(8) including primary mac addr*/
#define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
@@ -79,6 +77,7 @@
#define DPAA_PCD_FQID_START 0x400
#define DPAA_PCD_FQID_MULTIPLIER 0x100
#define DPAA_DEFAULT_NUM_PCD_QUEUES 1
+#define DPAA_MAX_NUM_PCD_QUEUES 32
#define DPAA_IF_TX_PRIORITY 3
#define DPAA_IF_RX_PRIORITY 4
@@ -128,6 +127,7 @@ struct dpaa_if {
char *name;
const struct fm_eth_port_cfg *cfg;
struct qman_fq *rx_queues;
+ struct qman_cgr *cgr_rx;
struct qman_fq *tx_queues;
struct qman_fq debug_queues[2];
uint16_t nb_rx_queues;
--
2.7.4
* [dpdk-dev] [PATCH 07/18] bus/dpaa: optimize the qman HW stashing settings
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (5 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 06/18] net/dpaa: add frame count based tail drop with CGR Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 08/18] bus/dpaa: optimize the endianness conversions Hemant Agrawal
` (12 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
The settings are tuned for performance.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 87fec60..77e4eeb 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -40,6 +40,7 @@
#include "qman.h"
#include <rte_branch_prediction.h>
+#include <rte_dpaa_bus.h>
/* Compilation constants */
#define DQRR_MAXFILL 15
@@ -532,7 +533,12 @@ struct qman_portal *qman_create_portal(
p = &portal->p;
- portal->use_eqcr_ci_stashing = ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+ if (dpaa_svr_family == SVR_LS1043A_FAMILY)
+ portal->use_eqcr_ci_stashing = 3;
+ else
+ portal->use_eqcr_ci_stashing =
+ ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+
/*
* prep the low-level portal struct with the mapped addresses from the
* config, everything that follows depends on it and "config" is more
@@ -545,7 +551,7 @@ struct qman_portal *qman_create_portal(
* and stash with high-than-DQRR priority.
*/
if (qm_eqcr_init(p, qm_eqcr_pvb,
- portal->use_eqcr_ci_stashing ? 3 : 0, 1)) {
+ portal->use_eqcr_ci_stashing, 1)) {
pr_err("Qman EQCR initialisation failed\n");
goto fail_eqcr;
}
--
2.7.4
* [dpdk-dev] [PATCH 08/18] bus/dpaa: optimize the endianness conversions
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (6 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 07/18] bus/dpaa: optimize the qman HW stashing settings Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 09/18] bus/dpaa: add support to create dynamic HW portal Hemant Agrawal
` (11 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 7 ++++---
drivers/bus/dpaa/include/fsl_qman.h | 2 ++
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 77e4eeb..400d920 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -935,7 +935,7 @@ static inline unsigned int __poll_portal_fast(struct qman_portal *p,
do {
qm_dqrr_pvb_update(&p->p);
dq = qm_dqrr_current(&p->p);
- if (!dq)
+ if (unlikely(!dq))
break;
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
/* If running on an LE system the fields of the
@@ -1194,6 +1194,7 @@ int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
}
spin_lock_init(&fq->fqlock);
fq->fqid = fqid;
+ fq->fqid_le = cpu_to_be32(fqid);
fq->flags = flags;
fq->state = qman_fq_state_oos;
fq->cgr_groupid = 0;
@@ -1981,7 +1982,7 @@ int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)
int qman_enqueue_multi(struct qman_fq *fq,
const struct qm_fd *fd,
- int frames_to_send)
+ int frames_to_send)
{
struct qman_portal *p = get_affine_portal();
struct qm_portal *portal = &p->p;
@@ -2003,7 +2004,7 @@ int qman_enqueue_multi(struct qman_fq *fq,
/* try to send as many frames as possible */
while (eqcr->available && frames_to_send--) {
- eq->fqid = cpu_to_be32(fq->fqid);
+ eq->fqid = fq->fqid_le;
#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
eq->tag = cpu_to_be32(fq->key);
#else
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index eedfd7e..ebcfa43 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1230,6 +1230,8 @@ struct qman_fq {
*/
spinlock_t fqlock;
u32 fqid;
+ u32 fqid_le;
+
/* DPDK Interface */
void *dpaa_intf;
--
2.7.4
* [dpdk-dev] [PATCH 09/18] bus/dpaa: add support to create dynamic HW portal
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (7 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 08/18] bus/dpaa: optimize the endianness conversions Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 10/18] net/dpaa: change Tx HW budget to 7 Hemant Agrawal
` (10 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
A HW portal is a processing context in DPAA. This patch allows
creation of a queue-specific HW portal context.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 69 ++++++++++++--
drivers/bus/dpaa/base/qbman/qman_driver.c | 153 +++++++++++++++++++++++++-----
drivers/bus/dpaa/base/qbman/qman_priv.h | 6 +-
drivers/bus/dpaa/dpaa_bus.c | 31 +++++-
drivers/bus/dpaa/include/fsl_qman.h | 25 ++---
drivers/bus/dpaa/include/fsl_usd.h | 4 +
drivers/bus/dpaa/include/process.h | 11 ++-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 2 +
drivers/bus/dpaa/rte_dpaa_bus.h | 4 +
9 files changed, 252 insertions(+), 53 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 400d920..6ae4bb3 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -650,11 +650,52 @@ struct qman_portal *qman_create_portal(
return NULL;
}
+#define MAX_GLOBAL_PORTLAS 8
+static struct qman_portal global_portals[MAX_GLOBAL_PORTLAS];
+static int global_portals_used[MAX_GLOBAL_PORTLAS];
+
+static struct qman_portal *
+qman_alloc_global_portal(void)
+{
+ unsigned int i;
+
+ for (i = 0; i < MAX_GLOBAL_PORTLAS; i++) {
+ if (global_portals_used[i] == 0) {
+ global_portals_used[i] = 1;
+ return &global_portals[i];
+ }
+ }
+ pr_err("No portal available (%x)\n", MAX_GLOBAL_PORTLAS);
+
+ return NULL;
+}
+
+static int
+qman_free_global_portal(struct qman_portal *portal)
+{
+ unsigned int i;
+
+ for (i = 0; i < MAX_GLOBAL_PORTLAS; i++) {
+ if (&global_portals[i] == portal) {
+ global_portals_used[i] = 0;
+ return 0;
+ }
+ }
+ return -1;
+}
+
struct qman_portal *qman_create_affine_portal(const struct qm_portal_config *c,
- const struct qman_cgrs *cgrs)
+ const struct qman_cgrs *cgrs,
+ int alloc)
{
struct qman_portal *res;
- struct qman_portal *portal = get_affine_portal();
+ struct qman_portal *portal;
+
+ if (alloc)
+ portal = qman_alloc_global_portal();
+ else
+ portal = get_affine_portal();
+
/* A criteria for calling this function (from qman_driver.c) is that
* we're already affine to the cpu and won't schedule onto another cpu.
*/
@@ -704,13 +745,18 @@ void qman_destroy_portal(struct qman_portal *qm)
spin_lock_destroy(&qm->cgr_lock);
}
-const struct qm_portal_config *qman_destroy_affine_portal(void)
+const struct qm_portal_config *
+qman_destroy_affine_portal(struct qman_portal *qp)
{
/* We don't want to redirect if we're a slave, use "raw" */
- struct qman_portal *qm = get_affine_portal();
+ struct qman_portal *qm;
const struct qm_portal_config *pcfg;
int cpu;
+ if (qp == NULL)
+ qm = get_affine_portal();
+ else
+ qm = qp;
pcfg = qm->config;
cpu = pcfg->cpu;
@@ -719,6 +765,9 @@ const struct qm_portal_config *qman_destroy_affine_portal(void)
spin_lock(&affine_mask_lock);
CPU_CLR(cpu, &affine_mask);
spin_unlock(&affine_mask_lock);
+
+ qman_free_global_portal(qm);
+
return pcfg;
}
@@ -1125,27 +1174,27 @@ void qman_start_dequeues(void)
qm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);
}
-void qman_static_dequeue_add(u32 pools)
+void qman_static_dequeue_add(u32 pools, struct qman_portal *qp)
{
- struct qman_portal *p = get_affine_portal();
+ struct qman_portal *p = qp ? qp : get_affine_portal();
pools &= p->config->pools;
p->sdqcr |= pools;
qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
}
-void qman_static_dequeue_del(u32 pools)
+void qman_static_dequeue_del(u32 pools, struct qman_portal *qp)
{
- struct qman_portal *p = get_affine_portal();
+ struct qman_portal *p = qp ? qp : get_affine_portal();
pools &= p->config->pools;
p->sdqcr &= ~pools;
qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
}
-u32 qman_static_dequeue_get(void)
+u32 qman_static_dequeue_get(struct qman_portal *qp)
{
- struct qman_portal *p = get_affine_portal();
+ struct qman_portal *p = qp ? qp : get_affine_portal();
return p->sdqcr;
}
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index 7a68896..f5d4b37 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -57,8 +57,8 @@ void *qman_ccsr_map;
/* The qman clock frequency */
u32 qman_clk;
-static __thread int fd = -1;
-static __thread struct qm_portal_config pcfg;
+static __thread int qmfd = -1;
+static __thread struct qm_portal_config qpcfg;
static __thread struct dpaa_ioctl_portal_map map = {
.type = dpaa_portal_qman
};
@@ -77,16 +77,16 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
error(0, ret, "pthread_getaffinity_np()");
return ret;
}
- pcfg.cpu = -1;
+ qpcfg.cpu = -1;
for (loop = 0; loop < CPU_SETSIZE; loop++)
if (CPU_ISSET(loop, &cpuset)) {
- if (pcfg.cpu != -1) {
+ if (qpcfg.cpu != -1) {
pr_err("Thread is not affine to 1 cpu\n");
return -EINVAL;
}
- pcfg.cpu = loop;
+ qpcfg.cpu = loop;
}
- if (pcfg.cpu == -1) {
+ if (qpcfg.cpu == -1) {
pr_err("Bug in getaffinity handling!\n");
return -EINVAL;
}
@@ -98,36 +98,36 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
error(0, ret, "process_portal_map()");
return ret;
}
- pcfg.channel = map.channel;
- pcfg.pools = map.pools;
- pcfg.index = map.index;
+ qpcfg.channel = map.channel;
+ qpcfg.pools = map.pools;
+ qpcfg.index = map.index;
/* Make the portal's cache-[enabled|inhibited] regions */
- pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
- pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+ qpcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+ qpcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
- fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
- if (fd == -1) {
+ qmfd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+ if (qmfd == -1) {
pr_err("QMan irq init failed\n");
process_portal_unmap(&map.addr);
return -EBUSY;
}
- pcfg.is_shared = is_shared;
- pcfg.node = NULL;
- pcfg.irq = fd;
+ qpcfg.is_shared = is_shared;
+ qpcfg.node = NULL;
+ qpcfg.irq = qmfd;
- portal = qman_create_affine_portal(&pcfg, NULL);
+ portal = qman_create_affine_portal(&qpcfg, NULL, 0);
if (!portal) {
pr_err("Qman portal initialisation failed (%d)\n",
- pcfg.cpu);
+ qpcfg.cpu);
process_portal_unmap(&map.addr);
return -EBUSY;
}
irq_map.type = dpaa_portal_qman;
irq_map.portal_cinh = map.addr.cinh;
- process_portal_irq_map(fd, &irq_map);
+ process_portal_irq_map(qmfd, &irq_map);
return 0;
}
@@ -136,10 +136,10 @@ static int fsl_qman_portal_finish(void)
__maybe_unused const struct qm_portal_config *cfg;
int ret;
- process_portal_irq_unmap(fd);
+ process_portal_irq_unmap(qmfd);
- cfg = qman_destroy_affine_portal();
- DPAA_BUG_ON(cfg != &pcfg);
+ cfg = qman_destroy_affine_portal(NULL);
+ DPAA_BUG_ON(cfg != &qpcfg);
ret = process_portal_unmap(&map.addr);
if (ret)
error(0, ret, "process_portal_unmap()");
@@ -161,14 +161,119 @@ int qman_thread_finish(void)
void qman_thread_irq(void)
{
- qbman_invoke_irq(pcfg.irq);
+ qbman_invoke_irq(qpcfg.irq);
/* Now we need to uninhibit interrupts. This is the only code outside
* the regular portal driver that manipulates any portal register, so
* rather than breaking that encapsulation I am simply hard-coding the
* offset to the inhibit register here.
*/
- out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+ out_be32(qpcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+struct qman_portal *fsl_qman_portal_create(void)
+{
+ cpu_set_t cpuset;
+ struct qman_portal *res;
+
+ struct qm_portal_config *q_pcfg;
+ int loop, ret;
+ struct dpaa_ioctl_irq_map irq_map;
+ struct dpaa_ioctl_portal_map q_map = {0};
+ int q_fd;
+
+ q_pcfg = kzalloc((sizeof(struct qm_portal_config)), 0);
+ if (!q_pcfg) {
+ error(0, -1, "q_pcfg kzalloc failed");
+ return NULL;
+ }
+
+ /* Verify the thread's cpu-affinity */
+ ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+ &cpuset);
+ if (ret) {
+ error(0, ret, "pthread_getaffinity_np()");
+ return NULL;
+ }
+
+ q_pcfg->cpu = -1;
+ for (loop = 0; loop < CPU_SETSIZE; loop++)
+ if (CPU_ISSET(loop, &cpuset)) {
+ if (q_pcfg->cpu != -1) {
+ pr_err("Thread is not affine to 1 cpu\n");
+ return NULL;
+ }
+ q_pcfg->cpu = loop;
+ }
+ if (q_pcfg->cpu == -1) {
+ pr_err("Bug in getaffinity handling!\n");
+ return NULL;
+ }
+
+ /* Allocate and map a qman portal */
+ q_map.type = dpaa_portal_qman;
+ q_map.index = QBMAN_ANY_PORTAL_IDX;
+ ret = process_portal_map(&q_map);
+ if (ret) {
+ error(0, ret, "process_portal_map()");
+ return NULL;
+ }
+ q_pcfg->channel = q_map.channel;
+ q_pcfg->pools = q_map.pools;
+ q_pcfg->index = q_map.index;
+
+ /* Make the portal's cache-[enabled|inhibited] regions */
+ q_pcfg->addr_virt[DPAA_PORTAL_CE] = q_map.addr.cena;
+ q_pcfg->addr_virt[DPAA_PORTAL_CI] = q_map.addr.cinh;
+
+ q_fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+ if (q_fd == -1) {
+ pr_err("QMan irq init failed\n");
+ goto err1;
+ }
+
+ q_pcfg->irq = q_fd;
+
+ res = qman_create_affine_portal(q_pcfg, NULL, true);
+ if (!res) {
+ pr_err("Qman portal initialisation failed (%d)\n",
+ q_pcfg->cpu);
+ goto err2;
+ }
+
+ irq_map.type = dpaa_portal_qman;
+ irq_map.portal_cinh = q_map.addr.cinh;
+ process_portal_irq_map(q_fd, &irq_map);
+
+ return res;
+err2:
+ close(q_fd);
+err1:
+ process_portal_unmap(&q_map.addr);
+ return NULL;
+}
+
+int fsl_qman_portal_destroy(struct qman_portal *qp)
+{
+ const struct qm_portal_config *cfg;
+ struct dpaa_portal_map addr;
+ int ret;
+
+ cfg = qman_destroy_affine_portal(qp);
+ kfree(qp);
+
+ process_portal_irq_unmap(cfg->irq);
+
+ addr.cena = cfg->addr_virt[DPAA_PORTAL_CE];
+ addr.cinh = cfg->addr_virt[DPAA_PORTAL_CI];
+
+ ret = process_portal_unmap(&addr);
+ if (ret)
+ pr_err("process_portal_unmap() (%d)\n", ret);
+
+ kfree((void *)cfg);
+
+ return ret;
}
int qman_global_init(void)
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index 3e1d7f9..e78d90b 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -179,8 +179,10 @@ int qm_get_wpm(int *wpm);
struct qman_portal *qman_create_affine_portal(
const struct qm_portal_config *config,
- const struct qman_cgrs *cgrs);
-const struct qm_portal_config *qman_destroy_affine_portal(void);
+ const struct qman_cgrs *cgrs,
+ int alloc);
+const struct qm_portal_config *
+qman_destroy_affine_portal(struct qman_portal *q);
struct qm_portal_config *qm_get_unused_portal(void);
struct qm_portal_config *qm_get_unused_portal_idx(uint32_t idx);
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index f1bc62a..8d74643 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -290,8 +290,7 @@ _dpaa_portal_init(void *arg)
* rte_dpaa_portal_init - Wrapper over _dpaa_portal_init with thread level check
* XXX Complete this
*/
-int
-rte_dpaa_portal_init(void *arg)
+int rte_dpaa_portal_init(void *arg)
{
if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
return _dpaa_portal_init(arg);
@@ -299,6 +298,34 @@ rte_dpaa_portal_init(void *arg)
return 0;
}
+int
+rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq)
+{
+ /* Affine the newly created portal with the channel */
+ u32 sdqcr;
+ struct qman_portal *qp;
+
+ if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
+ _dpaa_portal_init(arg);
+
+ /* Initialise qman specific portals */
+ qp = fsl_qman_portal_create();
+ if (!qp) {
+ DPAA_BUS_LOG(ERR, "Unable to alloc fq portal");
+ return -1;
+ }
+ fq->qp = qp;
+ sdqcr = QM_SDQCR_CHANNELS_POOL_CONV(fq->ch_id);
+ qman_static_dequeue_add(sdqcr, qp);
+
+ return 0;
+}
+
+int rte_dpaa_portal_fq_close(struct qman_fq *fq)
+{
+ return fsl_qman_portal_destroy(fq->qp);
+}
+
void
dpaa_portal_finish(void *arg)
{
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index ebcfa43..c5aef2d 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1223,21 +1223,24 @@ struct qman_fq_cb {
struct qman_fq {
/* Caller of qman_create_fq() provides these demux callbacks */
struct qman_fq_cb cb;
- /*
- * These are internal to the driver, don't touch. In particular, they
- * may change, be removed, or extended (so you shouldn't rely on
- * sizeof(qman_fq) being a constant).
- */
- spinlock_t fqlock;
- u32 fqid;
+
u32 fqid_le;
+ u16 ch_id;
+ u8 cgr_groupid;
+ u8 is_static;
/* DPDK Interface */
void *dpaa_intf;
+ /* affined portal in case of static queue */
+ struct qman_portal *qp;
+
volatile unsigned long flags;
+
enum qman_fq_state state;
- int cgr_groupid;
+ u32 fqid;
+ spinlock_t fqlock;
+
struct rb_node node;
#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
u32 key;
@@ -1416,7 +1419,7 @@ void qman_start_dequeues(void);
* (SDQCR). The requested pools are limited to those the portal has dequeue
* access to.
*/
-void qman_static_dequeue_add(u32 pools);
+void qman_static_dequeue_add(u32 pools, struct qman_portal *qm);
/**
* qman_static_dequeue_del - Remove pool channels from the portal SDQCR
@@ -1426,7 +1429,7 @@ void qman_static_dequeue_add(u32 pools);
* register (SDQCR). The requested pools are limited to those the portal has
* dequeue access to.
*/
-void qman_static_dequeue_del(u32 pools);
+void qman_static_dequeue_del(u32 pools, struct qman_portal *qp);
/**
* qman_static_dequeue_get - return the portal's current SDQCR
@@ -1435,7 +1438,7 @@ void qman_static_dequeue_del(u32 pools);
* entire register is returned, so if only the currently-enabled pool channels
* are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.
*/
-u32 qman_static_dequeue_get(void);
+u32 qman_static_dequeue_get(struct qman_portal *qp);
/**
* qman_dca - Perform a Discrete Consumption Acknowledgment
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index a3243af..038a89d 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -100,6 +100,10 @@ void bman_thread_irq(void);
int qman_global_init(void);
int bman_global_init(void);
+/* Direct portal create and destroy */
+struct qman_portal *fsl_qman_portal_create(void);
+int fsl_qman_portal_destroy(struct qman_portal *qp);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index 989ddcd..352e949 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -72,6 +72,11 @@ enum dpaa_portal_type {
dpaa_portal_bman,
};
+struct dpaa_portal_map {
+ void *cinh;
+ void *cena;
+};
+
struct dpaa_ioctl_portal_map {
/* Input parameter, is a qman or bman portal required. */
enum dpaa_portal_type type;
@@ -83,10 +88,8 @@ struct dpaa_ioctl_portal_map {
/* Return value if the map succeeds, this gives the mapped
* cache-inhibited (cinh) and cache-enabled (cena) addresses.
*/
- struct dpaa_portal_map {
- void *cinh;
- void *cena;
- } addr;
+ struct dpaa_portal_map addr;
+
/* Qman-specific return values */
u16 channel;
uint32_t pools;
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index f412362..4e3afda 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -74,6 +74,8 @@ DPDK_18.02 {
qman_delete_cgr;
qman_modify_cgr;
qman_release_cgrid_range;
+ rte_dpaa_portal_fq_close;
+ rte_dpaa_portal_fq_init;
local: *;
} DPDK_17.11;
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 40caf72..b0f7d48 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -162,6 +162,10 @@ void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
*/
int rte_dpaa_portal_init(void *arg);
+int rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq);
+
+int rte_dpaa_portal_fq_close(struct qman_fq *fq);
+
/**
* Cleanup a DPAA Portal
*/
--
2.7.4
^ permalink raw reply [flat|nested] 65+ messages in thread
* [dpdk-dev] [PATCH 10/18] net/dpaa: change Tx HW budget to 7
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (8 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 09/18] bus/dpaa: add support to create dynamic HW portal Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 11/18] net/dpaa: optimize the Tx burst Hemant Agrawal
` (9 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Change the Tx HW budget to 7 to best sync with the hardware.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.h | 2 +-
drivers/net/dpaa/dpaa_rxtx.c | 5 +++--
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b26e411..95d745e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -67,7 +67,7 @@
#define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
/*Maximum number of slots available in TX ring*/
-#define MAX_TX_RING_SLOTS 8
+#define DPAA_TX_BURST_SIZE 7
#ifndef VLAN_TAG_SIZE
#define VLAN_TAG_SIZE 4 /** < Vlan Header Length */
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index c0cfec9..1b0ca9a 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -695,7 +695,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct rte_mbuf *mbuf, *mi = NULL;
struct rte_mempool *mp;
struct dpaa_bp_info *bp_info;
- struct qm_fd fd_arr[MAX_TX_RING_SLOTS];
+ struct qm_fd fd_arr[DPAA_TX_BURST_SIZE];
uint32_t frames_to_send, loop, i = 0;
uint16_t state;
int ret;
@@ -709,7 +709,8 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_DP_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
while (nb_bufs) {
- frames_to_send = (nb_bufs >> 3) ? MAX_TX_RING_SLOTS : nb_bufs;
+ frames_to_send = (nb_bufs > DPAA_TX_BURST_SIZE) ?
+ DPAA_TX_BURST_SIZE : nb_bufs;
for (loop = 0; loop < frames_to_send; loop++, i++) {
mbuf = bufs[i];
if (RTE_MBUF_DIRECT(mbuf)) {
--
2.7.4
* [dpdk-dev] [PATCH 11/18] net/dpaa: optimize the Tx burst
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (9 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 10/18] net/dpaa: change Tx HW budget to 7 Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 12/18] net/dpaa: optimize Rx path Hemant Agrawal
` (8 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Optimize the Tx burst for the best (direct, contiguous mbuf) case.
Factor the Tx checksum offload handling into a helper function so it
can be reused in multiple code paths.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 73 ++++++++++++++++++++++++++++----------------
1 file changed, 46 insertions(+), 27 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 1b0ca9a..33cc412 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -298,6 +298,30 @@ static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
}
+static inline void
+dpaa_unsegmented_checksum(struct rte_mbuf *mbuf, struct qm_fd *fd_arr)
+{
+ if (!mbuf->packet_type) {
+ struct rte_net_hdr_lens hdr_lens;
+
+ mbuf->packet_type = rte_net_get_ptype(mbuf, &hdr_lens,
+ RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK
+ | RTE_PTYPE_L4_MASK);
+ mbuf->l2_len = hdr_lens.l2_len;
+ mbuf->l3_len = hdr_lens.l3_len;
+ }
+ if (mbuf->data_off < (DEFAULT_TX_ICEOF +
+ sizeof(struct dpaa_eth_parse_results_t))) {
+ DPAA_DP_LOG(DEBUG, "Checksum offload Err: "
+ "Not enough Headroom "
+ "space for correct Checksum offload."
+ "So Calculating checksum in Software.");
+ dpaa_checksum(mbuf);
+ } else {
+ dpaa_checksum_offload(mbuf, fd_arr, mbuf->buf_addr);
+ }
+}
+
struct rte_mbuf *
dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
{
@@ -620,27 +644,8 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
rte_pktmbuf_free(mbuf);
}
- if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
- if (!mbuf->packet_type) {
- struct rte_net_hdr_lens hdr_lens;
-
- mbuf->packet_type = rte_net_get_ptype(mbuf, &hdr_lens,
- RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK
- | RTE_PTYPE_L4_MASK);
- mbuf->l2_len = hdr_lens.l2_len;
- mbuf->l3_len = hdr_lens.l3_len;
- }
- if (mbuf->data_off < (DEFAULT_TX_ICEOF +
- sizeof(struct dpaa_eth_parse_results_t))) {
- DPAA_DP_LOG(DEBUG, "Checksum offload Err: "
- "Not enough Headroom "
- "space for correct Checksum offload."
- "So Calculating checksum in Software.");
- dpaa_checksum(mbuf);
- } else {
- dpaa_checksum_offload(mbuf, fd_arr, mbuf->buf_addr);
- }
- }
+ if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK)
+ dpaa_unsegmented_checksum(mbuf, fd_arr);
}
/* Handle all mbufs on dpaa BMAN managed pool */
@@ -696,7 +701,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct rte_mempool *mp;
struct dpaa_bp_info *bp_info;
struct qm_fd fd_arr[DPAA_TX_BURST_SIZE];
- uint32_t frames_to_send, loop, i = 0;
+ uint32_t frames_to_send, loop, sent = 0;
uint16_t state;
int ret;
@@ -711,10 +716,23 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
while (nb_bufs) {
frames_to_send = (nb_bufs > DPAA_TX_BURST_SIZE) ?
DPAA_TX_BURST_SIZE : nb_bufs;
- for (loop = 0; loop < frames_to_send; loop++, i++) {
- mbuf = bufs[i];
- if (RTE_MBUF_DIRECT(mbuf)) {
+ for (loop = 0; loop < frames_to_send; loop++) {
+ mbuf = *(bufs++);
+ if (likely(RTE_MBUF_DIRECT(mbuf))) {
mp = mbuf->pool;
+ bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+ if (likely(mp->ops_index ==
+ bp_info->dpaa_ops_index &&
+ mbuf->nb_segs == 1 &&
+ rte_mbuf_refcnt_read(mbuf) == 1)) {
+ DPAA_MBUF_TO_CONTIG_FD(mbuf,
+ &fd_arr[loop], bp_info->bpid);
+ if (mbuf->ol_flags &
+ DPAA_TX_CKSUM_OFFLOAD_MASK)
+ dpaa_unsegmented_checksum(mbuf,
+ &fd_arr[loop]);
+ continue;
+ }
} else {
mi = rte_mbuf_from_indirect(mbuf);
mp = mi->pool;
@@ -755,11 +773,12 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
frames_to_send - loop);
}
nb_bufs -= frames_to_send;
+ sent += frames_to_send;
}
- DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", i, q);
+ DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q);
- return i;
+ return sent;
}
uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
--
2.7.4
* [dpdk-dev] [PATCH 12/18] net/dpaa: optimize Rx path
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (10 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 11/18] net/dpaa: optimize the Tx burst Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 13/18] bus/dpaa: query queue frame count support Hemant Agrawal
` (7 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 48 ++++++++++++++++++++------------------------
drivers/net/dpaa/dpaa_rxtx.h | 2 +-
2 files changed, 23 insertions(+), 27 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 33cc412..2609953 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -123,12 +123,6 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
DPAA_DP_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
switch (prs) {
- case DPAA_PKT_TYPE_NONE:
- m->packet_type = 0;
- break;
- case DPAA_PKT_TYPE_ETHER:
- m->packet_type = RTE_PTYPE_L2_ETHER;
- break;
case DPAA_PKT_TYPE_IPV4:
m->packet_type = RTE_PTYPE_L2_ETHER |
RTE_PTYPE_L3_IPV4;
@@ -137,6 +131,9 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
m->packet_type = RTE_PTYPE_L2_ETHER |
RTE_PTYPE_L3_IPV6;
break;
+ case DPAA_PKT_TYPE_ETHER:
+ m->packet_type = RTE_PTYPE_L2_ETHER;
+ break;
case DPAA_PKT_TYPE_IPV4_FRAG:
case DPAA_PKT_TYPE_IPV4_FRAG_UDP:
case DPAA_PKT_TYPE_IPV4_FRAG_TCP:
@@ -199,6 +196,9 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
m->packet_type = RTE_PTYPE_L2_ETHER |
RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP;
break;
+ case DPAA_PKT_TYPE_NONE:
+ m->packet_type = 0;
+ break;
/* More switch cases can be added */
default:
dpaa_slow_parsing(m, prs);
@@ -209,12 +209,11 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
<< DPAA_PKT_L3_LEN_SHIFT;
/* Set the hash values */
- m->hash.rss = (uint32_t)(rte_be_to_cpu_64(annot->hash));
- m->ol_flags = PKT_RX_RSS_HASH;
+ m->hash.rss = (uint32_t)(annot->hash);
/* All packets with Bad checksum are dropped by interface (and
* corresponding notification issued to RX error queues).
*/
- m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+ m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_IP_CKSUM_GOOD;
/* Check if Vlan is present */
if (prs & DPAA_PARSE_VLAN_MASK)
@@ -323,7 +322,7 @@ dpaa_unsegmented_checksum(struct rte_mbuf *mbuf, struct qm_fd *fd_arr)
}
struct rte_mbuf *
-dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
+dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
{
struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
@@ -381,34 +380,31 @@ dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
return first_seg;
}
-static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
- uint32_t ifid)
+static inline struct rte_mbuf *
+dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
{
- struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
struct rte_mbuf *mbuf;
- void *ptr;
+ struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+ void *ptr = rte_dpaa_mem_ptov(qm_fd_addr(fd));
uint8_t format =
(fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
- uint16_t offset =
- (fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
- uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
+ uint16_t offset;
+ uint32_t length;
DPAA_DP_LOG(DEBUG, " FD--->MBUF");
if (unlikely(format == qm_fd_sg))
return dpaa_eth_sg_to_mbuf(fd, ifid);
+ rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF));
+
+ offset = (fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
+ length = fd->opaque & DPAA_FD_LENGTH_MASK;
+
/* Ignoring case when format != qm_fd_contig */
dpaa_display_frame(fd);
- ptr = rte_dpaa_mem_ptov(fd->addr);
- /* Ignoring case when ptr would be NULL. That is only possible incase
- * of a corrupted packet
- */
mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
- /* Prefetch the Parse results and packet data to L1 */
- rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF));
- rte_prefetch0((void *)((uint8_t *)ptr + offset));
mbuf->data_off = offset;
mbuf->data_len = length;
@@ -488,11 +484,11 @@ static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
if (!dpaa_mbuf)
return NULL;
- memcpy((uint8_t *)(dpaa_mbuf->buf_addr) + mbuf->data_off, (void *)
+ memcpy((uint8_t *)(dpaa_mbuf->buf_addr) + RTE_PKTMBUF_HEADROOM, (void *)
((uint8_t *)(mbuf->buf_addr) + mbuf->data_off), mbuf->pkt_len);
/* Copy only the required fields */
- dpaa_mbuf->data_off = mbuf->data_off;
+ dpaa_mbuf->data_off = RTE_PKTMBUF_HEADROOM;
dpaa_mbuf->pkt_len = mbuf->pkt_len;
dpaa_mbuf->ol_flags = mbuf->ol_flags;
dpaa_mbuf->packet_type = mbuf->packet_type;
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 2ffc4ff..b434b6d 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -288,7 +288,7 @@ uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
struct rte_mbuf **bufs __rte_unused,
uint16_t nb_bufs __rte_unused);
-struct rte_mbuf *dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid);
+struct rte_mbuf *dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid);
int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
struct qm_fd *fd,
--
2.7.4
* [dpdk-dev] [PATCH 13/18] bus/dpaa: query queue frame count support
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (11 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 12/18] net/dpaa: optimize Rx path Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 14/18] net/dpaa: add Rx queue " Hemant Agrawal
` (6 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 22 ++++++++++++++++++++++
drivers/bus/dpaa/include/fsl_qman.h | 7 +++++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 1 +
3 files changed, 30 insertions(+)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 6ae4bb3..b2f82a3 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -1750,6 +1750,28 @@ int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np)
return 0;
}
+int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt)
+{
+ struct qm_mc_command *mcc;
+ struct qm_mc_result *mcr;
+ struct qman_portal *p = get_affine_portal();
+
+ mcc = qm_mc_start(&p->p);
+ mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+ qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+ while (!(mcr = qm_mc_result(&p->p)))
+ cpu_relax();
+ DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+
+ if (mcr->result == QM_MCR_RESULT_OK)
+ *frm_cnt = be24_to_cpu(mcr->queryfq_np.frm_cnt);
+ else if (mcr->result == QM_MCR_RESULT_ERR_FQID)
+ return -ERANGE;
+ else if (mcr->result != QM_MCR_RESULT_OK)
+ return -EIO;
+ return 0;
+}
+
int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq)
{
struct qm_mc_command *mcc;
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index c5aef2d..9090b63 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1649,6 +1649,13 @@ int qman_query_fq_has_pkts(struct qman_fq *fq);
int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
/**
+ * qman_query_fq_frm_cnt - Queries fq frame count
+ * @fq: the frame queue object to be queried
+ * @frm_cnt: number of frames in the queue
+ */
+int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt);
+
+/**
* qman_query_wq - Queries work queue lengths
* @query_dedicated: If non-zero, query length of WQs in the channel dedicated
* to this software portal. Otherwise, query length of WQs in a
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 4e3afda..212c75f 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -73,6 +73,7 @@ DPDK_18.02 {
qman_create_cgr;
qman_delete_cgr;
qman_modify_cgr;
+ qman_query_fq_frm_cnt;
qman_release_cgrid_range;
rte_dpaa_portal_fq_close;
rte_dpaa_portal_fq_init;
--
2.7.4
* [dpdk-dev] [PATCH 14/18] net/dpaa: add Rx queue count support
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (12 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 13/18] bus/dpaa: query queue frame count support Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 15/18] net/dpaa: add support for loopback API Hemant Agrawal
` (5 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 6482998..53b8c87 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -539,6 +539,22 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
PMD_INIT_FUNC_TRACE();
}
+static uint32_t
+dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+ u32 frm_cnt = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
+ RTE_LOG(DEBUG, PMD, "RX frame count for q(%d) is %u\n",
+ rx_queue_id, frm_cnt);
+ }
+ return frm_cnt;
+}
+
static int dpaa_link_down(struct rte_eth_dev *dev)
{
PMD_INIT_FUNC_TRACE();
@@ -690,6 +706,7 @@ static struct eth_dev_ops dpaa_devops = {
.tx_queue_setup = dpaa_eth_tx_queue_setup,
.rx_queue_release = dpaa_eth_rx_queue_release,
.tx_queue_release = dpaa_eth_tx_queue_release,
+ .rx_queue_count = dpaa_dev_rx_queue_count,
.flow_ctrl_get = dpaa_flow_ctrl_get,
.flow_ctrl_set = dpaa_flow_ctrl_set,
--
2.7.4
* [dpdk-dev] [PATCH 15/18] net/dpaa: add support for loopback API
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (13 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 14/18] net/dpaa: add Rx queue " Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2018-01-09 10:46 ` Ferruh Yigit
2017-12-13 12:05 ` [dpdk-dev] [PATCH 16/18] app/testpmd: add support for loopback config for dpaa Hemant Agrawal
` (4 subsequent siblings)
19 siblings, 1 reply; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/Makefile | 3 +++
drivers/net/dpaa/dpaa_ethdev.c | 42 +++++++++++++++++++++++++++++++
drivers/net/dpaa/rte_pmd_dpaa.h | 37 +++++++++++++++++++++++++++
drivers/net/dpaa/rte_pmd_dpaa_version.map | 8 ++++++
4 files changed, 90 insertions(+)
create mode 100644 drivers/net/dpaa/rte_pmd_dpaa.h
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index 171686e..a99d1ee 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -60,4 +60,7 @@ LDLIBS += -lrte_mempool_dpaa
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs
+# install this header file
+SYMLINK-$(CONFIG_RTE_LIBRTE_DPAA_PMD)-include := rte_pmd_dpaa.h
+
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 53b8c87..fcba929 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -64,6 +64,7 @@
#include <dpaa_ethdev.h>
#include <dpaa_rxtx.h>
+#include <rte_pmd_dpaa.h>
#include <fsl_usd.h>
#include <fsl_qman.h>
@@ -110,6 +111,8 @@ static const struct rte_dpaa_xstats_name_off dpaa_xstats_strings[] = {
offsetof(struct dpaa_if_stats, tund)},
};
+static struct rte_dpaa_driver rte_dpaa_pmd;
+
static int
dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
@@ -733,6 +736,45 @@ static struct eth_dev_ops dpaa_devops = {
.fw_version_get = dpaa_fw_version_get,
};
+static bool
+is_device_supported(struct rte_eth_dev *dev, struct rte_dpaa_driver *drv)
+{
+ if (strcmp(dev->device->driver->name,
+ drv->driver.name))
+ return false;
+
+ return true;
+}
+
+static bool
+is_dpaa_supported(struct rte_eth_dev *dev)
+{
+ return is_device_supported(dev, &rte_dpaa_pmd);
+}
+
+int
+rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on)
+{
+ struct rte_eth_dev *dev;
+ struct dpaa_if *dpaa_intf;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV);
+
+ dev = &rte_eth_devices[port];
+
+ if (!is_dpaa_supported(dev))
+ return -ENOTSUP;
+
+ dpaa_intf = dev->data->dev_private;
+
+ if (on)
+ fman_if_loopback_enable(dpaa_intf->fif);
+ else
+ fman_if_loopback_disable(dpaa_intf->fif);
+
+ return 0;
+}
+
static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
{
struct rte_eth_fc_conf *fc_conf;
diff --git a/drivers/net/dpaa/rte_pmd_dpaa.h b/drivers/net/dpaa/rte_pmd_dpaa.h
new file mode 100644
index 0000000..4464dd4
--- /dev/null
+++ b/drivers/net/dpaa/rte_pmd_dpaa.h
@@ -0,0 +1,37 @@
+/*-
+ * Copyright 2017 NXP.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#ifndef _PMD_DPAA_H_
+#define _PMD_DPAA_H_
+
+/**
+ * @file rte_pmd_dpaa.h
+ *
+ * dpaa PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <rte_ethdev.h>
+
+/**
+ * Enable/Disable TX loopback
+ *
+ * @param port
+ * The port identifier of the Ethernet device.
+ * @param on
+ * 1 - Enable TX loopback.
+ * 0 - Disable TX loopback.
+ * @return
+ * - (0) if successful.
+ * - (-ENODEV) if *port* invalid.
+ * - (-EINVAL) if bad parameter.
+ */
+int rte_pmd_dpaa_set_tx_loopback(uint8_t port,
+ uint8_t on);
+
+#endif /* _PMD_DPAA_H_ */
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
index a70bd19..d76acbd 100644
--- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -2,3 +2,11 @@ DPDK_17.11 {
local: *;
};
+
+DPDK_18.02 {
+ global:
+
+ rte_pmd_dpaa_set_tx_loopback;
+
+ local: *;
+} DPDK_17.11;
--
2.7.4
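The driver check introduced above reduces to a comparison of driver names. As a standalone sketch, with mock structs standing in for the `rte_eth_dev`/`rte_dpaa_driver` types (not the DPDK definitions):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Mock types modeling only the fields the check touches. */
struct mock_driver { const char *name; };
struct mock_device { const struct mock_driver *driver; };

/* An ethdev belongs to a PMD iff its device's driver name matches
 * the PMD driver's name - same logic as is_device_supported(). */
static bool is_device_supported(const struct mock_device *dev,
				const struct mock_driver *drv)
{
	return strcmp(dev->driver->name, drv->name) == 0;
}
```

This is why `rte_pmd_dpaa_set_tx_loopback()` can safely return `-ENOTSUP` for ports bound to other PMDs before touching any DPAA-specific state.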
* [dpdk-dev] [PATCH 16/18] app/testpmd: add support for loopback config for dpaa
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (14 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 15/18] net/dpaa: add support for loopback API Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 17/18] bus/dpaa: add support for static queues Hemant Agrawal
` (3 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
app/test-pmd/Makefile | 4 ++++
app/test-pmd/cmdline.c | 7 +++++++
2 files changed, 11 insertions(+)
diff --git a/app/test-pmd/Makefile b/app/test-pmd/Makefile
index d21308f..f60449b 100644
--- a/app/test-pmd/Makefile
+++ b/app/test-pmd/Makefile
@@ -71,6 +71,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_PMD_BOND),y)
LDLIBS += -lrte_pmd_bond
endif
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
+LDLIBS += -lrte_pmd_dpaa
+endif
+
ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_PMD),y)
LDLIBS += -lrte_pmd_ixgbe
endif
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index f71d963..32096aa 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -89,6 +89,9 @@
#include <rte_eth_bond.h>
#include <rte_eth_bond_8023ad.h>
#endif
+#ifdef RTE_LIBRTE_DPAA_PMD
+#include <rte_pmd_dpaa.h>
+#endif
#ifdef RTE_LIBRTE_IXGBE_PMD
#include <rte_pmd_ixgbe.h>
#endif
@@ -12620,6 +12623,10 @@ cmd_set_tx_loopback_parsed(
if (ret == -ENOTSUP)
ret = rte_pmd_bnxt_set_tx_loopback(res->port_id, is_on);
#endif
+#ifdef RTE_LIBRTE_DPAA_PMD
+ if (ret == -ENOTSUP)
+ ret = rte_pmd_dpaa_set_tx_loopback(res->port_id, is_on);
+#endif
switch (ret) {
case 0:
--
2.7.4
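With patches 15 and 16 applied, loopback can be driven from testpmd's existing command, which now falls through to the DPAA callback when the other PMDs return `-ENOTSUP` (port 0 below is just an example):

```
testpmd> set tx loopback 0 on
testpmd> set tx loopback 0 off
```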
* [dpdk-dev] [PATCH 17/18] bus/dpaa: add support for static queues
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (15 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 16/18] app/testpmd: add support for loopback config for dpaa Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2018-01-09 10:46 ` Ferruh Yigit
2017-12-13 12:05 ` [dpdk-dev] [PATCH 18/18] net/dpaa: integrate the support of push mode in PMD Hemant Agrawal
` (2 subsequent siblings)
19 siblings, 1 reply; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Sunil Kumar Kori
DPAA hardware support two kinds of queues:
1. Pull mode queue - where one needs to regularly pull the packets.
2. Push mode queue - where the hw pushes the packet to queue. These are
high performance queues, but limitd in number.
This patch add the driver support for push m de queues.
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 64 +++++++++++++++++++++++++++++++
drivers/bus/dpaa/base/qbman/qman.h | 4 +-
drivers/bus/dpaa/include/fsl_qman.h | 10 +++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 2 +
4 files changed, 78 insertions(+), 2 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index b2f82a3..42d509d 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -1080,6 +1080,70 @@ u16 qman_affine_channel(int cpu)
return affine_channels[cpu];
}
+unsigned int qman_portal_poll_rx(unsigned int poll_limit,
+ void **bufs,
+ struct qman_portal *p)
+{
+ const struct qm_dqrr_entry *dq;
+ struct qman_fq *fq;
+ enum qman_cb_dqrr_result res;
+ unsigned int limit = 0;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ struct qm_dqrr_entry *shadow;
+#endif
+ unsigned int rx_number = 0;
+
+ do {
+ qm_dqrr_pvb_update(&p->p);
+ dq = qm_dqrr_current(&p->p);
+ if (unlikely(!dq))
+ break;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ /* If running on an LE system, the fields of the
+ * dequeue entry must be byte swapped. Because the
+ * QMan HW will ignore writes, the DQRR entry is
+ * copied and the index stored within the copy.
+ */
+ shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+ *shadow = *dq;
+ dq = shadow;
+ shadow->fqid = be32_to_cpu(shadow->fqid);
+ shadow->contextB = be32_to_cpu(shadow->contextB);
+ shadow->seqnum = be16_to_cpu(shadow->seqnum);
+ hw_fd_to_cpu(&shadow->fd);
+#endif
+
+ /* SDQCR: context_b points to the FQ */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+ fq = get_fq_table_entry(dq->contextB);
+#else
+ fq = (void *)(uintptr_t)dq->contextB;
+#endif
+ /* Now let the callback do its stuff */
+ res = fq->cb.dqrr_dpdk_cb(NULL, p, fq, dq, &bufs[rx_number]);
+ rx_number++;
+ /* Interpret 'dq' from a driver perspective. */
+ /*
+ * Parking isn't possible unless HELDACTIVE was set. NB,
+ * FORCEELIGIBLE implies HELDACTIVE, so we only need to
+ * check for HELDACTIVE to cover both.
+ */
+ DPAA_ASSERT((dq->stat & QM_DQRR_STAT_FQ_HELDACTIVE) ||
+ (res != qman_cb_dqrr_park));
+ qm_dqrr_cdc_consume_1ptr(&p->p, dq, res == qman_cb_dqrr_park);
+ /* Move forward */
+ qm_dqrr_next(&p->p);
+ /*
+ * Entry processed and consumed, increment our counter. The
+ * callback can request that we exit after consuming the
+ * entry, and we also exit if we reach our processing limit,
+ * so loop back only if neither of these conditions is met.
+ */
+ } while (likely(++limit < poll_limit));
+
+ return limit;
+}
+
struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq)
{
struct qman_portal *p = get_affine_portal();
diff --git a/drivers/bus/dpaa/base/qbman/qman.h b/drivers/bus/dpaa/base/qbman/qman.h
index 2c0f694..999e429 100644
--- a/drivers/bus/dpaa/base/qbman/qman.h
+++ b/drivers/bus/dpaa/base/qbman/qman.h
@@ -187,7 +187,7 @@ struct qm_eqcr {
};
struct qm_dqrr {
- const struct qm_dqrr_entry *ring, *cursor;
+ struct qm_dqrr_entry *ring, *cursor;
u8 pi, ci, fill, ithresh, vbit;
#ifdef RTE_LIBRTE_DPAA_HWDEBUG
enum qm_dqrr_dmode dmode;
@@ -460,7 +460,7 @@ static inline u8 DQRR_PTR2IDX(const struct qm_dqrr_entry *e)
return ((uintptr_t)e >> 6) & (QM_DQRR_SIZE - 1);
}
-static inline const struct qm_dqrr_entry *DQRR_INC(
+static inline struct qm_dqrr_entry *DQRR_INC(
const struct qm_dqrr_entry *e)
{
return DQRR_CARRYCLEAR(e + 1);
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 9090b63..7ec07ee 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1157,6 +1157,12 @@ typedef enum qman_cb_dqrr_result (*qman_cb_dqrr)(struct qman_portal *qm,
struct qman_fq *fq,
const struct qm_dqrr_entry *dqrr);
+typedef enum qman_cb_dqrr_result (*qman_dpdk_cb_dqrr)(void *event,
+ struct qman_portal *qm,
+ struct qman_fq *fq,
+ const struct qm_dqrr_entry *dqrr,
+ void **bd);
+
/*
* This callback type is used when handling ERNs, FQRNs and FQRLs via MR. They
* are always consumed after the callback returns.
@@ -1215,6 +1221,7 @@ enum qman_fq_state {
*/
struct qman_fq_cb {
+ qman_dpdk_cb_dqrr dqrr_dpdk_cb; /* for dequeued frames */
qman_cb_dqrr dqrr; /* for dequeued frames */
qman_cb_mr ern; /* for s/w ERNs */
qman_cb_mr fqs; /* frame-queue state changes*/
@@ -1332,6 +1339,9 @@ int qman_get_portal_index(void);
*/
u16 qman_affine_channel(int cpu);
+unsigned int qman_portal_poll_rx(unsigned int poll_limit,
+ void **bufs, struct qman_portal *q);
+
/**
* qman_set_vdq - Issue a volatile dequeue command
* @fq: Frame Queue on which the volatile dequeue command is issued
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 212c75f..460cfbf 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -70,9 +70,11 @@ DPDK_18.02 {
dpaa_svr_family;
qman_alloc_cgrid_range;
+ qman_alloc_pool_range;
qman_create_cgr;
qman_delete_cgr;
qman_modify_cgr;
+ qman_portal_poll_rx;
qman_query_fq_frm_cnt;
qman_release_cgrid_range;
rte_dpaa_portal_fq_close;
--
2.7.4
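The `DQRR_PTR2IDX`/`DQRR_INC` helpers touched above implement power-of-two ring-cursor arithmetic. A self-contained sketch, assuming the driver's geometry of 16 DQRR entries of 64 bytes each, with the ring base naturally aligned:

```c
#include <assert.h>
#include <stdint.h>

#define DQRR_SIZE   16u                       /* entries in the ring */
#define ENTRY_BYTES 64u                       /* bytes per DQRR entry */
#define RING_BYTES  (DQRR_SIZE * ENTRY_BYTES) /* ring span: 1024 bytes */

/* DQRR_PTR2IDX: index of an entry pointer within the ring,
 * derived purely from the pointer's low-order bits. */
static unsigned dqrr_ptr2idx(uintptr_t e)
{
	return (e >> 6) & (DQRR_SIZE - 1);
}

/* DQRR_INC with carry-clear: advance one entry, wrapping at the end
 * of the ring instead of carrying into the ring base address. */
static uintptr_t dqrr_inc(uintptr_t e)
{
	uintptr_t base = e & ~(uintptr_t)(RING_BYTES - 1);

	return base | ((e + ENTRY_BYTES) & (RING_BYTES - 1));
}
```

This is what lets `qman_portal_poll_rx()` walk the hardware-produced ring with no modulo operations in the hot loop.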
* [dpdk-dev] [PATCH 18/18] net/dpaa: integrate the support of push mode in PMD
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (16 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 17/18] bus/dpaa: add support for static queues Hemant Agrawal
@ 2017-12-13 12:05 ` Hemant Agrawal
2018-01-09 10:47 ` [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Ferruh Yigit
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2017-12-13 12:05 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Sunil Kumar Kori, Nipun Gupta
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
doc/guides/nics/dpaa.rst | 11 ++++++++
drivers/net/dpaa/dpaa_ethdev.c | 62 ++++++++++++++++++++++++++++++++++++++----
drivers/net/dpaa/dpaa_ethdev.h | 2 +-
drivers/net/dpaa/dpaa_rxtx.c | 34 +++++++++++++++++++++++
drivers/net/dpaa/dpaa_rxtx.h | 5 ++++
5 files changed, 107 insertions(+), 7 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index d331c05..7b6a0b1 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -315,6 +315,17 @@ state during application initialization:
In case the application is configured to use lesser number of queues than
configured above, it might result in packet loss (because of distribution).
+- ``DPAA_NUM_PUSH_QUEUES`` (default 4)
+
+ This defines the number of high-performance queues to be used for ethdev Rx.
+ These queues use one private HW portal per queue configured, so they are
+ limited in the system. The first configured ethdev queues will automatically
+ be assigned from these high-performance PUSH queues. Any queue configuration
+ beyond that will be standard Rx queues. The application can choose to change
+ this number if HW portals are limited.
+ The valid values are from '0' to '4'. The value shall be set to '0' if the
+ application wants to use eventdev with the DPAA device.
+
Driver compilation and testing
------------------------------
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index fcba929..7798994 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -73,6 +73,14 @@
/* Keep track of whether QMAN and BMAN have been globally initialized */
static int is_global_init;
+/* At present we only allow up to 4 push mode queues - as each of these
+ * queues needs a dedicated portal and we are short of portals.
+ */
+#define DPAA_MAX_PUSH_MODE_QUEUE 4
+
+static int dpaa_push_mode_max_queue = DPAA_MAX_PUSH_MODE_QUEUE;
+static int dpaa_push_queue_idx; /* Queue index which are in push mode*/
+
/* Per FQ Taildrop in frame count */
static unsigned int td_threshold = CGR_RX_PERFQ_THRESH;
@@ -460,6 +468,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
+ struct qm_mcc_initfq opts = {0};
+ u32 flags = 0;
+ int ret;
PMD_INIT_FUNC_TRACE();
@@ -495,7 +506,41 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->name, fd_offset,
fman_if_get_fdoff(dpaa_intf->fif));
}
-
+ /* checking if push mode only, no error check for now */
+ if (dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
+ dpaa_push_queue_idx++;
+ opts.we_mask = QM_INITFQ_WE_FQCTRL | QM_INITFQ_WE_CONTEXTA;
+ opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK |
+ QM_FQCTRL_CTXASTASHING |
+ QM_FQCTRL_PREFERINCACHE;
+ opts.fqd.context_a.stashing.exclusive = 0;
+ opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
+ opts.fqd.context_a.stashing.context_cl =
+ DPAA_IF_RX_CONTEXT_STASH;
+ if (dpaa_svr_family != SVR_LS1046A_FAMILY)
+ opts.fqd.context_a.stashing.annotation_cl =
+ DPAA_IF_RX_ANNOTATION_STASH;
+
+ /*Create a channel and associate given queue with the channel*/
+ qman_alloc_pool_range((u32 *)&rxq->ch_id, 1, 1, 0);
+ opts.we_mask = opts.we_mask | QM_INITFQ_WE_DESTWQ;
+ opts.fqd.dest.channel = rxq->ch_id;
+ opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
+ flags = QMAN_INITFQ_FLAG_SCHED;
+
+ /* Configure tail drop */
+ if (dpaa_intf->cgr_rx) {
+ opts.we_mask |= QM_INITFQ_WE_CGID;
+ opts.fqd.cgid = dpaa_intf->cgr_rx[queue_idx].cgrid;
+ opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+ }
+ ret = qman_init_fq(rxq, flags, &opts);
+ if (ret)
+ DPAA_PMD_ERR("Channel/Queue association failed. fqid %d"
+ " ret: %d", rxq->fqid, ret);
+ rxq->cb.dqrr_dpdk_cb = dpaa_rx_cb;
+ rxq->is_static = true;
+ }
dev->data->rx_queues[queue_idx] = rxq;
/* configure the CGR size as per the desc size */
@@ -835,11 +880,8 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
fqid, ret);
return ret;
}
-
- opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
- QM_INITFQ_WE_CONTEXTA;
-
- opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
+ fq->is_static = false;
+ opts.we_mask = QM_INITFQ_WE_FQCTRL | QM_INITFQ_WE_CONTEXTA;
opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
QM_FQCTRL_PREFERINCACHE;
opts.fqd.context_a.stashing.exclusive = 0;
@@ -973,6 +1015,14 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
else
num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
+ /* Number of High perf push mode queues. */
+ if (getenv("DPAA_NUM_PUSH_QUEUES")) {
+ dpaa_push_mode_max_queue =
+ atoi(getenv("DPAA_NUM_PUSH_QUEUES"));
+ if (dpaa_push_mode_max_queue > DPAA_MAX_PUSH_MODE_QUEUE)
+ dpaa_push_mode_max_queue = DPAA_MAX_PUSH_MODE_QUEUE;
+ }
+
/* Each device can not have more than DPAA_PCD_FQID_MULTIPLIER RX
* queues.
*/
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 95d745e..c0a8430 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -80,7 +80,7 @@
#define DPAA_MAX_NUM_PCD_QUEUES 32
#define DPAA_IF_TX_PRIORITY 3
-#define DPAA_IF_RX_PRIORITY 4
+#define DPAA_IF_RX_PRIORITY 0
#define DPAA_IF_DEBUG_PRIORITY 7
#define DPAA_IF_RX_ANNOTATION_STASH 1
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 2609953..088fbe1 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -420,6 +420,37 @@ dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
return mbuf;
}
+enum qman_cb_dqrr_result dpaa_rx_cb(void *event __always_unused,
+ struct qman_portal *qm __always_unused,
+ struct qman_fq *fq,
+ const struct qm_dqrr_entry *dqrr,
+ void **bufs)
+{
+ const struct qm_fd *fd = &dqrr->fd;
+
+ *bufs = dpaa_eth_fd_to_mbuf(fd,
+ ((struct dpaa_if *)fq->dpaa_intf)->ifid);
+ return qman_cb_dqrr_consume;
+}
+
+static uint16_t
+dpaa_eth_queue_portal_rx(struct qman_fq *fq,
+ struct rte_mbuf **bufs,
+ uint16_t nb_bufs)
+{
+ int ret;
+
+ if (unlikely(fq->qp == NULL)) {
+ ret = rte_dpaa_portal_fq_init((void *)0, fq);
+ if (ret) {
+ DPAA_PMD_ERR("Failure in affining portal %d", ret);
+ return 0;
+ }
+ }
+
+ return qman_portal_poll_rx(nb_bufs, (void **)bufs, fq->qp);
+}
+
uint16_t dpaa_eth_queue_rx(void *q,
struct rte_mbuf **bufs,
uint16_t nb_bufs)
@@ -429,6 +460,9 @@ uint16_t dpaa_eth_queue_rx(void *q,
uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
int ret;
+ if (likely(fq->is_static))
+ return dpaa_eth_queue_portal_rx(fq, bufs, nb_bufs);
+
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_PMD_ERR("Failure in affining portal");
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index b434b6d..de65ebc 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -294,4 +294,9 @@ int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
struct qm_fd *fd,
uint32_t bpid);
+enum qman_cb_dqrr_result dpaa_rx_cb(void *event,
+ struct qman_portal *qm,
+ struct qman_fq *fq,
+ const struct qm_dqrr_entry *dqrr,
+ void **bd);
#endif
--
2.7.4
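The `DPAA_NUM_PUSH_QUEUES` handling added to `dpaa_dev_init()` can be modeled in isolation. This mirrors the hunk's default-and-clamp logic; as in the driver, negative input is passed through unchecked:

```c
#include <assert.h>
#include <stdlib.h>

#define DPAA_MAX_PUSH_MODE_QUEUE 4

/* Default to the maximum, let the environment lower it, and clamp
 * values above the hardware limit - same flow as the patch. */
static int dpaa_push_queues_from_env(void)
{
	int n = DPAA_MAX_PUSH_MODE_QUEUE;
	const char *s = getenv("DPAA_NUM_PUSH_QUEUES");

	if (s)
		n = atoi(s);
	if (n > DPAA_MAX_PUSH_MODE_QUEUE)
		n = DPAA_MAX_PUSH_MODE_QUEUE;
	return n;
}
```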
* Re: [dpdk-dev] [PATCH 15/18] net/dpaa: add support for loopback API
2017-12-13 12:05 ` [dpdk-dev] [PATCH 15/18] net/dpaa: add support for loopback API Hemant Agrawal
@ 2018-01-09 10:46 ` Ferruh Yigit
0 siblings, 0 replies; 65+ messages in thread
From: Ferruh Yigit @ 2018-01-09 10:46 UTC (permalink / raw)
To: Hemant Agrawal, dev, Luca Boccassi
On 12/13/2017 12:05 PM, Hemant Agrawal wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
<...>
> @@ -0,0 +1,37 @@
> +/*-
> + * Copyright 2017 NXP.
> + *
> + * SPDX-License-Identifier: BSD-3-Clause
I guess latest agreement was without break.
> + */
> +
> +#ifndef _PMD_DPAA_H_
> +#define _PMD_DPAA_H_
> +
> +/**
> + * @file rte_pmd_dpaa.h
> + *
> + * dpaa PMD specific functions.
> + *
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
What about adding a @warning for this note too, otherwise it is very easy to
miss the note.
> + *
> + */
> +
> +#include <rte_ethdev.h>
> +
> +/**
> + * Enable/Disable TX loopback
I am for adding EXPERIMENTAL tag for API as well, otherwise it is easy to miss.
I suggest adding @warning as well to highlight is as done in rte_member.h
> + *
> + * @param port
> + * The port identifier of the Ethernet device.
> + * @param on
> + * 1 - Enable TX loopback.
> + * 0 - Disable TX loopback.
> + * @return
> + * - (0) if successful.
> + * - (-ENODEV) if *port* invalid.
> + * - (-EINVAL) if bad parameter.
> + */
> +int rte_pmd_dpaa_set_tx_loopback(uint8_t port,
> + uint8_t on);
PMD now has PMD specific API, this is public API, can you please update related
API documentations to document this API?
> +
> +#endif /* _PMD_DPAA_H_ */
> diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
> index a70bd19..d76acbd 100644
> --- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
> +++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
> @@ -2,3 +2,11 @@ DPDK_17.11 {
>
> local: *;
> };
> +
> +DPDK_18.02 {
This API is EXPERIMENTAL (as far as I can see from above) so having an
experimental tag is better.
How to mark an API as experimental is not documented, we should indeed.
cc'ed Luca if I am missing steps related how to make an API experimental.
> + global:
> +
> + rte_pmd_dpaa_set_tx_loopback;
> +
> + local: *;
> +} DPDK_17.11;
>
* Re: [dpdk-dev] [PATCH 05/18] net/dpaa: set the correct frame size in device MTU
2017-12-13 12:05 ` [dpdk-dev] [PATCH 05/18] net/dpaa: set the correct frame size in device MTU Hemant Agrawal
@ 2018-01-09 10:46 ` Ferruh Yigit
0 siblings, 0 replies; 65+ messages in thread
From: Ferruh Yigit @ 2018-01-09 10:46 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: Ashish Jain
On 12/13/2017 12:05 PM, Hemant Agrawal wrote:
> From: Ashish Jain <ashish.jain@nxp.com>
>
> Setting correct frame size in dpaa_dev_mtu_set
> api call. Also setting correct max frame size in
> hardware in dev_configure for jumbo frames
>
> Signed-off-by: Ashish Jain <ashish.jain@nxp.com>
> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
<...>
> @@ -111,19 +111,21 @@ static int
> dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> {
> struct dpaa_if *dpaa_intf = dev->data->dev_private;
> + uint32_t frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN
> + + VLAN_TAG_SIZE;
>
> PMD_INIT_FUNC_TRACE();
>
> - if (mtu < ETHER_MIN_MTU)
> + if ((mtu < ETHER_MIN_MTU) || (frame_size > DPAA_MAX_RX_PKT_LEN))
checkpatch complains about extra parentheses:
CHECK:UNNECESSARY_PARENTHESES: Unnecessary parentheses around 'mtu < ETHER_MIN_MTU'
#42: FILE: drivers/net/dpaa/dpaa_ethdev.c:119:
+ if ((mtu < ETHER_MIN_MTU) || (frame_size > DPAA_MAX_RX_PKT_LEN))
CHECK:UNNECESSARY_PARENTHESES: Unnecessary parentheses around 'frame_size >
DPAA_MAX_RX_PKT_LEN'
#42: FILE: drivers/net/dpaa/dpaa_ethdev.c:119:
+ if ((mtu < ETHER_MIN_MTU) || (frame_size > DPAA_MAX_RX_PKT_LEN))
<...>
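For reference, the check being discussed is plain arithmetic. A standalone sketch: the Ethernet constants are the conventional values, and `DPAA_MAX_RX_PKT_LEN` here is an assumed illustrative jumbo limit, not necessarily the driver's value:

```c
#include <assert.h>
#include <stdint.h>

#define ETHER_HDR_LEN       14   /* dst mac + src mac + ethertype */
#define ETHER_CRC_LEN       4
#define VLAN_TAG_SIZE       4
#define ETHER_MIN_MTU       68
#define DPAA_MAX_RX_PKT_LEN 10240 /* assumed limit for illustration */

/* The MTU is valid if it is at least the Ethernet minimum and the
 * resulting on-wire frame fits the hardware's Rx limit. */
static int dpaa_mtu_valid(uint16_t mtu)
{
	uint32_t frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN +
			      VLAN_TAG_SIZE;

	return mtu >= ETHER_MIN_MTU && frame_size <= DPAA_MAX_RX_PKT_LEN;
}
```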
* Re: [dpdk-dev] [PATCH 17/18] bus/dpaa: add support for static queues
2017-12-13 12:05 ` [dpdk-dev] [PATCH 17/18] bus/dpaa: add support for static queues Hemant Agrawal
@ 2018-01-09 10:46 ` Ferruh Yigit
0 siblings, 0 replies; 65+ messages in thread
From: Ferruh Yigit @ 2018-01-09 10:46 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: Sunil Kumar Kori
On 12/13/2017 12:05 PM, Hemant Agrawal wrote:
> DPAA hardware support two kinds of queues:
> 1. Pull mode queue - where one needs to regularly pull the packets.
> 2. Push mode queue - where the hw pushes the packet to queue. These are
> high performance queues, but limitd in number.
limited
>
> This patch add the driver support for push m de queues.
push mode
>
> Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
<...>
* Re: [dpdk-dev] [PATCH 01/18] net/dpaa: fix coverity reported issues
2017-12-13 12:05 ` [dpdk-dev] [PATCH 01/18] net/dpaa: fix coverity reported issues Hemant Agrawal
@ 2018-01-09 10:46 ` Ferruh Yigit
2018-01-09 13:29 ` Hemant Agrawal
0 siblings, 1 reply; 65+ messages in thread
From: Ferruh Yigit @ 2018-01-09 10:46 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: stable, Kovacevic, Marko
On 12/13/2017 12:05 PM, Hemant Agrawal wrote:
> Fixes: 05ba55bc2b1a ("net/dpaa: add packet dump for debugging")
> Fixes: 37f9b54bd3cf ("net/dpaa: support Tx and Rx queue setup")
> Cc: stable@dpdk.org>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Hi Hemant,
fix coverity issues is not very helpful as commit title, can you please document
what is really fixed.
And there is a special format for coverity fixes:
"
Coverity issue: ......
Fixes: ............ ("...")
Cc: stable@dpdk.org [if required]
Signed-off-by: ....
"
There are samples in git history. It seems this format is not documented and
Marko will help to document it.
* Re: [dpdk-dev] [PATCH 00/18] DPAA PMD improvements
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (17 preceding siblings ...)
2017-12-13 12:05 ` [dpdk-dev] [PATCH 18/18] net/dpaa: integrate the support of push mode in PMD Hemant Agrawal
@ 2018-01-09 10:47 ` Ferruh Yigit
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
19 siblings, 0 replies; 65+ messages in thread
From: Ferruh Yigit @ 2018-01-09 10:47 UTC (permalink / raw)
To: Hemant Agrawal, dev
On 12/13/2017 12:05 PM, Hemant Agrawal wrote:
> This patch series add various improvement and performance related
> optimizations for DPAA PMD
>
> Ashish Jain (2):
> net/dpaa: fix the mbuf packet type if zero
> net/dpaa: set the correct frame size in device MTU
>
> Hemant Agrawal (11):
> net/dpaa: fix coverity reported issues
> net/dpaa: fix FW version code
> bus/dpaa: update platform soc value register routines
> net/dpaa: add frame count based tail drop with CGR
> bus/dpaa: add support to create dynamic HW portal
> bus/dpaa: query queue frame count support
> net/dpaa: add Rx queue count support
> net/dpaa: add support for loopback API
> app/testpmd: add support for loopback config for dpaa
> bus/dpaa: add support for static queues
> net/dpaa: integrate the support of push mode in PMD
>
> Nipun Gupta (5):
> bus/dpaa: optimize the qman HW stashing settings
> bus/dpaa: optimize the endianness conversions
> net/dpaa: change Tx HW budget to 7
> net/dpaa: optimize the Tx burst
> net/dpaa: optimize Rx path
Hi Hemant,
There is new PMD specific API and it needs to be documented, I put more detailed
comments to the patch.
And since there will be a new version of the set commented on a few minor issues
too.
Thanks,
ferruh
* [dpdk-dev] [PATCH v2 00/18] DPAA PMD improvements
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
` (18 preceding siblings ...)
2018-01-09 10:47 ` [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Ferruh Yigit
@ 2018-01-09 13:22 ` Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 01/18] net/dpaa: fix the mbuf packet type if zero Hemant Agrawal
` (18 more replies)
19 siblings, 19 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
This patch series adds various improvements and performance-related
optimizations for the DPAA PMD.
v2:
- fix the spelling of PORTALS
- Add Akhil's patch which is required for crypto
- minor improvement in push mode patch
Akhil Goyal (1):
bus/dpaa: support for enqueue frames of multiple queues
Ashish Jain (2):
net/dpaa: fix the mbuf packet type if zero
net/dpaa: set the correct frame size in device MTU
Hemant Agrawal (10):
net/dpaa: fix FW version code
bus/dpaa: update platform soc value register routines
net/dpaa: add frame count based tail drop with CGR
bus/dpaa: add support to create dynamic HW portal
bus/dpaa: query queue frame count support
net/dpaa: add Rx queue count support
net/dpaa: add support for loopback API
app/testpmd: add support for loopback config for dpaa
bus/dpaa: add support for static queues
net/dpaa: integrate the support of push mode in PMD
Nipun Gupta (5):
bus/dpaa: optimize the qman HW stashing settings
bus/dpaa: optimize the endianness conversions
net/dpaa: change Tx HW budget to 7
net/dpaa: optimize the Tx burst
net/dpaa: optimize Rx path
app/test-pmd/Makefile | 4 +
app/test-pmd/cmdline.c | 7 +
doc/guides/nics/dpaa.rst | 11 ++
drivers/bus/dpaa/base/qbman/qman.c | 238 ++++++++++++++++++++++++++--
drivers/bus/dpaa/base/qbman/qman.h | 4 +-
drivers/bus/dpaa/base/qbman/qman_driver.c | 153 +++++++++++++++---
drivers/bus/dpaa/base/qbman/qman_priv.h | 6 +-
drivers/bus/dpaa/dpaa_bus.c | 43 ++++-
drivers/bus/dpaa/include/fsl_qman.h | 62 ++++++--
drivers/bus/dpaa/include/fsl_usd.h | 4 +
drivers/bus/dpaa/include/process.h | 11 +-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 21 +++
drivers/bus/dpaa/rte_dpaa_bus.h | 15 ++
drivers/net/dpaa/Makefile | 3 +
drivers/net/dpaa/dpaa_ethdev.c | 253 ++++++++++++++++++++++++++----
drivers/net/dpaa/dpaa_ethdev.h | 21 ++-
drivers/net/dpaa/dpaa_rxtx.c | 161 +++++++++++++------
drivers/net/dpaa/dpaa_rxtx.h | 7 +-
drivers/net/dpaa/rte_pmd_dpaa.h | 35 +++++
drivers/net/dpaa/rte_pmd_dpaa_version.map | 8 +
20 files changed, 917 insertions(+), 150 deletions(-)
create mode 100644 drivers/net/dpaa/rte_pmd_dpaa.h
--
2.7.4
* [dpdk-dev] [PATCH v2 01/18] net/dpaa: fix the mbuf packet type if zero
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
@ 2018-01-09 13:22 ` Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 02/18] net/dpaa: fix FW version code Hemant Agrawal
` (17 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Ashish Jain, stable
From: Ashish Jain <ashish.jain@nxp.com>
Populate the mbuf field packet_type, which is required
for calculating the checksum while transmitting frames.
Fixes: 8cffdcbe85aa ("net/dpaa: support scattered Rx")
Cc: stable@dpdk.org
Signed-off-by: Ashish Jain <ashish.jain@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index c3a0920..630d7a5 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -32,6 +32,7 @@
#include <rte_ip.h>
#include <rte_tcp.h>
#include <rte_udp.h>
+#include <rte_net.h>
#include "dpaa_ethdev.h"
#include "dpaa_rxtx.h"
@@ -478,6 +479,15 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
fd->opaque_addr = 0;
if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+ if (!mbuf->packet_type) {
+ struct rte_net_hdr_lens hdr_lens;
+
+ mbuf->packet_type = rte_net_get_ptype(mbuf, &hdr_lens,
+ RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK
+ | RTE_PTYPE_L4_MASK);
+ mbuf->l2_len = hdr_lens.l2_len;
+ mbuf->l3_len = hdr_lens.l3_len;
+ }
if (temp->data_off < DEFAULT_TX_ICEOF
+ sizeof(struct dpaa_eth_parse_results_t))
temp->data_off = DEFAULT_TX_ICEOF
@@ -585,6 +595,15 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
}
if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+ if (!mbuf->packet_type) {
+ struct rte_net_hdr_lens hdr_lens;
+
+ mbuf->packet_type = rte_net_get_ptype(mbuf, &hdr_lens,
+ RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK
+ | RTE_PTYPE_L4_MASK);
+ mbuf->l2_len = hdr_lens.l2_len;
+ mbuf->l3_len = hdr_lens.l3_len;
+ }
if (mbuf->data_off < (DEFAULT_TX_ICEOF +
sizeof(struct dpaa_eth_parse_results_t))) {
DPAA_DP_LOG(DEBUG, "Checksum offload Err: "
--
2.7.4
* [dpdk-dev] [PATCH v2 02/18] net/dpaa: fix FW version code
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 01/18] net/dpaa: fix the mbuf packet type if zero Hemant Agrawal
@ 2018-01-09 13:22 ` Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 03/18] bus/dpaa: update platform soc value register routines Hemant Agrawal
` (16 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, stable
Fix the SoC ID path and add the missing fclose().
Fixes: cf0fab1d2ca5 ("net/dpaa: support firmware version get API")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 14 +++++---------
drivers/net/dpaa/dpaa_ethdev.h | 2 +-
2 files changed, 6 insertions(+), 10 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 7b4a6f1..db6574f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -186,19 +186,15 @@ dpaa_fw_version_get(struct rte_eth_dev *dev __rte_unused,
DPAA_PMD_ERR("Unable to open SoC device");
return -ENOTSUP; /* Not supported on this infra */
}
-
- ret = fscanf(svr_file, "svr:%x", &svr_ver);
- if (ret <= 0) {
+ if (fscanf(svr_file, "svr:%x", &svr_ver) <= 0)
DPAA_PMD_ERR("Unable to read SoC device");
- return -ENOTSUP; /* Not supported on this infra */
- }
- ret = snprintf(fw_version, fw_size,
- "svr:%x-fman-v%x",
- svr_ver,
- fman_ip_rev);
+ fclose(svr_file);
+ ret = snprintf(fw_version, fw_size, "SVR:%x-fman-v%x",
+ svr_ver, fman_ip_rev);
ret += 1; /* add the size of '\0' */
+
if (fw_size < (uint32_t)ret)
return ret;
else
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index bd63ee0..254fca2 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -20,7 +20,7 @@
/* DPAA SoC identifier; If this is not available, it can be concluded
* that board is non-DPAA. Single slot is currently supported.
*/
-#define DPAA_SOC_ID_FILE "sys/devices/soc0/soc_id"
+#define DPAA_SOC_ID_FILE "/sys/devices/soc0/soc_id"
#define DPAA_MBUF_HW_ANNOTATION 64
#define DPAA_FD_PTA_SIZE 64
--
2.7.4
^ permalink raw reply [flat|nested] 65+ messages in thread
* [dpdk-dev] [PATCH v2 03/18] bus/dpaa: update platform soc value register routines
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 01/18] net/dpaa: fix the mbuf packet type if zero Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 02/18] net/dpaa: fix FW version code Hemant Agrawal
@ 2018-01-09 13:22 ` Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 04/18] net/dpaa: set the correct frame size in device MTU Hemant Agrawal
` (15 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
This patch updates the logic and exposes the SoC value (SVR)
register, so that it can be used by other modules as well.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/dpaa_bus.c | 12 ++++++++++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 8 ++++++++
drivers/bus/dpaa/rte_dpaa_bus.h | 11 +++++++++++
drivers/net/dpaa/dpaa_ethdev.c | 4 +++-
drivers/net/dpaa/dpaa_ethdev.h | 5 -----
5 files changed, 34 insertions(+), 6 deletions(-)
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 79f4858..a7c05b3 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -51,6 +51,8 @@ struct netcfg_info *dpaa_netcfg;
/* define a variable to hold the portal_key, once created.*/
pthread_key_t dpaa_portal_key;
+unsigned int dpaa_svr_family;
+
RTE_DEFINE_PER_LCORE(bool, _dpaa_io);
static inline void
@@ -417,6 +419,8 @@ rte_dpaa_bus_probe(void)
int ret = -1;
struct rte_dpaa_device *dev;
struct rte_dpaa_driver *drv;
+ FILE *svr_file = NULL;
+ unsigned int svr_ver;
BUS_INIT_FUNC_TRACE();
@@ -436,6 +440,14 @@ rte_dpaa_bus_probe(void)
break;
}
}
+
+ svr_file = fopen(DPAA_SOC_ID_FILE, "r");
+ if (svr_file) {
+ if (fscanf(svr_file, "svr:%x", &svr_ver) > 0)
+ dpaa_svr_family = svr_ver & SVR_MASK;
+ fclose(svr_file);
+ }
+
return 0;
}
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index fb9d532..eeeb458 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -64,3 +64,11 @@ DPDK_17.11 {
local: *;
};
+
+DPDK_18.02 {
+ global:
+
+ dpaa_svr_family;
+
+ local: *;
+} DPDK_17.11;
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 5758274..d9e8c84 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -20,6 +20,17 @@
#define DEV_TO_DPAA_DEVICE(ptr) \
container_of(ptr, struct rte_dpaa_device, device)
+/* DPAA SoC identifier; If this is not available, it can be concluded
+ * that board is non-DPAA. Single slot is currently supported.
+ */
+#define DPAA_SOC_ID_FILE "/sys/devices/soc0/soc_id"
+
+#define SVR_LS1043A_FAMILY 0x87920000
+#define SVR_LS1046A_FAMILY 0x87070000
+#define SVR_MASK 0xffff0000
+
+extern unsigned int dpaa_svr_family;
+
struct rte_dpaa_device;
struct rte_dpaa_driver;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index db6574f..24943ef 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -186,7 +186,9 @@ dpaa_fw_version_get(struct rte_eth_dev *dev __rte_unused,
DPAA_PMD_ERR("Unable to open SoC device");
return -ENOTSUP; /* Not supported on this infra */
}
- if (fscanf(svr_file, "svr:%x", &svr_ver) <= 0)
+ if (fscanf(svr_file, "svr:%x", &svr_ver) > 0)
+ dpaa_svr_family = svr_ver & SVR_MASK;
+ else
DPAA_PMD_ERR("Unable to read SoC device");
fclose(svr_file);
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 254fca2..9c3b42c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -17,11 +17,6 @@
#include <of.h>
#include <netcfg.h>
-/* DPAA SoC identifier; If this is not available, it can be concluded
- * that board is non-DPAA. Single slot is currently supported.
- */
-#define DPAA_SOC_ID_FILE "/sys/devices/soc0/soc_id"
-
#define DPAA_MBUF_HW_ANNOTATION 64
#define DPAA_FD_PTA_SIZE 64
--
2.7.4
^ permalink raw reply [flat|nested] 65+ messages in thread
* [dpdk-dev] [PATCH v2 04/18] net/dpaa: set the correct frame size in device MTU
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (2 preceding siblings ...)
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 03/18] bus/dpaa: update platform soc value register routines Hemant Agrawal
@ 2018-01-09 13:22 ` Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 05/18] net/dpaa: add frame count based tail drop with CGR Hemant Agrawal
` (14 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Ashish Jain
From: Ashish Jain <ashish.jain@nxp.com>
Set the correct frame size in the dpaa_dev_mtu_set
API call. Also set the correct maximum frame size in
hardware in dev_configure for jumbo frames.
Signed-off-by: Ashish Jain <ashish.jain@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 20 +++++++++++++-------
drivers/net/dpaa/dpaa_ethdev.h | 4 ++++
2 files changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 24943ef..5a2ea4f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -85,19 +85,21 @@ static int
dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ uint32_t frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN
+ + VLAN_TAG_SIZE;
PMD_INIT_FUNC_TRACE();
- if (mtu < ETHER_MIN_MTU)
+ if (mtu < ETHER_MIN_MTU || frame_size > DPAA_MAX_RX_PKT_LEN)
return -EINVAL;
- if (mtu > ETHER_MAX_LEN)
+ if (frame_size > ETHER_MAX_LEN)
dev->data->dev_conf.rxmode.jumbo_frame = 1;
else
dev->data->dev_conf.rxmode.jumbo_frame = 0;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
+ dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- fman_if_set_maxfrm(dpaa_intf->fif, mtu);
+ fman_if_set_maxfrm(dpaa_intf->fif, frame_size);
return 0;
}
@@ -105,15 +107,19 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
static int
dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
PMD_INIT_FUNC_TRACE();
if (dev->data->dev_conf.rxmode.jumbo_frame == 1) {
if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- DPAA_MAX_RX_PKT_LEN)
- return dpaa_mtu_set(dev,
+ DPAA_MAX_RX_PKT_LEN) {
+ fman_if_set_maxfrm(dpaa_intf->fif,
dev->data->dev_conf.rxmode.max_rx_pkt_len);
- else
+ return 0;
+ } else {
return -1;
+ }
}
return 0;
}
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 9c3b42c..548ccff 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -45,6 +45,10 @@
/*Maximum number of slots available in TX ring*/
#define MAX_TX_RING_SLOTS 8
+#ifndef VLAN_TAG_SIZE
+#define VLAN_TAG_SIZE 4 /** < Vlan Header Length */
+#endif
+
/* PCD frame queues */
#define DPAA_PCD_FQID_START 0x400
#define DPAA_PCD_FQID_MULTIPLIER 0x100
--
2.7.4
^ permalink raw reply [flat|nested] 65+ messages in thread
* [dpdk-dev] [PATCH v2 05/18] net/dpaa: add frame count based tail drop with CGR
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (3 preceding siblings ...)
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 04/18] net/dpaa: set the correct frame size in device MTU Hemant Agrawal
@ 2018-01-09 13:22 ` Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 06/18] bus/dpaa: optimize the qman HW stashing settings Hemant Agrawal
` (13 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
Replace the byte-based tail queue congestion support
with frame-count-based congestion groups.
Frame counts map easily to the number of Rx descriptors for a queue.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/rte_bus_dpaa_version.map | 5 ++
drivers/net/dpaa/dpaa_ethdev.c | 98 +++++++++++++++++++++++++++----
drivers/net/dpaa/dpaa_ethdev.h | 8 +--
3 files changed, 97 insertions(+), 14 deletions(-)
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index eeeb458..f412362 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -69,6 +69,11 @@ DPDK_18.02 {
global:
dpaa_svr_family;
+ qman_alloc_cgrid_range;
+ qman_create_cgr;
+ qman_delete_cgr;
+ qman_modify_cgr;
+ qman_release_cgrid_range;
local: *;
} DPDK_17.11;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 5a2ea4f..5d94af5 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -47,6 +47,9 @@
/* Keep track of whether QMAN and BMAN have been globally initialized */
static int is_global_init;
+/* Per FQ Taildrop in frame count */
+static unsigned int td_threshold = CGR_RX_PERFQ_THRESH;
+
struct rte_dpaa_xstats_name_off {
char name[RTE_ETH_XSTATS_NAME_SIZE];
uint32_t offset;
@@ -421,12 +424,13 @@ static void dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
static
int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
- uint16_t nb_desc __rte_unused,
+ uint16_t nb_desc,
unsigned int socket_id __rte_unused,
const struct rte_eth_rxconf *rx_conf __rte_unused,
struct rte_mempool *mp)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
PMD_INIT_FUNC_TRACE();
@@ -462,7 +466,23 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->name, fd_offset,
fman_if_get_fdoff(dpaa_intf->fif));
}
- dev->data->rx_queues[queue_idx] = &dpaa_intf->rx_queues[queue_idx];
+
+ dev->data->rx_queues[queue_idx] = rxq;
+
+ /* configure the CGR size as per the desc size */
+ if (dpaa_intf->cgr_rx) {
+ struct qm_mcc_initcgr cgr_opts = {0};
+ int ret;
+
+ /* Enable tail drop with cgr on this queue */
+ qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres, nb_desc, 0);
+ ret = qman_modify_cgr(dpaa_intf->cgr_rx, 0, &cgr_opts);
+ if (ret) {
+ DPAA_PMD_WARN(
+ "rx taildrop modify fail on fqid %d (ret=%d)",
+ rxq->fqid, ret);
+ }
+ }
return 0;
}
@@ -698,11 +718,21 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
}
/* Initialise an Rx FQ */
-static int dpaa_rx_queue_init(struct qman_fq *fq,
+static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
uint32_t fqid)
{
struct qm_mcc_initfq opts = {0};
int ret;
+ u32 flags = 0;
+ struct qm_mcc_initcgr cgr_opts = {
+ .we_mask = QM_CGR_WE_CS_THRES |
+ QM_CGR_WE_CSTD_EN |
+ QM_CGR_WE_MODE,
+ .cgr = {
+ .cstd_en = QM_CGR_EN,
+ .mode = QMAN_CGR_MODE_FRAME
+ }
+ };
PMD_INIT_FUNC_TRACE();
@@ -732,12 +762,24 @@ static int dpaa_rx_queue_init(struct qman_fq *fq,
opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
opts.fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
- /*Enable tail drop */
- opts.we_mask = opts.we_mask | QM_INITFQ_WE_TDTHRESH;
- opts.fqd.fq_ctrl = opts.fqd.fq_ctrl | QM_FQCTRL_TDE;
- qm_fqd_taildrop_set(&opts.fqd.td, CONG_THRESHOLD_RX_Q, 1);
-
- ret = qman_init_fq(fq, 0, &opts);
+ if (cgr_rx) {
+ /* Enable tail drop with cgr on this queue */
+ qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres, td_threshold, 0);
+ cgr_rx->cb = NULL;
+ ret = qman_create_cgr(cgr_rx, QMAN_CGR_FLAG_USE_INIT,
+ &cgr_opts);
+ if (ret) {
+ DPAA_PMD_WARN(
+ "rx taildrop init fail on rx fqid %d (ret=%d)",
+ fqid, ret);
+ goto without_cgr;
+ }
+ opts.we_mask |= QM_INITFQ_WE_CGID;
+ opts.fqd.cgid = cgr_rx->cgrid;
+ opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+ }
+without_cgr:
+ ret = qman_init_fq(fq, flags, &opts);
if (ret)
DPAA_PMD_ERR("init rx fqid %d failed with ret: %d", fqid, ret);
return ret;
@@ -819,6 +861,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
struct fm_eth_port_cfg *cfg;
struct fman_if *fman_intf;
struct fman_if_bpool *bp, *tmp_bp;
+ uint32_t cgrid[DPAA_MAX_NUM_PCD_QUEUES];
PMD_INIT_FUNC_TRACE();
@@ -855,10 +898,31 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
dpaa_intf->rx_queues = rte_zmalloc(NULL,
sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
+
+ /* If congestion control is enabled globally*/
+ if (td_threshold) {
+ dpaa_intf->cgr_rx = rte_zmalloc(NULL,
+ sizeof(struct qman_cgr) * num_rx_fqs, MAX_CACHELINE);
+
+ ret = qman_alloc_cgrid_range(&cgrid[0], num_rx_fqs, 1, 0);
+ if (ret != num_rx_fqs) {
+ DPAA_PMD_WARN("insufficient CGRIDs available");
+ return -EINVAL;
+ }
+ } else {
+ dpaa_intf->cgr_rx = NULL;
+ }
+
for (loop = 0; loop < num_rx_fqs; loop++) {
fqid = DPAA_PCD_FQID_START + dpaa_intf->ifid *
DPAA_PCD_FQID_MULTIPLIER + loop;
- ret = dpaa_rx_queue_init(&dpaa_intf->rx_queues[loop], fqid);
+
+ if (dpaa_intf->cgr_rx)
+ dpaa_intf->cgr_rx[loop].cgrid = cgrid[loop];
+
+ ret = dpaa_rx_queue_init(&dpaa_intf->rx_queues[loop],
+ dpaa_intf->cgr_rx ? &dpaa_intf->cgr_rx[loop] : NULL,
+ fqid);
if (ret)
return ret;
dpaa_intf->rx_queues[loop].dpaa_intf = dpaa_intf;
@@ -913,6 +977,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_ERR("Failed to allocate %d bytes needed to "
"store MAC addresses",
ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
+ rte_free(dpaa_intf->cgr_rx);
rte_free(dpaa_intf->rx_queues);
rte_free(dpaa_intf->tx_queues);
dpaa_intf->rx_queues = NULL;
@@ -951,6 +1016,7 @@ static int
dpaa_dev_uninit(struct rte_eth_dev *dev)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ int loop;
PMD_INIT_FUNC_TRACE();
@@ -968,6 +1034,18 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
if (dpaa_intf->fc_conf)
rte_free(dpaa_intf->fc_conf);
+ /* Release RX congestion Groups */
+ if (dpaa_intf->cgr_rx) {
+ for (loop = 0; loop < dpaa_intf->nb_rx_queues; loop++)
+ qman_delete_cgr(&dpaa_intf->cgr_rx[loop]);
+
+ qman_release_cgrid_range(dpaa_intf->cgr_rx[loop].cgrid,
+ dpaa_intf->nb_rx_queues);
+ }
+
+ rte_free(dpaa_intf->cgr_rx);
+ dpaa_intf->cgr_rx = NULL;
+
rte_free(dpaa_intf->rx_queues);
dpaa_intf->rx_queues = NULL;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 548ccff..f00a77a 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -34,10 +34,8 @@
#define DPAA_MIN_RX_BUF_SIZE 512
#define DPAA_MAX_RX_PKT_LEN 10240
-/* RX queue tail drop threshold
- * currently considering 32 KB packets.
- */
-#define CONG_THRESHOLD_RX_Q (32 * 1024)
+/* RX queue tail drop threshold (CGR Based) in frame count */
+#define CGR_RX_PERFQ_THRESH 256
/*max mac filter for memac(8) including primary mac addr*/
#define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
@@ -53,6 +51,7 @@
#define DPAA_PCD_FQID_START 0x400
#define DPAA_PCD_FQID_MULTIPLIER 0x100
#define DPAA_DEFAULT_NUM_PCD_QUEUES 1
+#define DPAA_MAX_NUM_PCD_QUEUES 32
#define DPAA_IF_TX_PRIORITY 3
#define DPAA_IF_RX_PRIORITY 4
@@ -102,6 +101,7 @@ struct dpaa_if {
char *name;
const struct fm_eth_port_cfg *cfg;
struct qman_fq *rx_queues;
+ struct qman_cgr *cgr_rx;
struct qman_fq *tx_queues;
struct qman_fq debug_queues[2];
uint16_t nb_rx_queues;
--
2.7.4
^ permalink raw reply [flat|nested] 65+ messages in thread
* [dpdk-dev] [PATCH v2 06/18] bus/dpaa: optimize the qman HW stashing settings
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (4 preceding siblings ...)
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 05/18] net/dpaa: add frame count based tail drop with CGR Hemant Agrawal
@ 2018-01-09 13:22 ` Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 07/18] bus/dpaa: optimize the endianness conversions Hemant Agrawal
` (12 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
The settings are tuned for performance.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index a53459f..49bc317 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -7,6 +7,7 @@
#include "qman.h"
#include <rte_branch_prediction.h>
+#include <rte_dpaa_bus.h>
/* Compilation constants */
#define DQRR_MAXFILL 15
@@ -503,7 +504,12 @@ struct qman_portal *qman_create_portal(
p = &portal->p;
- portal->use_eqcr_ci_stashing = ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+ if (dpaa_svr_family == SVR_LS1043A_FAMILY)
+ portal->use_eqcr_ci_stashing = 3;
+ else
+ portal->use_eqcr_ci_stashing =
+ ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+
/*
* prep the low-level portal struct with the mapped addresses from the
* config, everything that follows depends on it and "config" is more
@@ -516,7 +522,7 @@ struct qman_portal *qman_create_portal(
* and stash with high-than-DQRR priority.
*/
if (qm_eqcr_init(p, qm_eqcr_pvb,
- portal->use_eqcr_ci_stashing ? 3 : 0, 1)) {
+ portal->use_eqcr_ci_stashing, 1)) {
pr_err("Qman EQCR initialisation failed\n");
goto fail_eqcr;
}
--
2.7.4
^ permalink raw reply [flat|nested] 65+ messages in thread
* [dpdk-dev] [PATCH v2 07/18] bus/dpaa: optimize the endianness conversions
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (5 preceding siblings ...)
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 06/18] bus/dpaa: optimize the qman HW stashing settings Hemant Agrawal
@ 2018-01-09 13:22 ` Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 08/18] bus/dpaa: add support to create dynamic HW portal Hemant Agrawal
` (11 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 7 ++++---
drivers/bus/dpaa/include/fsl_qman.h | 2 ++
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 49bc317..b6fd40b 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -906,7 +906,7 @@ static inline unsigned int __poll_portal_fast(struct qman_portal *p,
do {
qm_dqrr_pvb_update(&p->p);
dq = qm_dqrr_current(&p->p);
- if (!dq)
+ if (unlikely(!dq))
break;
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
/* If running on an LE system the fields of the
@@ -1165,6 +1165,7 @@ int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
}
spin_lock_init(&fq->fqlock);
fq->fqid = fqid;
+ fq->fqid_le = cpu_to_be32(fqid);
fq->flags = flags;
fq->state = qman_fq_state_oos;
fq->cgr_groupid = 0;
@@ -1953,7 +1954,7 @@ int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)
int qman_enqueue_multi(struct qman_fq *fq,
const struct qm_fd *fd,
- int frames_to_send)
+ int frames_to_send)
{
struct qman_portal *p = get_affine_portal();
struct qm_portal *portal = &p->p;
@@ -1975,7 +1976,7 @@ int qman_enqueue_multi(struct qman_fq *fq,
/* try to send as many frames as possible */
while (eqcr->available && frames_to_send--) {
- eq->fqid = cpu_to_be32(fq->fqid);
+ eq->fqid = fq->fqid_le;
#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
eq->tag = cpu_to_be32(fq->key);
#else
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 5830ad5..5027230 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1197,6 +1197,8 @@ struct qman_fq {
*/
spinlock_t fqlock;
u32 fqid;
+ u32 fqid_le;
+
/* DPDK Interface */
void *dpaa_intf;
--
2.7.4
^ permalink raw reply [flat|nested] 65+ messages in thread
* [dpdk-dev] [PATCH v2 08/18] bus/dpaa: add support to create dynamic HW portal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (6 preceding siblings ...)
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 07/18] bus/dpaa: optimize the endianness conversions Hemant Agrawal
@ 2018-01-09 13:22 ` Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 09/18] net/dpaa: change Tx HW budget to 7 Hemant Agrawal
` (10 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
A HW portal is a processing context in DPAA. This patch allows
the creation of a queue-specific HW portal context.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 69 ++++++++++++--
drivers/bus/dpaa/base/qbman/qman_driver.c | 153 +++++++++++++++++++++++++-----
drivers/bus/dpaa/base/qbman/qman_priv.h | 6 +-
drivers/bus/dpaa/dpaa_bus.c | 31 +++++-
drivers/bus/dpaa/include/fsl_qman.h | 25 ++---
drivers/bus/dpaa/include/fsl_usd.h | 4 +
drivers/bus/dpaa/include/process.h | 11 ++-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 2 +
drivers/bus/dpaa/rte_dpaa_bus.h | 4 +
9 files changed, 252 insertions(+), 53 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index b6fd40b..d8fb25a 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -621,11 +621,52 @@ struct qman_portal *qman_create_portal(
return NULL;
}
+#define MAX_GLOBAL_PORTALS 8
+static struct qman_portal global_portals[MAX_GLOBAL_PORTALS];
+static int global_portals_used[MAX_GLOBAL_PORTALS];
+
+static struct qman_portal *
+qman_alloc_global_portal(void)
+{
+ unsigned int i;
+
+ for (i = 0; i < MAX_GLOBAL_PORTALS; i++) {
+ if (global_portals_used[i] == 0) {
+ global_portals_used[i] = 1;
+ return &global_portals[i];
+ }
+ }
+ pr_err("No portal available (%x)\n", MAX_GLOBAL_PORTALS);
+
+ return NULL;
+}
+
+static int
+qman_free_global_portal(struct qman_portal *portal)
+{
+ unsigned int i;
+
+ for (i = 0; i < MAX_GLOBAL_PORTALS; i++) {
+ if (&global_portals[i] == portal) {
+ global_portals_used[i] = 0;
+ return 0;
+ }
+ }
+ return -1;
+}
+
struct qman_portal *qman_create_affine_portal(const struct qm_portal_config *c,
- const struct qman_cgrs *cgrs)
+ const struct qman_cgrs *cgrs,
+ int alloc)
{
struct qman_portal *res;
- struct qman_portal *portal = get_affine_portal();
+ struct qman_portal *portal;
+
+ if (alloc)
+ portal = qman_alloc_global_portal();
+ else
+ portal = get_affine_portal();
+
/* A criteria for calling this function (from qman_driver.c) is that
* we're already affine to the cpu and won't schedule onto another cpu.
*/
@@ -675,13 +716,18 @@ void qman_destroy_portal(struct qman_portal *qm)
spin_lock_destroy(&qm->cgr_lock);
}
-const struct qm_portal_config *qman_destroy_affine_portal(void)
+const struct qm_portal_config *
+qman_destroy_affine_portal(struct qman_portal *qp)
{
/* We don't want to redirect if we're a slave, use "raw" */
- struct qman_portal *qm = get_affine_portal();
+ struct qman_portal *qm;
const struct qm_portal_config *pcfg;
int cpu;
+ if (qp == NULL)
+ qm = get_affine_portal();
+ else
+ qm = qp;
pcfg = qm->config;
cpu = pcfg->cpu;
@@ -690,6 +736,9 @@ const struct qm_portal_config *qman_destroy_affine_portal(void)
spin_lock(&affine_mask_lock);
CPU_CLR(cpu, &affine_mask);
spin_unlock(&affine_mask_lock);
+
+ qman_free_global_portal(qm);
+
return pcfg;
}
@@ -1096,27 +1145,27 @@ void qman_start_dequeues(void)
qm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);
}
-void qman_static_dequeue_add(u32 pools)
+void qman_static_dequeue_add(u32 pools, struct qman_portal *qp)
{
- struct qman_portal *p = get_affine_portal();
+ struct qman_portal *p = qp ? qp : get_affine_portal();
pools &= p->config->pools;
p->sdqcr |= pools;
qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
}
-void qman_static_dequeue_del(u32 pools)
+void qman_static_dequeue_del(u32 pools, struct qman_portal *qp)
{
- struct qman_portal *p = get_affine_portal();
+ struct qman_portal *p = qp ? qp : get_affine_portal();
pools &= p->config->pools;
p->sdqcr &= ~pools;
qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
}
-u32 qman_static_dequeue_get(void)
+u32 qman_static_dequeue_get(struct qman_portal *qp)
{
- struct qman_portal *p = get_affine_portal();
+ struct qman_portal *p = qp ? qp : get_affine_portal();
return p->sdqcr;
}
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index c17d15f..7cfa8ee 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -24,8 +24,8 @@ void *qman_ccsr_map;
/* The qman clock frequency */
u32 qman_clk;
-static __thread int fd = -1;
-static __thread struct qm_portal_config pcfg;
+static __thread int qmfd = -1;
+static __thread struct qm_portal_config qpcfg;
static __thread struct dpaa_ioctl_portal_map map = {
.type = dpaa_portal_qman
};
@@ -44,16 +44,16 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
error(0, ret, "pthread_getaffinity_np()");
return ret;
}
- pcfg.cpu = -1;
+ qpcfg.cpu = -1;
for (loop = 0; loop < CPU_SETSIZE; loop++)
if (CPU_ISSET(loop, &cpuset)) {
- if (pcfg.cpu != -1) {
+ if (qpcfg.cpu != -1) {
pr_err("Thread is not affine to 1 cpu\n");
return -EINVAL;
}
- pcfg.cpu = loop;
+ qpcfg.cpu = loop;
}
- if (pcfg.cpu == -1) {
+ if (qpcfg.cpu == -1) {
pr_err("Bug in getaffinity handling!\n");
return -EINVAL;
}
@@ -65,36 +65,36 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
error(0, ret, "process_portal_map()");
return ret;
}
- pcfg.channel = map.channel;
- pcfg.pools = map.pools;
- pcfg.index = map.index;
+ qpcfg.channel = map.channel;
+ qpcfg.pools = map.pools;
+ qpcfg.index = map.index;
/* Make the portal's cache-[enabled|inhibited] regions */
- pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
- pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+ qpcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+ qpcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
- fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
- if (fd == -1) {
+ qmfd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+ if (qmfd == -1) {
pr_err("QMan irq init failed\n");
process_portal_unmap(&map.addr);
return -EBUSY;
}
- pcfg.is_shared = is_shared;
- pcfg.node = NULL;
- pcfg.irq = fd;
+ qpcfg.is_shared = is_shared;
+ qpcfg.node = NULL;
+ qpcfg.irq = qmfd;
- portal = qman_create_affine_portal(&pcfg, NULL);
+ portal = qman_create_affine_portal(&qpcfg, NULL, 0);
if (!portal) {
pr_err("Qman portal initialisation failed (%d)\n",
- pcfg.cpu);
+ qpcfg.cpu);
process_portal_unmap(&map.addr);
return -EBUSY;
}
irq_map.type = dpaa_portal_qman;
irq_map.portal_cinh = map.addr.cinh;
- process_portal_irq_map(fd, &irq_map);
+ process_portal_irq_map(qmfd, &irq_map);
return 0;
}
@@ -103,10 +103,10 @@ static int fsl_qman_portal_finish(void)
__maybe_unused const struct qm_portal_config *cfg;
int ret;
- process_portal_irq_unmap(fd);
+ process_portal_irq_unmap(qmfd);
- cfg = qman_destroy_affine_portal();
- DPAA_BUG_ON(cfg != &pcfg);
+ cfg = qman_destroy_affine_portal(NULL);
+ DPAA_BUG_ON(cfg != &qpcfg);
ret = process_portal_unmap(&map.addr);
if (ret)
error(0, ret, "process_portal_unmap()");
@@ -128,14 +128,119 @@ int qman_thread_finish(void)
void qman_thread_irq(void)
{
- qbman_invoke_irq(pcfg.irq);
+ qbman_invoke_irq(qpcfg.irq);
/* Now we need to uninhibit interrupts. This is the only code outside
* the regular portal driver that manipulates any portal register, so
* rather than breaking that encapsulation I am simply hard-coding the
* offset to the inhibit register here.
*/
- out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+ out_be32(qpcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+struct qman_portal *fsl_qman_portal_create(void)
+{
+ cpu_set_t cpuset;
+ struct qman_portal *res;
+
+ struct qm_portal_config *q_pcfg;
+ int loop, ret;
+ struct dpaa_ioctl_irq_map irq_map;
+ struct dpaa_ioctl_portal_map q_map = {0};
+ int q_fd;
+
+ q_pcfg = kzalloc((sizeof(struct qm_portal_config)), 0);
+ if (!q_pcfg) {
+ error(0, -1, "q_pcfg kzalloc failed");
+ return NULL;
+ }
+
+ /* Verify the thread's cpu-affinity */
+ ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+ &cpuset);
+ if (ret) {
+ error(0, ret, "pthread_getaffinity_np()");
+ return NULL;
+ }
+
+ q_pcfg->cpu = -1;
+ for (loop = 0; loop < CPU_SETSIZE; loop++)
+ if (CPU_ISSET(loop, &cpuset)) {
+ if (q_pcfg->cpu != -1) {
+ pr_err("Thread is not affine to 1 cpu\n");
+ return NULL;
+ }
+ q_pcfg->cpu = loop;
+ }
+ if (q_pcfg->cpu == -1) {
+ pr_err("Bug in getaffinity handling!\n");
+ return NULL;
+ }
+
+ /* Allocate and map a qman portal */
+ q_map.type = dpaa_portal_qman;
+ q_map.index = QBMAN_ANY_PORTAL_IDX;
+ ret = process_portal_map(&q_map);
+ if (ret) {
+ error(0, ret, "process_portal_map()");
+ return NULL;
+ }
+ q_pcfg->channel = q_map.channel;
+ q_pcfg->pools = q_map.pools;
+ q_pcfg->index = q_map.index;
+
+ /* Make the portal's cache-[enabled|inhibited] regions */
+ q_pcfg->addr_virt[DPAA_PORTAL_CE] = q_map.addr.cena;
+ q_pcfg->addr_virt[DPAA_PORTAL_CI] = q_map.addr.cinh;
+
+ q_fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+ if (q_fd == -1) {
+ pr_err("QMan irq init failed\n");
+ goto err1;
+ }
+
+ q_pcfg->irq = q_fd;
+
+ res = qman_create_affine_portal(q_pcfg, NULL, true);
+ if (!res) {
+ pr_err("Qman portal initialisation failed (%d)\n",
+ q_pcfg->cpu);
+ goto err2;
+ }
+
+ irq_map.type = dpaa_portal_qman;
+ irq_map.portal_cinh = q_map.addr.cinh;
+ process_portal_irq_map(q_fd, &irq_map);
+
+ return res;
+err2:
+ close(q_fd);
+err1:
+ process_portal_unmap(&q_map.addr);
+ return NULL;
+}
+
+int fsl_qman_portal_destroy(struct qman_portal *qp)
+{
+ const struct qm_portal_config *cfg;
+ struct dpaa_portal_map addr;
+ int ret;
+
+ cfg = qman_destroy_affine_portal(qp);
+ kfree(qp);
+
+ process_portal_irq_unmap(cfg->irq);
+
+ addr.cena = cfg->addr_virt[DPAA_PORTAL_CE];
+ addr.cinh = cfg->addr_virt[DPAA_PORTAL_CI];
+
+ ret = process_portal_unmap(&addr);
+ if (ret)
+ pr_err("process_portal_unmap() (%d)\n", ret);
+
+ kfree((void *)cfg);
+
+ return ret;
}
int qman_global_init(void)
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index db0b310..9e4471e 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -146,8 +146,10 @@ int qm_get_wpm(int *wpm);
struct qman_portal *qman_create_affine_portal(
const struct qm_portal_config *config,
- const struct qman_cgrs *cgrs);
-const struct qm_portal_config *qman_destroy_affine_portal(void);
+ const struct qman_cgrs *cgrs,
+ int alloc);
+const struct qm_portal_config *
+qman_destroy_affine_portal(struct qman_portal *q);
struct qm_portal_config *qm_get_unused_portal(void);
struct qm_portal_config *qm_get_unused_portal_idx(uint32_t idx);
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index a7c05b3..329a125 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -264,8 +264,7 @@ _dpaa_portal_init(void *arg)
* rte_dpaa_portal_init - Wrapper over _dpaa_portal_init with thread level check
* XXX Complete this
*/
-int
-rte_dpaa_portal_init(void *arg)
+int rte_dpaa_portal_init(void *arg)
{
if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
return _dpaa_portal_init(arg);
@@ -273,6 +272,34 @@ rte_dpaa_portal_init(void *arg)
return 0;
}
+int
+rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq)
+{
+ /* Affine above created portal with channel*/
+ u32 sdqcr;
+ struct qman_portal *qp;
+
+ if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
+ _dpaa_portal_init(arg);
+
+ /* Initialise qman specific portals */
+ qp = fsl_qman_portal_create();
+ if (!qp) {
+ DPAA_BUS_LOG(ERR, "Unable to alloc fq portal");
+ return -1;
+ }
+ fq->qp = qp;
+ sdqcr = QM_SDQCR_CHANNELS_POOL_CONV(fq->ch_id);
+ qman_static_dequeue_add(sdqcr, qp);
+
+ return 0;
+}
+
+int rte_dpaa_portal_fq_close(struct qman_fq *fq)
+{
+ return fsl_qman_portal_destroy(fq->qp);
+}
+
void
dpaa_portal_finish(void *arg)
{
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 5027230..fc00d8d 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1190,21 +1190,24 @@ struct qman_fq_cb {
struct qman_fq {
/* Caller of qman_create_fq() provides these demux callbacks */
struct qman_fq_cb cb;
- /*
- * These are internal to the driver, don't touch. In particular, they
- * may change, be removed, or extended (so you shouldn't rely on
- * sizeof(qman_fq) being a constant).
- */
- spinlock_t fqlock;
- u32 fqid;
+
u32 fqid_le;
+ u16 ch_id;
+ u8 cgr_groupid;
+ u8 is_static;
/* DPDK Interface */
void *dpaa_intf;
+ /* affined portal in case of static queue */
+ struct qman_portal *qp;
+
volatile unsigned long flags;
+
enum qman_fq_state state;
- int cgr_groupid;
+ u32 fqid;
+ spinlock_t fqlock;
+
struct rb_node node;
#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
u32 key;
@@ -1383,7 +1386,7 @@ void qman_start_dequeues(void);
* (SDQCR). The requested pools are limited to those the portal has dequeue
* access to.
*/
-void qman_static_dequeue_add(u32 pools);
+void qman_static_dequeue_add(u32 pools, struct qman_portal *qm);
/**
* qman_static_dequeue_del - Remove pool channels from the portal SDQCR
@@ -1393,7 +1396,7 @@ void qman_static_dequeue_add(u32 pools);
* register (SDQCR). The requested pools are limited to those the portal has
* dequeue access to.
*/
-void qman_static_dequeue_del(u32 pools);
+void qman_static_dequeue_del(u32 pools, struct qman_portal *qp);
/**
* qman_static_dequeue_get - return the portal's current SDQCR
@@ -1402,7 +1405,7 @@ void qman_static_dequeue_del(u32 pools);
* entire register is returned, so if only the currently-enabled pool channels
* are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.
*/
-u32 qman_static_dequeue_get(void);
+u32 qman_static_dequeue_get(struct qman_portal *qp);
/**
* qman_dca - Perform a Discrete Consumption Acknowledgment
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index ada92f2..e183617 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -67,6 +67,10 @@ void bman_thread_irq(void);
int qman_global_init(void);
int bman_global_init(void);
+/* Direct portal create and destroy */
+struct qman_portal *fsl_qman_portal_create(void);
+int fsl_qman_portal_destroy(struct qman_portal *qp);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index 079849b..d9ec94e 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -39,6 +39,11 @@ enum dpaa_portal_type {
dpaa_portal_bman,
};
+struct dpaa_portal_map {
+ void *cinh;
+ void *cena;
+};
+
struct dpaa_ioctl_portal_map {
/* Input parameter, is a qman or bman portal required. */
enum dpaa_portal_type type;
@@ -50,10 +55,8 @@ struct dpaa_ioctl_portal_map {
/* Return value if the map succeeds, this gives the mapped
* cache-inhibited (cinh) and cache-enabled (cena) addresses.
*/
- struct dpaa_portal_map {
- void *cinh;
- void *cena;
- } addr;
+ struct dpaa_portal_map addr;
+
/* Qman-specific return values */
u16 channel;
uint32_t pools;
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index f412362..4e3afda 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -74,6 +74,8 @@ DPDK_18.02 {
qman_delete_cgr;
qman_modify_cgr;
qman_release_cgrid_range;
+ rte_dpaa_portal_fq_close;
+ rte_dpaa_portal_fq_init;
local: *;
} DPDK_17.11;
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index d9e8c84..f626066 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -136,6 +136,10 @@ void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
*/
int rte_dpaa_portal_init(void *arg);
+int rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq);
+
+int rte_dpaa_portal_fq_close(struct qman_fq *fq);
+
/**
* Cleanup a DPAA Portal
*/
--
2.7.4
^ permalink raw reply [flat|nested] 65+ messages in thread
* [dpdk-dev] [PATCH v2 09/18] net/dpaa: change Tx HW budget to 7
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (7 preceding siblings ...)
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 08/18] bus/dpaa: add support to create dynamic HW portal Hemant Agrawal
@ 2018-01-09 13:22 ` Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 10/18] net/dpaa: optimize the Tx burst Hemant Agrawal
` (9 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Change the Tx budget to 7 to best sync with the hardware.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.h | 2 +-
drivers/net/dpaa/dpaa_rxtx.c | 5 +++--
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index f00a77a..1b36567 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -41,7 +41,7 @@
#define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
/*Maximum number of slots available in TX ring*/
-#define MAX_TX_RING_SLOTS 8
+#define DPAA_TX_BURST_SIZE 7
#ifndef VLAN_TAG_SIZE
#define VLAN_TAG_SIZE 4 /** < Vlan Header Length */
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 630d7a5..565ca50 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -669,7 +669,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct rte_mbuf *mbuf, *mi = NULL;
struct rte_mempool *mp;
struct dpaa_bp_info *bp_info;
- struct qm_fd fd_arr[MAX_TX_RING_SLOTS];
+ struct qm_fd fd_arr[DPAA_TX_BURST_SIZE];
uint32_t frames_to_send, loop, i = 0;
uint16_t state;
int ret;
@@ -683,7 +683,8 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_DP_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
while (nb_bufs) {
- frames_to_send = (nb_bufs >> 3) ? MAX_TX_RING_SLOTS : nb_bufs;
+ frames_to_send = (nb_bufs > DPAA_TX_BURST_SIZE) ?
+ DPAA_TX_BURST_SIZE : nb_bufs;
for (loop = 0; loop < frames_to_send; loop++, i++) {
mbuf = bufs[i];
if (RTE_MBUF_DIRECT(mbuf)) {
--
2.7.4
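The budget change above means any Tx burst larger than 7 frames is sent in chunks of at most DPAA_TX_BURST_SIZE. A minimal sketch of that chunking arithmetic (count_tx_chunks() is an illustrative helper, not driver code):

```c
#include <stdint.h>

/* Illustrative sketch of the burst-splitting arithmetic in
 * dpaa_eth_queue_tx(): a burst of nb_bufs frames is sent in chunks of
 * at most DPAA_TX_BURST_SIZE (7) frames. count_tx_chunks() is a
 * hypothetical helper used here only to show the loop shape. */
#define DPAA_TX_BURST_SIZE 7

static uint32_t count_tx_chunks(uint16_t nb_bufs)
{
	uint32_t chunks = 0;

	while (nb_bufs) {
		/* Same computation as frames_to_send in the patch */
		uint16_t frames_to_send = (nb_bufs > DPAA_TX_BURST_SIZE) ?
				DPAA_TX_BURST_SIZE : nb_bufs;
		nb_bufs -= frames_to_send;
		chunks++;
	}
	return chunks;
}
```

Note the old `(nb_bufs >> 3)` test rounded the chunk size to 8; the new comparison caps it at the 7-frame hardware budget.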
* [dpdk-dev] [PATCH v2 10/18] net/dpaa: optimize the Tx burst
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (8 preceding siblings ...)
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 09/18] net/dpaa: change Tx HW budget to 7 Hemant Agrawal
@ 2018-01-09 13:22 ` Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 11/18] net/dpaa: optimize Rx path Hemant Agrawal
` (8 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Optimize the Tx burst for the common best case (direct, single-segment
mbufs from a DPAA-managed pool). Factor the Tx checksum offload handling
into a helper function so it can be reused in multiple code paths.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 73 ++++++++++++++++++++++++++++----------------
1 file changed, 46 insertions(+), 27 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 565ca50..148f265 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -272,6 +272,30 @@ static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
}
+static inline void
+dpaa_unsegmented_checksum(struct rte_mbuf *mbuf, struct qm_fd *fd_arr)
+{
+ if (!mbuf->packet_type) {
+ struct rte_net_hdr_lens hdr_lens;
+
+ mbuf->packet_type = rte_net_get_ptype(mbuf, &hdr_lens,
+ RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK
+ | RTE_PTYPE_L4_MASK);
+ mbuf->l2_len = hdr_lens.l2_len;
+ mbuf->l3_len = hdr_lens.l3_len;
+ }
+ if (mbuf->data_off < (DEFAULT_TX_ICEOF +
+ sizeof(struct dpaa_eth_parse_results_t))) {
+ DPAA_DP_LOG(DEBUG, "Checksum offload Err: "
+ "Not enough Headroom "
+ "space for correct Checksum offload."
+ "So Calculating checksum in Software.");
+ dpaa_checksum(mbuf);
+ } else {
+ dpaa_checksum_offload(mbuf, fd_arr, mbuf->buf_addr);
+ }
+}
+
struct rte_mbuf *
dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
{
@@ -594,27 +618,8 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
rte_pktmbuf_free(mbuf);
}
- if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
- if (!mbuf->packet_type) {
- struct rte_net_hdr_lens hdr_lens;
-
- mbuf->packet_type = rte_net_get_ptype(mbuf, &hdr_lens,
- RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK
- | RTE_PTYPE_L4_MASK);
- mbuf->l2_len = hdr_lens.l2_len;
- mbuf->l3_len = hdr_lens.l3_len;
- }
- if (mbuf->data_off < (DEFAULT_TX_ICEOF +
- sizeof(struct dpaa_eth_parse_results_t))) {
- DPAA_DP_LOG(DEBUG, "Checksum offload Err: "
- "Not enough Headroom "
- "space for correct Checksum offload."
- "So Calculating checksum in Software.");
- dpaa_checksum(mbuf);
- } else {
- dpaa_checksum_offload(mbuf, fd_arr, mbuf->buf_addr);
- }
- }
+ if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK)
+ dpaa_unsegmented_checksum(mbuf, fd_arr);
}
/* Handle all mbufs on dpaa BMAN managed pool */
@@ -670,7 +675,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct rte_mempool *mp;
struct dpaa_bp_info *bp_info;
struct qm_fd fd_arr[DPAA_TX_BURST_SIZE];
- uint32_t frames_to_send, loop, i = 0;
+ uint32_t frames_to_send, loop, sent = 0;
uint16_t state;
int ret;
@@ -685,10 +690,23 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
while (nb_bufs) {
frames_to_send = (nb_bufs > DPAA_TX_BURST_SIZE) ?
DPAA_TX_BURST_SIZE : nb_bufs;
- for (loop = 0; loop < frames_to_send; loop++, i++) {
- mbuf = bufs[i];
- if (RTE_MBUF_DIRECT(mbuf)) {
+ for (loop = 0; loop < frames_to_send; loop++) {
+ mbuf = *(bufs++);
+ if (likely(RTE_MBUF_DIRECT(mbuf))) {
mp = mbuf->pool;
+ bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+ if (likely(mp->ops_index ==
+ bp_info->dpaa_ops_index &&
+ mbuf->nb_segs == 1 &&
+ rte_mbuf_refcnt_read(mbuf) == 1)) {
+ DPAA_MBUF_TO_CONTIG_FD(mbuf,
+ &fd_arr[loop], bp_info->bpid);
+ if (mbuf->ol_flags &
+ DPAA_TX_CKSUM_OFFLOAD_MASK)
+ dpaa_unsegmented_checksum(mbuf,
+ &fd_arr[loop]);
+ continue;
+ }
} else {
mi = rte_mbuf_from_indirect(mbuf);
mp = mi->pool;
@@ -729,11 +747,12 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
frames_to_send - loop);
}
nb_bufs -= frames_to_send;
+ sent += frames_to_send;
}
- DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", i, q);
+ DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q);
- return i;
+ return sent;
}
uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
--
2.7.4
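The fast path added above applies only when the mbuf can be handed to hardware as-is, with no copy and no segment walk. A hedged sketch of that eligibility test (the struct below is a standalone mock, not struct rte_mbuf; 'direct' stands in for RTE_MBUF_DIRECT()):

```c
#include <stdint.h>

/* Mock of the Tx fast-path eligibility test in dpaa_eth_queue_tx().
 * All names here are illustrative stand-ins for rte_mbuf fields. */
struct mock_mbuf {
	int direct;        /* not an indirect/attached mbuf     */
	int same_pool_ops; /* mempool uses the DPAA ops         */
	uint16_t nb_segs;  /* single segment only               */
	uint16_t refcnt;   /* exclusively owned by the caller   */
};

static int tx_fast_path_ok(const struct mock_mbuf *m)
{
	return m->direct && m->same_pool_ops &&
	       m->nb_segs == 1 && m->refcnt == 1;
}
```

Any mbuf failing one of these checks falls back to the slower per-pool handling, exactly as in the patch.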
* [dpdk-dev] [PATCH v2 11/18] net/dpaa: optimize Rx path
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (9 preceding siblings ...)
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 10/18] net/dpaa: optimize the Tx burst Hemant Agrawal
@ 2018-01-09 13:22 ` Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 12/18] bus/dpaa: query queue frame count support Hemant Agrawal
` (7 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:22 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 48 ++++++++++++++++++++------------------------
drivers/net/dpaa/dpaa_rxtx.h | 2 +-
2 files changed, 23 insertions(+), 27 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 148f265..98671fa 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -97,12 +97,6 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
DPAA_DP_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
switch (prs) {
- case DPAA_PKT_TYPE_NONE:
- m->packet_type = 0;
- break;
- case DPAA_PKT_TYPE_ETHER:
- m->packet_type = RTE_PTYPE_L2_ETHER;
- break;
case DPAA_PKT_TYPE_IPV4:
m->packet_type = RTE_PTYPE_L2_ETHER |
RTE_PTYPE_L3_IPV4;
@@ -111,6 +105,9 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
m->packet_type = RTE_PTYPE_L2_ETHER |
RTE_PTYPE_L3_IPV6;
break;
+ case DPAA_PKT_TYPE_ETHER:
+ m->packet_type = RTE_PTYPE_L2_ETHER;
+ break;
case DPAA_PKT_TYPE_IPV4_FRAG:
case DPAA_PKT_TYPE_IPV4_FRAG_UDP:
case DPAA_PKT_TYPE_IPV4_FRAG_TCP:
@@ -173,6 +170,9 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
m->packet_type = RTE_PTYPE_L2_ETHER |
RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP;
break;
+ case DPAA_PKT_TYPE_NONE:
+ m->packet_type = 0;
+ break;
/* More switch cases can be added */
default:
dpaa_slow_parsing(m, prs);
@@ -183,12 +183,11 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
<< DPAA_PKT_L3_LEN_SHIFT;
/* Set the hash values */
- m->hash.rss = (uint32_t)(rte_be_to_cpu_64(annot->hash));
- m->ol_flags = PKT_RX_RSS_HASH;
+ m->hash.rss = (uint32_t)(annot->hash);
/* All packets with Bad checksum are dropped by interface (and
* corresponding notification issued to RX error queues).
*/
- m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+ m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_IP_CKSUM_GOOD;
/* Check if Vlan is present */
if (prs & DPAA_PARSE_VLAN_MASK)
@@ -297,7 +296,7 @@ dpaa_unsegmented_checksum(struct rte_mbuf *mbuf, struct qm_fd *fd_arr)
}
struct rte_mbuf *
-dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
+dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
{
struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
@@ -355,34 +354,31 @@ dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
return first_seg;
}
-static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
- uint32_t ifid)
+static inline struct rte_mbuf *
+dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
{
- struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
struct rte_mbuf *mbuf;
- void *ptr;
+ struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+ void *ptr = rte_dpaa_mem_ptov(qm_fd_addr(fd));
uint8_t format =
(fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
- uint16_t offset =
- (fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
- uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
+ uint16_t offset;
+ uint32_t length;
DPAA_DP_LOG(DEBUG, " FD--->MBUF");
if (unlikely(format == qm_fd_sg))
return dpaa_eth_sg_to_mbuf(fd, ifid);
+ rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF));
+
+ offset = (fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
+ length = fd->opaque & DPAA_FD_LENGTH_MASK;
+
/* Ignoring case when format != qm_fd_contig */
dpaa_display_frame(fd);
- ptr = rte_dpaa_mem_ptov(fd->addr);
- /* Ignoring case when ptr would be NULL. That is only possible incase
- * of a corrupted packet
- */
mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
- /* Prefetch the Parse results and packet data to L1 */
- rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF));
- rte_prefetch0((void *)((uint8_t *)ptr + offset));
mbuf->data_off = offset;
mbuf->data_len = length;
@@ -462,11 +458,11 @@ static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
if (!dpaa_mbuf)
return NULL;
- memcpy((uint8_t *)(dpaa_mbuf->buf_addr) + mbuf->data_off, (void *)
+ memcpy((uint8_t *)(dpaa_mbuf->buf_addr) + RTE_PKTMBUF_HEADROOM, (void *)
((uint8_t *)(mbuf->buf_addr) + mbuf->data_off), mbuf->pkt_len);
/* Copy only the required fields */
- dpaa_mbuf->data_off = mbuf->data_off;
+ dpaa_mbuf->data_off = RTE_PKTMBUF_HEADROOM;
dpaa_mbuf->pkt_len = mbuf->pkt_len;
dpaa_mbuf->ol_flags = mbuf->ol_flags;
dpaa_mbuf->packet_type = mbuf->packet_type;
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 9308b3a..78e804f 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -262,7 +262,7 @@ uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
struct rte_mbuf **bufs __rte_unused,
uint16_t nb_bufs __rte_unused);
-struct rte_mbuf *dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid);
+struct rte_mbuf *dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid);
int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
struct qm_fd *fd,
--
2.7.4
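In the reworked dpaa_eth_fd_to_mbuf() above, the format, offset and length are all unpacked from the FD's 32-bit `opaque` word by mask and shift. A sketch of that decode; the field widths and mask values below are assumptions for illustration and may not match the driver's DPAA_FD_* macros exactly:

```c
#include <stdint.h>

/* Hypothetical decode of a packed 32-bit frame-descriptor word:
 * format in the top bits, offset in the middle, length in the low
 * bits. Field widths are illustrative assumptions. */
#define FD_FORMAT_SHIFT 29
#define FD_FORMAT_MASK  0xE0000000u
#define FD_OFFSET_SHIFT 20
#define FD_OFFSET_MASK  0x1FF00000u
#define FD_LENGTH_MASK  0x000FFFFFu

static uint32_t fd_pack(uint8_t fmt, uint16_t off, uint32_t len)
{
	return ((uint32_t)fmt << FD_FORMAT_SHIFT) |
	       ((uint32_t)off << FD_OFFSET_SHIFT) |
	       (len & FD_LENGTH_MASK);
}

static uint16_t fd_offset(uint32_t opaque)
{
	return (opaque & FD_OFFSET_MASK) >> FD_OFFSET_SHIFT;
}

static uint32_t fd_length(uint32_t opaque)
{
	return opaque & FD_LENGTH_MASK;
}
```

The patch also moves the offset/length extraction after the scatter-gather early return, so the contiguous-frame fast path does only the work it needs.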
* [dpdk-dev] [PATCH v2 12/18] bus/dpaa: query queue frame count support
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (10 preceding siblings ...)
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 11/18] net/dpaa: optimize Rx path Hemant Agrawal
@ 2018-01-09 13:23 ` Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 13/18] net/dpaa: add Rx queue " Hemant Agrawal
` (6 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:23 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 22 ++++++++++++++++++++++
drivers/bus/dpaa/include/fsl_qman.h | 7 +++++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 1 +
3 files changed, 30 insertions(+)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index d8fb25a..ffb008e 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -1722,6 +1722,28 @@ int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np)
return 0;
}
+int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt)
+{
+ struct qm_mc_command *mcc;
+ struct qm_mc_result *mcr;
+ struct qman_portal *p = get_affine_portal();
+
+ mcc = qm_mc_start(&p->p);
+ mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+ qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+ while (!(mcr = qm_mc_result(&p->p)))
+ cpu_relax();
+ DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+
+ if (mcr->result == QM_MCR_RESULT_OK)
+ *frm_cnt = be24_to_cpu(mcr->queryfq_np.frm_cnt);
+ else if (mcr->result == QM_MCR_RESULT_ERR_FQID)
+ return -ERANGE;
+ else if (mcr->result != QM_MCR_RESULT_OK)
+ return -EIO;
+ return 0;
+}
+
int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq)
{
struct qm_mc_command *mcc;
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index fc00d8d..d769d50 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1616,6 +1616,13 @@ int qman_query_fq_has_pkts(struct qman_fq *fq);
int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
/**
+ * qman_query_fq_frm_cnt - Queries fq frame count
+ * @fq: the frame queue object to be queried
+ * @frm_cnt: number of frames in the queue
+ */
+int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt);
+
+/**
* qman_query_wq - Queries work queue lengths
* @query_dedicated: If non-zero, query length of WQs in the channel dedicated
* to this software portal. Otherwise, query length of WQs in a
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 4e3afda..212c75f 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -73,6 +73,7 @@ DPDK_18.02 {
qman_create_cgr;
qman_delete_cgr;
qman_modify_cgr;
+ qman_query_fq_frm_cnt;
qman_release_cgrid_range;
rte_dpaa_portal_fq_close;
rte_dpaa_portal_fq_init;
--
2.7.4
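The frm_cnt field in the QUERYFQ_NP response is a 24-bit big-endian value, hence the be24_to_cpu() conversion above. A sketch of assembling such a field from raw bytes (the helper name is illustrative):

```c
#include <stdint.h>

/* Illustrative be24 conversion: a 24-bit big-endian field, such as the
 * QUERYFQ_NP frame count, assembled byte by byte into a host value. */
static uint32_t be24_to_u32(const uint8_t b[3])
{
	return ((uint32_t)b[0] << 16) | ((uint32_t)b[1] << 8) | b[2];
}
```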
* [dpdk-dev] [PATCH v2 13/18] net/dpaa: add Rx queue count support
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (11 preceding siblings ...)
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 12/18] bus/dpaa: query queue frame count support Hemant Agrawal
@ 2018-01-09 13:23 ` Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 14/18] net/dpaa: add support for loopback API Hemant Agrawal
` (5 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:23 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 5d94af5..de016ab 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -513,6 +513,22 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
PMD_INIT_FUNC_TRACE();
}
+static uint32_t
+dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+ u32 frm_cnt = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
+ RTE_LOG(DEBUG, PMD, "RX frame count for q(%d) is %u\n",
+ rx_queue_id, frm_cnt);
+ }
+ return frm_cnt;
+}
+
static int dpaa_link_down(struct rte_eth_dev *dev)
{
PMD_INIT_FUNC_TRACE();
@@ -664,6 +680,7 @@ static struct eth_dev_ops dpaa_devops = {
.tx_queue_setup = dpaa_eth_tx_queue_setup,
.rx_queue_release = dpaa_eth_rx_queue_release,
.tx_queue_release = dpaa_eth_tx_queue_release,
+ .rx_queue_count = dpaa_dev_rx_queue_count,
.flow_ctrl_get = dpaa_flow_ctrl_get,
.flow_ctrl_set = dpaa_flow_ctrl_set,
--
2.7.4
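Wiring .rx_queue_count into eth_dev_ops lets applications use the generic rte_eth_rx_queue_count() API rather than anything DPAA-specific. A standalone mock of that ops-table dispatch (every name below is an illustrative stand-in, not DPDK's):

```c
#include <stddef.h>
#include <stdint.h>

/* Mock of ethdev-style dispatch: the generic API looks up the
 * driver-provided callback in an ops table. All names illustrative. */
struct mock_dev;
struct mock_dev_ops {
	uint32_t (*rx_queue_count)(struct mock_dev *dev, uint16_t qid);
};
struct mock_dev {
	const struct mock_dev_ops *ops;
	uint32_t frames[4]; /* pretend per-queue frame counts */
};

static uint32_t mock_dpaa_rx_queue_count(struct mock_dev *dev, uint16_t qid)
{
	return dev->frames[qid];
}

static const struct mock_dev_ops mock_ops = {
	.rx_queue_count = mock_dpaa_rx_queue_count,
};

static uint32_t mock_eth_rx_queue_count(struct mock_dev *dev, uint16_t qid)
{
	if (dev->ops->rx_queue_count == NULL)
		return 0; /* real API reports unsupported instead */
	return dev->ops->rx_queue_count(dev, qid);
}
```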
* [dpdk-dev] [PATCH v2 14/18] net/dpaa: add support for loopback API
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (12 preceding siblings ...)
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 13/18] net/dpaa: add Rx queue " Hemant Agrawal
@ 2018-01-09 13:23 ` Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 15/18] app/testpmd: add support for loopback config for dpaa Hemant Agrawal
` (4 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:23 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/Makefile | 3 +++
drivers/net/dpaa/dpaa_ethdev.c | 42 +++++++++++++++++++++++++++++++
drivers/net/dpaa/rte_pmd_dpaa.h | 35 ++++++++++++++++++++++++++
drivers/net/dpaa/rte_pmd_dpaa_version.map | 8 ++++++
4 files changed, 88 insertions(+)
create mode 100644 drivers/net/dpaa/rte_pmd_dpaa.h
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index e5f662f..b1fc5a0 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -34,4 +34,7 @@ LDLIBS += -lrte_mempool_dpaa
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs
+# install this header file
+SYMLINK-$(CONFIG_RTE_LIBRTE_DPAA_PMD)-include := rte_pmd_dpaa.h
+
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index de016ab..85ccea4 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -38,6 +38,7 @@
#include <dpaa_ethdev.h>
#include <dpaa_rxtx.h>
+#include <rte_pmd_dpaa.h>
#include <fsl_usd.h>
#include <fsl_qman.h>
@@ -84,6 +85,8 @@ static const struct rte_dpaa_xstats_name_off dpaa_xstats_strings[] = {
offsetof(struct dpaa_if_stats, tund)},
};
+static struct rte_dpaa_driver rte_dpaa_pmd;
+
static int
dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
@@ -707,6 +710,45 @@ static struct eth_dev_ops dpaa_devops = {
.fw_version_get = dpaa_fw_version_get,
};
+static bool
+is_device_supported(struct rte_eth_dev *dev, struct rte_dpaa_driver *drv)
+{
+ if (strcmp(dev->device->driver->name,
+ drv->driver.name))
+ return false;
+
+ return true;
+}
+
+static bool
+is_dpaa_supported(struct rte_eth_dev *dev)
+{
+ return is_device_supported(dev, &rte_dpaa_pmd);
+}
+
+int
+rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on)
+{
+ struct rte_eth_dev *dev;
+ struct dpaa_if *dpaa_intf;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV);
+
+ dev = &rte_eth_devices[port];
+
+ if (!is_dpaa_supported(dev))
+ return -ENOTSUP;
+
+ dpaa_intf = dev->data->dev_private;
+
+ if (on)
+ fman_if_loopback_enable(dpaa_intf->fif);
+ else
+ fman_if_loopback_disable(dpaa_intf->fif);
+
+ return 0;
+}
+
static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
{
struct rte_eth_fc_conf *fc_conf;
diff --git a/drivers/net/dpaa/rte_pmd_dpaa.h b/drivers/net/dpaa/rte_pmd_dpaa.h
new file mode 100644
index 0000000..04955a4
--- /dev/null
+++ b/drivers/net/dpaa/rte_pmd_dpaa.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2017 NXP.
+ */
+
+#ifndef _PMD_DPAA_H_
+#define _PMD_DPAA_H_
+
+/**
+ * @file rte_pmd_dpaa.h
+ *
+ * dpaa PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <rte_ethdev.h>
+
+/**
+ * Enable/Disable TX loopback
+ *
+ * @param port
+ * The port identifier of the Ethernet device.
+ * @param on
+ * 1 - Enable TX loopback.
+ * 0 - Disable TX loopback.
+ * @return
+ * - (0) if successful.
+ * - (-ENODEV) if *port* invalid.
+ * - (-EINVAL) if bad parameter.
+ */
+int rte_pmd_dpaa_set_tx_loopback(uint8_t port,
+ uint8_t on);
+
+#endif /* _PMD_DPAA_H_ */
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
index a70bd19..d76acbd 100644
--- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -2,3 +2,11 @@ DPDK_17.11 {
local: *;
};
+
+DPDK_18.02 {
+ global:
+
+ rte_pmd_dpaa_set_tx_loopback;
+
+ local: *;
+} DPDK_17.11;
--
2.7.4
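A PMD-specific API such as rte_pmd_dpaa_set_tx_loopback() must first confirm the port is actually bound to this PMD before touching its private data, which is what is_device_supported() does via a driver-name comparison. A minimal sketch of that check (driver-name strings are illustrative):

```c
#include <string.h>

/* Sketch of the driver-ownership check: a PMD-specific API compares
 * the device's bound driver name against its own before proceeding. */
static int is_device_supported(const char *dev_driver_name,
			       const char *pmd_driver_name)
{
	return strcmp(dev_driver_name, pmd_driver_name) == 0;
}
```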
* [dpdk-dev] [PATCH v2 15/18] app/testpmd: add support for loopback config for dpaa
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (13 preceding siblings ...)
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 14/18] net/dpaa: add support for loopback API Hemant Agrawal
@ 2018-01-09 13:23 ` Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 16/18] bus/dpaa: add support for static queues Hemant Agrawal
` (3 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:23 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
app/test-pmd/Makefile | 4 ++++
app/test-pmd/cmdline.c | 7 +++++++
2 files changed, 11 insertions(+)
diff --git a/app/test-pmd/Makefile b/app/test-pmd/Makefile
index 82b3481..34125e5 100644
--- a/app/test-pmd/Makefile
+++ b/app/test-pmd/Makefile
@@ -43,6 +43,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_PMD_BOND),y)
LDLIBS += -lrte_pmd_bond
endif
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
+LDLIBS += -lrte_pmd_dpaa
+endif
+
ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_PMD),y)
LDLIBS += -lrte_pmd_ixgbe
endif
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index f71d963..32096aa 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -89,6 +89,9 @@
#include <rte_eth_bond.h>
#include <rte_eth_bond_8023ad.h>
#endif
+#ifdef RTE_LIBRTE_DPAA_PMD
+#include <rte_pmd_dpaa.h>
+#endif
#ifdef RTE_LIBRTE_IXGBE_PMD
#include <rte_pmd_ixgbe.h>
#endif
@@ -12620,6 +12623,10 @@ cmd_set_tx_loopback_parsed(
if (ret == -ENOTSUP)
ret = rte_pmd_bnxt_set_tx_loopback(res->port_id, is_on);
#endif
+#ifdef RTE_LIBRTE_DPAA_PMD
+ if (ret == -ENOTSUP)
+ ret = rte_pmd_dpaa_set_tx_loopback(res->port_id, is_on);
+#endif
switch (ret) {
case 0:
--
2.7.4
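The testpmd hook above follows the existing fallback pattern: each PMD-specific set_tx_loopback is tried in turn, and every PMD that does not own the port returns -ENOTSUP so the next one gets a chance. A standalone mock of that chain (all functions are stand-ins):

```c
#include <errno.h>

/* Mock of testpmd's fallback chain in cmd_set_tx_loopback_parsed():
 * each PMD-specific call answers -ENOTSUP for ports it doesn't own,
 * letting the next candidate be tried. Names are illustrative. */
static int mock_ixgbe_set_tx_loopback(int port, int on)
{
	(void)on;
	return (port == 0) ? 0 : -ENOTSUP;
}

static int mock_dpaa_set_tx_loopback(int port, int on)
{
	(void)on;
	return (port == 1) ? 0 : -ENOTSUP;
}

static int set_tx_loopback(int port, int on)
{
	int ret = -ENOTSUP;

	if (ret == -ENOTSUP)
		ret = mock_ixgbe_set_tx_loopback(port, on);
	if (ret == -ENOTSUP)
		ret = mock_dpaa_set_tx_loopback(port, on);
	return ret;
}
```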
* [dpdk-dev] [PATCH v2 16/18] bus/dpaa: add support for static queues
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (14 preceding siblings ...)
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 15/18] app/testpmd: add support for loopback config for dpaa Hemant Agrawal
@ 2018-01-09 13:23 ` Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 17/18] net/dpaa: integrate the support of push mode in PMD Hemant Agrawal
` (2 subsequent siblings)
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:23 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Sunil Kumar Kori
DPAA hardware supports two kinds of queues:
1. Pull mode queue - where one needs to regularly poll and pull the packets.
2. Push mode queue - where the hardware pushes packets to the queue. These are
high performance queues, but limited in number.
This patch adds driver support for push mode queues.
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 64 +++++++++++++++++++++++++++++++
drivers/bus/dpaa/base/qbman/qman.h | 4 +-
drivers/bus/dpaa/include/fsl_qman.h | 14 ++++++-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 4 ++
4 files changed, 83 insertions(+), 3 deletions(-)
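The push-mode support below centers on qman_portal_poll_rx(), which drains up to poll_limit DQRR entries and stores the resulting buffers for the caller. A hardware-free model of that bounded consume loop (the ring, entry and index types are all illustrative stand-ins):

```c
#include <stdint.h>

/* Hardware-free model of a bounded DQRR-style poll: consume up to
 * poll_limit available entries into 'out' and return how many were
 * processed. All types here are illustrative mocks. */
struct mock_ring {
	int entries[16];
	unsigned int fill; /* entries currently available */
	unsigned int ci;   /* consumer index              */
};

static unsigned int mock_poll_rx(struct mock_ring *r,
				 unsigned int poll_limit, int *out)
{
	unsigned int limit = 0;

	do {
		if (r->fill == 0)
			break;             /* ring empty: stop early   */
		out[limit] = r->entries[r->ci];
		r->ci = (r->ci + 1) % 16;  /* advance consumer cursor  */
		r->fill--;
	} while (++limit < poll_limit);    /* respect the poll budget  */

	return limit;
}
```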
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index ffb008e..7e285a5 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -1051,6 +1051,70 @@ u16 qman_affine_channel(int cpu)
return affine_channels[cpu];
}
+unsigned int qman_portal_poll_rx(unsigned int poll_limit,
+ void **bufs,
+ struct qman_portal *p)
+{
+ const struct qm_dqrr_entry *dq;
+ struct qman_fq *fq;
+ enum qman_cb_dqrr_result res;
+ unsigned int limit = 0;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ struct qm_dqrr_entry *shadow;
+#endif
+ unsigned int rx_number = 0;
+
+ do {
+ qm_dqrr_pvb_update(&p->p);
+ dq = qm_dqrr_current(&p->p);
+ if (unlikely(!dq))
+ break;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ /* If running on an LE system the fields of the
+ * dequeue entry must be swapped. Because the
+ * QMan HW will ignore writes, the DQRR entry is
+ * copied and the index stored within the copy.
+ */
+ shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+ *shadow = *dq;
+ dq = shadow;
+ shadow->fqid = be32_to_cpu(shadow->fqid);
+ shadow->contextB = be32_to_cpu(shadow->contextB);
+ shadow->seqnum = be16_to_cpu(shadow->seqnum);
+ hw_fd_to_cpu(&shadow->fd);
+#endif
+
+ /* SDQCR: context_b points to the FQ */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+ fq = get_fq_table_entry(dq->contextB);
+#else
+ fq = (void *)(uintptr_t)dq->contextB;
+#endif
+ /* Now let the callback do its stuff */
+ res = fq->cb.dqrr_dpdk_cb(NULL, p, fq, dq, &bufs[rx_number]);
+ rx_number++;
+ /* Interpret 'dq' from a driver perspective. */
+ /*
+ * Parking isn't possible unless HELDACTIVE was set. NB,
+ * FORCEELIGIBLE implies HELDACTIVE, so we only need to
+ * check for HELDACTIVE to cover both.
+ */
+ DPAA_ASSERT((dq->stat & QM_DQRR_STAT_FQ_HELDACTIVE) ||
+ (res != qman_cb_dqrr_park));
+ qm_dqrr_cdc_consume_1ptr(&p->p, dq, res == qman_cb_dqrr_park);
+ /* Move forward */
+ qm_dqrr_next(&p->p);
+ /*
+ * Entry processed and consumed, increment our counter. The
+ * callback can request that we exit after consuming the
+ * entry, and we also exit if we reach our processing limit,
+ * so loop back only if neither of these conditions is met.
+ */
+ } while (likely(++limit < poll_limit));
+
+ return limit;
+}
+
struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq)
{
struct qman_portal *p = get_affine_portal();
diff --git a/drivers/bus/dpaa/base/qbman/qman.h b/drivers/bus/dpaa/base/qbman/qman.h
index a433369..4346d86 100644
--- a/drivers/bus/dpaa/base/qbman/qman.h
+++ b/drivers/bus/dpaa/base/qbman/qman.h
@@ -154,7 +154,7 @@ struct qm_eqcr {
};
struct qm_dqrr {
- const struct qm_dqrr_entry *ring, *cursor;
+ struct qm_dqrr_entry *ring, *cursor;
u8 pi, ci, fill, ithresh, vbit;
#ifdef RTE_LIBRTE_DPAA_HWDEBUG
enum qm_dqrr_dmode dmode;
@@ -441,7 +441,7 @@ static inline u8 DQRR_PTR2IDX(const struct qm_dqrr_entry *e)
return ((uintptr_t)e >> 6) & (QM_DQRR_SIZE - 1);
}
-static inline const struct qm_dqrr_entry *DQRR_INC(
+static inline struct qm_dqrr_entry *DQRR_INC(
const struct qm_dqrr_entry *e)
{
return DQRR_CARRYCLEAR(e + 1);
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index d769d50..ad40d80 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1124,6 +1124,12 @@ typedef enum qman_cb_dqrr_result (*qman_cb_dqrr)(struct qman_portal *qm,
struct qman_fq *fq,
const struct qm_dqrr_entry *dqrr);
+typedef enum qman_cb_dqrr_result (*qman_dpdk_cb_dqrr)(void *event,
+ struct qman_portal *qm,
+ struct qman_fq *fq,
+ const struct qm_dqrr_entry *dqrr,
+ void **bd);
+
/*
* This callback type is used when handling ERNs, FQRNs and FQRLs via MR. They
* are always consumed after the callback returns.
@@ -1182,7 +1188,10 @@ enum qman_fq_state {
*/
struct qman_fq_cb {
- qman_cb_dqrr dqrr; /* for dequeued frames */
+ union { /* for dequeued frames */
+ qman_dpdk_cb_dqrr dqrr_dpdk_cb;
+ qman_cb_dqrr dqrr;
+ };
qman_cb_mr ern; /* for s/w ERNs */
qman_cb_mr fqs; /* frame-queue state changes*/
};
@@ -1299,6 +1308,9 @@ int qman_get_portal_index(void);
*/
u16 qman_affine_channel(int cpu);
+unsigned int qman_portal_poll_rx(unsigned int poll_limit,
+ void **bufs, struct qman_portal *q);
+
/**
* qman_set_vdq - Issue a volatile dequeue command
* @fq: Frame Queue on which the volatile dequeue command is issued
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 212c75f..ac455cd 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -70,11 +70,15 @@ DPDK_18.02 {
dpaa_svr_family;
qman_alloc_cgrid_range;
+ qman_alloc_pool_range;
qman_create_cgr;
qman_delete_cgr;
qman_modify_cgr;
+ qman_oos_fq;
+ qman_portal_poll_rx;
qman_query_fq_frm_cnt;
qman_release_cgrid_range;
+ qman_retire_fq;
rte_dpaa_portal_fq_close;
rte_dpaa_portal_fq_init;
--
2.7.4
^ permalink raw reply [flat|nested] 65+ messages in thread
* [dpdk-dev] [PATCH v2 17/18] net/dpaa: integrate the support of push mode in PMD
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (15 preceding siblings ...)
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 16/18] bus/dpaa: add support for static queues Hemant Agrawal
@ 2018-01-09 13:23 ` Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 18/18] bus/dpaa: support for enqueue frames of multiple queues Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19] DPAA PMD improvements Hemant Agrawal
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:23 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Sunil Kumar Kori, Nipun Gupta
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
doc/guides/nics/dpaa.rst | 11 ++++++++
drivers/net/dpaa/dpaa_ethdev.c | 64 +++++++++++++++++++++++++++++++++++++-----
drivers/net/dpaa/dpaa_ethdev.h | 2 +-
drivers/net/dpaa/dpaa_rxtx.c | 34 ++++++++++++++++++++++
drivers/net/dpaa/dpaa_rxtx.h | 5 ++++
5 files changed, 108 insertions(+), 8 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index a62f128..0a13996 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -290,6 +290,17 @@ state during application initialization:
In case the application is configured to use lesser number of queues than
configured above, it might result in packet loss (because of distribution).
+- ``DPAA_PUSH_QUEUES_NUMBER`` (default 4)
+
+ This defines the number of high-performance queues to be used for ethdev Rx.
+ These queues use one private HW portal per configured queue, so they are
+ limited in the system. The first configured ethdev queues are automatically
+ assigned from these high-performance PUSH queues; any queue configuration
+ beyond that number uses standard Rx queues. The application can reduce this
+ number if HW portals are limited.
+ The valid values are from '0' to '4'. The value shall be set to '0' if the
+ application wants to use eventdev with the DPAA device.
+
Driver compilation and testing
------------------------------
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 85ccea4..444c122 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -47,6 +47,14 @@
/* Keep track of whether QMAN and BMAN have been globally initialized */
static int is_global_init;
+/* At present we only allow up to 4 push mode queues - as each of these
+ * queues needs a dedicated portal and we are short of portals.
+ */
+#define DPAA_MAX_PUSH_MODE_QUEUE 4
+
+static int dpaa_push_mode_max_queue = DPAA_MAX_PUSH_MODE_QUEUE;
+static int dpaa_push_queue_idx; /* Count of queues configured in push mode */
+
/* Per FQ Taildrop in frame count */
static unsigned int td_threshold = CGR_RX_PERFQ_THRESH;
@@ -434,6 +442,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
+ struct qm_mcc_initfq opts = {0};
+ u32 flags = 0;
+ int ret;
PMD_INIT_FUNC_TRACE();
@@ -469,13 +480,45 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->name, fd_offset,
fman_if_get_fdoff(dpaa_intf->fif));
}
-
+ /* Check if this queue can still use push mode; no error check for now */
+ if (dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
+ dpaa_push_queue_idx++;
+ opts.we_mask = QM_INITFQ_WE_FQCTRL | QM_INITFQ_WE_CONTEXTA;
+ opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK |
+ QM_FQCTRL_CTXASTASHING |
+ QM_FQCTRL_PREFERINCACHE;
+ opts.fqd.context_a.stashing.exclusive = 0;
+ opts.fqd.context_a.stashing.annotation_cl =
+ DPAA_IF_RX_ANNOTATION_STASH;
+ opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
+ opts.fqd.context_a.stashing.context_cl =
+ DPAA_IF_RX_CONTEXT_STASH;
+
+ /* Create a channel and associate the given queue with the channel */
+ qman_alloc_pool_range((u32 *)&rxq->ch_id, 1, 1, 0);
+ opts.we_mask = opts.we_mask | QM_INITFQ_WE_DESTWQ;
+ opts.fqd.dest.channel = rxq->ch_id;
+ opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
+ flags = QMAN_INITFQ_FLAG_SCHED;
+
+ /* Configure tail drop */
+ if (dpaa_intf->cgr_rx) {
+ opts.we_mask |= QM_INITFQ_WE_CGID;
+ opts.fqd.cgid = dpaa_intf->cgr_rx[queue_idx].cgrid;
+ opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+ }
+ ret = qman_init_fq(rxq, flags, &opts);
+ if (ret)
+ DPAA_PMD_ERR("Channel/Queue association failed. fqid %d"
+ " ret: %d", rxq->fqid, ret);
+ rxq->cb.dqrr_dpdk_cb = dpaa_rx_cb;
+ rxq->is_static = true;
+ }
dev->data->rx_queues[queue_idx] = rxq;
/* configure the CGR size as per the desc size */
if (dpaa_intf->cgr_rx) {
struct qm_mcc_initcgr cgr_opts = {0};
- int ret;
/* Enable tail drop with cgr on this queue */
qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres, nb_desc, 0);
@@ -809,11 +852,8 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
fqid, ret);
return ret;
}
-
- opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
- QM_INITFQ_WE_CONTEXTA;
-
- opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
+ fq->is_static = false;
+ opts.we_mask = QM_INITFQ_WE_FQCTRL | QM_INITFQ_WE_CONTEXTA;
opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
QM_FQCTRL_PREFERINCACHE;
opts.fqd.context_a.stashing.exclusive = 0;
@@ -947,6 +987,16 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
else
num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
+ /* Check if push mode queues are to be enabled. Currently we are
+ * allowing only one queue per thread.
+ */
+ if (getenv("DPAA_PUSH_QUEUES_NUMBER")) {
+ dpaa_push_mode_max_queue =
+ atoi(getenv("DPAA_PUSH_QUEUES_NUMBER"));
+ if (dpaa_push_mode_max_queue > DPAA_MAX_PUSH_MODE_QUEUE)
+ dpaa_push_mode_max_queue = DPAA_MAX_PUSH_MODE_QUEUE;
+ }
+
/* Each device can not have more than DPAA_PCD_FQID_MULTIPLIER RX
* queues.
*/
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 1b36567..1fa6caf 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -54,7 +54,7 @@
#define DPAA_MAX_NUM_PCD_QUEUES 32
#define DPAA_IF_TX_PRIORITY 3
-#define DPAA_IF_RX_PRIORITY 4
+#define DPAA_IF_RX_PRIORITY 0
#define DPAA_IF_DEBUG_PRIORITY 7
#define DPAA_IF_RX_ANNOTATION_STASH 1
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 98671fa..0413932 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -394,6 +394,37 @@ dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
return mbuf;
}
+enum qman_cb_dqrr_result dpaa_rx_cb(void *event __always_unused,
+ struct qman_portal *qm __always_unused,
+ struct qman_fq *fq,
+ const struct qm_dqrr_entry *dqrr,
+ void **bufs)
+{
+ const struct qm_fd *fd = &dqrr->fd;
+
+ *bufs = dpaa_eth_fd_to_mbuf(fd,
+ ((struct dpaa_if *)fq->dpaa_intf)->ifid);
+ return qman_cb_dqrr_consume;
+}
+
+static uint16_t
+dpaa_eth_queue_portal_rx(struct qman_fq *fq,
+ struct rte_mbuf **bufs,
+ uint16_t nb_bufs)
+{
+ int ret;
+
+ if (unlikely(fq->qp == NULL)) {
+ ret = rte_dpaa_portal_fq_init((void *)0, fq);
+ if (ret) {
+ DPAA_PMD_ERR("Failure in affining portal %d", ret);
+ return 0;
+ }
+ }
+
+ return qman_portal_poll_rx(nb_bufs, (void **)bufs, fq->qp);
+}
+
uint16_t dpaa_eth_queue_rx(void *q,
struct rte_mbuf **bufs,
uint16_t nb_bufs)
@@ -403,6 +434,9 @@ uint16_t dpaa_eth_queue_rx(void *q,
uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
int ret;
+ if (likely(fq->is_static))
+ return dpaa_eth_queue_portal_rx(fq, bufs, nb_bufs);
+
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_PMD_ERR("Failure in affining portal");
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 78e804f..29d8f95 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -268,4 +268,9 @@ int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
struct qm_fd *fd,
uint32_t bpid);
+enum qman_cb_dqrr_result dpaa_rx_cb(void *event,
+ struct qman_portal *qm,
+ struct qman_fq *fq,
+ const struct qm_dqrr_entry *dqrr,
+ void **bd);
#endif
--
2.7.4
* [dpdk-dev] [PATCH v2 18/18] bus/dpaa: support for enqueue frames of multiple queues
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (16 preceding siblings ...)
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 17/18] net/dpaa: integrate the support of push mode in PMD Hemant Agrawal
@ 2018-01-09 13:23 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19] DPAA PMD improvements Hemant Agrawal
18 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:23 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Akhil Goyal, Nipun Gupta
From: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 66 +++++++++++++++++++++++++++++++
drivers/bus/dpaa/include/fsl_qman.h | 14 +++++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 1 +
3 files changed, 81 insertions(+)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 7e285a5..e171356 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -2158,6 +2158,72 @@ int qman_enqueue_multi(struct qman_fq *fq,
return sent;
}
+int
+qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
+ int frames_to_send)
+{
+ struct qman_portal *p = get_affine_portal();
+ struct qm_portal *portal = &p->p;
+
+ register struct qm_eqcr *eqcr = &portal->eqcr;
+ struct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;
+
+ u8 i, diff, old_ci, sent = 0;
+
+ /* Update the available entries if no entry is free */
+ if (!eqcr->available) {
+ old_ci = eqcr->ci;
+ eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+ diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+ eqcr->available += diff;
+ if (!diff)
+ return 0;
+ }
+
+ /* try to send as many frames as possible */
+ while (eqcr->available && frames_to_send--) {
+ eq->fqid = fq[sent]->fqid_le;
+ eq->fd.opaque_addr = fd->opaque_addr;
+ eq->fd.addr = cpu_to_be40(fd->addr);
+ eq->fd.status = cpu_to_be32(fd->status);
+ eq->fd.opaque = cpu_to_be32(fd->opaque);
+
+ eq = (void *)((unsigned long)(eq + 1) &
+ (~(unsigned long)(QM_EQCR_SIZE << 6)));
+ eqcr->available--;
+ sent++;
+ fd++;
+ }
+ lwsync();
+
+ /* In order for the flushes to complete faster, the verb byte of each
+ * entry is written first; the cache lines are flushed afterwards.
+ */
+ eq = eqcr->cursor;
+ for (i = 0; i < sent; i++) {
+ eq->__dont_write_directly__verb =
+ QM_EQCR_VERB_CMD_ENQUEUE | eqcr->vbit;
+ prev_eq = eq;
+ eq = (void *)((unsigned long)(eq + 1) &
+ (~(unsigned long)(QM_EQCR_SIZE << 6)));
+ if (unlikely((prev_eq + 1) != eq))
+ eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+ }
+
+ /* We need to flush all the lines but without load/store operations
+ * between them
+ */
+ eq = eqcr->cursor;
+ for (i = 0; i < sent; i++) {
+ dcbf(eq);
+ eq = (void *)((unsigned long)(eq + 1) &
+ (~(unsigned long)(QM_EQCR_SIZE << 6)));
+ }
+ /* Update cursor for the next call */
+ eqcr->cursor = eq;
+ return sent;
+}
+
int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
struct qman_fq *orp, u16 orp_seqnum)
{
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index ad40d80..0e3e4fe 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1703,6 +1703,20 @@ int qman_enqueue_multi(struct qman_fq *fq,
const struct qm_fd *fd,
int frames_to_send);
+/**
+ * qman_enqueue_multi_fq - Enqueue multiple frames to their respective frame
+ * queues.
+ * @fq[]: Array of frame queue objects to enqueue to
+ * @fd: pointer to first descriptor of frame to be enqueued
+ * @frames_to_send: number of frames to be sent.
+ *
+ * This API is similar to qman_enqueue_multi(), but it takes an array of
+ * frame descriptors, each enqueued to its respective frame queue.
+ */
+int
+qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
+ int frames_to_send);
+
typedef int (*qman_cb_precommit) (void *arg);
/**
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index ac455cd..64068de 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -73,6 +73,7 @@ DPDK_18.02 {
qman_alloc_pool_range;
qman_create_cgr;
qman_delete_cgr;
+ qman_enqueue_multi_fq;
qman_modify_cgr;
qman_oos_fq;
qman_portal_poll_rx;
--
2.7.4
* Re: [dpdk-dev] [PATCH 01/18] net/dpaa: fix coverity reported issues
2018-01-09 10:46 ` Ferruh Yigit
@ 2018-01-09 13:29 ` Hemant Agrawal
0 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-09 13:29 UTC (permalink / raw)
To: Ferruh Yigit, dev; +Cc: stable, Kovacevic, Marko
On 1/9/2018 4:16 PM, Ferruh Yigit wrote:
> On 12/13/2017 12:05 PM, Hemant Agrawal wrote:
>> Fixes: 05ba55bc2b1a ("net/dpaa: add packet dump for debugging")
>> Fixes: 37f9b54bd3cf ("net/dpaa: support Tx and Rx queue setup")
>> Cc: stable@dpdk.org>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>
> Hi Hemant,
>
> fix coverity issues is not very helpful as commit title, can you please document
> what really fixed.
>
> And there is a special format for coverity fixes:
>
> "
>
> Coverity issue: ......
> Fixes: ............ ("...")
> Cc: stable@dpdk.org [if required]
>
> Signed-off-by: ....
> "
>
> There are samples in git history. It seems this format is not documented and
> Marko will help to document it.
>
Hi Ferruh,
thanks for the review. Please ignore v2, I will fix it in v3.
Regards,
Hemant
* [dpdk-dev] [PATCH v3 00/19] DPAA PMD improvements
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
` (17 preceding siblings ...)
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 18/18] bus/dpaa: support for enqueue frames of multiple queues Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 01/19] net/dpaa: fix uninitialized and unused variables Hemant Agrawal
` (19 more replies)
18 siblings, 20 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
This patch series adds various improvements and performance-related
optimizations for the DPAA PMD.
v3:
- handling review comments from Ferruh
- update the API doc for new PMD specific API
v2:
- fix the spelling of PORTALS
- Add Akhil's patch which is required for crypto
- minor improvement in push mode patch
Akhil Goyal (1):
bus/dpaa: support for enqueue frames of multiple queues
Ashish Jain (2):
net/dpaa: fix the mbuf packet type if zero
net/dpaa: set the correct frame size in device MTU
Hemant Agrawal (11):
net/dpaa: fix uninitialized and unused variables
net/dpaa: fix FW version code
bus/dpaa: update platform soc value register routines
net/dpaa: add frame count based tail drop with CGR
bus/dpaa: add support to create dynamic HW portal
bus/dpaa: query queue frame count support
net/dpaa: add Rx queue count support
net/dpaa: add support for loopback API
app/testpmd: add support for loopback config for dpaa
bus/dpaa: add support for static queues
net/dpaa: integrate the support of push mode in PMD
Nipun Gupta (5):
bus/dpaa: optimize the qman HW stashing settings
bus/dpaa: optimize the endianness conversions
net/dpaa: change Tx HW budget to 7
net/dpaa: optimize the Tx burst
net/dpaa: optimize Rx path
app/test-pmd/Makefile | 4 +
app/test-pmd/cmdline.c | 7 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf | 1 +
doc/guides/nics/dpaa.rst | 11 ++
drivers/bus/dpaa/base/qbman/qman.c | 238 +++++++++++++++++++++++++--
drivers/bus/dpaa/base/qbman/qman.h | 4 +-
drivers/bus/dpaa/base/qbman/qman_driver.c | 153 +++++++++++++++---
drivers/bus/dpaa/base/qbman/qman_priv.h | 6 +-
drivers/bus/dpaa/dpaa_bus.c | 43 ++++-
drivers/bus/dpaa/include/fsl_qman.h | 62 +++++--
drivers/bus/dpaa/include/fsl_usd.h | 4 +
drivers/bus/dpaa/include/process.h | 11 +-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 21 +++
drivers/bus/dpaa/rte_dpaa_bus.h | 15 ++
drivers/net/dpaa/Makefile | 3 +
drivers/net/dpaa/dpaa_ethdev.c | 259 ++++++++++++++++++++++++++----
drivers/net/dpaa/dpaa_ethdev.h | 21 ++-
drivers/net/dpaa/dpaa_rxtx.c | 163 +++++++++++++------
drivers/net/dpaa/dpaa_rxtx.h | 7 +-
drivers/net/dpaa/rte_pmd_dpaa.h | 39 +++++
drivers/net/dpaa/rte_pmd_dpaa_version.map | 8 +
22 files changed, 927 insertions(+), 154 deletions(-)
create mode 100644 drivers/net/dpaa/rte_pmd_dpaa.h
--
2.7.4
* [dpdk-dev] [PATCH v3 01/19] net/dpaa: fix uninitialized and unused variables
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19] DPAA PMD improvements Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 02/19] net/dpaa: fix the mbuf packet type if zero Hemant Agrawal
` (18 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, stable
This patch fixes the issues reported by NXP's internal
coverity build.
Fixes: 05ba55bc2b1a ("net/dpaa: add packet dump for debugging")
Fixes: 37f9b54bd3cf ("net/dpaa: support Tx and Rx queue setup")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 6 +++---
drivers/net/dpaa/dpaa_rxtx.c | 2 +-
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 2eb00b5..7b4a6f1 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -697,7 +697,7 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
static int dpaa_rx_queue_init(struct qman_fq *fq,
uint32_t fqid)
{
- struct qm_mcc_initfq opts;
+ struct qm_mcc_initfq opts = {0};
int ret;
PMD_INIT_FUNC_TRACE();
@@ -743,7 +743,7 @@ static int dpaa_rx_queue_init(struct qman_fq *fq,
static int dpaa_tx_queue_init(struct qman_fq *fq,
struct fman_if *fman_intf)
{
- struct qm_mcc_initfq opts;
+ struct qm_mcc_initfq opts = {0};
int ret;
PMD_INIT_FUNC_TRACE();
@@ -774,7 +774,7 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
/* Initialise a DEBUG FQ ([rt]x_error, rx_default). */
static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
{
- struct qm_mcc_initfq opts;
+ struct qm_mcc_initfq opts = {0};
int ret;
PMD_INIT_FUNC_TRACE();
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 905ecc0..c3a0920 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -639,7 +639,7 @@ tx_on_external_pool(struct qman_fq *txq, struct rte_mbuf *mbuf,
return 1;
}
- DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, dpaa_intf->bp_info->bpid);
+ DPAA_MBUF_TO_CONTIG_FD(dmable_mbuf, fd_arr, dpaa_intf->bp_info->bpid);
return 0;
}
--
2.7.4
* [dpdk-dev] [PATCH v3 02/19] net/dpaa: fix the mbuf packet type if zero
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 01/19] net/dpaa: fix uninitialized and unused variables Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 03/19] net/dpaa: fix FW version code Hemant Agrawal
` (17 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Ashish Jain, stable
From: Ashish Jain <ashish.jain@nxp.com>
Populate the mbuf field packet_type, which is required
for calculating the checksum while transmitting frames.
Fixes: 8cffdcbe85aa ("net/dpaa: support scattered Rx")
Cc: stable@dpdk.org
Signed-off-by: Ashish Jain <ashish.jain@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index c3a0920..630d7a5 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -32,6 +32,7 @@
#include <rte_ip.h>
#include <rte_tcp.h>
#include <rte_udp.h>
+#include <rte_net.h>
#include "dpaa_ethdev.h"
#include "dpaa_rxtx.h"
@@ -478,6 +479,15 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
fd->opaque_addr = 0;
if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+ if (!mbuf->packet_type) {
+ struct rte_net_hdr_lens hdr_lens;
+
+ mbuf->packet_type = rte_net_get_ptype(mbuf, &hdr_lens,
+ RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK
+ | RTE_PTYPE_L4_MASK);
+ mbuf->l2_len = hdr_lens.l2_len;
+ mbuf->l3_len = hdr_lens.l3_len;
+ }
if (temp->data_off < DEFAULT_TX_ICEOF
+ sizeof(struct dpaa_eth_parse_results_t))
temp->data_off = DEFAULT_TX_ICEOF
@@ -585,6 +595,15 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
}
if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
+ if (!mbuf->packet_type) {
+ struct rte_net_hdr_lens hdr_lens;
+
+ mbuf->packet_type = rte_net_get_ptype(mbuf, &hdr_lens,
+ RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK
+ | RTE_PTYPE_L4_MASK);
+ mbuf->l2_len = hdr_lens.l2_len;
+ mbuf->l3_len = hdr_lens.l3_len;
+ }
if (mbuf->data_off < (DEFAULT_TX_ICEOF +
sizeof(struct dpaa_eth_parse_results_t))) {
DPAA_DP_LOG(DEBUG, "Checksum offload Err: "
--
2.7.4
* [dpdk-dev] [PATCH v3 03/19] net/dpaa: fix FW version code
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 01/19] net/dpaa: fix uninitialized and unused variables Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 02/19] net/dpaa: fix the mbuf packet type if zero Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 04/19] bus/dpaa: update platform soc value register routines Hemant Agrawal
` (16 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, stable
Fix the SoC ID sysfs path and add the missing fclose.
Fixes: cf0fab1d2ca5 ("net/dpaa: support firmware version get API")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 14 +++++---------
drivers/net/dpaa/dpaa_ethdev.h | 2 +-
2 files changed, 6 insertions(+), 10 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 7b4a6f1..db6574f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -186,19 +186,15 @@ dpaa_fw_version_get(struct rte_eth_dev *dev __rte_unused,
DPAA_PMD_ERR("Unable to open SoC device");
return -ENOTSUP; /* Not supported on this infra */
}
-
- ret = fscanf(svr_file, "svr:%x", &svr_ver);
- if (ret <= 0) {
+ if (fscanf(svr_file, "svr:%x", &svr_ver) <= 0)
DPAA_PMD_ERR("Unable to read SoC device");
- return -ENOTSUP; /* Not supported on this infra */
- }
- ret = snprintf(fw_version, fw_size,
- "svr:%x-fman-v%x",
- svr_ver,
- fman_ip_rev);
+ fclose(svr_file);
+ ret = snprintf(fw_version, fw_size, "SVR:%x-fman-v%x",
+ svr_ver, fman_ip_rev);
ret += 1; /* add the size of '\0' */
+
if (fw_size < (uint32_t)ret)
return ret;
else
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index bd63ee0..254fca2 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -20,7 +20,7 @@
/* DPAA SoC identifier; If this is not available, it can be concluded
* that board is non-DPAA. Single slot is currently supported.
*/
-#define DPAA_SOC_ID_FILE "sys/devices/soc0/soc_id"
+#define DPAA_SOC_ID_FILE "/sys/devices/soc0/soc_id"
#define DPAA_MBUF_HW_ANNOTATION 64
#define DPAA_FD_PTA_SIZE 64
--
2.7.4
* [dpdk-dev] [PATCH v3 04/19] bus/dpaa: update platform soc value register routines
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
` (2 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 03/19] net/dpaa: fix FW version code Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 05/19] net/dpaa: set the correct frame size in device MTU Hemant Agrawal
` (15 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
This patch updates the logic and exposes the SoC value
register so that it can be used by other modules as well.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/dpaa_bus.c | 12 ++++++++++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 8 ++++++++
drivers/bus/dpaa/rte_dpaa_bus.h | 11 +++++++++++
drivers/net/dpaa/dpaa_ethdev.c | 4 +++-
drivers/net/dpaa/dpaa_ethdev.h | 5 -----
5 files changed, 34 insertions(+), 6 deletions(-)
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 79f4858..a7c05b3 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -51,6 +51,8 @@ struct netcfg_info *dpaa_netcfg;
/* define a variable to hold the portal_key, once created.*/
pthread_key_t dpaa_portal_key;
+unsigned int dpaa_svr_family;
+
RTE_DEFINE_PER_LCORE(bool, _dpaa_io);
static inline void
@@ -417,6 +419,8 @@ rte_dpaa_bus_probe(void)
int ret = -1;
struct rte_dpaa_device *dev;
struct rte_dpaa_driver *drv;
+ FILE *svr_file = NULL;
+ unsigned int svr_ver;
BUS_INIT_FUNC_TRACE();
@@ -436,6 +440,14 @@ rte_dpaa_bus_probe(void)
break;
}
}
+
+ svr_file = fopen(DPAA_SOC_ID_FILE, "r");
+ if (svr_file) {
+ if (fscanf(svr_file, "svr:%x", &svr_ver) > 0)
+ dpaa_svr_family = svr_ver & SVR_MASK;
+ fclose(svr_file);
+ }
+
return 0;
}
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index fb9d532..eeeb458 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -64,3 +64,11 @@ DPDK_17.11 {
local: *;
};
+
+DPDK_18.02 {
+ global:
+
+ dpaa_svr_family;
+
+ local: *;
+} DPDK_17.11;
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 5758274..d9e8c84 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -20,6 +20,17 @@
#define DEV_TO_DPAA_DEVICE(ptr) \
container_of(ptr, struct rte_dpaa_device, device)
+/* DPAA SoC identifier; If this is not available, it can be concluded
+ * that board is non-DPAA. Single slot is currently supported.
+ */
+#define DPAA_SOC_ID_FILE "/sys/devices/soc0/soc_id"
+
+#define SVR_LS1043A_FAMILY 0x87920000
+#define SVR_LS1046A_FAMILY 0x87070000
+#define SVR_MASK 0xffff0000
+
+extern unsigned int dpaa_svr_family;
+
struct rte_dpaa_device;
struct rte_dpaa_driver;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index db6574f..24943ef 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -186,7 +186,9 @@ dpaa_fw_version_get(struct rte_eth_dev *dev __rte_unused,
DPAA_PMD_ERR("Unable to open SoC device");
return -ENOTSUP; /* Not supported on this infra */
}
- if (fscanf(svr_file, "svr:%x", &svr_ver) <= 0)
+ if (fscanf(svr_file, "svr:%x", &svr_ver) > 0)
+ dpaa_svr_family = svr_ver & SVR_MASK;
+ else
DPAA_PMD_ERR("Unable to read SoC device");
fclose(svr_file);
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 254fca2..9c3b42c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -17,11 +17,6 @@
#include <of.h>
#include <netcfg.h>
-/* DPAA SoC identifier; If this is not available, it can be concluded
- * that board is non-DPAA. Single slot is currently supported.
- */
-#define DPAA_SOC_ID_FILE "/sys/devices/soc0/soc_id"
-
#define DPAA_MBUF_HW_ANNOTATION 64
#define DPAA_FD_PTA_SIZE 64
--
2.7.4
* [dpdk-dev] [PATCH v3 05/19] net/dpaa: set the correct frame size in device MTU
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
` (3 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 04/19] bus/dpaa: update platform soc value register routines Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 06/19] net/dpaa: add frame count based tail drop with CGR Hemant Agrawal
` (14 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Ashish Jain
From: Ashish Jain <ashish.jain@nxp.com>
Set the correct frame size in the dpaa_dev_mtu_set
API call. Also set the correct max frame size in
hardware in dev_configure for jumbo frames.
Signed-off-by: Ashish Jain <ashish.jain@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 20 +++++++++++++-------
drivers/net/dpaa/dpaa_ethdev.h | 4 ++++
2 files changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 24943ef..5a2ea4f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -85,19 +85,21 @@ static int
dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ uint32_t frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN
+ + VLAN_TAG_SIZE;
PMD_INIT_FUNC_TRACE();
- if (mtu < ETHER_MIN_MTU)
+ if (mtu < ETHER_MIN_MTU || frame_size > DPAA_MAX_RX_PKT_LEN)
return -EINVAL;
- if (mtu > ETHER_MAX_LEN)
+ if (frame_size > ETHER_MAX_LEN)
dev->data->dev_conf.rxmode.jumbo_frame = 1;
else
dev->data->dev_conf.rxmode.jumbo_frame = 0;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mtu;
+ dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- fman_if_set_maxfrm(dpaa_intf->fif, mtu);
+ fman_if_set_maxfrm(dpaa_intf->fif, frame_size);
return 0;
}
@@ -105,15 +107,19 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
static int
dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
PMD_INIT_FUNC_TRACE();
if (dev->data->dev_conf.rxmode.jumbo_frame == 1) {
if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- DPAA_MAX_RX_PKT_LEN)
- return dpaa_mtu_set(dev,
+ DPAA_MAX_RX_PKT_LEN) {
+ fman_if_set_maxfrm(dpaa_intf->fif,
dev->data->dev_conf.rxmode.max_rx_pkt_len);
- else
+ return 0;
+ } else {
return -1;
+ }
}
return 0;
}
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 9c3b42c..548ccff 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -45,6 +45,10 @@
/*Maximum number of slots available in TX ring*/
#define MAX_TX_RING_SLOTS 8
+#ifndef VLAN_TAG_SIZE
+#define VLAN_TAG_SIZE 4 /** < Vlan Header Length */
+#endif
+
/* PCD frame queues */
#define DPAA_PCD_FQID_START 0x400
#define DPAA_PCD_FQID_MULTIPLIER 0x100
--
2.7.4
* [dpdk-dev] [PATCH v3 06/19] net/dpaa: add frame count based tail drop with CGR
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19] DPAA PMD improvements Hemant Agrawal
` (4 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 05/19] net/dpaa: set the correct frame size in device MTU Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 07/19] bus/dpaa: optimize the qman HW stashing settings Hemant Agrawal
` (13 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
Replace the byte-based tail queue congestion support
with frame-count-based congestion groups.
A frame count maps easily to the number of Rx descriptors for a queue.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/rte_bus_dpaa_version.map | 5 ++
drivers/net/dpaa/dpaa_ethdev.c | 98 +++++++++++++++++++++++++++----
drivers/net/dpaa/dpaa_ethdev.h | 8 +--
3 files changed, 97 insertions(+), 14 deletions(-)
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index eeeb458..f412362 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -69,6 +69,11 @@ DPDK_18.02 {
global:
dpaa_svr_family;
+ qman_alloc_cgrid_range;
+ qman_create_cgr;
+ qman_delete_cgr;
+ qman_modify_cgr;
+ qman_release_cgrid_range;
local: *;
} DPDK_17.11;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 5a2ea4f..5d94af5 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -47,6 +47,9 @@
/* Keep track of whether QMAN and BMAN have been globally initialized */
static int is_global_init;
+/* Per FQ Taildrop in frame count */
+static unsigned int td_threshold = CGR_RX_PERFQ_THRESH;
+
struct rte_dpaa_xstats_name_off {
char name[RTE_ETH_XSTATS_NAME_SIZE];
uint32_t offset;
@@ -421,12 +424,13 @@ static void dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
static
int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
- uint16_t nb_desc __rte_unused,
+ uint16_t nb_desc,
unsigned int socket_id __rte_unused,
const struct rte_eth_rxconf *rx_conf __rte_unused,
struct rte_mempool *mp)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
PMD_INIT_FUNC_TRACE();
@@ -462,7 +466,23 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->name, fd_offset,
fman_if_get_fdoff(dpaa_intf->fif));
}
- dev->data->rx_queues[queue_idx] = &dpaa_intf->rx_queues[queue_idx];
+
+ dev->data->rx_queues[queue_idx] = rxq;
+
+ /* configure the CGR size as per the desc size */
+ if (dpaa_intf->cgr_rx) {
+ struct qm_mcc_initcgr cgr_opts = {0};
+ int ret;
+
+ /* Enable tail drop with cgr on this queue */
+ qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres, nb_desc, 0);
+ ret = qman_modify_cgr(dpaa_intf->cgr_rx, 0, &cgr_opts);
+ if (ret) {
+ DPAA_PMD_WARN(
+ "rx taildrop modify fail on fqid %d (ret=%d)",
+ rxq->fqid, ret);
+ }
+ }
return 0;
}
@@ -698,11 +718,21 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
}
/* Initialise an Rx FQ */
-static int dpaa_rx_queue_init(struct qman_fq *fq,
+static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
uint32_t fqid)
{
struct qm_mcc_initfq opts = {0};
int ret;
+ u32 flags = 0;
+ struct qm_mcc_initcgr cgr_opts = {
+ .we_mask = QM_CGR_WE_CS_THRES |
+ QM_CGR_WE_CSTD_EN |
+ QM_CGR_WE_MODE,
+ .cgr = {
+ .cstd_en = QM_CGR_EN,
+ .mode = QMAN_CGR_MODE_FRAME
+ }
+ };
PMD_INIT_FUNC_TRACE();
@@ -732,12 +762,24 @@ static int dpaa_rx_queue_init(struct qman_fq *fq,
opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
opts.fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
- /*Enable tail drop */
- opts.we_mask = opts.we_mask | QM_INITFQ_WE_TDTHRESH;
- opts.fqd.fq_ctrl = opts.fqd.fq_ctrl | QM_FQCTRL_TDE;
- qm_fqd_taildrop_set(&opts.fqd.td, CONG_THRESHOLD_RX_Q, 1);
-
- ret = qman_init_fq(fq, 0, &opts);
+ if (cgr_rx) {
+ /* Enable tail drop with cgr on this queue */
+ qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres, td_threshold, 0);
+ cgr_rx->cb = NULL;
+ ret = qman_create_cgr(cgr_rx, QMAN_CGR_FLAG_USE_INIT,
+ &cgr_opts);
+ if (ret) {
+ DPAA_PMD_WARN(
+ "rx taildrop init fail on rx fqid %d (ret=%d)",
+ fqid, ret);
+ goto without_cgr;
+ }
+ opts.we_mask |= QM_INITFQ_WE_CGID;
+ opts.fqd.cgid = cgr_rx->cgrid;
+ opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+ }
+without_cgr:
+ ret = qman_init_fq(fq, flags, &opts);
if (ret)
DPAA_PMD_ERR("init rx fqid %d failed with ret: %d", fqid, ret);
return ret;
@@ -819,6 +861,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
struct fm_eth_port_cfg *cfg;
struct fman_if *fman_intf;
struct fman_if_bpool *bp, *tmp_bp;
+ uint32_t cgrid[DPAA_MAX_NUM_PCD_QUEUES];
PMD_INIT_FUNC_TRACE();
@@ -855,10 +898,31 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
dpaa_intf->rx_queues = rte_zmalloc(NULL,
sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
+
+ /* If congestion control is enabled globally*/
+ if (td_threshold) {
+ dpaa_intf->cgr_rx = rte_zmalloc(NULL,
+ sizeof(struct qman_cgr) * num_rx_fqs, MAX_CACHELINE);
+
+ ret = qman_alloc_cgrid_range(&cgrid[0], num_rx_fqs, 1, 0);
+ if (ret != num_rx_fqs) {
+ DPAA_PMD_WARN("insufficient CGRIDs available");
+ return -EINVAL;
+ }
+ } else {
+ dpaa_intf->cgr_rx = NULL;
+ }
+
for (loop = 0; loop < num_rx_fqs; loop++) {
fqid = DPAA_PCD_FQID_START + dpaa_intf->ifid *
DPAA_PCD_FQID_MULTIPLIER + loop;
- ret = dpaa_rx_queue_init(&dpaa_intf->rx_queues[loop], fqid);
+
+ if (dpaa_intf->cgr_rx)
+ dpaa_intf->cgr_rx[loop].cgrid = cgrid[loop];
+
+ ret = dpaa_rx_queue_init(&dpaa_intf->rx_queues[loop],
+ dpaa_intf->cgr_rx ? &dpaa_intf->cgr_rx[loop] : NULL,
+ fqid);
if (ret)
return ret;
dpaa_intf->rx_queues[loop].dpaa_intf = dpaa_intf;
@@ -913,6 +977,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_ERR("Failed to allocate %d bytes needed to "
"store MAC addresses",
ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
+ rte_free(dpaa_intf->cgr_rx);
rte_free(dpaa_intf->rx_queues);
rte_free(dpaa_intf->tx_queues);
dpaa_intf->rx_queues = NULL;
@@ -951,6 +1016,7 @@ static int
dpaa_dev_uninit(struct rte_eth_dev *dev)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ int loop;
PMD_INIT_FUNC_TRACE();
@@ -968,6 +1034,18 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
if (dpaa_intf->fc_conf)
rte_free(dpaa_intf->fc_conf);
+ /* Release RX congestion Groups */
+ if (dpaa_intf->cgr_rx) {
+ for (loop = 0; loop < dpaa_intf->nb_rx_queues; loop++)
+ qman_delete_cgr(&dpaa_intf->cgr_rx[loop]);
+
+ qman_release_cgrid_range(dpaa_intf->cgr_rx[loop].cgrid,
+ dpaa_intf->nb_rx_queues);
+ }
+
+ rte_free(dpaa_intf->cgr_rx);
+ dpaa_intf->cgr_rx = NULL;
+
rte_free(dpaa_intf->rx_queues);
dpaa_intf->rx_queues = NULL;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 548ccff..f00a77a 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -34,10 +34,8 @@
#define DPAA_MIN_RX_BUF_SIZE 512
#define DPAA_MAX_RX_PKT_LEN 10240
-/* RX queue tail drop threshold
- * currently considering 32 KB packets.
- */
-#define CONG_THRESHOLD_RX_Q (32 * 1024)
+/* RX queue tail drop threshold (CGR Based) in frame count */
+#define CGR_RX_PERFQ_THRESH 256
/*max mac filter for memac(8) including primary mac addr*/
#define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
@@ -53,6 +51,7 @@
#define DPAA_PCD_FQID_START 0x400
#define DPAA_PCD_FQID_MULTIPLIER 0x100
#define DPAA_DEFAULT_NUM_PCD_QUEUES 1
+#define DPAA_MAX_NUM_PCD_QUEUES 32
#define DPAA_IF_TX_PRIORITY 3
#define DPAA_IF_RX_PRIORITY 4
@@ -102,6 +101,7 @@ struct dpaa_if {
char *name;
const struct fm_eth_port_cfg *cfg;
struct qman_fq *rx_queues;
+ struct qman_cgr *cgr_rx;
struct qman_fq *tx_queues;
struct qman_fq debug_queues[2];
uint16_t nb_rx_queues;
--
2.7.4
* [dpdk-dev] [PATCH v3 07/19] bus/dpaa: optimize the qman HW stashing settings
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19] DPAA PMD improvements Hemant Agrawal
` (5 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 06/19] net/dpaa: add frame count based tail drop with CGR Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 08/19] bus/dpaa: optimize the endianness conversions Hemant Agrawal
` (12 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
The settings are tuned for performance.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index a53459f..49bc317 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -7,6 +7,7 @@
#include "qman.h"
#include <rte_branch_prediction.h>
+#include <rte_dpaa_bus.h>
/* Compilation constants */
#define DQRR_MAXFILL 15
@@ -503,7 +504,12 @@ struct qman_portal *qman_create_portal(
p = &portal->p;
- portal->use_eqcr_ci_stashing = ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+ if (dpaa_svr_family == SVR_LS1043A_FAMILY)
+ portal->use_eqcr_ci_stashing = 3;
+ else
+ portal->use_eqcr_ci_stashing =
+ ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+
/*
* prep the low-level portal struct with the mapped addresses from the
* config, everything that follows depends on it and "config" is more
@@ -516,7 +522,7 @@ struct qman_portal *qman_create_portal(
* and stash with high-than-DQRR priority.
*/
if (qm_eqcr_init(p, qm_eqcr_pvb,
- portal->use_eqcr_ci_stashing ? 3 : 0, 1)) {
+ portal->use_eqcr_ci_stashing, 1)) {
pr_err("Qman EQCR initialisation failed\n");
goto fail_eqcr;
}
--
2.7.4
* [dpdk-dev] [PATCH v3 08/19] bus/dpaa: optimize the endianness conversions
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19] DPAA PMD improvements Hemant Agrawal
` (6 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 07/19] bus/dpaa: optimize the qman HW stashing settings Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 09/19] bus/dpaa: add support to create dynamic HW portal Hemant Agrawal
` (11 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 7 ++++---
drivers/bus/dpaa/include/fsl_qman.h | 2 ++
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 49bc317..b6fd40b 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -906,7 +906,7 @@ static inline unsigned int __poll_portal_fast(struct qman_portal *p,
do {
qm_dqrr_pvb_update(&p->p);
dq = qm_dqrr_current(&p->p);
- if (!dq)
+ if (unlikely(!dq))
break;
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
/* If running on an LE system the fields of the
@@ -1165,6 +1165,7 @@ int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)
}
spin_lock_init(&fq->fqlock);
fq->fqid = fqid;
+ fq->fqid_le = cpu_to_be32(fqid);
fq->flags = flags;
fq->state = qman_fq_state_oos;
fq->cgr_groupid = 0;
@@ -1953,7 +1954,7 @@ int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)
int qman_enqueue_multi(struct qman_fq *fq,
const struct qm_fd *fd,
- int frames_to_send)
+ int frames_to_send)
{
struct qman_portal *p = get_affine_portal();
struct qm_portal *portal = &p->p;
@@ -1975,7 +1976,7 @@ int qman_enqueue_multi(struct qman_fq *fq,
/* try to send as many frames as possible */
while (eqcr->available && frames_to_send--) {
- eq->fqid = cpu_to_be32(fq->fqid);
+ eq->fqid = fq->fqid_le;
#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
eq->tag = cpu_to_be32(fq->key);
#else
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 5830ad5..5027230 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1197,6 +1197,8 @@ struct qman_fq {
*/
spinlock_t fqlock;
u32 fqid;
+ u32 fqid_le;
+
/* DPDK Interface */
void *dpaa_intf;
--
2.7.4
* [dpdk-dev] [PATCH v3 09/19] bus/dpaa: add support to create dynamic HW portal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19] DPAA PMD improvements Hemant Agrawal
` (7 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 08/19] bus/dpaa: optimize the endianness conversions Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 10/19] net/dpaa: change Tx HW budget to 7 Hemant Agrawal
` (10 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
A HW portal is a processing context in DPAA. This patch allows
creation of a queue-specific HW portal context.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 69 ++++++++++++--
drivers/bus/dpaa/base/qbman/qman_driver.c | 153 +++++++++++++++++++++++++-----
drivers/bus/dpaa/base/qbman/qman_priv.h | 6 +-
drivers/bus/dpaa/dpaa_bus.c | 31 +++++-
drivers/bus/dpaa/include/fsl_qman.h | 25 ++---
drivers/bus/dpaa/include/fsl_usd.h | 4 +
drivers/bus/dpaa/include/process.h | 11 ++-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 2 +
drivers/bus/dpaa/rte_dpaa_bus.h | 4 +
9 files changed, 252 insertions(+), 53 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index b6fd40b..d8fb25a 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -621,11 +621,52 @@ struct qman_portal *qman_create_portal(
return NULL;
}
+#define MAX_GLOBAL_PORTALS 8
+static struct qman_portal global_portals[MAX_GLOBAL_PORTALS];
+static int global_portals_used[MAX_GLOBAL_PORTALS];
+
+static struct qman_portal *
+qman_alloc_global_portal(void)
+{
+ unsigned int i;
+
+ for (i = 0; i < MAX_GLOBAL_PORTALS; i++) {
+ if (global_portals_used[i] == 0) {
+ global_portals_used[i] = 1;
+ return &global_portals[i];
+ }
+ }
+ pr_err("No portal available (%x)\n", MAX_GLOBAL_PORTALS);
+
+ return NULL;
+}
+
+static int
+qman_free_global_portal(struct qman_portal *portal)
+{
+ unsigned int i;
+
+ for (i = 0; i < MAX_GLOBAL_PORTALS; i++) {
+ if (&global_portals[i] == portal) {
+ global_portals_used[i] = 0;
+ return 0;
+ }
+ }
+ return -1;
+}
+
struct qman_portal *qman_create_affine_portal(const struct qm_portal_config *c,
- const struct qman_cgrs *cgrs)
+ const struct qman_cgrs *cgrs,
+ int alloc)
{
struct qman_portal *res;
- struct qman_portal *portal = get_affine_portal();
+ struct qman_portal *portal;
+
+ if (alloc)
+ portal = qman_alloc_global_portal();
+ else
+ portal = get_affine_portal();
+
/* A criteria for calling this function (from qman_driver.c) is that
* we're already affine to the cpu and won't schedule onto another cpu.
*/
@@ -675,13 +716,18 @@ void qman_destroy_portal(struct qman_portal *qm)
spin_lock_destroy(&qm->cgr_lock);
}
-const struct qm_portal_config *qman_destroy_affine_portal(void)
+const struct qm_portal_config *
+qman_destroy_affine_portal(struct qman_portal *qp)
{
/* We don't want to redirect if we're a slave, use "raw" */
- struct qman_portal *qm = get_affine_portal();
+ struct qman_portal *qm;
const struct qm_portal_config *pcfg;
int cpu;
+ if (qp == NULL)
+ qm = get_affine_portal();
+ else
+ qm = qp;
pcfg = qm->config;
cpu = pcfg->cpu;
@@ -690,6 +736,9 @@ const struct qm_portal_config *qman_destroy_affine_portal(void)
spin_lock(&affine_mask_lock);
CPU_CLR(cpu, &affine_mask);
spin_unlock(&affine_mask_lock);
+
+ qman_free_global_portal(qm);
+
return pcfg;
}
@@ -1096,27 +1145,27 @@ void qman_start_dequeues(void)
qm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);
}
-void qman_static_dequeue_add(u32 pools)
+void qman_static_dequeue_add(u32 pools, struct qman_portal *qp)
{
- struct qman_portal *p = get_affine_portal();
+ struct qman_portal *p = qp ? qp : get_affine_portal();
pools &= p->config->pools;
p->sdqcr |= pools;
qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
}
-void qman_static_dequeue_del(u32 pools)
+void qman_static_dequeue_del(u32 pools, struct qman_portal *qp)
{
- struct qman_portal *p = get_affine_portal();
+ struct qman_portal *p = qp ? qp : get_affine_portal();
pools &= p->config->pools;
p->sdqcr &= ~pools;
qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
}
-u32 qman_static_dequeue_get(void)
+u32 qman_static_dequeue_get(struct qman_portal *qp)
{
- struct qman_portal *p = get_affine_portal();
+ struct qman_portal *p = qp ? qp : get_affine_portal();
return p->sdqcr;
}
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index c17d15f..7cfa8ee 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -24,8 +24,8 @@ void *qman_ccsr_map;
/* The qman clock frequency */
u32 qman_clk;
-static __thread int fd = -1;
-static __thread struct qm_portal_config pcfg;
+static __thread int qmfd = -1;
+static __thread struct qm_portal_config qpcfg;
static __thread struct dpaa_ioctl_portal_map map = {
.type = dpaa_portal_qman
};
@@ -44,16 +44,16 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
error(0, ret, "pthread_getaffinity_np()");
return ret;
}
- pcfg.cpu = -1;
+ qpcfg.cpu = -1;
for (loop = 0; loop < CPU_SETSIZE; loop++)
if (CPU_ISSET(loop, &cpuset)) {
- if (pcfg.cpu != -1) {
+ if (qpcfg.cpu != -1) {
pr_err("Thread is not affine to 1 cpu\n");
return -EINVAL;
}
- pcfg.cpu = loop;
+ qpcfg.cpu = loop;
}
- if (pcfg.cpu == -1) {
+ if (qpcfg.cpu == -1) {
pr_err("Bug in getaffinity handling!\n");
return -EINVAL;
}
@@ -65,36 +65,36 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
error(0, ret, "process_portal_map()");
return ret;
}
- pcfg.channel = map.channel;
- pcfg.pools = map.pools;
- pcfg.index = map.index;
+ qpcfg.channel = map.channel;
+ qpcfg.pools = map.pools;
+ qpcfg.index = map.index;
/* Make the portal's cache-[enabled|inhibited] regions */
- pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
- pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+ qpcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+ qpcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
- fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
- if (fd == -1) {
+ qmfd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+ if (qmfd == -1) {
pr_err("QMan irq init failed\n");
process_portal_unmap(&map.addr);
return -EBUSY;
}
- pcfg.is_shared = is_shared;
- pcfg.node = NULL;
- pcfg.irq = fd;
+ qpcfg.is_shared = is_shared;
+ qpcfg.node = NULL;
+ qpcfg.irq = qmfd;
- portal = qman_create_affine_portal(&pcfg, NULL);
+ portal = qman_create_affine_portal(&qpcfg, NULL, 0);
if (!portal) {
pr_err("Qman portal initialisation failed (%d)\n",
- pcfg.cpu);
+ qpcfg.cpu);
process_portal_unmap(&map.addr);
return -EBUSY;
}
irq_map.type = dpaa_portal_qman;
irq_map.portal_cinh = map.addr.cinh;
- process_portal_irq_map(fd, &irq_map);
+ process_portal_irq_map(qmfd, &irq_map);
return 0;
}
@@ -103,10 +103,10 @@ static int fsl_qman_portal_finish(void)
__maybe_unused const struct qm_portal_config *cfg;
int ret;
- process_portal_irq_unmap(fd);
+ process_portal_irq_unmap(qmfd);
- cfg = qman_destroy_affine_portal();
- DPAA_BUG_ON(cfg != &pcfg);
+ cfg = qman_destroy_affine_portal(NULL);
+ DPAA_BUG_ON(cfg != &qpcfg);
ret = process_portal_unmap(&map.addr);
if (ret)
error(0, ret, "process_portal_unmap()");
@@ -128,14 +128,119 @@ int qman_thread_finish(void)
void qman_thread_irq(void)
{
- qbman_invoke_irq(pcfg.irq);
+ qbman_invoke_irq(qpcfg.irq);
/* Now we need to uninhibit interrupts. This is the only code outside
* the regular portal driver that manipulates any portal register, so
* rather than breaking that encapsulation I am simply hard-coding the
* offset to the inhibit register here.
*/
- out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+ out_be32(qpcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+struct qman_portal *fsl_qman_portal_create(void)
+{
+ cpu_set_t cpuset;
+ struct qman_portal *res;
+
+ struct qm_portal_config *q_pcfg;
+ int loop, ret;
+ struct dpaa_ioctl_irq_map irq_map;
+ struct dpaa_ioctl_portal_map q_map = {0};
+ int q_fd;
+
+ q_pcfg = kzalloc((sizeof(struct qm_portal_config)), 0);
+ if (!q_pcfg) {
+ error(0, -1, "q_pcfg kzalloc failed");
+ return NULL;
+ }
+
+ /* Verify the thread's cpu-affinity */
+ ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+ &cpuset);
+ if (ret) {
+ error(0, ret, "pthread_getaffinity_np()");
+ return NULL;
+ }
+
+ q_pcfg->cpu = -1;
+ for (loop = 0; loop < CPU_SETSIZE; loop++)
+ if (CPU_ISSET(loop, &cpuset)) {
+ if (q_pcfg->cpu != -1) {
+ pr_err("Thread is not affine to 1 cpu\n");
+ return NULL;
+ }
+ q_pcfg->cpu = loop;
+ }
+ if (q_pcfg->cpu == -1) {
+ pr_err("Bug in getaffinity handling!\n");
+ return NULL;
+ }
+
+ /* Allocate and map a qman portal */
+ q_map.type = dpaa_portal_qman;
+ q_map.index = QBMAN_ANY_PORTAL_IDX;
+ ret = process_portal_map(&q_map);
+ if (ret) {
+ error(0, ret, "process_portal_map()");
+ return NULL;
+ }
+ q_pcfg->channel = q_map.channel;
+ q_pcfg->pools = q_map.pools;
+ q_pcfg->index = q_map.index;
+
+ /* Make the portal's cache-[enabled|inhibited] regions */
+ q_pcfg->addr_virt[DPAA_PORTAL_CE] = q_map.addr.cena;
+ q_pcfg->addr_virt[DPAA_PORTAL_CI] = q_map.addr.cinh;
+
+ q_fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+ if (q_fd == -1) {
+ pr_err("QMan irq init failed\n");
+ goto err1;
+ }
+
+ q_pcfg->irq = q_fd;
+
+ res = qman_create_affine_portal(q_pcfg, NULL, true);
+ if (!res) {
+ pr_err("Qman portal initialisation failed (%d)\n",
+ q_pcfg->cpu);
+ goto err2;
+ }
+
+ irq_map.type = dpaa_portal_qman;
+ irq_map.portal_cinh = q_map.addr.cinh;
+ process_portal_irq_map(q_fd, &irq_map);
+
+ return res;
+err2:
+ close(q_fd);
+err1:
+ process_portal_unmap(&q_map.addr);
+ return NULL;
+}
+
+int fsl_qman_portal_destroy(struct qman_portal *qp)
+{
+ const struct qm_portal_config *cfg;
+ struct dpaa_portal_map addr;
+ int ret;
+
+ cfg = qman_destroy_affine_portal(qp);
+ kfree(qp);
+
+ process_portal_irq_unmap(cfg->irq);
+
+ addr.cena = cfg->addr_virt[DPAA_PORTAL_CE];
+ addr.cinh = cfg->addr_virt[DPAA_PORTAL_CI];
+
+ ret = process_portal_unmap(&addr);
+ if (ret)
+ pr_err("process_portal_unmap() (%d)\n", ret);
+
+ kfree((void *)cfg);
+
+ return ret;
}
int qman_global_init(void)
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index db0b310..9e4471e 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -146,8 +146,10 @@ int qm_get_wpm(int *wpm);
struct qman_portal *qman_create_affine_portal(
const struct qm_portal_config *config,
- const struct qman_cgrs *cgrs);
-const struct qm_portal_config *qman_destroy_affine_portal(void);
+ const struct qman_cgrs *cgrs,
+ int alloc);
+const struct qm_portal_config *
+qman_destroy_affine_portal(struct qman_portal *q);
struct qm_portal_config *qm_get_unused_portal(void);
struct qm_portal_config *qm_get_unused_portal_idx(uint32_t idx);
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index a7c05b3..329a125 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -264,8 +264,7 @@ _dpaa_portal_init(void *arg)
* rte_dpaa_portal_init - Wrapper over _dpaa_portal_init with thread level check
* XXX Complete this
*/
-int
-rte_dpaa_portal_init(void *arg)
+int rte_dpaa_portal_init(void *arg)
{
if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
return _dpaa_portal_init(arg);
@@ -273,6 +272,34 @@ rte_dpaa_portal_init(void *arg)
return 0;
}
+int
+rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq)
+{
+ /* Affine above created portal with channel*/
+ u32 sdqcr;
+ struct qman_portal *qp;
+
+ if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
+ _dpaa_portal_init(arg);
+
+ /* Initialise qman specific portals */
+ qp = fsl_qman_portal_create();
+ if (!qp) {
+ DPAA_BUS_LOG(ERR, "Unable to alloc fq portal");
+ return -1;
+ }
+ fq->qp = qp;
+ sdqcr = QM_SDQCR_CHANNELS_POOL_CONV(fq->ch_id);
+ qman_static_dequeue_add(sdqcr, qp);
+
+ return 0;
+}
+
+int rte_dpaa_portal_fq_close(struct qman_fq *fq)
+{
+ return fsl_qman_portal_destroy(fq->qp);
+}
+
void
dpaa_portal_finish(void *arg)
{
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 5027230..fc00d8d 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1190,21 +1190,24 @@ struct qman_fq_cb {
struct qman_fq {
/* Caller of qman_create_fq() provides these demux callbacks */
struct qman_fq_cb cb;
- /*
- * These are internal to the driver, don't touch. In particular, they
- * may change, be removed, or extended (so you shouldn't rely on
- * sizeof(qman_fq) being a constant).
- */
- spinlock_t fqlock;
- u32 fqid;
+
u32 fqid_le;
+ u16 ch_id;
+ u8 cgr_groupid;
+ u8 is_static;
/* DPDK Interface */
void *dpaa_intf;
+ /* affined portal in case of static queue */
+ struct qman_portal *qp;
+
volatile unsigned long flags;
+
enum qman_fq_state state;
- int cgr_groupid;
+ u32 fqid;
+ spinlock_t fqlock;
+
struct rb_node node;
#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
u32 key;
@@ -1383,7 +1386,7 @@ void qman_start_dequeues(void);
* (SDQCR). The requested pools are limited to those the portal has dequeue
* access to.
*/
-void qman_static_dequeue_add(u32 pools);
+void qman_static_dequeue_add(u32 pools, struct qman_portal *qm);
/**
* qman_static_dequeue_del - Remove pool channels from the portal SDQCR
@@ -1393,7 +1396,7 @@ void qman_static_dequeue_add(u32 pools);
* register (SDQCR). The requested pools are limited to those the portal has
* dequeue access to.
*/
-void qman_static_dequeue_del(u32 pools);
+void qman_static_dequeue_del(u32 pools, struct qman_portal *qp);
/**
* qman_static_dequeue_get - return the portal's current SDQCR
@@ -1402,7 +1405,7 @@ void qman_static_dequeue_del(u32 pools);
* entire register is returned, so if only the currently-enabled pool channels
* are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.
*/
-u32 qman_static_dequeue_get(void);
+u32 qman_static_dequeue_get(struct qman_portal *qp);
/**
* qman_dca - Perform a Discrete Consumption Acknowledgment
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index ada92f2..e183617 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -67,6 +67,10 @@ void bman_thread_irq(void);
int qman_global_init(void);
int bman_global_init(void);
+/* Direct portal create and destroy */
+struct qman_portal *fsl_qman_portal_create(void);
+int fsl_qman_portal_destroy(struct qman_portal *qp);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index 079849b..d9ec94e 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -39,6 +39,11 @@ enum dpaa_portal_type {
dpaa_portal_bman,
};
+struct dpaa_portal_map {
+ void *cinh;
+ void *cena;
+};
+
struct dpaa_ioctl_portal_map {
/* Input parameter, is a qman or bman portal required. */
enum dpaa_portal_type type;
@@ -50,10 +55,8 @@ struct dpaa_ioctl_portal_map {
/* Return value if the map succeeds, this gives the mapped
* cache-inhibited (cinh) and cache-enabled (cena) addresses.
*/
- struct dpaa_portal_map {
- void *cinh;
- void *cena;
- } addr;
+ struct dpaa_portal_map addr;
+
/* Qman-specific return values */
u16 channel;
uint32_t pools;
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index f412362..4e3afda 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -74,6 +74,8 @@ DPDK_18.02 {
qman_delete_cgr;
qman_modify_cgr;
qman_release_cgrid_range;
+ rte_dpaa_portal_fq_close;
+ rte_dpaa_portal_fq_init;
local: *;
} DPDK_17.11;
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index d9e8c84..f626066 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -136,6 +136,10 @@ void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
*/
int rte_dpaa_portal_init(void *arg);
+int rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq);
+
+int rte_dpaa_portal_fq_close(struct qman_fq *fq);
+
/**
* Cleanup a DPAA Portal
*/
--
2.7.4
* [dpdk-dev] [PATCH v3 10/19] net/dpaa: change Tx HW budget to 7
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19] DPAA PMD improvements Hemant Agrawal
` (8 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 09/19] bus/dpaa: add support to create dynamic HW portal Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 11/19] net/dpaa: optimize the Tx burst Hemant Agrawal
` (9 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Change the Tx budget to 7 to best sync with the hardware.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.h | 2 +-
drivers/net/dpaa/dpaa_rxtx.c | 5 +++--
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index f00a77a..1b36567 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -41,7 +41,7 @@
#define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
/*Maximum number of slots available in TX ring*/
-#define MAX_TX_RING_SLOTS 8
+#define DPAA_TX_BURST_SIZE 7
#ifndef VLAN_TAG_SIZE
#define VLAN_TAG_SIZE 4 /** < Vlan Header Length */
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 630d7a5..565ca50 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -669,7 +669,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct rte_mbuf *mbuf, *mi = NULL;
struct rte_mempool *mp;
struct dpaa_bp_info *bp_info;
- struct qm_fd fd_arr[MAX_TX_RING_SLOTS];
+ struct qm_fd fd_arr[DPAA_TX_BURST_SIZE];
uint32_t frames_to_send, loop, i = 0;
uint16_t state;
int ret;
@@ -683,7 +683,8 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_DP_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
while (nb_bufs) {
- frames_to_send = (nb_bufs >> 3) ? MAX_TX_RING_SLOTS : nb_bufs;
+ frames_to_send = (nb_bufs > DPAA_TX_BURST_SIZE) ?
+ DPAA_TX_BURST_SIZE : nb_bufs;
for (loop = 0; loop < frames_to_send; loop++, i++) {
mbuf = bufs[i];
if (RTE_MBUF_DIRECT(mbuf)) {
--
2.7.4
* [dpdk-dev] [PATCH v3 11/19] net/dpaa: optimize the Tx burst
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
` (9 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 10/19] net/dpaa: change Tx HW budget to 7 Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 12/19] net/dpaa: optimize Rx path Hemant Agrawal
` (8 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Optimize the Tx burst for the common (best) case. Factor the Tx
checksum offload handling into a helper function so it can be reused
in multiple code paths.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 73 ++++++++++++++++++++++++++++----------------
1 file changed, 46 insertions(+), 27 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 565ca50..148f265 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -272,6 +272,30 @@ static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
}
+static inline void
+dpaa_unsegmented_checksum(struct rte_mbuf *mbuf, struct qm_fd *fd_arr)
+{
+ if (!mbuf->packet_type) {
+ struct rte_net_hdr_lens hdr_lens;
+
+ mbuf->packet_type = rte_net_get_ptype(mbuf, &hdr_lens,
+ RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK
+ | RTE_PTYPE_L4_MASK);
+ mbuf->l2_len = hdr_lens.l2_len;
+ mbuf->l3_len = hdr_lens.l3_len;
+ }
+ if (mbuf->data_off < (DEFAULT_TX_ICEOF +
+ sizeof(struct dpaa_eth_parse_results_t))) {
+ DPAA_DP_LOG(DEBUG, "Checksum offload Err: "
+ "Not enough Headroom "
+ "space for correct Checksum offload."
+ "So Calculating checksum in Software.");
+ dpaa_checksum(mbuf);
+ } else {
+ dpaa_checksum_offload(mbuf, fd_arr, mbuf->buf_addr);
+ }
+}
+
struct rte_mbuf *
dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
{
@@ -594,27 +618,8 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
rte_pktmbuf_free(mbuf);
}
- if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK) {
- if (!mbuf->packet_type) {
- struct rte_net_hdr_lens hdr_lens;
-
- mbuf->packet_type = rte_net_get_ptype(mbuf, &hdr_lens,
- RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK
- | RTE_PTYPE_L4_MASK);
- mbuf->l2_len = hdr_lens.l2_len;
- mbuf->l3_len = hdr_lens.l3_len;
- }
- if (mbuf->data_off < (DEFAULT_TX_ICEOF +
- sizeof(struct dpaa_eth_parse_results_t))) {
- DPAA_DP_LOG(DEBUG, "Checksum offload Err: "
- "Not enough Headroom "
- "space for correct Checksum offload."
- "So Calculating checksum in Software.");
- dpaa_checksum(mbuf);
- } else {
- dpaa_checksum_offload(mbuf, fd_arr, mbuf->buf_addr);
- }
- }
+ if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK)
+ dpaa_unsegmented_checksum(mbuf, fd_arr);
}
/* Handle all mbufs on dpaa BMAN managed pool */
@@ -670,7 +675,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct rte_mempool *mp;
struct dpaa_bp_info *bp_info;
struct qm_fd fd_arr[DPAA_TX_BURST_SIZE];
- uint32_t frames_to_send, loop, i = 0;
+ uint32_t frames_to_send, loop, sent = 0;
uint16_t state;
int ret;
@@ -685,10 +690,23 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
while (nb_bufs) {
frames_to_send = (nb_bufs > DPAA_TX_BURST_SIZE) ?
DPAA_TX_BURST_SIZE : nb_bufs;
- for (loop = 0; loop < frames_to_send; loop++, i++) {
- mbuf = bufs[i];
- if (RTE_MBUF_DIRECT(mbuf)) {
+ for (loop = 0; loop < frames_to_send; loop++) {
+ mbuf = *(bufs++);
+ if (likely(RTE_MBUF_DIRECT(mbuf))) {
mp = mbuf->pool;
+ bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+ if (likely(mp->ops_index ==
+ bp_info->dpaa_ops_index &&
+ mbuf->nb_segs == 1 &&
+ rte_mbuf_refcnt_read(mbuf) == 1)) {
+ DPAA_MBUF_TO_CONTIG_FD(mbuf,
+ &fd_arr[loop], bp_info->bpid);
+ if (mbuf->ol_flags &
+ DPAA_TX_CKSUM_OFFLOAD_MASK)
+ dpaa_unsegmented_checksum(mbuf,
+ &fd_arr[loop]);
+ continue;
+ }
} else {
mi = rte_mbuf_from_indirect(mbuf);
mp = mi->pool;
@@ -729,11 +747,12 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
frames_to_send - loop);
}
nb_bufs -= frames_to_send;
+ sent += frames_to_send;
}
- DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", i, q);
+ DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q);
- return i;
+ return sent;
}
uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
--
2.7.4
* [dpdk-dev] [PATCH v3 12/19] net/dpaa: optimize Rx path
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
` (10 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 11/19] net/dpaa: optimize the Tx burst Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 13/19] bus/dpaa: query queue frame count support Hemant Agrawal
` (7 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Nipun Gupta
From: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 48 ++++++++++++++++++++------------------------
drivers/net/dpaa/dpaa_rxtx.h | 2 +-
2 files changed, 23 insertions(+), 27 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 148f265..98671fa 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -97,12 +97,6 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
DPAA_DP_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
switch (prs) {
- case DPAA_PKT_TYPE_NONE:
- m->packet_type = 0;
- break;
- case DPAA_PKT_TYPE_ETHER:
- m->packet_type = RTE_PTYPE_L2_ETHER;
- break;
case DPAA_PKT_TYPE_IPV4:
m->packet_type = RTE_PTYPE_L2_ETHER |
RTE_PTYPE_L3_IPV4;
@@ -111,6 +105,9 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
m->packet_type = RTE_PTYPE_L2_ETHER |
RTE_PTYPE_L3_IPV6;
break;
+ case DPAA_PKT_TYPE_ETHER:
+ m->packet_type = RTE_PTYPE_L2_ETHER;
+ break;
case DPAA_PKT_TYPE_IPV4_FRAG:
case DPAA_PKT_TYPE_IPV4_FRAG_UDP:
case DPAA_PKT_TYPE_IPV4_FRAG_TCP:
@@ -173,6 +170,9 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
m->packet_type = RTE_PTYPE_L2_ETHER |
RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP;
break;
+ case DPAA_PKT_TYPE_NONE:
+ m->packet_type = 0;
+ break;
/* More switch cases can be added */
default:
dpaa_slow_parsing(m, prs);
@@ -183,12 +183,11 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m,
<< DPAA_PKT_L3_LEN_SHIFT;
/* Set the hash values */
- m->hash.rss = (uint32_t)(rte_be_to_cpu_64(annot->hash));
- m->ol_flags = PKT_RX_RSS_HASH;
+ m->hash.rss = (uint32_t)(annot->hash);
/* All packets with Bad checksum are dropped by interface (and
* corresponding notification issued to RX error queues).
*/
- m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+ m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_IP_CKSUM_GOOD;
/* Check if Vlan is present */
if (prs & DPAA_PARSE_VLAN_MASK)
@@ -297,7 +296,7 @@ dpaa_unsegmented_checksum(struct rte_mbuf *mbuf, struct qm_fd *fd_arr)
}
struct rte_mbuf *
-dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
+dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
{
struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
@@ -355,34 +354,31 @@ dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid)
return first_seg;
}
-static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
- uint32_t ifid)
+static inline struct rte_mbuf *
+dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
{
- struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
struct rte_mbuf *mbuf;
- void *ptr;
+ struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+ void *ptr = rte_dpaa_mem_ptov(qm_fd_addr(fd));
uint8_t format =
(fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
- uint16_t offset =
- (fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
- uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
+ uint16_t offset;
+ uint32_t length;
DPAA_DP_LOG(DEBUG, " FD--->MBUF");
if (unlikely(format == qm_fd_sg))
return dpaa_eth_sg_to_mbuf(fd, ifid);
+ rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF));
+
+ offset = (fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
+ length = fd->opaque & DPAA_FD_LENGTH_MASK;
+
/* Ignoring case when format != qm_fd_contig */
dpaa_display_frame(fd);
- ptr = rte_dpaa_mem_ptov(fd->addr);
- /* Ignoring case when ptr would be NULL. That is only possible incase
- * of a corrupted packet
- */
mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
- /* Prefetch the Parse results and packet data to L1 */
- rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF));
- rte_prefetch0((void *)((uint8_t *)ptr + offset));
mbuf->data_off = offset;
mbuf->data_len = length;
@@ -462,11 +458,11 @@ static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
if (!dpaa_mbuf)
return NULL;
- memcpy((uint8_t *)(dpaa_mbuf->buf_addr) + mbuf->data_off, (void *)
+ memcpy((uint8_t *)(dpaa_mbuf->buf_addr) + RTE_PKTMBUF_HEADROOM, (void *)
((uint8_t *)(mbuf->buf_addr) + mbuf->data_off), mbuf->pkt_len);
/* Copy only the required fields */
- dpaa_mbuf->data_off = mbuf->data_off;
+ dpaa_mbuf->data_off = RTE_PKTMBUF_HEADROOM;
dpaa_mbuf->pkt_len = mbuf->pkt_len;
dpaa_mbuf->ol_flags = mbuf->ol_flags;
dpaa_mbuf->packet_type = mbuf->packet_type;
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 9308b3a..78e804f 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -262,7 +262,7 @@ uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
struct rte_mbuf **bufs __rte_unused,
uint16_t nb_bufs __rte_unused);
-struct rte_mbuf *dpaa_eth_sg_to_mbuf(struct qm_fd *fd, uint32_t ifid);
+struct rte_mbuf *dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid);
int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
struct qm_fd *fd,
--
2.7.4
* [dpdk-dev] [PATCH v3 13/19] bus/dpaa: query queue frame count support
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
` (11 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 12/19] net/dpaa: optimize Rx path Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 14/19] net/dpaa: add Rx queue " Hemant Agrawal
` (6 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 22 ++++++++++++++++++++++
drivers/bus/dpaa/include/fsl_qman.h | 7 +++++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 1 +
3 files changed, 30 insertions(+)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index d8fb25a..ffb008e 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -1722,6 +1722,28 @@ int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np)
return 0;
}
+int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt)
+{
+ struct qm_mc_command *mcc;
+ struct qm_mc_result *mcr;
+ struct qman_portal *p = get_affine_portal();
+
+ mcc = qm_mc_start(&p->p);
+ mcc->queryfq.fqid = cpu_to_be32(fq->fqid);
+ qm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);
+ while (!(mcr = qm_mc_result(&p->p)))
+ cpu_relax();
+ DPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);
+
+ if (mcr->result == QM_MCR_RESULT_OK)
+ *frm_cnt = be24_to_cpu(mcr->queryfq_np.frm_cnt);
+ else if (mcr->result == QM_MCR_RESULT_ERR_FQID)
+ return -ERANGE;
+ else if (mcr->result != QM_MCR_RESULT_OK)
+ return -EIO;
+ return 0;
+}
+
int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq)
{
struct qm_mc_command *mcc;
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index fc00d8d..d769d50 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1616,6 +1616,13 @@ int qman_query_fq_has_pkts(struct qman_fq *fq);
int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
/**
+ * qman_query_fq_frmcnt - Queries fq frame count
+ * @fq: the frame queue object to be queried
+ * @frm_cnt: number of frames in the queue
+ */
+int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt);
+
+/**
* qman_query_wq - Queries work queue lengths
* @query_dedicated: If non-zero, query length of WQs in the channel dedicated
* to this software portal. Otherwise, query length of WQs in a
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 4e3afda..212c75f 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -73,6 +73,7 @@ DPDK_18.02 {
qman_create_cgr;
qman_delete_cgr;
qman_modify_cgr;
+ qman_query_fq_frm_cnt;
qman_release_cgrid_range;
rte_dpaa_portal_fq_close;
rte_dpaa_portal_fq_init;
--
2.7.4
* [dpdk-dev] [PATCH v3 14/19] net/dpaa: add Rx queue count support
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
` (12 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 13/19] bus/dpaa: query queue frame count support Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 15/19] net/dpaa: add support for loopback API Hemant Agrawal
` (5 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 5d94af5..de016ab 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -513,6 +513,22 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
PMD_INIT_FUNC_TRACE();
}
+static uint32_t
+dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+ u32 frm_cnt = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
+ RTE_LOG(DEBUG, PMD, "RX frame count for q(%d) is %u\n",
+ rx_queue_id, frm_cnt);
+ }
+ return frm_cnt;
+}
+
static int dpaa_link_down(struct rte_eth_dev *dev)
{
PMD_INIT_FUNC_TRACE();
@@ -664,6 +680,7 @@ static struct eth_dev_ops dpaa_devops = {
.tx_queue_setup = dpaa_eth_tx_queue_setup,
.rx_queue_release = dpaa_eth_rx_queue_release,
.tx_queue_release = dpaa_eth_tx_queue_release,
+ .rx_queue_count = dpaa_dev_rx_queue_count,
.flow_ctrl_get = dpaa_flow_ctrl_get,
.flow_ctrl_set = dpaa_flow_ctrl_set,
--
2.7.4
* [dpdk-dev] [PATCH v3 15/19] net/dpaa: add support for loopback API
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
` (13 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 14/19] net/dpaa: add Rx queue " Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 16/19] app/testpmd: add support for loopback config for dpaa Hemant Agrawal
` (4 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
A PMD-specific API is being added as an EXPERIMENTAL API.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf | 1 +
drivers/net/dpaa/Makefile | 3 +++
drivers/net/dpaa/dpaa_ethdev.c | 42 +++++++++++++++++++++++++++++++
drivers/net/dpaa/rte_pmd_dpaa.h | 39 ++++++++++++++++++++++++++++
drivers/net/dpaa/rte_pmd_dpaa_version.map | 8 ++++++
6 files changed, 94 insertions(+)
create mode 100644 drivers/net/dpaa/rte_pmd_dpaa.h
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 3492702..38314af 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -60,6 +60,7 @@ The public API headers are grouped by topics:
[ixgbe] (@ref rte_pmd_ixgbe.h),
[i40e] (@ref rte_pmd_i40e.h),
[bnxt] (@ref rte_pmd_bnxt.h),
+ [dpaa] (@ref rte_pmd_dpaa.h),
[crypto_scheduler] (@ref rte_cryptodev_scheduler.h)
- **memory**:
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index b2cbe94..09e3232 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -33,6 +33,7 @@ INPUT = doc/api/doxy-api-index.md \
drivers/crypto/scheduler \
drivers/net/bnxt \
drivers/net/bonding \
+ drivers/net/dpaa \
drivers/net/i40e \
drivers/net/ixgbe \
drivers/net/softnic \
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index e5f662f..b1fc5a0 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -34,4 +34,7 @@ LDLIBS += -lrte_mempool_dpaa
LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs
+# install this header file
+SYMLINK-$(CONFIG_RTE_LIBRTE_DPAA_PMD)-include := rte_pmd_dpaa.h
+
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index de016ab..85ccea4 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -38,6 +38,7 @@
#include <dpaa_ethdev.h>
#include <dpaa_rxtx.h>
+#include <rte_pmd_dpaa.h>
#include <fsl_usd.h>
#include <fsl_qman.h>
@@ -84,6 +85,8 @@ static const struct rte_dpaa_xstats_name_off dpaa_xstats_strings[] = {
offsetof(struct dpaa_if_stats, tund)},
};
+static struct rte_dpaa_driver rte_dpaa_pmd;
+
static int
dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
@@ -707,6 +710,45 @@ static struct eth_dev_ops dpaa_devops = {
.fw_version_get = dpaa_fw_version_get,
};
+static bool
+is_device_supported(struct rte_eth_dev *dev, struct rte_dpaa_driver *drv)
+{
+ if (strcmp(dev->device->driver->name,
+ drv->driver.name))
+ return false;
+
+ return true;
+}
+
+static bool
+is_dpaa_supported(struct rte_eth_dev *dev)
+{
+ return is_device_supported(dev, &rte_dpaa_pmd);
+}
+
+int
+rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on)
+{
+ struct rte_eth_dev *dev;
+ struct dpaa_if *dpaa_intf;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV);
+
+ dev = &rte_eth_devices[port];
+
+ if (!is_dpaa_supported(dev))
+ return -ENOTSUP;
+
+ dpaa_intf = dev->data->dev_private;
+
+ if (on)
+ fman_if_loopback_enable(dpaa_intf->fif);
+ else
+ fman_if_loopback_disable(dpaa_intf->fif);
+
+ return 0;
+}
+
static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
{
struct rte_eth_fc_conf *fc_conf;
diff --git a/drivers/net/dpaa/rte_pmd_dpaa.h b/drivers/net/dpaa/rte_pmd_dpaa.h
new file mode 100644
index 0000000..9614be8
--- /dev/null
+++ b/drivers/net/dpaa/rte_pmd_dpaa.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2018 NXP
+ */
+
+#ifndef _PMD_DPAA_H_
+#define _PMD_DPAA_H_
+
+/**
+ * @file rte_pmd_dpaa.h
+ *
+ * NXP dpaa PMD specific functions.
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <rte_ethdev.h>
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enable/Disable TX loopback
+ *
+ * @param port
+ * The port identifier of the Ethernet device.
+ * @param on
+ * 1 - Enable TX loopback.
+ * 0 - Disable TX loopback.
+ * @return
+ * - (0) if successful.
+ * - (-ENODEV) if *port* invalid.
+ * - (-EINVAL) if bad parameter.
+ */
+int
+rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on);
+
+#endif /* _PMD_DPAA_H_ */
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
index a70bd19..d1f3ea4 100644
--- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -2,3 +2,11 @@ DPDK_17.11 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ rte_pmd_dpaa_set_tx_loopback;
+
+ local: *;
+} DPDK_17.11;
--
2.7.4
* [dpdk-dev] [PATCH v3 16/19] app/testpmd: add support for loopback config for dpaa
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
` (14 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 15/19] net/dpaa: add support for loopback API Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 17/19] bus/dpaa: add support for static queues Hemant Agrawal
` (3 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
app/test-pmd/Makefile | 4 ++++
app/test-pmd/cmdline.c | 7 +++++++
2 files changed, 11 insertions(+)
diff --git a/app/test-pmd/Makefile b/app/test-pmd/Makefile
index 82b3481..34125e5 100644
--- a/app/test-pmd/Makefile
+++ b/app/test-pmd/Makefile
@@ -43,6 +43,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_PMD_BOND),y)
LDLIBS += -lrte_pmd_bond
endif
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
+LDLIBS += -lrte_pmd_dpaa
+endif
+
ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_PMD),y)
LDLIBS += -lrte_pmd_ixgbe
endif
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index f71d963..32096aa 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -89,6 +89,9 @@
#include <rte_eth_bond.h>
#include <rte_eth_bond_8023ad.h>
#endif
+#ifdef RTE_LIBRTE_DPAA_PMD
+#include <rte_pmd_dpaa.h>
+#endif
#ifdef RTE_LIBRTE_IXGBE_PMD
#include <rte_pmd_ixgbe.h>
#endif
@@ -12620,6 +12623,10 @@ cmd_set_tx_loopback_parsed(
if (ret == -ENOTSUP)
ret = rte_pmd_bnxt_set_tx_loopback(res->port_id, is_on);
#endif
+#ifdef RTE_LIBRTE_DPAA_PMD
+ if (ret == -ENOTSUP)
+ ret = rte_pmd_dpaa_set_tx_loopback(res->port_id, is_on);
+#endif
switch (ret) {
case 0:
--
2.7.4
* [dpdk-dev] [PATCH v3 17/19] bus/dpaa: add support for static queues
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
` (15 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 16/19] app/testpmd: add support for loopback config for dpaa Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 18/19] net/dpaa: integrate the support of push mode in PMD Hemant Agrawal
` (2 subsequent siblings)
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Sunil Kumar Kori
DPAA hardware supports two kinds of queues:
1. Pull mode queues - where one needs to regularly pull the packets.
2. Push mode queues - where the HW pushes packets to the queue. These are
high-performance queues, but limited in number.
This patch adds driver support for push mode queues.
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 64 +++++++++++++++++++++++++++++++
drivers/bus/dpaa/base/qbman/qman.h | 4 +-
drivers/bus/dpaa/include/fsl_qman.h | 14 ++++++-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 4 ++
4 files changed, 83 insertions(+), 3 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index ffb008e..7e285a5 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -1051,6 +1051,70 @@ u16 qman_affine_channel(int cpu)
return affine_channels[cpu];
}
+unsigned int qman_portal_poll_rx(unsigned int poll_limit,
+ void **bufs,
+ struct qman_portal *p)
+{
+ const struct qm_dqrr_entry *dq;
+ struct qman_fq *fq;
+ enum qman_cb_dqrr_result res;
+ unsigned int limit = 0;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ struct qm_dqrr_entry *shadow;
+#endif
+ unsigned int rx_number = 0;
+
+ do {
+ qm_dqrr_pvb_update(&p->p);
+ dq = qm_dqrr_current(&p->p);
+ if (unlikely(!dq))
+ break;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ /* If running on an LE system the fields of the
+ * dequeue entry must be swapped. Because the
+ * QMan HW will ignore writes, the DQRR entry is
+ * copied and the index stored within the copy.
+ */
+ shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+ *shadow = *dq;
+ dq = shadow;
+ shadow->fqid = be32_to_cpu(shadow->fqid);
+ shadow->contextB = be32_to_cpu(shadow->contextB);
+ shadow->seqnum = be16_to_cpu(shadow->seqnum);
+ hw_fd_to_cpu(&shadow->fd);
+#endif
+
+ /* SDQCR: context_b points to the FQ */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+ fq = get_fq_table_entry(dq->contextB);
+#else
+ fq = (void *)(uintptr_t)dq->contextB;
+#endif
+ /* Now let the callback do its stuff */
+ res = fq->cb.dqrr_dpdk_cb(NULL, p, fq, dq, &bufs[rx_number]);
+ rx_number++;
+ /* Interpret 'dq' from a driver perspective. */
+ /*
+ * Parking isn't possible unless HELDACTIVE was set. NB,
+ * FORCEELIGIBLE implies HELDACTIVE, so we only need to
+ * check for HELDACTIVE to cover both.
+ */
+ DPAA_ASSERT((dq->stat & QM_DQRR_STAT_FQ_HELDACTIVE) ||
+ (res != qman_cb_dqrr_park));
+ qm_dqrr_cdc_consume_1ptr(&p->p, dq, res == qman_cb_dqrr_park);
+ /* Move forward */
+ qm_dqrr_next(&p->p);
+ /*
+ * Entry processed and consumed, increment our counter. The
+ * callback can request that we exit after consuming the
+ * entry, and we also exit if we reach our processing limit,
+ * so loop back only if neither of these conditions is met.
+ */
+ } while (likely(++limit < poll_limit));
+
+ return limit;
+}
+
struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq)
{
struct qman_portal *p = get_affine_portal();
diff --git a/drivers/bus/dpaa/base/qbman/qman.h b/drivers/bus/dpaa/base/qbman/qman.h
index a433369..4346d86 100644
--- a/drivers/bus/dpaa/base/qbman/qman.h
+++ b/drivers/bus/dpaa/base/qbman/qman.h
@@ -154,7 +154,7 @@ struct qm_eqcr {
};
struct qm_dqrr {
- const struct qm_dqrr_entry *ring, *cursor;
+ struct qm_dqrr_entry *ring, *cursor;
u8 pi, ci, fill, ithresh, vbit;
#ifdef RTE_LIBRTE_DPAA_HWDEBUG
enum qm_dqrr_dmode dmode;
@@ -441,7 +441,7 @@ static inline u8 DQRR_PTR2IDX(const struct qm_dqrr_entry *e)
return ((uintptr_t)e >> 6) & (QM_DQRR_SIZE - 1);
}
-static inline const struct qm_dqrr_entry *DQRR_INC(
+static inline struct qm_dqrr_entry *DQRR_INC(
const struct qm_dqrr_entry *e)
{
return DQRR_CARRYCLEAR(e + 1);
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index d769d50..ad40d80 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1124,6 +1124,12 @@ typedef enum qman_cb_dqrr_result (*qman_cb_dqrr)(struct qman_portal *qm,
struct qman_fq *fq,
const struct qm_dqrr_entry *dqrr);
+typedef enum qman_cb_dqrr_result (*qman_dpdk_cb_dqrr)(void *event,
+ struct qman_portal *qm,
+ struct qman_fq *fq,
+ const struct qm_dqrr_entry *dqrr,
+ void **bd);
+
/*
* This callback type is used when handling ERNs, FQRNs and FQRLs via MR. They
* are always consumed after the callback returns.
@@ -1182,7 +1188,10 @@ enum qman_fq_state {
*/
struct qman_fq_cb {
- qman_cb_dqrr dqrr; /* for dequeued frames */
+ union { /* for dequeued frames */
+ qman_dpdk_cb_dqrr dqrr_dpdk_cb;
+ qman_cb_dqrr dqrr;
+ };
qman_cb_mr ern; /* for s/w ERNs */
qman_cb_mr fqs; /* frame-queue state changes*/
};
@@ -1299,6 +1308,9 @@ int qman_get_portal_index(void);
*/
u16 qman_affine_channel(int cpu);
+unsigned int qman_portal_poll_rx(unsigned int poll_limit,
+ void **bufs, struct qman_portal *q);
+
/**
* qman_set_vdq - Issue a volatile dequeue command
* @fq: Frame Queue on which the volatile dequeue command is issued
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 212c75f..ac455cd 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -70,11 +70,15 @@ DPDK_18.02 {
dpaa_svr_family;
qman_alloc_cgrid_range;
+ qman_alloc_pool_range;
qman_create_cgr;
qman_delete_cgr;
qman_modify_cgr;
+ qman_oos_fq;
+ qman_portal_poll_rx;
qman_query_fq_frm_cnt;
qman_release_cgrid_range;
+ qman_retire_fq;
rte_dpaa_portal_fq_close;
rte_dpaa_portal_fq_init;
--
2.7.4
* [dpdk-dev] [PATCH v3 18/19] net/dpaa: integrate the support of push mode in PMD
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
` (16 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 17/19] bus/dpaa: add support for static queues Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 19/19] bus/dpaa: support for enqueue frames of multiple queues Hemant Agrawal
2018-01-10 20:14 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Ferruh Yigit
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Sunil Kumar Kori, Nipun Gupta
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
doc/guides/nics/dpaa.rst | 11 ++++++++
drivers/net/dpaa/dpaa_ethdev.c | 64 +++++++++++++++++++++++++++++++++++++-----
drivers/net/dpaa/dpaa_ethdev.h | 2 +-
drivers/net/dpaa/dpaa_rxtx.c | 34 ++++++++++++++++++++++
drivers/net/dpaa/dpaa_rxtx.h | 5 ++++
5 files changed, 108 insertions(+), 8 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index a62f128..0a13996 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -290,6 +290,17 @@ state during application initialization:
In case the application is configured to use lesser number of queues than
configured above, it might result in packet loss (because of distribution).
+- ``DPAA_PUSH_QUEUES_NUMBER`` (default 4)
+
+ This defines the number of high-performance queues to be used for ethdev Rx.
+ These queues use one private HW portal per configured queue, so they are
+ limited in the system. The first configured ethdev queues are automatically
+ assigned from these high-performance PUSH queues; any queue configuration
+ beyond that uses standard Rx queues. The application can choose to change
+ their number if HW portals are limited.
+ The valid values are from '0' to '4'. The value shall be set to '0' if the
+ application wants to use eventdev with the DPAA device.
+
Driver compilation and testing
------------------------------
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 85ccea4..444c122 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -47,6 +47,14 @@
/* Keep track of whether QMAN and BMAN have been globally initialized */
static int is_global_init;
+/* At present we only allow up to 4 push mode queues - as each of these
+ * queues needs a dedicated portal and we are short of portals.
+ */
+#define DPAA_MAX_PUSH_MODE_QUEUE 4
+
+static int dpaa_push_mode_max_queue = DPAA_MAX_PUSH_MODE_QUEUE;
+static int dpaa_push_queue_idx; /* Queue index which are in push mode*/
+
/* Per FQ Taildrop in frame count */
static unsigned int td_threshold = CGR_RX_PERFQ_THRESH;
@@ -434,6 +442,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
+ struct qm_mcc_initfq opts = {0};
+ u32 flags = 0;
+ int ret;
PMD_INIT_FUNC_TRACE();
@@ -469,13 +480,45 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->name, fd_offset,
fman_if_get_fdoff(dpaa_intf->fif));
}
-
+ /* Check if push mode can be used for this queue; no error check for now */
+ if (dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
+ dpaa_push_queue_idx++;
+ opts.we_mask = QM_INITFQ_WE_FQCTRL | QM_INITFQ_WE_CONTEXTA;
+ opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK |
+ QM_FQCTRL_CTXASTASHING |
+ QM_FQCTRL_PREFERINCACHE;
+ opts.fqd.context_a.stashing.exclusive = 0;
+ opts.fqd.context_a.stashing.annotation_cl =
+ DPAA_IF_RX_ANNOTATION_STASH;
+ opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
+ opts.fqd.context_a.stashing.context_cl =
+ DPAA_IF_RX_CONTEXT_STASH;
+
+ /* Create a channel and associate the given queue with the channel */
+ qman_alloc_pool_range((u32 *)&rxq->ch_id, 1, 1, 0);
+ opts.we_mask = opts.we_mask | QM_INITFQ_WE_DESTWQ;
+ opts.fqd.dest.channel = rxq->ch_id;
+ opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
+ flags = QMAN_INITFQ_FLAG_SCHED;
+
+ /* Configure tail drop */
+ if (dpaa_intf->cgr_rx) {
+ opts.we_mask |= QM_INITFQ_WE_CGID;
+ opts.fqd.cgid = dpaa_intf->cgr_rx[queue_idx].cgrid;
+ opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+ }
+ ret = qman_init_fq(rxq, flags, &opts);
+ if (ret)
+ DPAA_PMD_ERR("Channel/Queue association failed. fqid %d"
+ " ret: %d", rxq->fqid, ret);
+ rxq->cb.dqrr_dpdk_cb = dpaa_rx_cb;
+ rxq->is_static = true;
+ }
dev->data->rx_queues[queue_idx] = rxq;
/* configure the CGR size as per the desc size */
if (dpaa_intf->cgr_rx) {
struct qm_mcc_initcgr cgr_opts = {0};
- int ret;
/* Enable tail drop with cgr on this queue */
qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres, nb_desc, 0);
@@ -809,11 +852,8 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
fqid, ret);
return ret;
}
-
- opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
- QM_INITFQ_WE_CONTEXTA;
-
- opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
+ fq->is_static = false;
+ opts.we_mask = QM_INITFQ_WE_FQCTRL | QM_INITFQ_WE_CONTEXTA;
opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
QM_FQCTRL_PREFERINCACHE;
opts.fqd.context_a.stashing.exclusive = 0;
@@ -947,6 +987,16 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
else
num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
+ /* Check if push mode queues are to be enabled. Currently we allow
+ * only one queue per thread.
+ */
+ if (getenv("DPAA_PUSH_QUEUES_NUMBER")) {
+ dpaa_push_mode_max_queue =
+ atoi(getenv("DPAA_PUSH_QUEUES_NUMBER"));
+ if (dpaa_push_mode_max_queue > DPAA_MAX_PUSH_MODE_QUEUE)
+ dpaa_push_mode_max_queue = DPAA_MAX_PUSH_MODE_QUEUE;
+ }
+
/* Each device can not have more than DPAA_PCD_FQID_MULTIPLIER RX
* queues.
*/
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 1b36567..1fa6caf 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -54,7 +54,7 @@
#define DPAA_MAX_NUM_PCD_QUEUES 32
#define DPAA_IF_TX_PRIORITY 3
-#define DPAA_IF_RX_PRIORITY 4
+#define DPAA_IF_RX_PRIORITY 0
#define DPAA_IF_DEBUG_PRIORITY 7
#define DPAA_IF_RX_ANNOTATION_STASH 1
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 98671fa..0413932 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -394,6 +394,37 @@ dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
return mbuf;
}
+enum qman_cb_dqrr_result dpaa_rx_cb(void *event __always_unused,
+ struct qman_portal *qm __always_unused,
+ struct qman_fq *fq,
+ const struct qm_dqrr_entry *dqrr,
+ void **bufs)
+{
+ const struct qm_fd *fd = &dqrr->fd;
+
+ *bufs = dpaa_eth_fd_to_mbuf(fd,
+ ((struct dpaa_if *)fq->dpaa_intf)->ifid);
+ return qman_cb_dqrr_consume;
+}
+
+static uint16_t
+dpaa_eth_queue_portal_rx(struct qman_fq *fq,
+ struct rte_mbuf **bufs,
+ uint16_t nb_bufs)
+{
+ int ret;
+
+ if (unlikely(fq->qp == NULL)) {
+ ret = rte_dpaa_portal_fq_init((void *)0, fq);
+ if (ret) {
+ DPAA_PMD_ERR("Failure in affining portal %d", ret);
+ return 0;
+ }
+ }
+
+ return qman_portal_poll_rx(nb_bufs, (void **)bufs, fq->qp);
+}
+
uint16_t dpaa_eth_queue_rx(void *q,
struct rte_mbuf **bufs,
uint16_t nb_bufs)
@@ -403,6 +434,9 @@ uint16_t dpaa_eth_queue_rx(void *q,
uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
int ret;
+ if (likely(fq->is_static))
+ return dpaa_eth_queue_portal_rx(fq, bufs, nb_bufs);
+
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_PMD_ERR("Failure in affining portal");
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 78e804f..29d8f95 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -268,4 +268,9 @@ int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
struct qm_fd *fd,
uint32_t bpid);
+enum qman_cb_dqrr_result dpaa_rx_cb(void *event,
+ struct qman_portal *qm,
+ struct qman_fq *fq,
+ const struct qm_dqrr_entry *dqrr,
+ void **bd);
#endif
--
2.7.4
* [dpdk-dev] [PATCH v3 19/19] bus/dpaa: support for enqueue frames of multiple queues
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
` (17 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 18/19] net/dpaa: integrate the support of push mode in PMD Hemant Agrawal
@ 2018-01-10 10:46 ` Hemant Agrawal
2018-01-10 20:14 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Ferruh Yigit
19 siblings, 0 replies; 65+ messages in thread
From: Hemant Agrawal @ 2018-01-10 10:46 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, shreyansh.jain, Akhil Goyal, Nipun Gupta
From: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 66 +++++++++++++++++++++++++++++++
drivers/bus/dpaa/include/fsl_qman.h | 14 +++++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 1 +
3 files changed, 81 insertions(+)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 7e285a5..e171356 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -2158,6 +2158,72 @@ int qman_enqueue_multi(struct qman_fq *fq,
return sent;
}
+int
+qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
+ int frames_to_send)
+{
+ struct qman_portal *p = get_affine_portal();
+ struct qm_portal *portal = &p->p;
+
+ register struct qm_eqcr *eqcr = &portal->eqcr;
+ struct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;
+
+ u8 i, diff, old_ci, sent = 0;
+
+ /* Update the available entries if no entry is free */
+ if (!eqcr->available) {
+ old_ci = eqcr->ci;
+ eqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);
+ diff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);
+ eqcr->available += diff;
+ if (!diff)
+ return 0;
+ }
+
+ /* try to send as many frames as possible */
+ while (eqcr->available && frames_to_send--) {
+ eq->fqid = fq[sent]->fqid_le;
+ eq->fd.opaque_addr = fd->opaque_addr;
+ eq->fd.addr = cpu_to_be40(fd->addr);
+ eq->fd.status = cpu_to_be32(fd->status);
+ eq->fd.opaque = cpu_to_be32(fd->opaque);
+
+ eq = (void *)((unsigned long)(eq + 1) &
+ (~(unsigned long)(QM_EQCR_SIZE << 6)));
+ eqcr->available--;
+ sent++;
+ fd++;
+ }
+ lwsync();
+
+ /* Write the verb byte of every recorded entry first, so that the
+ * cache line flushes below can complete faster.
+ */
+ eq = eqcr->cursor;
+ for (i = 0; i < sent; i++) {
+ eq->__dont_write_directly__verb =
+ QM_EQCR_VERB_CMD_ENQUEUE | eqcr->vbit;
+ prev_eq = eq;
+ eq = (void *)((unsigned long)(eq + 1) &
+ (~(unsigned long)(QM_EQCR_SIZE << 6)));
+ if (unlikely((prev_eq + 1) != eq))
+ eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+ }
+
+ /* We need to flush all the lines but without load/store operations
+ * between them
+ */
+ eq = eqcr->cursor;
+ for (i = 0; i < sent; i++) {
+ dcbf(eq);
+ eq = (void *)((unsigned long)(eq + 1) &
+ (~(unsigned long)(QM_EQCR_SIZE << 6)));
+ }
+ /* Update cursor for the next call */
+ eqcr->cursor = eq;
+ return sent;
+}
+
int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,
struct qman_fq *orp, u16 orp_seqnum)
{
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index ad40d80..0e3e4fe 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1703,6 +1703,20 @@ int qman_enqueue_multi(struct qman_fq *fq,
const struct qm_fd *fd,
int frames_to_send);
+/**
+ * qman_enqueue_multi_fq - Enqueue multiple frames to their respective frame
+ * queues.
+ * @fq[]: Array of frame queue objects to enqueue to
+ * @fd: pointer to first descriptor of frame to be enqueued
+ * @frames_to_send: number of frames to be sent.
+ *
+ * This API is similar to qman_enqueue_multi(), but it enqueues each fd to
+ * its respective frame queue from the fq[] array.
+ */
+int
+qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
+ int frames_to_send);
+
typedef int (*qman_cb_precommit) (void *arg);
/**
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index ac455cd..64068de 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -73,6 +73,7 @@ DPDK_18.02 {
qman_alloc_pool_range;
qman_create_cgr;
qman_delete_cgr;
+ qman_enqueue_multi_fq;
qman_modify_cgr;
qman_oos_fq;
qman_portal_poll_rx;
--
2.7.4
* Re: [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
` (18 preceding siblings ...)
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 19/19] bus/dpaa: support for enqueue frames of multiple queues Hemant Agrawal
@ 2018-01-10 20:14 ` Ferruh Yigit
19 siblings, 0 replies; 65+ messages in thread
From: Ferruh Yigit @ 2018-01-10 20:14 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: shreyansh.jain
On 1/10/2018 10:46 AM, Hemant Agrawal wrote:
> This patch series adds various improvements and performance related
> optimizations for the DPAA PMD
>
> v3:
> - handling review comments from Ferruh
> - update the API doc for new PMD specific API
>
> v2:
> - fix the spelling of PORTALS
> - Add Akhil's patch which is required for crypto
> - minor improvement in push mode patch
>
> Akhil Goyal (1):
> bus/dpaa: support for enqueue frames of multiple queues
>
> Ashish Jain (2):
> net/dpaa: fix the mbuf packet type if zero
> net/dpaa: set the correct frame size in device MTU
>
> Hemant Agrawal (11):
> net/dpaa: fix uninitialized and unused variables
> net/dpaa: fix FW version code
> bus/dpaa: update platform soc value register routines
> net/dpaa: add frame count based tail drop with CGR
> bus/dpaa: add support to create dynamic HW portal
> bus/dpaa: query queue frame count support
> net/dpaa: add Rx queue count support
> net/dpaa: add support for loopback API
> app/testpmd: add support for loopback config for dpaa
> bus/dpaa: add support for static queues
> net/dpaa: integrate the support of push mode in PMD
>
> Nipun Gupta (5):
> bus/dpaa: optimize the qman HW stashing settings
> bus/dpaa: optimize the endianness conversions
> net/dpaa: change Tx HW budget to 7
> net/dpaa: optimize the Tx burst
> net/dpaa: optimize Rx path
Series applied to dpdk-next-net/master, thanks.
Thread overview: 65+ messages
2017-12-13 12:05 [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 01/18] net/dpaa: fix coverity reported issues Hemant Agrawal
2018-01-09 10:46 ` Ferruh Yigit
2018-01-09 13:29 ` Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 02/18] net/dpaa: fix the mbuf packet type if zero Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 03/18] net/dpaa: fix FW version code Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 04/18] bus/dpaa: update platform soc value register routines Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 05/18] net/dpaa: set the correct frame size in device MTU Hemant Agrawal
2018-01-09 10:46 ` Ferruh Yigit
2017-12-13 12:05 ` [dpdk-dev] [PATCH 06/18] net/dpaa: add frame count based tail drop with CGR Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 07/18] bus/dpaa: optimize the qman HW stashing settings Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 08/18] bus/dpaa: optimize the endianness conversions Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 09/18] bus/dpaa: add support to create dynamic HW portal Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 10/18] net/dpaa: change Tx HW budget to 7 Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 11/18] net/dpaa: optimize the Tx burst Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 12/18] net/dpaa: optimize Rx path Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 13/18] bus/dpaa: query queue frame count support Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 14/18] net/dpaa: add Rx queue " Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 15/18] net/dpaa: add support for loopback API Hemant Agrawal
2018-01-09 10:46 ` Ferruh Yigit
2017-12-13 12:05 ` [dpdk-dev] [PATCH 16/18] app/testpmd: add support for loopback config for dpaa Hemant Agrawal
2017-12-13 12:05 ` [dpdk-dev] [PATCH 17/18] bus/dpaa: add support for static queues Hemant Agrawal
2018-01-09 10:46 ` Ferruh Yigit
2017-12-13 12:05 ` [dpdk-dev] [PATCH 18/18] net/dpaa: integrate the support of push mode in PMD Hemant Agrawal
2018-01-09 10:47 ` [dpdk-dev] [PATCH 00/18] DPAA PMD improvements Ferruh Yigit
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 " Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 01/18] net/dpaa: fix the mbuf packet type if zero Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 02/18] net/dpaa: fix FW version code Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 03/18] bus/dpaa: update platform soc value register routines Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 04/18] net/dpaa: set the correct frame size in device MTU Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 05/18] net/dpaa: add frame count based tail drop with CGR Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 06/18] bus/dpaa: optimize the qman HW stashing settings Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 07/18] bus/dpaa: optimize the endianness conversions Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 08/18] bus/dpaa: add support to create dynamic HW portal Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 09/18] net/dpaa: change Tx HW budget to 7 Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 10/18] net/dpaa: optimize the Tx burst Hemant Agrawal
2018-01-09 13:22 ` [dpdk-dev] [PATCH v2 11/18] net/dpaa: optimize Rx path Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 12/18] bus/dpaa: query queue frame count support Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 13/18] net/dpaa: add Rx queue " Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 14/18] net/dpaa: add support for loopback API Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 15/18] app/testpmd: add support for loopback config for dpaa Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 16/18] bus/dpaa: add support for static queues Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 17/18] net/dpaa: integrate the support of push mode in PMD Hemant Agrawal
2018-01-09 13:23 ` [dpdk-dev] [PATCH v2 18/18] bus/dpaa: support for enqueue frames of multiple queues Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 01/19] net/dpaa: fix uninitialized and unused variables Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 02/19] net/dpaa: fix the mbuf packet type if zero Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 03/19] net/dpaa: fix FW version code Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 04/19] bus/dpaa: update platform soc value register routines Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 05/19] net/dpaa: set the correct frame size in device MTU Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 06/19] net/dpaa: add frame count based tail drop with CGR Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 07/19] bus/dpaa: optimize the qman HW stashing settings Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 08/19] bus/dpaa: optimize the endianness conversions Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 09/19] bus/dpaa: add support to create dynamic HW portal Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 10/19] net/dpaa: change Tx HW budget to 7 Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 11/19] net/dpaa: optimize the Tx burst Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 12/19] net/dpaa: optimize Rx path Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 13/19] bus/dpaa: query queue frame count support Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 14/19] net/dpaa: add Rx queue " Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 15/19] net/dpaa: add support for loopback API Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 16/19] app/testpmd: add support for loopback config for dpaa Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 17/19] bus/dpaa: add support for static queues Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 18/19] net/dpaa: integrate the support of push mode in PMD Hemant Agrawal
2018-01-10 10:46 ` [dpdk-dev] [PATCH v3 19/19] bus/dpaa: support for enqueue frames of multiple queues Hemant Agrawal
2018-01-10 20:14 ` [dpdk-dev] [PATCH v3 00/19 ]DPAA PMD improvements Ferruh Yigit