DPDK patches and discussions
* Re: [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
@ 2020-03-02 13:01 ` David Marchand
  2020-03-05  9:06   ` Hemant Agrawal (OSS)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 01/16] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
                   ` (16 subsequent siblings)
  17 siblings, 1 reply; 109+ messages in thread
From: David Marchand @ 2020-03-02 13:01 UTC (permalink / raw)
  To: Hemant Agrawal; +Cc: Yigit, Ferruh, dev

On Mon, Mar 2, 2020 at 10:26 AM Hemant Agrawal <hemant.agrawal@nxp.com> wrote:
>
> This patch series adds various patches for enhancing and fixing the NXP
> fslmc bus, dpaa bus, and dpaax.
>
> - the main change is support for thread migration across lcores
> - improved multi-process support

This series triggers an ABI warning that must be investigated.
https://travis-ci.com/ovsrobot/dpdk/jobs/292904119#L2233


-- 
David Marchand



* [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements
@ 2020-03-02 14:58 Hemant Agrawal
  2020-03-02 13:01 ` David Marchand
                   ` (17 more replies)
  0 siblings, 18 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev

This patch series adds various patches for enhancing and fixing the NXP
fslmc bus, dpaa bus, and dpaax.

- the main change is support for thread migration across lcores
- improved multi-process support


Apeksha Gupta (1):
  bus/fslmc: fix dereferencing null pointer

Gagandeep Singh (2):
  bus/fslmc: combine thread specific variables
  net/dpaa: enable Tx queue taildrop

Hemant Agrawal (1):
  bus/fslmc: support handle portal alloc failure

Nipun Gupta (8):
  bus/fslmc: rework portal allocation to a per thread basis
  bus/fslmc: limit pthread destructor called for dpaa2 only
  bus/fslmc: support portal migration
  drivers: enhance portal allocation failure log
  bus/fslmc: rename the cinh read functions used for ls1088
  net/dpaa: return error on multiple mp config
  net/dpaa: update process specific device info
  net/dpaa2: do not prefetch annotation for physical mode

Rohit Raj (3):
  net/dpaa2: fix 10g port negotiation issue
  bus/dpaa: enable link state interrupt
  bus/dpaa: enable set link status

Sachin Saxena (1):
  net/dpaa: add 2.5G support

 drivers/bus/dpaa/base/fman/fman.c             |  10 +-
 drivers/bus/dpaa/base/fman/netcfg_layer.c     |   3 +-
 drivers/bus/dpaa/base/qbman/process.c         |  95 ++-
 drivers/bus/dpaa/base/qbman/qman.c            |  43 ++
 drivers/bus/dpaa/dpaa_bus.c                   |  28 +-
 drivers/bus/dpaa/include/fman.h               |   3 +
 drivers/bus/dpaa/include/fsl_qman.h           |  17 +
 drivers/bus/dpaa/include/process.h            |  23 +
 drivers/bus/dpaa/rte_bus_dpaa_version.map     |  11 +
 drivers/bus/dpaa/rte_dpaa_bus.h               |   6 +-
 drivers/bus/fslmc/Makefile                    |   1 +
 drivers/bus/fslmc/fslmc_bus.c                 |   2 -
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      | 284 ++++-----
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h      |  10 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |  14 +-
 .../fslmc/qbman/include/fsl_qbman_portal.h    |   8 +-
 drivers/bus/fslmc/qbman/qbman_debug.c         |   9 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        | 580 +++++++++++++++++-
 drivers/bus/fslmc/qbman/qbman_portal.h        |  19 +-
 drivers/bus/fslmc/qbman/qbman_sys.h           | 135 +++-
 drivers/bus/fslmc/rte_fslmc.h                 |  18 -
 drivers/common/dpaax/compat.h                 |   5 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |   8 +-
 drivers/event/dpaa2/dpaa2_eventdev.c          |   8 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |   1 +
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |  12 +-
 drivers/net/dpaa/dpaa_ethdev.c                | 417 +++++++++----
 drivers/net/dpaa/dpaa_ethdev.h                |   3 +-
 drivers/net/dpaa/dpaa_rxtx.c                  |  71 +++
 drivers/net/dpaa/dpaa_rxtx.h                  |   3 +
 drivers/net/dpaa2/dpaa2_ethdev.c              |   8 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |  56 +-
 drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c         |   8 +-
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c           |  12 +-
 34 files changed, 1566 insertions(+), 365 deletions(-)

-- 
2.17.1



* [dpdk-dev] [PATCH 01/16] net/dpaa2: fix 10g port negotiation issue
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
  2020-03-02 13:01 ` David Marchand
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 02/16] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, stable, Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Fix the 10G port negotiation issue with another 10G/non-10G port.
Initialize the port link speed.
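
Illustrative only, not part of the patch: a minimal application-side
sketch, assuming a port that is already configured and started, of
where the now-populated link speed becomes visible. print_link() is a
hypothetical helper name.

#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical helper: report the link state of a started port.
 * With this fix the PMD fills in link_speed when the link comes up,
 * so it is no longer left at 0.
 */
static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	memset(&link, 0, sizeof(link));
	rte_eth_link_get(port_id, &link);
	if (link.link_status)
		printf("port %u: link up, %u Mbps\n",
		       port_id, link.link_speed);
	else
		printf("port %u: link down\n", port_id);
}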

Fixes: c5acbb5ea20e ("net/dpaa2: support link status event")
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 2cde55e7c..4fc550a88 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -553,9 +553,6 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
 		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
 
-	/* update the current status */
-	dpaa2_dev_link_update(dev, 0);
-
 	return 0;
 }
 
@@ -1757,6 +1754,7 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	/* changing tx burst function to start enqueues */
 	dev->tx_pkt_burst = dpaa2_dev_tx;
 	dev->data->dev_link.link_status = state.up;
+	dev->data->dev_link.link_speed = state.rate;
 
 	if (state.up)
 		DPAA2_PMD_INFO("Port %d Link is Up", dev->data->port_id);
-- 
2.17.1



* [dpdk-dev] [PATCH 02/16] bus/fslmc: fix dereferencing null pointer
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
  2020-03-02 13:01 ` David Marchand
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 01/16] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 03/16] bus/fslmc: combine thread specific variables Hemant Agrawal
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, stable, Apeksha Gupta

From: Apeksha Gupta <apeksha.gupta@nxp.com>

This patch fixes a null pointer dereferencing issue reported by
NXP-internal Coverity.

Fixes: 6fef517e17cf ("bus/fslmc: add qman HW fq query count API")
Cc: stable@dpdk.org

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 0bb2ce880..34374ae4b 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -20,26 +20,27 @@ struct qbman_fq_query_desc {
 	uint8_t verb;
 	uint8_t reserved[3];
 	uint32_t fqid;
-	uint8_t reserved2[57];
+	uint8_t reserved2[56];
 };
 
 int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
 			 struct qbman_fq_query_np_rslt *r)
 {
 	struct qbman_fq_query_desc *p;
+	struct qbman_fq_query_np_rslt *var;
 
 	p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->fqid = fqid;
-	*r = *(struct qbman_fq_query_np_rslt *)qbman_swp_mc_complete(s, p,
-						QBMAN_FQ_QUERY_NP);
-	if (!r) {
+	var = qbman_swp_mc_complete(s, p, QBMAN_FQ_QUERY_NP);
+	if (!var) {
 		pr_err("qbman: Query FQID %d NP fields failed, no response\n",
 		       fqid);
 		return -EIO;
 	}
+	*r = *var;
 
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY_NP);
-- 
2.17.1



* [dpdk-dev] [PATCH 03/16] bus/fslmc: combine thread specific variables
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (2 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 02/16] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 04/16] bus/fslmc: rework portal allocation to a per thread basis Hemant Agrawal
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

This is to reduce the thread-local storage usage.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/bus/fslmc/fslmc_bus.c            |  2 --
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h |  7 +++++++
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h  |  8 ++++++++
 drivers/bus/fslmc/rte_fslmc.h            | 18 ------------------
 4 files changed, 15 insertions(+), 20 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index b3e964aa9..1f657822f 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -37,8 +37,6 @@ rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type)
 	return rte_fslmc_bus.device_count[device_type];
 }
 
-RTE_DEFINE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
-
 static void
 cleanup_fslmc_device_list(void)
 {
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 2829c9380..9da4af782 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -28,6 +28,13 @@ RTE_DECLARE_PER_LCORE(struct dpaa2_io_portal_t, _dpaa2_io);
 #define DPAA2_PER_LCORE_ETHRX_DPIO RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
 #define DPAA2_PER_LCORE_ETHRX_PORTAL DPAA2_PER_LCORE_ETHRX_DPIO->sw_portal
 
+#define DPAA2_PER_LCORE_DQRR_SIZE \
+	RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.dqrr_size
+#define DPAA2_PER_LCORE_DQRR_HELD \
+	RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.dqrr_held
+#define DPAA2_PER_LCORE_DQRR_MBUF(i) \
+	RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.mbuf[i]
+
 /* Variable to store DPAA2 DQRR size */
 extern uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index ab2b213f8..bde1441f4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -87,6 +87,13 @@ struct eqresp_metadata {
 	struct rte_mempool *mp;
 };
 
+#define DPAA2_PORTAL_DEQUEUE_DEPTH	32
+struct dpaa2_portal_dqrr {
+	struct rte_mbuf *mbuf[DPAA2_PORTAL_DEQUEUE_DEPTH];
+	uint64_t dqrr_held;
+	uint8_t dqrr_size;
+};
+
 struct dpaa2_dpio_dev {
 	TAILQ_ENTRY(dpaa2_dpio_dev) next;
 		/**< Pointer to Next device instance */
@@ -112,6 +119,7 @@ struct dpaa2_dpio_dev {
 	struct rte_intr_handle intr_handle; /* Interrupt related info */
 	int32_t	epoll_fd; /**< File descriptor created for interrupt polling */
 	int32_t hw_id; /**< An unique ID of this DPIO device instance */
+	struct dpaa2_portal_dqrr dpaa2_held_bufs;
 };
 
 struct dpaa2_dpbp_dev {
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 96ba8dc25..a59f0077e 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -137,24 +137,6 @@ struct rte_fslmc_bus {
 				/**< Count of all devices scanned */
 };
 
-#define DPAA2_PORTAL_DEQUEUE_DEPTH	32
-
-/* Create storage for dqrr entries per lcore */
-struct dpaa2_portal_dqrr {
-	struct rte_mbuf *mbuf[DPAA2_PORTAL_DEQUEUE_DEPTH];
-	uint64_t dqrr_held;
-	uint8_t dqrr_size;
-};
-
-RTE_DECLARE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
-
-#define DPAA2_PER_LCORE_DQRR_SIZE \
-	RTE_PER_LCORE(dpaa2_held_bufs).dqrr_size
-#define DPAA2_PER_LCORE_DQRR_HELD \
-	RTE_PER_LCORE(dpaa2_held_bufs).dqrr_held
-#define DPAA2_PER_LCORE_DQRR_MBUF(i) \
-	RTE_PER_LCORE(dpaa2_held_bufs).mbuf[i]
-
 /**
  * Register a DPAA2 driver.
  *
-- 
2.17.1



* [dpdk-dev] [PATCH 04/16] bus/fslmc: rework portal allocation to a per thread basis
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (3 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 03/16] bus/fslmc: combine thread specific variables Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 05/16] bus/fslmc: support handle portal alloc failure Hemant Agrawal
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

This patch reworks the portal allocation, which was previously
done on a per-lcore basis, to a per-thread basis.
Users can now also create their own threads and use DPAA2 portals
for packet I/O.
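
Illustrative only, not part of the patch: a minimal sketch of what this
enables, assuming EAL is initialized and the DPAA2 port/queue is already
started. The thread pins itself to a single core (core 1 is an arbitrary
example) so that the bus can still pick a stashing destination for the
portal it gets on its first I/O call.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Application-created (non-EAL) thread polling a DPAA2 rx queue.
 * A software portal is allocated for this thread when it first does I/O.
 */
static void *
rx_thread(void *arg)
{
	uint16_t port_id = *(uint16_t *)arg;
	struct rte_mbuf *pkts[32];
	cpu_set_t cpus;

	CPU_ZERO(&cpus);
	CPU_SET(1, &cpus);	/* affine to exactly one core */
	pthread_setaffinity_np(pthread_self(), sizeof(cpus), &cpus);

	for (;;) {
		uint16_t i, nb;

		nb = rte_eth_rx_burst(port_id, 0, pkts, 32);
		for (i = 0; i < nb; i++)
			rte_pktmbuf_free(pkts[i]);	/* placeholder */
	}
	return NULL;
}

Such a thread would typically be started with pthread_create() from the
main lcore after rte_eth_dev_start().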

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/Makefile               |   1 +
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 210 ++++++++++++-----------
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h |   3 -
 3 files changed, 114 insertions(+), 100 deletions(-)

diff --git a/drivers/bus/fslmc/Makefile b/drivers/bus/fslmc/Makefile
index 6d2286088..b38305fb4 100644
--- a/drivers/bus/fslmc/Makefile
+++ b/drivers/bus/fslmc/Makefile
@@ -18,6 +18,7 @@ CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
 CFLAGS += -I$(RTE_SDK)/drivers/common/dpaax
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common
+LDLIBS += -lpthread
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev
 LDLIBS += -lrte_common_dpaax
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 739ce434b..e765a382f 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -62,6 +62,9 @@ uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
 uint8_t dpaa2_eqcr_size;
 
+/* Variable to hold the portal_key, once created.*/
+static pthread_key_t dpaa2_portal_key;
+
 /*Stashing Macros default for LS208x*/
 static int dpaa2_core_cluster_base = 0x04;
 static int dpaa2_cluster_sz = 2;
@@ -87,6 +90,32 @@ static int dpaa2_cluster_sz = 2;
  * Cluster 4 (ID = x07) : CPU14, CPU15;
  */
 
+static int
+dpaa2_get_core_id(void)
+{
+	rte_cpuset_t cpuset;
+	int i, ret, cpu_id = -1;
+
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+		&cpuset);
+	if (ret) {
+		DPAA2_BUS_ERR("pthread_getaffinity_np() failed");
+		return ret;
+	}
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (CPU_ISSET(i, &cpuset)) {
+			if (cpu_id == -1)
+				cpu_id = i;
+			else
+				/* Multiple cpus are affined */
+				return -1;
+		}
+	}
+
+	return cpu_id;
+}
+
 static int
 dpaa2_core_cluster_sdest(int cpu_id)
 {
@@ -97,7 +126,7 @@ dpaa2_core_cluster_sdest(int cpu_id)
 
 #ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
 static void
-dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
+dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int cpu_id)
 {
 #define STRING_LEN	28
 #define COMMAND_LEN	50
@@ -130,7 +159,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
 		return;
 	}
 
-	cpu_mask = cpu_mask << dpaa2_cpu[lcoreid];
+	cpu_mask = cpu_mask << dpaa2_cpu[cpu_id];
 	snprintf(command, COMMAND_LEN, "echo %X > /proc/irq/%s/smp_affinity",
 		 cpu_mask, token);
 	ret = system(command);
@@ -144,7 +173,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
 	fclose(file);
 }
 
-static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
+static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
 {
 	struct epoll_event epoll_ev;
 	int eventfd, dpio_epoll_fd, ret;
@@ -181,36 +210,42 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
 	}
 	dpio_dev->epoll_fd = dpio_epoll_fd;
 
-	dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id, lcoreid);
+	dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id, cpu_id);
 
 	return 0;
 }
+
+static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
+{
+	int ret;
+
+	ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+	if (ret)
+		DPAA2_BUS_ERR("DPIO interrupt disable failed");
+
+	close(dpio_dev->epoll_fd);
+}
 #endif
 
 static int
-dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
+dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
 {
 	int sdest, ret;
 	int cpu_id;
 
 	/* Set the Stashing Destination */
-	if (lcoreid < 0) {
-		lcoreid = rte_get_master_lcore();
-		if (lcoreid < 0) {
-			DPAA2_BUS_ERR("Getting CPU Index failed");
-			return -1;
-		}
+	cpu_id = dpaa2_get_core_id();
+	if (cpu_id < 0) {
+		DPAA2_BUS_ERR("Thread not affined to a single core");
+		return -1;
 	}
 
-	cpu_id = dpaa2_cpu[lcoreid];
-
 	/* Set the STASH Destination depending on Current CPU ID.
 	 * Valid values of SDEST are 4,5,6,7. Where,
 	 */
-
 	sdest = dpaa2_core_cluster_sdest(cpu_id);
-	DPAA2_BUS_DEBUG("Portal= %d  CPU= %u lcore id =%u SDEST= %d",
-			dpio_dev->index, cpu_id, lcoreid, sdest);
+	DPAA2_BUS_DEBUG("Portal= %d  CPU= %u SDEST= %d",
+			dpio_dev->index, cpu_id, sdest);
 
 	ret = dpio_set_stashing_destination(dpio_dev->dpio, CMD_PRI_LOW,
 					    dpio_dev->token, sdest);
@@ -220,7 +255,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
 	}
 
 #ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
-	if (dpaa2_dpio_intr_init(dpio_dev, lcoreid)) {
+	if (dpaa2_dpio_intr_init(dpio_dev, cpu_id)) {
 		DPAA2_BUS_ERR("Interrupt registration failed for dpio");
 		return -1;
 	}
@@ -229,7 +264,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
 	return 0;
 }
 
-static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
+static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
 	int ret;
@@ -245,108 +280,74 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
 	DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
 			dpio_dev, dpio_dev->index, syscall(SYS_gettid));
 
-	ret = dpaa2_configure_stashing(dpio_dev, lcoreid);
-	if (ret)
+	ret = dpaa2_configure_stashing(dpio_dev);
+	if (ret) {
 		DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+		return NULL;
+	}
 
 	return dpio_dev;
 }
 
+static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
+{
+#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
+	dpaa2_dpio_intr_deinit(dpio_dev);
+#endif
+	if (dpio_dev)
+		rte_atomic16_clear(&dpio_dev->ref_count);
+}
+
 int
 dpaa2_affine_qbman_swp(void)
 {
-	unsigned int lcore_id = rte_lcore_id();
+	struct dpaa2_dpio_dev *dpio_dev;
 	uint64_t tid = syscall(SYS_gettid);
 
-	if (lcore_id == LCORE_ID_ANY)
-		lcore_id = rte_get_master_lcore();
-	/* if the core id is not supported */
-	else if (lcore_id >= RTE_MAX_LCORE)
-		return -1;
-
-	if (dpaa2_io_portal[lcore_id].dpio_dev) {
-		DPAA2_BUS_DP_INFO("DPAA Portal=%p (%d) is being shared"
-			    " between thread %" PRIu64 " and current "
-			    "%" PRIu64 "\n",
-			    dpaa2_io_portal[lcore_id].dpio_dev,
-			    dpaa2_io_portal[lcore_id].dpio_dev->index,
-			    dpaa2_io_portal[lcore_id].net_tid,
-			    tid);
-		RTE_PER_LCORE(_dpaa2_io).dpio_dev
-			= dpaa2_io_portal[lcore_id].dpio_dev;
-		rte_atomic16_inc(&dpaa2_io_portal
-				 [lcore_id].dpio_dev->ref_count);
-		dpaa2_io_portal[lcore_id].net_tid = tid;
-
-		DPAA2_BUS_DP_DEBUG("Old Portal=%p (%d) affined thread - "
-				   "%" PRIu64 "\n",
-			    dpaa2_io_portal[lcore_id].dpio_dev,
-			    dpaa2_io_portal[lcore_id].dpio_dev->index,
-			    tid);
-		return 0;
-	}
-
 	/* Populate the dpaa2_io_portal structure */
-	dpaa2_io_portal[lcore_id].dpio_dev = dpaa2_get_qbman_swp(lcore_id);
-
-	if (dpaa2_io_portal[lcore_id].dpio_dev) {
-		RTE_PER_LCORE(_dpaa2_io).dpio_dev
-			= dpaa2_io_portal[lcore_id].dpio_dev;
-		dpaa2_io_portal[lcore_id].net_tid = tid;
+	if (!RTE_PER_LCORE(_dpaa2_io).dpio_dev) {
+		dpio_dev = dpaa2_get_qbman_swp();
+		if (!dpio_dev) {
+			DPAA2_BUS_ERR("No software portal resource left");
+			return -1;
+		}
+		RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
 
-		return 0;
-	} else {
-		return -1;
+		DPAA2_BUS_INFO(
+			"DPAA Portal=%p (%d) is affined to thread %" PRIu64,
+			dpio_dev, dpio_dev->index, tid);
 	}
+	return 0;
 }
 
 int
 dpaa2_affine_qbman_ethrx_swp(void)
 {
-	unsigned int lcore_id = rte_lcore_id();
+	struct dpaa2_dpio_dev *dpio_dev;
 	uint64_t tid = syscall(SYS_gettid);
 
-	if (lcore_id == LCORE_ID_ANY)
-		lcore_id = rte_get_master_lcore();
-	/* if the core id is not supported */
-	else if (lcore_id >= RTE_MAX_LCORE)
-		return -1;
+	/* Populate the dpaa2_io_portal structure */
+	if (!RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev) {
+		dpio_dev = dpaa2_get_qbman_swp();
+		if (!dpio_dev) {
+			DPAA2_BUS_ERR("No software portal resource left");
+			return -1;
+		}
+		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
 
-	if (dpaa2_io_portal[lcore_id].ethrx_dpio_dev) {
-		DPAA2_BUS_DP_INFO(
-			"DPAA Portal=%p (%d) is being shared between thread"
-			" %" PRIu64 " and current %" PRIu64 "\n",
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev,
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev->index,
-			dpaa2_io_portal[lcore_id].sec_tid,
-			tid);
-		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
-			= dpaa2_io_portal[lcore_id].ethrx_dpio_dev;
-		rte_atomic16_inc(&dpaa2_io_portal
-				 [lcore_id].ethrx_dpio_dev->ref_count);
-		dpaa2_io_portal[lcore_id].sec_tid = tid;
-
-		DPAA2_BUS_DP_DEBUG(
-			"Old Portal=%p (%d) affined thread"
-			" - %" PRIu64 "\n",
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev,
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev->index,
-			tid);
-		return 0;
+		DPAA2_BUS_INFO(
+			"DPAA Portal=%p (%d) is affined for eth rx to thread %"
+			PRIu64, dpio_dev, dpio_dev->index, tid);
 	}
+	return 0;
+}
 
-	/* Populate the dpaa2_io_portal structure */
-	dpaa2_io_portal[lcore_id].ethrx_dpio_dev =
-		dpaa2_get_qbman_swp(lcore_id);
-
-	if (dpaa2_io_portal[lcore_id].ethrx_dpio_dev) {
-		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
-			= dpaa2_io_portal[lcore_id].ethrx_dpio_dev;
-		dpaa2_io_portal[lcore_id].sec_tid = tid;
-		return 0;
-	} else {
-		return -1;
-	}
+static void __attribute__((destructor(102))) dpaa2_portal_finish(void *arg)
+{
+	RTE_SET_USED(arg);
+
+	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).dpio_dev);
+	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev);
 }
 
 /*
@@ -398,6 +399,7 @@ dpaa2_create_dpio_device(int vdev_fd,
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info)};
 	struct qbman_swp_desc p_des;
 	struct dpio_attr attr;
+	int ret;
 	static int check_lcore_cpuset;
 
 	if (obj_info->num_regions < NUM_DPIO_REGIONS) {
@@ -547,12 +549,26 @@ dpaa2_create_dpio_device(int vdev_fd,
 
 	TAILQ_INSERT_TAIL(&dpio_dev_list, dpio_dev, next);
 
+	if (!dpaa2_portal_key) {
+		/* create the key, supplying a function that'll be invoked
+		 * when a portal affined thread will be deleted.
+		 */
+		ret = pthread_key_create(&dpaa2_portal_key,
+					 dpaa2_portal_finish);
+		if (ret) {
+			DPAA2_BUS_DEBUG("Unable to create pthread key (%d)",
+					ret);
+			goto err;
+		}
+	}
+
 	return 0;
 
 err:
 	if (dpio_dev->dpio) {
 		dpio_disable(dpio_dev->dpio, CMD_PRI_LOW, dpio_dev->token);
 		dpio_close(dpio_dev->dpio, CMD_PRI_LOW,  dpio_dev->token);
+		rte_free(dpio_dev->eqresp);
 		rte_free(dpio_dev->dpio);
 	}
 
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 9da4af782..8af0474a1 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -14,9 +14,6 @@
 struct dpaa2_io_portal_t {
 	struct dpaa2_dpio_dev *dpio_dev;
 	struct dpaa2_dpio_dev *ethrx_dpio_dev;
-	uint64_t net_tid;
-	uint64_t sec_tid;
-	void *eventdev;
 };
 
 /*! Global per thread DPIO portal */
-- 
2.17.1



* [dpdk-dev] [PATCH 05/16] bus/fslmc: support handle portal alloc failure
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (4 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 04/16] bus/fslmc: rework portal allocation to a per thread basis Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-09 17:00   ` Ferruh Yigit
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 06/16] bus/fslmc: limit pthread destructor called for dpaa2 only Hemant Agrawal
                   ` (11 subsequent siblings)
  17 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Nipun Gupta, Hemant Agrawal

Add error handling for portal allocation failure.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 28 ++++++++++++++----------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index e765a382f..1a1453ea3 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -264,6 +264,16 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
 	return 0;
 }
 
+static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
+{
+	if (dpio_dev) {
+#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
+		dpaa2_dpio_intr_deinit(dpio_dev);
+#endif
+		rte_atomic16_clear(&dpio_dev->ref_count);
+	}
+}
+
 static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
@@ -274,8 +284,10 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 		if (dpio_dev && rte_atomic16_test_and_set(&dpio_dev->ref_count))
 			break;
 	}
-	if (!dpio_dev)
+	if (!dpio_dev) {
+		DPAA2_BUS_ERR("No software portal resource left");
 		return NULL;
+	}
 
 	DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
 			dpio_dev, dpio_dev->index, syscall(SYS_gettid));
@@ -283,21 +295,13 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 	ret = dpaa2_configure_stashing(dpio_dev);
 	if (ret) {
 		DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+		rte_atomic16_clear(&dpio_dev->ref_count);
 		return NULL;
 	}
 
 	return dpio_dev;
 }
 
-static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
-{
-#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
-	dpaa2_dpio_intr_deinit(dpio_dev);
-#endif
-	if (dpio_dev)
-		rte_atomic16_clear(&dpio_dev->ref_count);
-}
-
 int
 dpaa2_affine_qbman_swp(void)
 {
@@ -308,7 +312,7 @@ dpaa2_affine_qbman_swp(void)
 	if (!RTE_PER_LCORE(_dpaa2_io).dpio_dev) {
 		dpio_dev = dpaa2_get_qbman_swp();
 		if (!dpio_dev) {
-			DPAA2_BUS_ERR("No software portal resource left");
+			DPAA2_BUS_ERR("Error in software portal allocation");
 			return -1;
 		}
 		RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
@@ -330,7 +334,7 @@ dpaa2_affine_qbman_ethrx_swp(void)
 	if (!RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev) {
 		dpio_dev = dpaa2_get_qbman_swp();
 		if (!dpio_dev) {
-			DPAA2_BUS_ERR("No software portal resource left");
+			DPAA2_BUS_ERR("Error in software portal allocation");
 			return -1;
 		}
 		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
-- 
2.17.1



* [dpdk-dev] [PATCH 06/16] bus/fslmc: limit pthread destructor called for dpaa2 only
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (5 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 05/16] bus/fslmc: support handle portal alloc failure Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 07/16] bus/fslmc: support portal migration Hemant Agrawal
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

The destructor was being called for non-DPAA2 cases as well.
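
For reference, a standalone sketch of the generic pthread key/destructor
mechanism this change switches to (plain libc code, not NXP code): the
destructor passed to pthread_key_create() runs at thread exit only for
threads that stored a non-NULL value with pthread_setspecific(), which is
what confines the cleanup to DPAA2 portal threads.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_key_t key;

/* Runs at thread exit, but only if that thread set a non-NULL value. */
static void
cleanup(void *value)
{
	printf("destructor ran for: %s\n", (const char *)value);
	free(value);
}

static void *
worker(void *arg)
{
	if (arg != NULL)
		pthread_setspecific(key, strdup("portal thread"));
	/* a thread that never sets the key gets no destructor call */
	return NULL;
}

int
main(void)
{
	pthread_t a, b;

	pthread_key_create(&key, cleanup);
	pthread_create(&a, NULL, worker, (void *)1);	/* sets the key */
	pthread_create(&b, NULL, worker, NULL);		/* does not */
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}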

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 1a1453ea3..054d45306 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -299,6 +299,13 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 		return NULL;
 	}
 
+	ret = pthread_setspecific(dpaa2_portal_key, (void *)dpio_dev);
+	if (ret) {
+		DPAA2_BUS_ERR("pthread_setspecific failed with ret: %d", ret);
+		dpaa2_put_qbman_swp(dpio_dev);
+		return NULL;
+	}
+
 	return dpio_dev;
 }
 
@@ -346,12 +353,14 @@ dpaa2_affine_qbman_ethrx_swp(void)
 	return 0;
 }
 
-static void __attribute__((destructor(102))) dpaa2_portal_finish(void *arg)
+static void dpaa2_portal_finish(void *arg)
 {
 	RTE_SET_USED(arg);
 
 	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).dpio_dev);
 	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev);
+
+	pthread_setspecific(dpaa2_portal_key, NULL);
 }
 
 /*
-- 
2.17.1



* [dpdk-dev] [PATCH 07/16] bus/fslmc: support portal migration
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (6 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 06/16] bus/fslmc: limit pthread destructor called for dpaa2 only Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-03 17:43   ` Ferruh Yigit
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 08/16] drivers: enhance portal allocation failure log Hemant Agrawal
                   ` (9 subsequent siblings)
  17 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

This patch adds support for portal migration by disabling stashing
for the portals used in non-affined threads, or in threads affined
to multiple cores.
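
Illustrative only: the user-visible effect is that a polling thread no
longer has to be pinned. A sketch assuming an initialized port, where the
thread keeps its default (multi-core) affinity and the bus falls back to
cache-inhibited portal access for it; contrast with the pinned-thread
sketch under patch 04/16.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Thread left free to migrate across cores: note the absence of any
 * pthread_setaffinity_np() call. The bus detects the multi-core affinity
 * and disables stashing for this thread's portal where required.
 */
static void *
floating_rx_thread(void *arg)
{
	uint16_t port_id = *(uint16_t *)arg;
	struct rte_mbuf *pkts[32];

	for (;;) {
		uint16_t i, nb;

		nb = rte_eth_rx_burst(port_id, 0, pkts, 32);
		for (i = 0; i < nb; i++)
			rte_pktmbuf_free(pkts[i]);	/* placeholder */
	}
	return NULL;
}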

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  83 +--
 .../fslmc/qbman/include/fsl_qbman_portal.h    |   8 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        | 554 +++++++++++++++++-
 drivers/bus/fslmc/qbman/qbman_portal.h        |  19 +-
 drivers/bus/fslmc/qbman/qbman_sys.h           | 135 ++++-
 5 files changed, 717 insertions(+), 82 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 054d45306..2102d2981 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -53,10 +53,6 @@ static uint32_t io_space_count;
 /* Variable to store DPAA2 platform type */
 uint32_t dpaa2_svr_family;
 
-/* Physical core id for lcores running on dpaa2. */
-/* DPAA2 only support 1 lcore to 1 phy cpu mapping */
-static unsigned int dpaa2_cpu[RTE_MAX_LCORE];
-
 /* Variable to store DPAA2 DQRR size */
 uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
@@ -159,7 +155,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int cpu_id)
 		return;
 	}
 
-	cpu_mask = cpu_mask << dpaa2_cpu[cpu_id];
+	cpu_mask = cpu_mask << cpu_id;
 	snprintf(command, COMMAND_LEN, "echo %X > /proc/irq/%s/smp_affinity",
 		 cpu_mask, token);
 	ret = system(command);
@@ -228,17 +224,9 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
 #endif
 
 static int
-dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
+dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
 {
 	int sdest, ret;
-	int cpu_id;
-
-	/* Set the Stashing Destination */
-	cpu_id = dpaa2_get_core_id();
-	if (cpu_id < 0) {
-		DPAA2_BUS_ERR("Thread not affined to a single core");
-		return -1;
-	}
 
 	/* Set the STASH Destination depending on Current CPU ID.
 	 * Valid values of SDEST are 4,5,6,7. Where,
@@ -277,6 +265,7 @@ static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
 static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
+	int cpu_id;
 	int ret;
 
 	/* Get DPIO dev handle from list using index */
@@ -292,11 +281,19 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 	DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
 			dpio_dev, dpio_dev->index, syscall(SYS_gettid));
 
-	ret = dpaa2_configure_stashing(dpio_dev);
-	if (ret) {
-		DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
-		rte_atomic16_clear(&dpio_dev->ref_count);
-		return NULL;
+	/* Set the Stashing Destination */
+	cpu_id = dpaa2_get_core_id();
+	if (cpu_id < 0) {
+		DPAA2_BUS_WARN("Thread not affined to a single core");
+		if (dpaa2_svr_family != SVR_LX2160A)
+			qbman_swp_update(dpio_dev->sw_portal, 1);
+	} else {
+		ret = dpaa2_configure_stashing(dpio_dev, cpu_id);
+		if (ret) {
+			DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+			rte_atomic16_clear(&dpio_dev->ref_count);
+			return NULL;
+		}
 	}
 
 	ret = pthread_setspecific(dpaa2_portal_key, (void *)dpio_dev);
@@ -363,46 +360,6 @@ static void dpaa2_portal_finish(void *arg)
 	pthread_setspecific(dpaa2_portal_key, NULL);
 }
 
-/*
- * This checks for not supported lcore mappings as well as get the physical
- * cpuid for the lcore.
- * one lcore can only map to 1 cpu i.e. 1@10-14 not supported.
- * one cpu can be mapped to more than one lcores.
- */
-static int
-dpaa2_check_lcore_cpuset(void)
-{
-	unsigned int lcore_id, i;
-	int ret = 0;
-
-	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
-		dpaa2_cpu[lcore_id] = 0xffffffff;
-
-	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
-		rte_cpuset_t cpuset = rte_lcore_cpuset(lcore_id);
-
-		for (i = 0; i < CPU_SETSIZE; i++) {
-			if (!CPU_ISSET(i, &cpuset))
-				continue;
-			if (i >= RTE_MAX_LCORE) {
-				DPAA2_BUS_ERR("ERR:lcore map to core %u (>= %u) not supported",
-					i, RTE_MAX_LCORE);
-				ret = -1;
-				continue;
-			}
-			RTE_LOG(DEBUG, EAL, "lcore id = %u cpu=%u\n",
-				lcore_id, i);
-			if (dpaa2_cpu[lcore_id] != 0xffffffff) {
-				DPAA2_BUS_ERR("ERR:lcore map to multi-cpu not supported");
-				ret = -1;
-				continue;
-			}
-			dpaa2_cpu[lcore_id] = i;
-		}
-	}
-	return ret;
-}
-
 static int
 dpaa2_create_dpio_device(int vdev_fd,
 			 struct vfio_device_info *obj_info,
@@ -413,7 +370,6 @@ dpaa2_create_dpio_device(int vdev_fd,
 	struct qbman_swp_desc p_des;
 	struct dpio_attr attr;
 	int ret;
-	static int check_lcore_cpuset;
 
 	if (obj_info->num_regions < NUM_DPIO_REGIONS) {
 		DPAA2_BUS_ERR("Not sufficient number of DPIO regions");
@@ -433,13 +389,6 @@ dpaa2_create_dpio_device(int vdev_fd,
 	/* Using single portal  for all devices */
 	dpio_dev->mc_portal = rte_mcp_ptr_list[MC_PORTAL_INDEX];
 
-	if (!check_lcore_cpuset) {
-		check_lcore_cpuset = 1;
-
-		if (dpaa2_check_lcore_cpuset() < 0)
-			goto err;
-	}
-
 	dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
 				     RTE_CACHE_LINE_SIZE);
 	if (!dpio_dev->dpio) {
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
index 88f0a9968..0d6364d99 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014 Freescale Semiconductor, Inc.
- * Copyright 2015-2019 NXP
+ * Copyright 2015-2020 NXP
  *
  */
 #ifndef _FSL_QBMAN_PORTAL_H
@@ -43,6 +43,12 @@ extern uint32_t dpaa2_svr_family;
  */
 struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d);
 
+/**
+ * qbman_swp_update() - Update portal cacheability attributes.
+ * @p: the given qbman swp portal
+ */
+int qbman_swp_update(struct qbman_swp *p, int stash_off);
+
 /**
  * qbman_swp_finish() - Create and destroy a functional object representing
  * the given QBMan portal descriptor.
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index d4223bdc8..a06b88dd2 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2020 NXP
  *
  */
 
@@ -82,6 +82,10 @@ qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
 static int
+qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd);
+static int
 qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
@@ -99,6 +103,12 @@ qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
 		uint32_t *flags,
 		int num_frames);
 static int
+qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd,
+		uint32_t *flags,
+		int num_frames);
+static int
 qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
@@ -118,6 +128,12 @@ qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
 		uint32_t *flags,
 		int num_frames);
 static int
+qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		struct qbman_fd **fd,
+		uint32_t *flags,
+		int num_frames);
+static int
 qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		struct qbman_fd **fd,
@@ -135,6 +151,11 @@ qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
 		const struct qbman_fd *fd,
 		int num_frames);
 static int
+qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd,
+		int num_frames);
+static int
 qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
@@ -143,9 +164,12 @@ qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
 static int
 qbman_swp_pull_direct(struct qbman_swp *s, struct qbman_pull_desc *d);
 static int
+qbman_swp_pull_cinh_direct(struct qbman_swp *s, struct qbman_pull_desc *d);
+static int
 qbman_swp_pull_mem_back(struct qbman_swp *s, struct qbman_pull_desc *d);
 
 const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s);
+const struct qbman_result *qbman_swp_dqrr_next_cinh_direct(struct qbman_swp *s);
 const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s);
 
 static int
@@ -153,6 +177,10 @@ qbman_swp_release_direct(struct qbman_swp *s,
 		const struct qbman_release_desc *d,
 		const uint64_t *buffers, unsigned int num_buffers);
 static int
+qbman_swp_release_cinh_direct(struct qbman_swp *s,
+		const struct qbman_release_desc *d,
+		const uint64_t *buffers, unsigned int num_buffers);
+static int
 qbman_swp_release_mem_back(struct qbman_swp *s,
 		const struct qbman_release_desc *d,
 		const uint64_t *buffers, unsigned int num_buffers);
@@ -327,6 +355,28 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
 	return p;
 }
 
+int qbman_swp_update(struct qbman_swp *p, int stash_off)
+{
+	const struct qbman_swp_desc *d = &p->desc;
+	struct qbman_swp_sys *s = &p->sys;
+	int ret;
+
+	/* Nothing needs to be done for QBMAN rev > 5000 with fast access */
+	if ((qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+			&& (d->cena_access_mode == qman_cena_fastest_access))
+		return 0;
+
+	ret = qbman_swp_sys_update(s, d, p->dqrr.dqrr_size, stash_off);
+	if (ret) {
+		pr_err("qbman_swp_sys_init() failed %d\n", ret);
+		return ret;
+	}
+
+	p->stash_off = stash_off;
+
+	return 0;
+}
+
 void qbman_swp_finish(struct qbman_swp *p)
 {
 #ifdef QBMAN_CHECKING
@@ -462,6 +512,27 @@ void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint8_t cmd_verb)
 #endif
 }
 
+void qbman_swp_mc_submit_cinh(struct qbman_swp *p, void *cmd, uint8_t cmd_verb)
+{
+	uint8_t *v = cmd;
+#ifdef QBMAN_CHECKING
+	QBMAN_BUG_ON(!(p->mc.check != swp_mc_can_submit));
+#endif
+	/* TBD: "|=" is going to hurt performance. Need to move as many fields
+	 * out of word zero, and for those that remain, the "OR" needs to occur
+	 * at the caller side. This debug check helps to catch cases where the
+	 * caller wants to OR but has forgotten to do so.
+	 */
+	QBMAN_BUG_ON((*v & cmd_verb) != *v);
+	dma_wmb();
+	*v = cmd_verb | p->mc.valid_bit;
+	qbman_cinh_write_complete(&p->sys, QBMAN_CENA_SWP_CR, cmd);
+	clean(cmd);
+#ifdef QBMAN_CHECKING
+	p->mc.check = swp_mc_can_poll;
+#endif
+}
+
 void *qbman_swp_mc_result(struct qbman_swp *p)
 {
 	uint32_t *ret, verb;
@@ -500,6 +571,27 @@ void *qbman_swp_mc_result(struct qbman_swp *p)
 	return ret;
 }
 
+void *qbman_swp_mc_result_cinh(struct qbman_swp *p)
+{
+	uint32_t *ret, verb;
+#ifdef QBMAN_CHECKING
+	QBMAN_BUG_ON(p->mc.check != swp_mc_can_poll);
+#endif
+	ret = qbman_cinh_read_shadow(&p->sys,
+			      QBMAN_CENA_SWP_RR(p->mc.valid_bit));
+	/* Remove the valid-bit -
+	 * command completed iff the rest is non-zero
+	 */
+	verb = ret[0] & ~QB_VALID_BIT;
+	if (!verb)
+		return NULL;
+	p->mc.valid_bit ^= QB_VALID_BIT;
+#ifdef QBMAN_CHECKING
+	p->mc.check = swp_mc_can_start;
+#endif
+	return ret;
+}
+
 /***********/
 /* Enqueue */
 /***********/
@@ -640,6 +732,16 @@ static inline void qbman_write_eqcr_am_rt_register(struct qbman_swp *p,
 				     QMAN_RT_MODE);
 }
 
+static void memcpy_byte_by_byte(void *to, const void *from, size_t n)
+{
+	const uint8_t *src = from;
+	volatile uint8_t *dest = to;
+	size_t i;
+
+	for (i = 0; i < n; i++)
+		dest[i] = src[i];
+}
+
 
 static int qbman_swp_enqueue_array_mode_direct(struct qbman_swp *s,
 					       const struct qbman_eq_desc *d,
@@ -754,7 +856,7 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
 			return -EBUSY;
 	}
 
-	p = qbman_cena_write_start_wo_shadow(&s->sys,
+	p = qbman_cinh_write_start_wo_shadow(&s->sys,
 			QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
 	memcpy(&p[1], &cl[1], 28);
 	memcpy(&p[8], fd, sizeof(*fd));
@@ -762,8 +864,44 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
 
 	/* Set the verb byte, have to substitute in the valid-bit */
 	p[0] = cl[0] | s->eqcr.pi_vb;
-	qbman_cena_write_complete_wo_shadow(&s->sys,
+	s->eqcr.pi++;
+	s->eqcr.pi &= full_mask;
+	s->eqcr.available--;
+	if (!(s->eqcr.pi & half_mask))
+		s->eqcr.pi_vb ^= QB_VALID_BIT;
+
+	return 0;
+}
+
+static int qbman_swp_enqueue_ring_mode_cinh_direct(
+		struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd)
+{
+	uint32_t *p;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t eqcr_ci, full_mask, half_mask;
+
+	half_mask = (s->eqcr.pi_ci_mask>>1);
+	full_mask = s->eqcr.pi_ci_mask;
+	if (!s->eqcr.available) {
+		eqcr_ci = s->eqcr.ci;
+		s->eqcr.ci = qbman_cinh_read(&s->sys,
+				QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+				eqcr_ci, s->eqcr.ci);
+		if (!s->eqcr.available)
+			return -EBUSY;
+	}
+
+	p = qbman_cinh_write_start_wo_shadow(&s->sys,
 			QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
+	memcpy_byte_by_byte(&p[1], &cl[1], 28);
+	memcpy_byte_by_byte(&p[8], fd, sizeof(*fd));
+	lwsync();
+
+	/* Set the verb byte, have to substitute in the valid-bit */
+	p[0] = cl[0] | s->eqcr.pi_vb;
 	s->eqcr.pi++;
 	s->eqcr.pi &= full_mask;
 	s->eqcr.available--;
@@ -815,7 +953,10 @@ static int qbman_swp_enqueue_ring_mode(struct qbman_swp *s,
 				       const struct qbman_eq_desc *d,
 				       const struct qbman_fd *fd)
 {
-	return qbman_swp_enqueue_ring_mode_ptr(s, d, fd);
+	if (!s->stash_off)
+		return qbman_swp_enqueue_ring_mode_ptr(s, d, fd);
+	else
+		return qbman_swp_enqueue_ring_mode_cinh_direct(s, d, fd);
 }
 
 int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
@@ -966,6 +1107,67 @@ static int qbman_swp_enqueue_multiple_cinh_direct(
 	return num_enqueued;
 }
 
+static int qbman_swp_enqueue_multiple_cinh_direct(
+		struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd,
+		uint32_t *flags,
+		int num_frames)
+{
+	uint32_t *p = NULL;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+	int i, num_enqueued = 0;
+
+	half_mask = (s->eqcr.pi_ci_mask>>1);
+	full_mask = s->eqcr.pi_ci_mask;
+	if (!s->eqcr.available) {
+		eqcr_ci = s->eqcr.ci;
+		s->eqcr.ci = qbman_cinh_read(&s->sys,
+				QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+				eqcr_ci, s->eqcr.ci);
+		if (!s->eqcr.available)
+			return 0;
+	}
+
+	eqcr_pi = s->eqcr.pi;
+	num_enqueued = (s->eqcr.available < num_frames) ?
+			s->eqcr.available : num_frames;
+	s->eqcr.available -= num_enqueued;
+	/* Fill in the EQCR ring */
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		memcpy_byte_by_byte(&p[1], &cl[1], 28);
+		memcpy_byte_by_byte(&p[8], &fd[i], sizeof(*fd));
+		eqcr_pi++;
+	}
+
+	lwsync();
+
+	/* Set the verb byte, have to substitute in the valid-bit */
+	eqcr_pi = s->eqcr.pi;
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		p[0] = cl[0] | s->eqcr.pi_vb;
+		if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
+			struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+
+			d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+				((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
+		}
+		eqcr_pi++;
+		if (!(eqcr_pi & half_mask))
+			s->eqcr.pi_vb ^= QB_VALID_BIT;
+	}
+
+	s->eqcr.pi = eqcr_pi & full_mask;
+
+	return num_enqueued;
+}
+
 static int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
 					       const struct qbman_eq_desc *d,
 					       const struct qbman_fd *fd,
@@ -1025,7 +1227,12 @@ inline int qbman_swp_enqueue_multiple(struct qbman_swp *s,
 				      uint32_t *flags,
 				      int num_frames)
 {
-	return qbman_swp_enqueue_multiple_ptr(s, d, fd, flags, num_frames);
+	if (!s->stash_off)
+		return qbman_swp_enqueue_multiple_ptr(s, d, fd, flags,
+						num_frames);
+	else
+		return qbman_swp_enqueue_multiple_cinh_direct(s, d, fd, flags,
+						num_frames);
 }
 
 static int qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
@@ -1167,6 +1374,67 @@ static int qbman_swp_enqueue_multiple_fd_cinh_direct(
 	return num_enqueued;
 }
 
+static int qbman_swp_enqueue_multiple_fd_cinh_direct(
+		struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		struct qbman_fd **fd,
+		uint32_t *flags,
+		int num_frames)
+{
+	uint32_t *p = NULL;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+	int i, num_enqueued = 0;
+
+	half_mask = (s->eqcr.pi_ci_mask>>1);
+	full_mask = s->eqcr.pi_ci_mask;
+	if (!s->eqcr.available) {
+		eqcr_ci = s->eqcr.ci;
+		s->eqcr.ci = qbman_cinh_read(&s->sys,
+				QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+				eqcr_ci, s->eqcr.ci);
+		if (!s->eqcr.available)
+			return 0;
+	}
+
+	eqcr_pi = s->eqcr.pi;
+	num_enqueued = (s->eqcr.available < num_frames) ?
+			s->eqcr.available : num_frames;
+	s->eqcr.available -= num_enqueued;
+	/* Fill in the EQCR ring */
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		memcpy_byte_by_byte(&p[1], &cl[1], 28);
+		memcpy_byte_by_byte(&p[8], fd[i], sizeof(struct qbman_fd));
+		eqcr_pi++;
+	}
+
+	lwsync();
+
+	/* Set the verb byte, have to substitute in the valid-bit */
+	eqcr_pi = s->eqcr.pi;
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		p[0] = cl[0] | s->eqcr.pi_vb;
+		if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
+			struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+
+			d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+				((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
+		}
+		eqcr_pi++;
+		if (!(eqcr_pi & half_mask))
+			s->eqcr.pi_vb ^= QB_VALID_BIT;
+	}
+
+	s->eqcr.pi = eqcr_pi & full_mask;
+
+	return num_enqueued;
+}
+
 static int qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s,
 						  const struct qbman_eq_desc *d,
 						  struct qbman_fd **fd,
@@ -1233,7 +1501,12 @@ inline int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
 					 uint32_t *flags,
 					 int num_frames)
 {
-	return qbman_swp_enqueue_multiple_fd_ptr(s, d, fd, flags, num_frames);
+	if (!s->stash_off)
+		return qbman_swp_enqueue_multiple_fd_ptr(s, d, fd, flags,
+					num_frames);
+	else
+		return qbman_swp_enqueue_multiple_fd_cinh_direct(s, d, fd,
+					flags, num_frames);
 }
 
 static int qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
@@ -1365,6 +1638,62 @@ static int qbman_swp_enqueue_multiple_desc_cinh_direct(
 	return num_enqueued;
 }
 
+static int qbman_swp_enqueue_multiple_desc_cinh_direct(
+		struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd,
+		int num_frames)
+{
+	uint32_t *p;
+	const uint32_t *cl;
+	uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+	int i, num_enqueued = 0;
+
+	half_mask = (s->eqcr.pi_ci_mask>>1);
+	full_mask = s->eqcr.pi_ci_mask;
+	if (!s->eqcr.available) {
+		eqcr_ci = s->eqcr.ci;
+		s->eqcr.ci = qbman_cinh_read(&s->sys,
+				QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+					eqcr_ci, s->eqcr.ci);
+		if (!s->eqcr.available)
+			return 0;
+	}
+
+	eqcr_pi = s->eqcr.pi;
+	num_enqueued = (s->eqcr.available < num_frames) ?
+			s->eqcr.available : num_frames;
+	s->eqcr.available -= num_enqueued;
+	/* Fill in the EQCR ring */
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		cl = qb_cl(&d[i]);
+		memcpy_byte_by_byte(&p[1], &cl[1], 28);
+		memcpy_byte_by_byte(&p[8], &fd[i], sizeof(*fd));
+		eqcr_pi++;
+	}
+
+	lwsync();
+
+	/* Set the verb byte, have to substitute in the valid-bit */
+	eqcr_pi = s->eqcr.pi;
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		cl = qb_cl(&d[i]);
+		p[0] = cl[0] | s->eqcr.pi_vb;
+		eqcr_pi++;
+		if (!(eqcr_pi & half_mask))
+			s->eqcr.pi_vb ^= QB_VALID_BIT;
+	}
+
+	s->eqcr.pi = eqcr_pi & full_mask;
+
+	return num_enqueued;
+}
+
 static int qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
 					const struct qbman_eq_desc *d,
 					const struct qbman_fd *fd,
@@ -1426,7 +1755,13 @@ inline int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s,
 					   const struct qbman_fd *fd,
 					   int num_frames)
 {
-	return qbman_swp_enqueue_multiple_desc_ptr(s, d, fd, num_frames);
+	if (!s->stash_off)
+		return qbman_swp_enqueue_multiple_desc_ptr(s, d, fd,
+					num_frames);
+	else
+		return qbman_swp_enqueue_multiple_desc_cinh_direct(s, d, fd,
+					num_frames);
+
 }
 
 /*************************/
@@ -1574,6 +1909,30 @@ static int qbman_swp_pull_direct(struct qbman_swp *s,
 	return 0;
 }
 
+static int qbman_swp_pull_cinh_direct(struct qbman_swp *s,
+				 struct qbman_pull_desc *d)
+{
+	uint32_t *p;
+	uint32_t *cl = qb_cl(d);
+
+	if (!atomic_dec_and_test(&s->vdq.busy)) {
+		atomic_inc(&s->vdq.busy);
+		return -EBUSY;
+	}
+
+	d->pull.tok = s->sys.idx + 1;
+	s->vdq.storage = (void *)(size_t)d->pull.rsp_addr_virt;
+	p = qbman_cinh_write_start_wo_shadow(&s->sys, QBMAN_CENA_SWP_VDQCR);
+	memcpy_byte_by_byte(&p[1], &cl[1], 12);
+
+	/* Set the verb byte, have to substitute in the valid-bit */
+	lwsync();
+	p[0] = cl[0] | s->vdq.valid_bit;
+	s->vdq.valid_bit ^= QB_VALID_BIT;
+
+	return 0;
+}
+
 static int qbman_swp_pull_mem_back(struct qbman_swp *s,
 				   struct qbman_pull_desc *d)
 {
@@ -1601,7 +1960,10 @@ static int qbman_swp_pull_mem_back(struct qbman_swp *s,
 
 inline int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d)
 {
-	return qbman_swp_pull_ptr(s, d);
+	if (!s->stash_off)
+		return qbman_swp_pull_ptr(s, d);
+	else
+		return qbman_swp_pull_cinh_direct(s, d);
 }
 
 /****************/
@@ -1638,7 +2000,10 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s)
  */
 inline const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *s)
 {
-	return qbman_swp_dqrr_next_ptr(s);
+	if (!s->stash_off)
+		return qbman_swp_dqrr_next_ptr(s);
+	else
+		return qbman_swp_dqrr_next_cinh_direct(s);
 }
 
 const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s)
@@ -1718,6 +2083,81 @@ const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s)
 	return p;
 }
 
+const struct qbman_result *qbman_swp_dqrr_next_cinh_direct(struct qbman_swp *s)
+{
+	uint32_t verb;
+	uint32_t response_verb;
+	uint32_t flags;
+	const struct qbman_result *p;
+
+	/* Before using valid-bit to detect if something is there, we have to
+	 * handle the case of the DQRR reset bug...
+	 */
+	if (s->dqrr.reset_bug) {
+		/* We pick up new entries by cache-inhibited producer index,
+		 * which means that a non-coherent mapping would require us to
+		 * invalidate and read *only* once that PI has indicated that
+		 * there's an entry here. The first trip around the DQRR ring
+		 * will be much less efficient than all subsequent trips around
+		 * it...
+		 */
+		uint8_t pi = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_DQPI) &
+			     QMAN_DQRR_PI_MASK;
+
+		/* there are new entries if pi != next_idx */
+		if (pi == s->dqrr.next_idx)
+			return NULL;
+
+		/* if next_idx is/was the last ring index, and 'pi' is
+		 * different, we can disable the workaround as all the ring
+		 * entries have now been DMA'd to so valid-bit checking is
+		 * repaired. Note: this logic needs to be based on next_idx
+		 * (which increments one at a time), rather than on pi (which
+		 * can burst and wrap-around between our snapshots of it).
+		 */
+		QBMAN_BUG_ON((s->dqrr.dqrr_size - 1) < 0);
+		if (s->dqrr.next_idx == (s->dqrr.dqrr_size - 1u)) {
+			pr_debug("DEBUG: next_idx=%d, pi=%d, clear reset bug\n",
+				 s->dqrr.next_idx, pi);
+			s->dqrr.reset_bug = 0;
+		}
+	}
+	p = qbman_cinh_read_wo_shadow(&s->sys,
+			QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
+
+	verb = p->dq.verb;
+
+	/* If the valid-bit isn't of the expected polarity, nothing there. Note,
+	 * in the DQRR reset bug workaround, we shouldn't need to skip these
+	 * check, because we've already determined that a new entry is available
+	 * and we've invalidated the cacheline before reading it, so the
+	 * valid-bit behaviour is repaired and should tell us what we already
+	 * knew from reading PI.
+	 */
+	if ((verb & QB_VALID_BIT) != s->dqrr.valid_bit)
+		return NULL;
+
+	/* There's something there. Move "next_idx" attention to the next ring
+	 * entry (and prefetch it) before returning what we found.
+	 */
+	s->dqrr.next_idx++;
+	if (s->dqrr.next_idx == s->dqrr.dqrr_size) {
+		s->dqrr.next_idx = 0;
+		s->dqrr.valid_bit ^= QB_VALID_BIT;
+	}
+	/* If this is the final response to a volatile dequeue command
+	 * indicate that the vdq is no longer busy
+	 */
+	flags = p->dq.stat;
+	response_verb = verb & QBMAN_RESPONSE_VERB_MASK;
+	if ((response_verb == QBMAN_RESULT_DQ) &&
+	    (flags & QBMAN_DQ_STAT_VOLATILE) &&
+	    (flags & QBMAN_DQ_STAT_EXPIRED))
+		atomic_inc(&s->vdq.busy);
+
+	return p;
+}
+
 const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s)
 {
 	uint32_t verb;
@@ -2096,6 +2536,37 @@ static int qbman_swp_release_direct(struct qbman_swp *s,
 	return 0;
 }
 
+static int qbman_swp_release_cinh_direct(struct qbman_swp *s,
+				    const struct qbman_release_desc *d,
+				    const uint64_t *buffers,
+				    unsigned int num_buffers)
+{
+	uint32_t *p;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t rar = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_RAR);
+
+	pr_debug("RAR=%08x\n", rar);
+	if (!RAR_SUCCESS(rar))
+		return -EBUSY;
+
+	QBMAN_BUG_ON(!num_buffers || (num_buffers > 7));
+
+	/* Start the release command */
+	p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				     QBMAN_CENA_SWP_RCR(RAR_IDX(rar)));
+
+	/* Copy the caller's buffer pointers to the command */
+	memcpy_byte_by_byte(&p[2], buffers, num_buffers * sizeof(uint64_t));
+
+	/* Set the verb byte, have to substitute in the valid-bit and the
+	 * number of buffers.
+	 */
+	lwsync();
+	p[0] = cl[0] | RAR_VB(rar) | num_buffers;
+
+	return 0;
+}
+
 static int qbman_swp_release_mem_back(struct qbman_swp *s,
 				      const struct qbman_release_desc *d,
 				      const uint64_t *buffers,
@@ -2134,7 +2605,11 @@ inline int qbman_swp_release(struct qbman_swp *s,
 			     const uint64_t *buffers,
 			     unsigned int num_buffers)
 {
-	return qbman_swp_release_ptr(s, d, buffers, num_buffers);
+	if (!s->stash_off)
+		return qbman_swp_release_ptr(s, d, buffers, num_buffers);
+	else
+		return qbman_swp_release_cinh_direct(s, d, buffers,
+						num_buffers);
 }
 
 /*******************/
@@ -2157,8 +2632,8 @@ struct qbman_acquire_rslt {
 	uint64_t buf[7];
 };
 
-int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
-		      unsigned int num_buffers)
+static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
+				uint64_t *buffers, unsigned int num_buffers)
 {
 	struct qbman_acquire_desc *p;
 	struct qbman_acquire_rslt *r;
@@ -2202,6 +2677,61 @@ int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
 	return (int)r->num;
 }
 
+static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
+			uint64_t *buffers, unsigned int num_buffers)
+{
+	struct qbman_acquire_desc *p;
+	struct qbman_acquire_rslt *r;
+
+	if (!num_buffers || (num_buffers > 7))
+		return -EINVAL;
+
+	/* Start the management command */
+	p = qbman_swp_mc_start(s);
+
+	if (!p)
+		return -EBUSY;
+
+	/* Encode the caller-provided attributes */
+	p->bpid = bpid;
+	p->num = num_buffers;
+
+	/* Complete the management command */
+	r = qbman_swp_mc_complete_cinh(s, p, QBMAN_MC_ACQUIRE);
+	if (!r) {
+		pr_err("qbman: acquire from BPID %d failed, no response\n",
+		       bpid);
+		return -EIO;
+	}
+
+	/* Decode the outcome */
+	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_MC_ACQUIRE);
+
+	/* Determine success or failure */
+	if (r->rslt != QBMAN_MC_RSLT_OK) {
+		pr_err("Acquire buffers from BPID 0x%x failed, code=0x%02x\n",
+		       bpid, r->rslt);
+		return -EIO;
+	}
+
+	QBMAN_BUG_ON(r->num > num_buffers);
+
+	/* Copy the acquired buffers to the caller's array */
+	u64_from_le32_copy(buffers, &r->buf[0], r->num);
+
+	return (int)r->num;
+}
+
+int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
+		      unsigned int num_buffers)
+{
+	if (!s->stash_off)
+		return qbman_swp_acquire_direct(s, bpid, buffers, num_buffers);
+	else
+		return qbman_swp_acquire_cinh_direct(s, bpid, buffers,
+					num_buffers);
+}
+
 /*****************/
 /* FQ management */
 /*****************/
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.h b/drivers/bus/fslmc/qbman/qbman_portal.h
index 3aaacae52..1cf791830 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/qbman_portal.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2020 NXP
  *
  */
 
@@ -102,6 +102,7 @@ struct qbman_swp {
 		uint32_t ci;
 		int available;
 	} eqcr;
+	uint8_t stash_off;
 };
 
 /* -------------------------- */
@@ -118,7 +119,9 @@ struct qbman_swp {
  */
 void *qbman_swp_mc_start(struct qbman_swp *p);
 void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint8_t cmd_verb);
+void qbman_swp_mc_submit_cinh(struct qbman_swp *p, void *cmd, uint8_t cmd_verb);
 void *qbman_swp_mc_result(struct qbman_swp *p);
+void *qbman_swp_mc_result_cinh(struct qbman_swp *p);
 
 /* Wraps up submit + poll-for-result */
 static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd,
@@ -135,6 +138,20 @@ static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd,
 	return cmd;
 }
 
+static inline void *qbman_swp_mc_complete_cinh(struct qbman_swp *swp, void *cmd,
+					  uint8_t cmd_verb)
+{
+	int loopvar = 1000;
+
+	qbman_swp_mc_submit_cinh(swp, cmd, cmd_verb);
+	do {
+		cmd = qbman_swp_mc_result_cinh(swp);
+	} while (!cmd && loopvar--);
+	QBMAN_BUG_ON(!loopvar);
+
+	return cmd;
+}
+
 /* ---------------------- */
 /* Descriptors/cachelines */
 /* ---------------------- */
diff --git a/drivers/bus/fslmc/qbman/qbman_sys.h b/drivers/bus/fslmc/qbman/qbman_sys.h
index 55449edf3..61f817c47 100644
--- a/drivers/bus/fslmc/qbman/qbman_sys.h
+++ b/drivers/bus/fslmc/qbman/qbman_sys.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2019 NXP
+ * Copyright 2019-2020 NXP
  */
 /* qbman_sys_decl.h and qbman_sys.h are the two platform-specific files in the
  * driver. They are only included via qbman_private.h, which is itself a
@@ -190,6 +190,34 @@ static inline void qbman_cinh_write(struct qbman_swp_sys *s, uint32_t offset,
 #endif
 }
 
+static inline void *qbman_cinh_write_start_wo_shadow(struct qbman_swp_sys *s,
+						     uint32_t offset)
+{
+#ifdef QBMAN_CINH_TRACE
+	pr_info("qbman_cinh_write_start(%p:%d:0x%03x)\n",
+		s->addr_cinh, s->idx, offset);
+#endif
+	QBMAN_BUG_ON(offset & 63);
+	return (s->addr_cinh + offset);
+}
+
+static inline void qbman_cinh_write_complete(struct qbman_swp_sys *s,
+					     uint32_t offset, void *cmd)
+{
+	const uint32_t *shadow = cmd;
+	int loop;
+#ifdef QBMAN_CINH_TRACE
+	pr_info("qbman_cinh_write_complete(%p:%d:0x%03x) %p\n",
+		s->addr_cinh, s->idx, offset, shadow);
+	hexdump(cmd, 64);
+#endif
+	for (loop = 15; loop >= 1; loop--)
+		__raw_writel(shadow[loop], s->addr_cinh +
+					 offset + loop * 4);
+	lwsync();
+	__raw_writel(shadow[0], s->addr_cinh + offset);
+}
+
 static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset)
 {
 	uint32_t reg = __raw_readl(s->addr_cinh + offset);
@@ -200,6 +228,35 @@ static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset)
 	return reg;
 }
 
+static inline void *qbman_cinh_read_shadow(struct qbman_swp_sys *s,
+					   uint32_t offset)
+{
+	uint32_t *shadow = (uint32_t *)(s->cena + offset);
+	unsigned int loop;
+#ifdef QBMAN_CINH_TRACE
+	pr_info(" %s (%p:%d:0x%03x) %p\n", __func__,
+		s->addr_cinh, s->idx, offset, shadow);
+#endif
+
+	for (loop = 0; loop < 16; loop++)
+		shadow[loop] = __raw_readl(s->addr_cinh + offset
+					+ loop * 4);
+#ifdef QBMAN_CINH_TRACE
+	hexdump(shadow, 64);
+#endif
+	return shadow;
+}
+
+static inline void *qbman_cinh_read_wo_shadow(struct qbman_swp_sys *s,
+					      uint32_t offset)
+{
+#ifdef QBMAN_CINH_TRACE
+	pr_info("qbman_cinh_read(%p:%d:0x%03x)\n",
+		s->addr_cinh, s->idx, offset);
+#endif
+	return s->addr_cinh + offset;
+}
+
 static inline void *qbman_cena_write_start(struct qbman_swp_sys *s,
 					   uint32_t offset)
 {
@@ -476,6 +533,82 @@ static inline int qbman_swp_sys_init(struct qbman_swp_sys *s,
 	return 0;
 }
 
+static inline int qbman_swp_sys_update(struct qbman_swp_sys *s,
+				     const struct qbman_swp_desc *d,
+				     uint8_t dqrr_size,
+				     int stash_off)
+{
+	uint32_t reg;
+	int i;
+	int cena_region_size = 4*1024;
+	uint8_t est = 1;
+#ifdef RTE_ARCH_64
+	uint8_t wn = CENA_WRITE_ENABLE;
+#else
+	uint8_t wn = CINH_WRITE_ENABLE;
+#endif
+
+	if (stash_off)
+		wn = CINH_WRITE_ENABLE;
+
+	QBMAN_BUG_ON(d->idx < 0);
+#ifdef QBMAN_CHECKING
+	/* We should never be asked to initialise for a portal that isn't in
+	 * the power-on state. (Ie. don't forget to reset portals when they are
+	 * decommissioned!)
+	 */
+	reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
+	QBMAN_BUG_ON(reg);
+#endif
+	if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+			&& (d->cena_access_mode == qman_cena_fastest_access))
+		memset(s->addr_cena, 0, cena_region_size);
+	else {
+		/* Invalidate the portal memory.
+		 * This ensures no stale cache lines
+		 */
+		for (i = 0; i < cena_region_size; i += 64)
+			dccivac(s->addr_cena + i);
+	}
+
+	if (dpaa2_svr_family == SVR_LS1080A)
+		est = 0;
+
+	if (s->eqcr_mode == qman_eqcr_vb_array) {
+		reg = qbman_set_swp_cfg(dqrr_size, wn,
+					0, 3, 2, 3, 1, 1, 1, 1, 1, 1);
+	} else {
+		if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000 &&
+			    (d->cena_access_mode == qman_cena_fastest_access))
+			reg = qbman_set_swp_cfg(dqrr_size, wn,
+						1, 3, 2, 0, 1, 1, 1, 1, 1, 1);
+		else
+			reg = qbman_set_swp_cfg(dqrr_size, wn,
+						est, 3, 2, 2, 1, 1, 1, 1, 1, 1);
+	}
+
+	if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+			&& (d->cena_access_mode == qman_cena_fastest_access))
+		reg |= 1 << SWP_CFG_CPBS_SHIFT | /* memory-backed mode */
+		       1 << SWP_CFG_VPM_SHIFT |  /* VDQCR read triggered mode */
+		       1 << SWP_CFG_CPM_SHIFT;   /* CR read triggered mode */
+
+	qbman_cinh_write(s, QBMAN_CINH_SWP_CFG, reg);
+	reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
+	if (!reg) {
+		pr_err("The portal %d is not enabled!\n", s->idx);
+		return -1;
+	}
+
+	if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+			&& (d->cena_access_mode == qman_cena_fastest_access)) {
+		qbman_cinh_write(s, QBMAN_CINH_SWP_EQCR_PI, QMAN_RT_MODE);
+		qbman_cinh_write(s, QBMAN_CINH_SWP_RCR_PI, QMAN_RT_MODE);
+	}
+
+	return 0;
+}
+
 static inline void qbman_swp_sys_finish(struct qbman_swp_sys *s)
 {
 	free(s->cena);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH 08/16] drivers: enhance portal allocation failure log
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (7 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 07/16] bus/fslmc: support portal migration Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 09/16] bus/fslmc: rename the cinh read functions used for ls1088 Hemant Agrawal
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

This change adds the thread id to the error log printed when portal
allocation fails, so the affected thread can be identified.
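
A minimal sketch of the pattern applied in each driver below; the
wrapper function name is hypothetical, while the macros and helpers
(DPAA2_PER_LCORE_DPIO, dpaa2_affine_qbman_swp, rte_gettid) are the ones
already used by the DPAA2 drivers:

/* Sketch only: ensure the calling thread has an affined QBMAN portal
 * and log the thread id when the affinement fails.
 */
static int example_ensure_portal(void)
{
	int ret;

	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
		ret = dpaa2_affine_qbman_swp();
		if (ret) {
			DPAA2_PMD_ERR(
				"Failed to allocate IO portal, tid: %d\n",
				rte_gettid());
			return ret;
		}
	}
	return 0;
}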

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  8 ++++++--
 drivers/event/dpaa2/dpaa2_eventdev.c        |  8 ++++++--
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c    | 12 +++++++++---
 drivers/net/dpaa2/dpaa2_ethdev.c            |  4 +++-
 drivers/net/dpaa2/dpaa2_rxtx.c              | 16 ++++++++++++----
 drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c       |  8 ++++++--
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c         | 12 +++++++++---
 7 files changed, 51 insertions(+), 17 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 6ed2701ab..cf08003cc 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1459,7 +1459,9 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 	if (!DPAA2_PER_LCORE_DPIO) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_SEC_ERR("Failure in affining portal");
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1641,7 +1643,9 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 	if (!DPAA2_PER_LCORE_DPIO) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_SEC_ERR("Failure in affining portal");
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 1833d659d..bb02ea9fb 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -74,7 +74,9 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
 		/* Affine current thread context to a qman portal */
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
-			DPAA2_EVENTDEV_ERR("Failure in affining portal");
+			DPAA2_EVENTDEV_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -273,7 +275,9 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[],
 		/* Affine current thread context to a qman portal */
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
-			DPAA2_EVENTDEV_ERR("Failure in affining portal");
+			DPAA2_EVENTDEV_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 48887beb7..fa9b53e64 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -69,7 +69,9 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_MEMPOOL_ERR("Failure in affining portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			goto err1;
 		}
 	}
@@ -198,7 +200,9 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
-			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return;
 		}
 	}
@@ -317,7 +321,9 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
-			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return ret;
 		}
 	}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 4fc550a88..4a61c6f78 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -887,7 +887,9 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return -EINVAL;
 		}
 	}
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 52d913d9e..d809e0f4b 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -760,7 +760,9 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal\n");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -872,7 +874,9 @@ uint16_t dpaa2_dev_tx_conf(void *queue)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal\n");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1011,7 +1015,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1272,7 +1278,9 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
index 997d1c873..7c21c6a52 100644
--- a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
+++ b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
@@ -70,7 +70,9 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_CMDIF_ERR("Failure in affining portal\n");
+			DPAA2_CMDIF_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -133,7 +135,9 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_CMDIF_ERR("Failure in affining portal\n");
+			DPAA2_CMDIF_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c90595400..d5202d652 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -666,7 +666,9 @@ dpdmai_dev_enqueue_multi(struct dpaa2_dpdmai_dev *dpdmai_dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -788,7 +790,9 @@ dpdmai_dev_dequeue_multijob_prefetch(
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -929,7 +933,9 @@ dpdmai_dev_dequeue_multijob_no_prefetch(
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH 09/16] bus/fslmc: rename the cinh read functions used for ls1088
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (8 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 08/16] drivers: enhance portal allocation failure log Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 10/16] net/dpaa: return error on multiple mp config Hemant Agrawal
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

This patch renames the qbman I/O functions, as they only read from the
cinh register but still write to the cena registers.

This makes way for adding functions which work purely in cinh mode.
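
For reference, a short sketch (mirroring the qbman_swp_init() hunk
below) of how the renamed helpers are selected for LS1088/LS1080A-class
SoCs; these variants read from cinh but still write commands through
the cache-enabled area:

/* Sketch: portal init picks the cinh-read enqueue variants on LS1080A */
if (dpaa2_svr_family == SVR_LS1080A) {
	qbman_swp_enqueue_ring_mode_ptr =
		qbman_swp_enqueue_ring_mode_cinh_read_direct;
	qbman_swp_enqueue_multiple_ptr =
		qbman_swp_enqueue_multiple_cinh_read_direct;
}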

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 34 +++++++++++++-------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index a06b88dd2..207faada3 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -78,7 +78,7 @@ qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
 static int
-qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_ring_mode_cinh_read_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
 static int
@@ -97,7 +97,7 @@ qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
 		uint32_t *flags,
 		int num_frames);
 static int
-qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_cinh_read_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
 		uint32_t *flags,
@@ -122,7 +122,7 @@ qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
 		uint32_t *flags,
 		int num_frames);
 static int
-qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_fd_cinh_read_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		struct qbman_fd **fd,
 		uint32_t *flags,
@@ -146,7 +146,7 @@ qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
 		const struct qbman_fd *fd,
 		int num_frames);
 static int
-qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_desc_cinh_read_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
 		int num_frames);
@@ -309,15 +309,15 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
 			&& (d->cena_access_mode == qman_cena_fastest_access)) {
 		p->eqcr.pi_ring_size = 32;
 		qbman_swp_enqueue_array_mode_ptr =
-				qbman_swp_enqueue_array_mode_mem_back;
+			qbman_swp_enqueue_array_mode_mem_back;
 		qbman_swp_enqueue_ring_mode_ptr =
-				qbman_swp_enqueue_ring_mode_mem_back;
+			qbman_swp_enqueue_ring_mode_mem_back;
 		qbman_swp_enqueue_multiple_ptr =
-				qbman_swp_enqueue_multiple_mem_back;
+			qbman_swp_enqueue_multiple_mem_back;
 		qbman_swp_enqueue_multiple_fd_ptr =
-				qbman_swp_enqueue_multiple_fd_mem_back;
+			qbman_swp_enqueue_multiple_fd_mem_back;
 		qbman_swp_enqueue_multiple_desc_ptr =
-				qbman_swp_enqueue_multiple_desc_mem_back;
+			qbman_swp_enqueue_multiple_desc_mem_back;
 		qbman_swp_pull_ptr = qbman_swp_pull_mem_back;
 		qbman_swp_dqrr_next_ptr = qbman_swp_dqrr_next_mem_back;
 		qbman_swp_release_ptr = qbman_swp_release_mem_back;
@@ -325,13 +325,13 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
 
 	if (dpaa2_svr_family == SVR_LS1080A) {
 		qbman_swp_enqueue_ring_mode_ptr =
-				qbman_swp_enqueue_ring_mode_cinh_direct;
+			qbman_swp_enqueue_ring_mode_cinh_read_direct;
 		qbman_swp_enqueue_multiple_ptr =
-				qbman_swp_enqueue_multiple_cinh_direct;
+			qbman_swp_enqueue_multiple_cinh_read_direct;
 		qbman_swp_enqueue_multiple_fd_ptr =
-				qbman_swp_enqueue_multiple_fd_cinh_direct;
+			qbman_swp_enqueue_multiple_fd_cinh_read_direct;
 		qbman_swp_enqueue_multiple_desc_ptr =
-				qbman_swp_enqueue_multiple_desc_cinh_direct;
+			qbman_swp_enqueue_multiple_desc_cinh_read_direct;
 	}
 
 	for (mask_size = p->eqcr.pi_ring_size; mask_size > 0; mask_size >>= 1)
@@ -835,7 +835,7 @@ static int qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s,
 	return 0;
 }
 
-static int qbman_swp_enqueue_ring_mode_cinh_direct(
+static int qbman_swp_enqueue_ring_mode_cinh_read_direct(
 		struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd)
@@ -1037,7 +1037,7 @@ static int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
 	return num_enqueued;
 }
 
-static int qbman_swp_enqueue_multiple_cinh_direct(
+static int qbman_swp_enqueue_multiple_cinh_read_direct(
 		struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
@@ -1304,7 +1304,7 @@ static int qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
 	return num_enqueued;
 }
 
-static int qbman_swp_enqueue_multiple_fd_cinh_direct(
+static int qbman_swp_enqueue_multiple_fd_cinh_read_direct(
 		struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		struct qbman_fd **fd,
@@ -1573,7 +1573,7 @@ static int qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
 	return num_enqueued;
 }
 
-static int qbman_swp_enqueue_multiple_desc_cinh_direct(
+static int qbman_swp_enqueue_multiple_desc_cinh_read_direct(
 		struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH 10/16] net/dpaa: return error on multiple mp config
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (9 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 09/16] bus/fslmc: rename the cinh read functions used for ls1088 Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 11/16] net/dpaa: enable Tx queue taildrop Hemant Agrawal
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

Multiple buffer pools are not supported on a single device. Return an
error when a queue is set up with a mempool different from the one
already configured on the interface.
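
For illustration, a hedged example of the configuration this patch now
rejects (port id, pool names and sizes are hypothetical; includes and
error checks omitted):

/* Sketch: two RX queues of the same DPAA port backed by two different
 * mempools -- the second setup call now fails with -EINVAL.
 */
struct rte_mempool *mp_a = rte_pktmbuf_pool_create("pool_a", 2048, 256,
						   0, 2048, rte_socket_id());
struct rte_mempool *mp_b = rte_pktmbuf_pool_create("pool_b", 2048, 256,
						   0, 2048, rte_socket_id());

rte_eth_rx_queue_setup(port_id, 0, 128, rte_socket_id(), NULL, mp_a);
/* warns "Multiple pools on same interface not supported" and returns
 * -EINVAL
 */
rte_eth_rx_queue_setup(port_id, 1, 128, rte_socket_id(), NULL, mp_b);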

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/mempool/dpaa/dpaa_mempool.c | 1 +
 drivers/net/dpaa/dpaa_ethdev.c      | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 3a2528331..00db6f9bc 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -132,6 +132,7 @@ dpaa_mbuf_free_pool(struct rte_mempool *mp)
 		DPAA_MEMPOOL_INFO("BMAN pool freed for bpid =%d",
 				  bp_info->bpid);
 		rte_free(mp->pool_data);
+		bp_info->bp = NULL;
 		mp->pool_data = NULL;
 	}
 }
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index fce9ce2fe..0384532d2 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -587,6 +587,12 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	DPAA_PMD_INFO("Rx queue setup for queue index: %d fq_id (0x%x)",
 			queue_idx, rxq->fqid);
 
+	if (dpaa_intf->bp_info && dpaa_intf->bp_info->bp &&
+	    dpaa_intf->bp_info->mp != mp) {
+		DPAA_PMD_WARN("Multiple pools on same interface not supported");
+		return -EINVAL;
+	}
+
 	/* Max packet can fit in single buffer */
 	if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
 		;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH 11/16] net/dpaa: enable Tx queue taildrop
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (10 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 10/16] net/dpaa: return error on multiple mp config Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-03 16:59   ` Ferruh Yigit
  2020-03-03 17:02   ` Ferruh Yigit
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 12/16] net/dpaa: add 2.5G support Hemant Agrawal
                   ` (5 subsequent siblings)
  17 siblings, 2 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

Enable congestion handling/tail drop for TX queues.
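
Tx tail drop is controlled by the DPAA_TX_TAILDROP_THRESHOLD
environment variable introduced below (unset or 0 keeps it disabled).
A minimal sketch, taken from the diff, of how the new ERN hooks are
wired up so that rejected frames are freed on the slow Tx path:

/* Sketch: register the frame free callback once at probe time ... */
qman_ern_register_cb(dpaa_free_mbuf);

/* ... and drain software ERNs before each Tx burst */
uint16_t
dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	qman_ern_poll_free();

	return dpaa_eth_queue_tx(q, bufs, nb_bufs);
}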

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/bus/dpaa/base/qbman/qman.c        |  43 +++++++++
 drivers/bus/dpaa/include/fsl_qman.h       |  17 ++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |   7 ++
 drivers/net/dpaa/dpaa_ethdev.c            | 111 ++++++++++++++++++++--
 drivers/net/dpaa/dpaa_ethdev.h            |   2 +-
 drivers/net/dpaa/dpaa_rxtx.c              |  71 ++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h              |   3 +
 7 files changed, 247 insertions(+), 7 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index b53eb9e5e..494aca1d0 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -40,6 +40,8 @@
 			spin_unlock(&__fq478->fqlock); \
 	} while (0)
 
+static qman_cb_free_mbuf qman_free_mbuf_cb;
+
 static inline void fq_set(struct qman_fq *fq, u32 mask)
 {
 	dpaa_set_bits(mask, &fq->flags);
@@ -790,6 +792,47 @@ static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq,
 	FQUNLOCK(fq);
 }
 
+void
+qman_ern_register_cb(qman_cb_free_mbuf cb)
+{
+	qman_free_mbuf_cb = cb;
+}
+
+
+void
+qman_ern_poll_free(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	u8 verb, num = 0;
+	const struct qm_mr_entry *msg;
+	const struct qm_fd *fd;
+	struct qm_mr_entry swapped_msg;
+
+	qm_mr_pvb_update(&p->p);
+	msg = qm_mr_current(&p->p);
+
+	while (msg != NULL) {
+		swapped_msg = *msg;
+		hw_fd_to_cpu(&swapped_msg.ern.fd);
+		verb = msg->ern.verb & QM_MR_VERB_TYPE_MASK;
+		fd = &swapped_msg.ern.fd;
+
+		if (unlikely(verb & 0x20)) {
+			printf("HW ERN notification, Nothing to do\n");
+		} else {
+			if ((fd->bpid & 0xff) != 0xff)
+				qman_free_mbuf_cb(fd);
+		}
+
+		num++;
+		qm_mr_next(&p->p);
+		qm_mr_pvb_update(&p->p);
+		msg = qm_mr_current(&p->p);
+	}
+
+	qm_mr_cci_consume(&p->p, num);
+}
+
 static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 {
 	const struct qm_mr_entry *msg;
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 4deea5e75..80fdb3950 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1152,6 +1152,10 @@ typedef void (*qman_cb_mr)(struct qman_portal *qm, struct qman_fq *fq,
 /* This callback type is used when handling DCP ERNs */
 typedef void (*qman_cb_dc_ern)(struct qman_portal *qm,
 				const struct qm_mr_entry *msg);
+
+/* This callback function will be used to free mbufs of ERN */
+typedef uint16_t (*qman_cb_free_mbuf)(const struct qm_fd *fd);
+
 /*
  * s/w-visible states. Ie. tentatively scheduled + truly scheduled + active +
  * held-active + held-suspended are just "sched". Things like "retired" will not
@@ -1778,6 +1782,19 @@ int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
 int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
 		       int frames_to_send);
 
+/**
+ * qman_ern_poll_free - Polling on MR and calling a callback function to free
+ * mbufs when SW ERNs received.
+ */
+__rte_experimental
+void qman_ern_poll_free(void);
+
+/**
+ * qman_ern_register_cb - Register a callback function to free buffers.
+ */
+__rte_experimental
+void qman_ern_register_cb(qman_cb_free_mbuf cb);
+
 /**
  * qman_enqueue_multi_fq - Enqueue multiple frames to their respective frame
  * queues.
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index e6ca4361e..86f5811b0 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -94,3 +94,10 @@ DPDK_20.0 {
 
 	local: *;
 };
+
+EXPERIMENTAL {
+	global:
+
+	qman_ern_poll_free;
+	qman_ern_register_cb;
+};
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 0384532d2..2ae79c9f5 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2017-2019 NXP
+ *   Copyright 2017-2020 NXP
  *
  */
 /* System headers */
@@ -86,9 +86,12 @@ static int dpaa_push_mode_max_queue = DPAA_DEFAULT_PUSH_MODE_QUEUE;
 static int dpaa_push_queue_idx; /* Queue index which are in push mode*/
 
 
-/* Per FQ Taildrop in frame count */
+/* Per RX FQ Taildrop in frame count */
 static unsigned int td_threshold = CGR_RX_PERFQ_THRESH;
 
+/* Per TX FQ Taildrop in frame count, disabled by default */
+static unsigned int td_tx_threshold;
+
 struct rte_dpaa_xstats_name_off {
 	char name[RTE_ETH_XSTATS_NAME_SIZE];
 	uint32_t offset;
@@ -275,7 +278,11 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* Change tx callback to the real one */
-	dev->tx_pkt_burst = dpaa_eth_queue_tx;
+	if (dpaa_intf->cgr_tx)
+		dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+	else
+		dev->tx_pkt_burst = dpaa_eth_queue_tx;
+
 	fman_if_enable_rx(dpaa_intf->fif);
 
 	return 0;
@@ -869,6 +876,7 @@ int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	DPAA_PMD_INFO("Tx queue setup for queue index: %d fq_id (0x%x)",
 			queue_idx, dpaa_intf->tx_queues[queue_idx].fqid);
 	dev->data->tx_queues[queue_idx] = &dpaa_intf->tx_queues[queue_idx];
+
 	return 0;
 }
 
@@ -1239,9 +1247,19 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
 
 /* Initialise a Tx FQ */
 static int dpaa_tx_queue_init(struct qman_fq *fq,
-			      struct fman_if *fman_intf)
+			      struct fman_if *fman_intf,
+			      struct qman_cgr *cgr_tx)
 {
 	struct qm_mcc_initfq opts = {0};
+	struct qm_mcc_initcgr cgr_opts = {
+		.we_mask = QM_CGR_WE_CS_THRES |
+				QM_CGR_WE_CSTD_EN |
+				QM_CGR_WE_MODE,
+		.cgr = {
+			.cstd_en = QM_CGR_EN,
+			.mode = QMAN_CGR_MODE_FRAME
+		}
+	};
 	int ret;
 
 	ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID |
@@ -1260,6 +1278,27 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
 	opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
 	opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
 	DPAA_PMD_DEBUG("init tx fq %p, fqid 0x%x", fq, fq->fqid);
+
+	if (cgr_tx) {
+		/* Enable tail drop with cgr on this queue */
+		qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres,
+				      td_tx_threshold, 0);
+		cgr_tx->cb = NULL;
+		ret = qman_create_cgr(cgr_tx, QMAN_CGR_FLAG_USE_INIT,
+				      &cgr_opts);
+		if (ret) {
+			DPAA_PMD_WARN(
+				"rx taildrop init fail on rx fqid 0x%x(ret=%d)",
+				fq->fqid, ret);
+			goto without_cgr;
+		}
+		opts.we_mask |= QM_INITFQ_WE_CGID;
+		opts.fqd.cgid = cgr_tx->cgrid;
+		opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+		DPAA_PMD_DEBUG("Tx FQ tail drop enabled, threshold = %d\n",
+				td_tx_threshold);
+	}
+without_cgr:
 	ret = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &opts);
 	if (ret)
 		DPAA_PMD_ERR("init tx fqid 0x%x failed %d", fq->fqid, ret);
@@ -1312,6 +1351,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	struct fman_if *fman_intf;
 	struct fman_if_bpool *bp, *tmp_bp;
 	uint32_t cgrid[DPAA_MAX_NUM_PCD_QUEUES];
+	uint32_t cgrid_tx[MAX_DPAA_CORES];
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1321,7 +1361,10 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 		eth_dev->dev_ops = &dpaa_devops;
 		/* Plugging of UCODE burst API not supported in Secondary */
 		eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
-		eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
+		if (dpaa_intf->cgr_tx)
+			eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+		else
+			eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
 #ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
 		qman_set_fq_lookup_table(
 				dpaa_intf->rx_queues->qman_fq_lookup_table);
@@ -1368,6 +1411,21 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 		return -ENOMEM;
 	}
 
+	memset(cgrid, 0, sizeof(cgrid));
+	memset(cgrid_tx, 0, sizeof(cgrid_tx));
+
+	/* if DPAA_TX_TAILDROP_THRESHOLD is set, use that value; if 0, it means
+	 * Tx tail drop is disabled.
+	 */
+	if (getenv("DPAA_TX_TAILDROP_THRESHOLD")) {
+		td_tx_threshold = atoi(getenv("DPAA_TX_TAILDROP_THRESHOLD"));
+		DPAA_PMD_DEBUG("Tail drop threshold env configured: %u",
+			       td_tx_threshold);
+		/* if a very large value is being configured */
+		if (td_tx_threshold > UINT16_MAX)
+			td_tx_threshold = CGR_RX_PERFQ_THRESH;
+	}
+
 	/* If congestion control is enabled globally*/
 	if (td_threshold) {
 		dpaa_intf->cgr_rx = rte_zmalloc(NULL,
@@ -1416,9 +1474,36 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 		goto free_rx;
 	}
 
+	/* If congestion control is enabled globally*/
+	if (td_tx_threshold) {
+		dpaa_intf->cgr_tx = rte_zmalloc(NULL,
+			sizeof(struct qman_cgr) * MAX_DPAA_CORES,
+			MAX_CACHELINE);
+		if (!dpaa_intf->cgr_tx) {
+			DPAA_PMD_ERR("Failed to alloc mem for cgr_tx\n");
+			ret = -ENOMEM;
+			goto free_rx;
+		}
+
+		ret = qman_alloc_cgrid_range(&cgrid_tx[0], MAX_DPAA_CORES,
+					     1, 0);
+		if (ret != MAX_DPAA_CORES) {
+			DPAA_PMD_WARN("insufficient CGRIDs available");
+			ret = -EINVAL;
+			goto free_rx;
+		}
+	} else {
+		dpaa_intf->cgr_tx = NULL;
+	}
+
+
 	for (loop = 0; loop < MAX_DPAA_CORES; loop++) {
+		if (dpaa_intf->cgr_tx)
+			dpaa_intf->cgr_tx[loop].cgrid = cgrid_tx[loop];
+
 		ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
-					 fman_intf);
+			fman_intf,
+			dpaa_intf->cgr_tx ? &dpaa_intf->cgr_tx[loop] : NULL);
 		if (ret)
 			goto free_tx;
 		dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
@@ -1495,6 +1580,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 
 free_rx:
 	rte_free(dpaa_intf->cgr_rx);
+	rte_free(dpaa_intf->cgr_tx);
 	rte_free(dpaa_intf->rx_queues);
 	dpaa_intf->rx_queues = NULL;
 	dpaa_intf->nb_rx_queues = 0;
@@ -1535,6 +1621,17 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
 	rte_free(dpaa_intf->cgr_rx);
 	dpaa_intf->cgr_rx = NULL;
 
+	/* Release TX congestion Groups */
+	if (dpaa_intf->cgr_tx) {
+		for (loop = 0; loop < MAX_DPAA_CORES; loop++)
+			qman_delete_cgr(&dpaa_intf->cgr_tx[loop]);
+
+		qman_release_cgrid_range(dpaa_intf->cgr_tx[loop].cgrid,
+					 MAX_DPAA_CORES);
+		rte_free(dpaa_intf->cgr_tx);
+		dpaa_intf->cgr_tx = NULL;
+	}
+
 	rte_free(dpaa_intf->rx_queues);
 	dpaa_intf->rx_queues = NULL;
 
@@ -1640,6 +1737,8 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
 	eth_dev->device = &dpaa_dev->device;
 	dpaa_dev->eth_dev = eth_dev;
 
+	qman_ern_register_cb(dpaa_free_mbuf);
+
 	/* Invoke PMD device initialization function */
 	diag = dpaa_dev_init(eth_dev);
 	if (diag == 0) {
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index da06f1faa..3eab029fd 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -110,6 +110,7 @@ struct dpaa_if {
 	struct qman_fq *rx_queues;
 	struct qman_cgr *cgr_rx;
 	struct qman_fq *tx_queues;
+	struct qman_cgr *cgr_tx;
 	struct qman_fq debug_queues[2];
 	uint16_t nb_rx_queues;
 	uint16_t nb_tx_queues;
@@ -181,5 +182,4 @@ dpaa_rx_cb_atomic(void *event,
 		  struct qman_fq *fq,
 		  const struct qm_dqrr_entry *dqrr,
 		  void **bufs);
-
 #endif
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 5dba1db8b..5dedff25e 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -398,6 +398,69 @@ dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
 	return mbuf;
 }
 
+uint16_t
+dpaa_free_mbuf(const struct qm_fd *fd)
+{
+	struct rte_mbuf *mbuf;
+	struct dpaa_bp_info *bp_info;
+	uint8_t format;
+	void *ptr;
+
+	bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	format = (fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
+	if (unlikely(format == qm_fd_sg)) {
+		struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
+		struct qm_sg_entry *sgt, *sg_temp;
+		void *vaddr, *sg_vaddr;
+		int i = 0;
+		uint16_t fd_offset = fd->offset;
+
+		vaddr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd));
+		if (!vaddr) {
+			DPAA_PMD_ERR("unable to convert physical address");
+			return -1;
+		}
+		sgt = vaddr + fd_offset;
+		sg_temp = &sgt[i++];
+		hw_sg_to_cpu(sg_temp);
+		temp = (struct rte_mbuf *)
+			((char *)vaddr - bp_info->meta_data_size);
+		sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
+						qm_sg_entry_get64(sg_temp));
+
+		first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						bp_info->meta_data_size);
+		first_seg->nb_segs = 1;
+		prev_seg = first_seg;
+		while (i < DPAA_SGT_MAX_ENTRIES) {
+			sg_temp = &sgt[i++];
+			hw_sg_to_cpu(sg_temp);
+			sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
+						qm_sg_entry_get64(sg_temp));
+			cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						      bp_info->meta_data_size);
+			first_seg->nb_segs += 1;
+			prev_seg->next = cur_seg;
+			if (sg_temp->final) {
+				cur_seg->next = NULL;
+				break;
+			}
+			prev_seg = cur_seg;
+		}
+
+		rte_pktmbuf_free_seg(temp);
+		rte_pktmbuf_free_seg(first_seg);
+		return 0;
+	}
+
+	ptr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd));
+	mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
+
+	rte_pktmbuf_free(mbuf);
+
+	return 0;
+}
+
 /* Specific for LS1043 */
 void
 dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
@@ -1011,6 +1074,14 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	return sent;
 }
 
+uint16_t
+dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	qman_ern_poll_free();
+
+	return dpaa_eth_queue_tx(q, bufs, nb_bufs);
+}
+
 uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
 			      struct rte_mbuf **bufs __rte_unused,
 		uint16_t nb_bufs __rte_unused)
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 75b093c1e..d41add704 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -254,6 +254,8 @@ struct annotations_t {
 
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
+uint16_t dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs,
+				uint16_t nb_bufs);
 uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
 uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
@@ -266,6 +268,7 @@ int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 			   struct qm_fd *fd,
 			   uint32_t bpid);
 
+uint16_t dpaa_free_mbuf(const struct qm_fd *fd);
 void dpaa_rx_cb(struct qman_fq **fq,
 		struct qm_dqrr_entry **dqrr, void **bufs, int num_bufs);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH 12/16] net/dpaa: add 2.5G support
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (11 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 11/16] net/dpaa: enable Tx queue taildrop Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 13/16] net/dpaa: update process specific device info Hemant Agrawal
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Sachin Saxena, Gagandeep Singh

From: Sachin Saxena <sachin.saxena@nxp.com>

Handle 2.5 Gbps Ethernet ports as well.
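
A hedged example of how an application can observe the new speed
through the standard ethdev API (port_id is hypothetical; the speed
macros are the existing rte_ethdev ones):

/* Sketch: a 2.5G MAC now reports ETH_SPEED_NUM_2_5G */
struct rte_eth_link link;

rte_eth_link_get(port_id, &link);
if (link.link_status && link.link_speed == ETH_SPEED_NUM_2_5G)
	printf("port %d runs at 2.5G\n", port_id);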

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/bus/dpaa/base/fman/fman.c         | 6 ++++--
 drivers/bus/dpaa/base/fman/netcfg_layer.c | 3 ++-
 drivers/bus/dpaa/include/fman.h           | 1 +
 drivers/net/dpaa/dpaa_ethdev.c            | 9 ++++++++-
 4 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 6d77a7e39..ae26041ca 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -263,7 +263,7 @@ fman_if_init(const struct device_node *dpa_node)
 		fman_dealloc_bufs_mask_hi = 0;
 		fman_dealloc_bufs_mask_lo = 0;
 	}
-	/* Is the MAC node 1G, 10G? */
+	/* Is the MAC node 1G, 2.5G, 10G? */
 	__if->__if.is_memac = 0;
 
 	if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
@@ -279,7 +279,9 @@ fman_if_init(const struct device_node *dpa_node)
 			/* Right now forcing memac to 1g in case of error*/
 			__if->__if.mac_type = fman_mac_1g;
 		} else {
-			if (strstr(char_prop, "sgmii"))
+			if (strstr(char_prop, "sgmii-2500"))
+				__if->__if.mac_type = fman_mac_2_5g;
+			else if (strstr(char_prop, "sgmii"))
 				__if->__if.mac_type = fman_mac_1g;
 			else if (strstr(char_prop, "rgmii")) {
 				__if->__if.mac_type = fman_mac_1g;
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index 36eca88cd..b7009f229 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -44,7 +44,8 @@ dump_netcfg(struct netcfg_info *cfg_ptr)
 
 		printf("\n+ Fman %d, MAC %d (%s);\n",
 		       __if->fman_idx, __if->mac_idx,
-		       (__if->mac_type == fman_mac_1g) ? "1G" : "10G");
+		       (__if->mac_type == fman_mac_1g) ? "1G" :
+		       (__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G");
 
 		printf("\tmac_addr: %02x:%02x:%02x:%02x:%02x:%02x\n",
 		       (&__if->mac_addr)->addr_bytes[0],
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index c02d32d22..12e598b2d 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -71,6 +71,7 @@ TAILQ_HEAD(rte_fman_if_list, __fman_if);
 enum fman_mac_type {
 	fman_offline = 0,
 	fman_mac_1g,
+	fman_mac_2_5g,
 	fman_mac_10g,
 };
 
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 2ae79c9f5..1d23fc674 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -356,8 +356,13 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 
 	if (dpaa_intf->fif->mac_type == fman_mac_1g) {
 		dev_info->speed_capa = ETH_LINK_SPEED_1G;
+	} else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) {
+		dev_info->speed_capa = ETH_LINK_SPEED_1G
+					| ETH_LINK_SPEED_2_5G;
 	} else if (dpaa_intf->fif->mac_type == fman_mac_10g) {
-		dev_info->speed_capa = (ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G);
+		dev_info->speed_capa = ETH_LINK_SPEED_1G
+					| ETH_LINK_SPEED_2_5G
+					| ETH_LINK_SPEED_10G;
 	} else {
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
 			     dpaa_intf->name, dpaa_intf->fif->mac_type);
@@ -384,6 +389,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 
 	if (dpaa_intf->fif->mac_type == fman_mac_1g)
 		link->link_speed = ETH_SPEED_NUM_1G;
+	else if (dpaa_intf->fif->mac_type == fman_mac_2_5g)
+		link->link_speed = ETH_SPEED_NUM_2_5G;
 	else if (dpaa_intf->fif->mac_type == fman_mac_10g)
 		link->link_speed = ETH_SPEED_NUM_10G;
 	else
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH 13/16] net/dpaa: update process specific device info
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (12 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 12/16] net/dpaa: add 2.5G support Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 14/16] bus/dpaa: enable link state interrupt Hemant Agrawal
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

For DPAA devices, the memory maps stored in the FMAN interface
information are per process. Store them in the device's process-specific
area (eth_dev->process_private).
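
A minimal sketch, based on the diff below, of how the per-process FMAN
interface pointer is now reached through eth_dev->process_private
instead of the shared dev_private area:

/* Sketch: ethdev ops use the process-private fman_if */
static void example_dev_stop(struct rte_eth_dev *dev)
{
	struct fman_if *fif = dev->process_private;

	fman_if_disable_rx(fif);
}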

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c | 207 ++++++++++++++++-----------------
 drivers/net/dpaa/dpaa_ethdev.h |   1 -
 2 files changed, 102 insertions(+), 106 deletions(-)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 1d23fc674..abe247acd 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -149,7 +149,6 @@ dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts)
 static int
 dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 	uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
 				+ VLAN_TAG_SIZE;
 	uint32_t buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
@@ -185,7 +184,7 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
-	fman_if_set_maxfrm(dpaa_intf->fif, frame_size);
+	fman_if_set_maxfrm(dev->process_private, frame_size);
 
 	return 0;
 }
@@ -193,7 +192,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 static int
 dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 	struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
 	uint64_t rx_offloads = eth_conf->rxmode.offloads;
 	uint64_t tx_offloads = eth_conf->txmode.offloads;
@@ -232,14 +230,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 			max_len = DPAA_MAX_RX_PKT_LEN;
 		}
 
-		fman_if_set_maxfrm(dpaa_intf->fif, max_len);
+		fman_if_set_maxfrm(dev->process_private, max_len);
 		dev->data->mtu = max_len
 			- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
 	}
 
 	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
 		DPAA_PMD_DEBUG("enabling scatter mode");
-		fman_if_set_sg(dpaa_intf->fif, 1);
+		fman_if_set_sg(dev->process_private, 1);
 		dev->data->scattered_rx = 1;
 	}
 
@@ -283,18 +281,18 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 	else
 		dev->tx_pkt_burst = dpaa_eth_queue_tx;
 
-	fman_if_enable_rx(dpaa_intf->fif);
+	fman_if_enable_rx(dev->process_private);
 
 	return 0;
 }
 
 static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct fman_if *fif = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_disable_rx(dpaa_intf->fif);
+	fman_if_disable_rx(fif);
 	dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
 }
 
@@ -342,6 +340,7 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct fman_if *fif = dev->process_private;
 
 	DPAA_PMD_DEBUG(": %s", dpaa_intf->name);
 
@@ -354,18 +353,18 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_vmdq_pools = ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 
-	if (dpaa_intf->fif->mac_type == fman_mac_1g) {
+	if (fif->mac_type == fman_mac_1g) {
 		dev_info->speed_capa = ETH_LINK_SPEED_1G;
-	} else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) {
+	} else if (fif->mac_type == fman_mac_2_5g) {
 		dev_info->speed_capa = ETH_LINK_SPEED_1G
 					| ETH_LINK_SPEED_2_5G;
-	} else if (dpaa_intf->fif->mac_type == fman_mac_10g) {
+	} else if (fif->mac_type == fman_mac_10g) {
 		dev_info->speed_capa = ETH_LINK_SPEED_1G
 					| ETH_LINK_SPEED_2_5G
 					| ETH_LINK_SPEED_10G;
 	} else {
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
-			     dpaa_intf->name, dpaa_intf->fif->mac_type);
+			     dpaa_intf->name, fif->mac_type);
 		return -EINVAL;
 	}
 
@@ -384,18 +383,19 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 	struct rte_eth_link *link = &dev->data->dev_link;
+	struct fman_if *fif = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpaa_intf->fif->mac_type == fman_mac_1g)
+	if (fif->mac_type == fman_mac_1g)
 		link->link_speed = ETH_SPEED_NUM_1G;
-	else if (dpaa_intf->fif->mac_type == fman_mac_2_5g)
+	else if (fif->mac_type == fman_mac_2_5g)
 		link->link_speed = ETH_SPEED_NUM_2_5G;
-	else if (dpaa_intf->fif->mac_type == fman_mac_10g)
+	else if (fif->mac_type == fman_mac_10g)
 		link->link_speed = ETH_SPEED_NUM_10G;
 	else
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
-			     dpaa_intf->name, dpaa_intf->fif->mac_type);
+			     dpaa_intf->name, fif->mac_type);
 
 	link->link_status = dpaa_intf->valid;
 	link->link_duplex = ETH_LINK_FULL_DUPLEX;
@@ -406,21 +406,17 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 static int dpaa_eth_stats_get(struct rte_eth_dev *dev,
 			       struct rte_eth_stats *stats)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_stats_get(dpaa_intf->fif, stats);
+	fman_if_stats_get(dev->process_private, stats);
 	return 0;
 }
 
 static int dpaa_eth_stats_reset(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_stats_reset(dpaa_intf->fif);
+	fman_if_stats_reset(dev->process_private);
 
 	return 0;
 }
@@ -429,7 +425,6 @@ static int
 dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 		    unsigned int n)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 	unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
 	uint64_t values[sizeof(struct dpaa_if_stats) / 8];
 
@@ -439,7 +434,7 @@ dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 	if (xstats == NULL)
 		return 0;
 
-	fman_if_stats_get_all(dpaa_intf->fif, values,
+	fman_if_stats_get_all(dev->process_private, values,
 			      sizeof(struct dpaa_if_stats) / 8);
 
 	for (i = 0; i < num; i++) {
@@ -476,15 +471,13 @@ dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 	uint64_t values_copy[sizeof(struct dpaa_if_stats) / 8];
 
 	if (!ids) {
-		struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 		if (n < stat_cnt)
 			return stat_cnt;
 
 		if (!values)
 			return 0;
 
-		fman_if_stats_get_all(dpaa_intf->fif, values_copy,
+		fman_if_stats_get_all(dev->process_private, values_copy,
 				      sizeof(struct dpaa_if_stats) / 8);
 
 		for (i = 0; i < stat_cnt; i++)
@@ -533,44 +526,36 @@ dpaa_xstats_get_names_by_id(
 
 static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_promiscuous_enable(dpaa_intf->fif);
+	fman_if_promiscuous_enable(dev->process_private);
 
 	return 0;
 }
 
 static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_promiscuous_disable(dpaa_intf->fif);
+	fman_if_promiscuous_disable(dev->process_private);
 
 	return 0;
 }
 
 static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_set_mcast_filter_table(dpaa_intf->fif);
+	fman_if_set_mcast_filter_table(dev->process_private);
 
 	return 0;
 }
 
 static int dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_reset_mcast_filter_table(dpaa_intf->fif);
+	fman_if_reset_mcast_filter_table(dev->process_private);
 
 	return 0;
 }
@@ -583,6 +568,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    struct rte_mempool *mp)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct fman_if *fif = dev->process_private;
 	struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
 	struct qm_mcc_initfq opts = {0};
 	u32 flags = 0;
@@ -645,22 +631,22 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		icp.iciof = DEFAULT_ICIOF;
 		icp.iceof = DEFAULT_RX_ICEOF;
 		icp.icsz = DEFAULT_ICSZ;
-		fman_if_set_ic_params(dpaa_intf->fif, &icp);
+		fman_if_set_ic_params(fif, &icp);
 
 		fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE;
-		fman_if_set_fdoff(dpaa_intf->fif, fd_offset);
+		fman_if_set_fdoff(fif, fd_offset);
 
 		/* Buffer pool size should be equal to Dataroom Size*/
 		bp_size = rte_pktmbuf_data_room_size(mp);
-		fman_if_set_bp(dpaa_intf->fif, mp->size,
+		fman_if_set_bp(fif, mp->size,
 			       dpaa_intf->bp_info->bpid, bp_size);
 		dpaa_intf->valid = 1;
 		DPAA_PMD_DEBUG("if:%s fd_offset = %d offset = %d",
 				dpaa_intf->name, fd_offset,
-				fman_if_get_fdoff(dpaa_intf->fif));
+				fman_if_get_fdoff(fif));
 	}
 	DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
-		fman_if_get_sg_enable(dpaa_intf->fif),
+		fman_if_get_sg_enable(fif),
 		dev->data->dev_conf.rxmode.max_rx_pkt_len);
 	/* checking if push mode only, no error check for now */
 	if (!rxq->is_static &&
@@ -952,11 +938,12 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
 		return 0;
 	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
 		 fc_conf->mode == RTE_FC_FULL) {
-		fman_if_set_fc_threshold(dpaa_intf->fif, fc_conf->high_water,
+		fman_if_set_fc_threshold(dev->process_private,
+					 fc_conf->high_water,
 					 fc_conf->low_water,
-				dpaa_intf->bp_info->bpid);
+					 dpaa_intf->bp_info->bpid);
 		if (fc_conf->pause_time)
-			fman_if_set_fc_quanta(dpaa_intf->fif,
+			fman_if_set_fc_quanta(dev->process_private,
 					      fc_conf->pause_time);
 	}
 
@@ -992,10 +979,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
 		fc_conf->autoneg = net_fc->autoneg;
 		return 0;
 	}
-	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	ret = fman_if_get_fc_threshold(dev->process_private);
 	if (ret) {
 		fc_conf->mode = RTE_FC_TX_PAUSE;
-		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+		fc_conf->pause_time =
+			fman_if_get_fc_quanta(dev->process_private);
 	} else {
 		fc_conf->mode = RTE_FC_NONE;
 	}
@@ -1010,11 +998,11 @@ dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
 			     __rte_unused uint32_t pool)
 {
 	int ret;
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, index);
+	ret = fman_if_add_mac_addr(dev->process_private,
+				   addr->addr_bytes, index);
 
 	if (ret)
 		RTE_LOG(ERR, PMD, "error: Adding the MAC ADDR failed:"
@@ -1026,11 +1014,9 @@ static void
 dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
 			  uint32_t index)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_clear_mac_addr(dpaa_intf->fif, index);
+	fman_if_clear_mac_addr(dev->process_private, index);
 }
 
 static int
@@ -1038,11 +1024,10 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
 		       struct rte_ether_addr *addr)
 {
 	int ret;
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, 0);
+	ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0);
 	if (ret)
 		RTE_LOG(ERR, PMD, "error: Setting the MAC ADDR failed %d", ret);
 
@@ -1145,7 +1130,6 @@ int
 rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on)
 {
 	struct rte_eth_dev *dev;
-	struct dpaa_if *dpaa_intf;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV);
 
@@ -1154,17 +1138,16 @@ rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on)
 	if (!is_dpaa_supported(dev))
 		return -ENOTSUP;
 
-	dpaa_intf = dev->data->dev_private;
-
 	if (on)
-		fman_if_loopback_enable(dpaa_intf->fif);
+		fman_if_loopback_enable(dev->process_private);
 	else
-		fman_if_loopback_disable(dpaa_intf->fif);
+		fman_if_loopback_disable(dev->process_private);
 
 	return 0;
 }
 
-static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
+static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
+			       struct fman_if *fman_intf)
 {
 	struct rte_eth_fc_conf *fc_conf;
 	int ret;
@@ -1180,10 +1163,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
 		}
 	}
 	fc_conf = dpaa_intf->fc_conf;
-	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	ret = fman_if_get_fc_threshold(fman_intf);
 	if (ret) {
 		fc_conf->mode = RTE_FC_TX_PAUSE;
-		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+		fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
 	} else {
 		fc_conf->mode = RTE_FC_NONE;
 	}
@@ -1345,6 +1328,39 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
 }
 #endif
 
+/* Initialise a network interface */
+static int
+dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev)
+{
+	struct rte_dpaa_device *dpaa_device;
+	struct fm_eth_port_cfg *cfg;
+	struct dpaa_if *dpaa_intf;
+	struct fman_if *fman_intf;
+	int dev_id;
+
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
+	dev_id = dpaa_device->id.dev_id;
+	cfg = &dpaa_netcfg->port_cfg[dev_id];
+	fman_intf = cfg->fman_if;
+	eth_dev->process_private = fman_intf;
+
+	/* Plugging of UCODE burst API not supported in Secondary */
+	dpaa_intf = eth_dev->data->dev_private;
+	eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
+	if (dpaa_intf->cgr_tx)
+		eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+	else
+		eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	qman_set_fq_lookup_table(
+		dpaa_intf->rx_queues->qman_fq_lookup_table);
+#endif
+
+	return 0;
+}
+
 /* Initialise a network interface */
 static int
 dpaa_dev_init(struct rte_eth_dev *eth_dev)
@@ -1362,23 +1378,6 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	dpaa_intf = eth_dev->data->dev_private;
-	/* For secondary processes, the primary has done all the work */
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
-		eth_dev->dev_ops = &dpaa_devops;
-		/* Plugging of UCODE burst API not supported in Secondary */
-		eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
-		if (dpaa_intf->cgr_tx)
-			eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
-		else
-			eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
-		qman_set_fq_lookup_table(
-				dpaa_intf->rx_queues->qman_fq_lookup_table);
-#endif
-		return 0;
-	}
-
 	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
 	dev_id = dpaa_device->id.dev_id;
 	dpaa_intf = eth_dev->data->dev_private;
@@ -1388,7 +1387,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	dpaa_intf->name = dpaa_device->name;
 
 	/* save fman_if & cfg in the interface struture */
-	dpaa_intf->fif = fman_intf;
+	eth_dev->process_private = fman_intf;
 	dpaa_intf->ifid = dev_id;
 	dpaa_intf->cfg = cfg;
 
@@ -1457,7 +1456,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 		if (default_q)
 			fqid = cfg->rx_def;
 		else
-			fqid = DPAA_PCD_FQID_START + dpaa_intf->fif->mac_idx *
+			fqid = DPAA_PCD_FQID_START + fman_intf->mac_idx *
 				DPAA_PCD_FQID_MULTIPLIER + loop;
 
 		if (dpaa_intf->cgr_rx)
@@ -1529,7 +1528,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	DPAA_PMD_DEBUG("All frame queues created");
 
 	/* Get the initial configuration for flow control */
-	dpaa_fc_set_default(dpaa_intf);
+	dpaa_fc_set_default(dpaa_intf, fman_intf);
 
 	/* reset bpool list, initialize bpool dynamically */
 	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
@@ -1682,6 +1681,13 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
 			return -ENOMEM;
 		eth_dev->device = &dpaa_dev->device;
 		eth_dev->dev_ops = &dpaa_devops;
+
+		ret = dpaa_dev_init_secondary(eth_dev);
+		if (ret != 0) {
+			RTE_LOG(ERR, PMD, "secondary dev init failed\n");
+			return ret;
+		}
+
 		rte_eth_dev_probing_finish(eth_dev);
 		return 0;
 	}
@@ -1718,29 +1724,20 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
 		}
 	}
 
-	/* In case of secondary process, the device is already configured
-	 * and no further action is required, except portal initialization
-	 * and verifying secondary attachment to port name.
-	 */
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
-		eth_dev = rte_eth_dev_attach_secondary(dpaa_dev->name);
-		if (!eth_dev)
-			return -ENOMEM;
-	} else {
-		eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
-		if (eth_dev == NULL)
-			return -ENOMEM;
+	eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
+	if (!eth_dev)
+		return -ENOMEM;
 
-		eth_dev->data->dev_private = rte_zmalloc(
-						"ethdev private structure",
-						sizeof(struct dpaa_if),
-						RTE_CACHE_LINE_SIZE);
-		if (!eth_dev->data->dev_private) {
-			DPAA_PMD_ERR("Cannot allocate memzone for port data");
-			rte_eth_dev_release_port(eth_dev);
-			return -ENOMEM;
-		}
+	eth_dev->data->dev_private = rte_zmalloc(
+					"ethdev private structure",
+					sizeof(struct dpaa_if),
+					RTE_CACHE_LINE_SIZE);
+	if (!eth_dev->data->dev_private) {
+		DPAA_PMD_ERR("Cannot allocate memzone for port data");
+		rte_eth_dev_release_port(eth_dev);
+		return -ENOMEM;
 	}
+
 	eth_dev->device = &dpaa_dev->device;
 	dpaa_dev->eth_dev = eth_dev;
 
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 3eab029fd..72a9c5910 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -115,7 +115,6 @@ struct dpaa_if {
 	uint16_t nb_rx_queues;
 	uint16_t nb_tx_queues;
 	uint32_t ifid;
-	struct fman_if *fif;
 	struct dpaa_bp_info *bp_info;
 	struct rte_eth_fc_conf *fc_conf;
 };
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH 14/16] bus/dpaa: enable link state interrupt
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (13 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 13/16] net/dpaa: update process specific device info Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 15/16] bus/dpaa: enable set link status Hemant Agrawal
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Enable/disable link state interrupt and get link state APIs are
defined using IOCTL calls.
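
For reference, a minimal usage sketch of the new wrappers added below
(rte_dpaa_intr_enable(), rte_dpaa_intr_disable(), rte_dpaa_get_link_status())
is given here. It only illustrates the call flow; "if_name" is assumed to be
a valid FMAN node name, and error handling is kept to a bare minimum.

/* Sketch: arm the link-status interrupt on one interface and read back
 * its current state once.
 */
#include <errno.h>
#include <unistd.h>
#include <sys/eventfd.h>

#include <process.h>	/* the bus-internal header extended below */

static int link_intr_demo(char *if_name)
{
	int efd, status;

	efd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
	if (efd < 0)
		return -errno;

	/* ask the kernel driver to signal link changes on efd */
	if (rte_dpaa_intr_enable(if_name, efd)) {
		close(efd);
		return -1;
	}

	status = rte_dpaa_get_link_status(if_name);	/* >0 means link up */

	rte_dpaa_intr_disable(if_name);
	close(efd);
	return status;
}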

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/dpaa/base/fman/fman.c         |  4 +-
 drivers/bus/dpaa/base/qbman/process.c     | 68 ++++++++++++++++++-
 drivers/bus/dpaa/dpaa_bus.c               | 28 +++++++-
 drivers/bus/dpaa/include/fman.h           |  2 +
 drivers/bus/dpaa/include/process.h        | 20 ++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  3 +
 drivers/bus/dpaa/rte_dpaa_bus.h           |  6 +-
 drivers/common/dpaax/compat.h             |  5 +-
 drivers/net/dpaa/dpaa_ethdev.c            | 82 ++++++++++++++++++++++-
 9 files changed, 212 insertions(+), 6 deletions(-)

diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index ae26041ca..33be9e5d7 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2010-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2020 NXP
  *
  */
 
@@ -185,6 +185,8 @@ fman_if_init(const struct device_node *dpa_node)
 	}
 	memset(__if, 0, sizeof(*__if));
 	INIT_LIST_HEAD(&__if->__if.bpool_list);
+	strlcpy(__if->node_name, dpa_node->name, IF_NAME_MAX_LEN - 1);
+	__if->node_name[IF_NAME_MAX_LEN - 1] = '\0';
 	strlcpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
 	__if->node_path[PATH_MAX - 1] = '\0';
 
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
index 2c23c98df..598b10661 100644
--- a/drivers/bus/dpaa/base/qbman/process.c
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2011-2016 Freescale Semiconductor Inc.
- * Copyright 2017 NXP
+ * Copyright 2017,2020 NXP
  *
  */
 #include <assert.h>
@@ -296,3 +296,69 @@ int bman_free_raw_portal(struct dpaa_raw_portal *portal)
 
 	return process_portal_free(&input);
 }
+
+#define DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT \
+	_IOW(DPAA_IOCTL_MAGIC, 0x0E, struct usdpaa_ioctl_link_status)
+
+#define DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT \
+	_IOW(DPAA_IOCTL_MAGIC, 0x0F, char*)
+
+int rte_dpaa_intr_enable(char *if_name, int efd)
+{
+	struct usdpaa_ioctl_link_status args;
+
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	args.efd = (uint32_t)efd;
+	strcpy(args.if_name, if_name);
+
+	ret = ioctl(fd, DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT, &args);
+	if (ret) {
+		perror("Failed to enable interrupt\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+int rte_dpaa_intr_disable(char *if_name)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT, &if_name);
+	if (ret) {
+		perror("Failed to disable interrupt\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+#define DPAA_IOCTL_GET_LINK_STATUS \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x10, struct usdpaa_ioctl_link_status_args)
+
+int rte_dpaa_get_link_status(char *if_name)
+{
+	int ret = check_fd();
+	struct usdpaa_ioctl_link_status_args args;
+
+	if (ret)
+		return ret;
+
+	strcpy(args.if_name, if_name);
+	args.link_status = 0;
+
+	ret = ioctl(fd, DPAA_IOCTL_GET_LINK_STATUS, &args);
+	if (ret) {
+		perror("Failed to get link status\n");
+		return ret;
+	}
+
+	return args.link_status;
+}
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index f27820db3..2dedb138d 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017-2019 NXP
+ *   Copyright 2017-2020 NXP
  *
  */
 /* System headers */
@@ -13,6 +13,7 @@
 #include <pthread.h>
 #include <sys/types.h>
 #include <sys/syscall.h>
+#include <sys/eventfd.h>
 
 #include <rte_byteorder.h>
 #include <rte_common.h>
@@ -545,6 +546,23 @@ rte_dpaa_bus_dev_build(void)
 	return 0;
 }
 
+static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle)
+{
+	int fd;
+
+	fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+	if (fd < 0) {
+		DPAA_BUS_ERR("Cannot set up eventfd, error %i (%s)",
+			     errno, strerror(errno));
+		return errno;
+	}
+
+	intr_handle->fd = fd;
+	intr_handle->type = RTE_INTR_HANDLE_EXT;
+
+	return 0;
+}
+
 static int
 rte_dpaa_bus_probe(void)
 {
@@ -592,6 +610,14 @@ rte_dpaa_bus_probe(void)
 		fclose(svr_file);
 	}
 
+	TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+		if (dev->device_type == FSL_DPAA_ETH) {
+			ret = rte_dpaa_setup_intr(&dev->intr_handle);
+			if (ret)
+				DPAA_PMD_ERR("Error setting up interrupt.\n");
+		}
+	}
+
 	/* And initialize the PA->VA translation table */
 	dpaax_iova_table_populate();
 
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 12e598b2d..d90f2f5fc 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -2,6 +2,7 @@
  *
  * Copyright 2010-2012 Freescale Semiconductor, Inc.
  * All rights reserved.
+ * Copyright 2019-2020 NXP
  *
  */
 
@@ -361,6 +362,7 @@ struct fman_if_ic_params {
  */
 struct __fman_if {
 	struct fman_if __if;
+	char node_name[IF_NAME_MAX_LEN];
 	char node_path[PATH_MAX];
 	uint64_t regs_size;
 	void *ccsr_map;
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index d9ec94ee2..312da1245 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -2,6 +2,7 @@
  *
  * Copyright 2010-2011 Freescale Semiconductor, Inc.
  * All rights reserved.
+ * Copyright 2020 NXP
  *
  */
 
@@ -74,4 +75,23 @@ struct dpaa_ioctl_irq_map {
 int process_portal_irq_map(int fd,  struct dpaa_ioctl_irq_map *irq);
 int process_portal_irq_unmap(int fd);
 
+struct usdpaa_ioctl_link_status {
+	char            if_name[IF_NAME_MAX_LEN];
+	uint32_t        efd;
+};
+
+__rte_experimental
+int rte_dpaa_intr_enable(char *if_name, int efd);
+
+__rte_experimental
+int rte_dpaa_intr_disable(char *if_name);
+
+struct usdpaa_ioctl_link_status_args {
+	/* network device node name */
+	char    if_name[IF_NAME_MAX_LEN];
+	int     link_status;
+};
+__rte_experimental
+int rte_dpaa_get_link_status(char *if_name);
+
 #endif	/*  __PROCESS_H */
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 86f5811b0..e8fa8f569 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -100,4 +100,7 @@ EXPERIMENTAL {
 
 	qman_ern_poll_free;
 	qman_ern_register_cb;
+	rte_dpaa_get_link_status;
+	rte_dpaa_intr_disable;
+	rte_dpaa_intr_enable;
 };
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 373aca978..f385f6de8 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017-2019 NXP
+ *   Copyright 2017-2020 NXP
  *
  */
 #ifndef __RTE_DPAA_BUS_H__
@@ -30,6 +30,9 @@
 #define SVR_LS1046A_FAMILY	0x87070000
 #define SVR_MASK		0xffff0000
 
+/** Device driver supports link state interrupt */
+#define RTE_DPAA_DRV_INTR_LSC  0x0008
+
 #define RTE_DEV_TO_DPAA_CONST(ptr) \
 	container_of(ptr, const struct rte_dpaa_device, device)
 
@@ -88,6 +91,7 @@ struct rte_dpaa_driver {
 	TAILQ_ENTRY(rte_dpaa_driver) next;
 	struct rte_driver driver;
 	struct rte_dpaa_bus *dpaa_bus;
+	uint32_t drv_flags;                 /**< Flags for controlling device.*/
 	enum rte_dpaa_type drv_type;
 	rte_dpaa_probe_t probe;
 	rte_dpaa_remove_t remove;
diff --git a/drivers/common/dpaax/compat.h b/drivers/common/dpaax/compat.h
index 12c9d9917..78e16fa2f 100644
--- a/drivers/common/dpaax/compat.h
+++ b/drivers/common/dpaax/compat.h
@@ -2,7 +2,7 @@
  *
  * Copyright 2011 Freescale Semiconductor, Inc.
  * All rights reserved.
- * Copyright 2019 NXP
+ * Copyright 2019-2020 NXP
  *
  */
 
@@ -390,4 +390,7 @@ static inline unsigned long get_zeroed_page(gfp_t __foo __rte_unused)
 #define atomic_dec_return(v)    rte_atomic32_sub_return(v, 1)
 #define atomic_sub_and_test(i, v) (rte_atomic32_sub_return(v, i) == 0)
 
+/* Interface name len*/
+#define IF_NAME_MAX_LEN 16
+
 #endif /* __COMPAT_H */
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index abe247acd..28c6b1c17 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -45,6 +45,7 @@
 #include <fsl_qman.h>
 #include <fsl_bman.h>
 #include <fsl_fman.h>
+#include <process.h>
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
@@ -131,6 +132,11 @@ static struct rte_dpaa_driver rte_dpaa_pmd;
 static int
 dpaa_eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info);
 
+static int dpaa_eth_link_update(struct rte_eth_dev *dev,
+				int wait_to_complete __rte_unused);
+
+static void dpaa_interrupt_handler(void *param);
+
 static inline void
 dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts)
 {
@@ -195,9 +201,18 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 	struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
 	uint64_t rx_offloads = eth_conf->rxmode.offloads;
 	uint64_t tx_offloads = eth_conf->txmode.offloads;
+	struct rte_device *rdev = dev->device;
+	struct rte_dpaa_device *dpaa_dev;
+	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif;
+	struct rte_intr_handle *intr_handle;
 
 	PMD_INIT_FUNC_TRACE();
 
+	dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+	intr_handle = &dpaa_dev->intr_handle;
+	__fif = container_of(fif, struct __fman_if, __if);
+
 	/* Rx offloads which are enabled by default */
 	if (dev_rx_offloads_nodis & ~rx_offloads) {
 		DPAA_PMD_INFO(
@@ -241,6 +256,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 		dev->data->scattered_rx = 1;
 	}
 
+	/* if the interrupts were configured on this devices*/
+	if (intr_handle && intr_handle->fd &&
+	    dev->data->dev_conf.intr_conf.lsc != 0)
+		rte_intr_callback_register(intr_handle, dpaa_interrupt_handler,
+					   (void *)dev);
+
+	rte_dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+
 	return 0;
 }
 
@@ -269,6 +292,25 @@ dpaa_supported_ptypes_get(struct rte_eth_dev *dev)
 	return NULL;
 }
 
+static void dpaa_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = param;
+	struct rte_device *rdev = dev->device;
+	struct rte_dpaa_device *dpaa_dev;
+	struct rte_intr_handle *intr_handle;
+	uint64_t buf;
+	int bytes_read;
+
+	dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+	intr_handle = &dpaa_dev->intr_handle;
+
+	bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+	if (bytes_read < 0)
+		DPAA_PMD_ERR("Error reading eventfd\n");
+	dpaa_eth_link_update(dev, 0);
+	_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+}
+
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -298,9 +340,27 @@ static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
 
 static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 {
+	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif;
+	struct rte_device *rdev = dev->device;
+	struct rte_dpaa_device *dpaa_dev;
+	struct rte_intr_handle *intr_handle;
+
 	PMD_INIT_FUNC_TRACE();
 
+	dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+	intr_handle = &dpaa_dev->intr_handle;
+	__fif = container_of(fif, struct __fman_if, __if);
+
 	dpaa_eth_dev_stop(dev);
+
+	rte_dpaa_intr_disable(__fif->node_name);
+
+	if (intr_handle && intr_handle->fd &&
+	    dev->data->dev_conf.intr_conf.lsc != 0)
+		rte_intr_callback_unregister(intr_handle,
+					     dpaa_interrupt_handler,
+					     (void *)dev);
 }
 
 static int
@@ -384,6 +444,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 	struct rte_eth_link *link = &dev->data->dev_link;
 	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif = container_of(fif, struct __fman_if, __if);
+	int ret;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -397,9 +459,23 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
 			     dpaa_intf->name, fif->mac_type);
 
-	link->link_status = dpaa_intf->valid;
+	ret = rte_dpaa_get_link_status(__fif->node_name);
+	if (ret < 0) {
+		if (ret == -EINVAL) {
+			DPAA_PMD_DEBUG("Using default link status-No Support");
+			ret = 1;
+		} else {
+			DPAA_PMD_ERR("rte_dpaa_get_link_status %d", ret);
+			return ret;
+		}
+	}
+
+	link->link_status = ret;
 	link->link_duplex = ETH_LINK_FULL_DUPLEX;
 	link->link_autoneg = ETH_LINK_AUTONEG;
+
+	DPAA_PMD_INFO("Port %d Link is %s\n", dev->data->port_id,
+		      link->link_status ? "Up" : "Down");
 	return 0;
 }
 
@@ -1743,6 +1819,9 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
 
 	qman_ern_register_cb(dpaa_free_mbuf);
 
+	if (dpaa_drv->drv_flags & RTE_DPAA_DRV_INTR_LSC)
+		eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
+
 	/* Invoke PMD device initialization function */
 	diag = dpaa_dev_init(eth_dev);
 	if (diag == 0) {
@@ -1770,6 +1849,7 @@ rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
 }
 
 static struct rte_dpaa_driver rte_dpaa_pmd = {
+	.drv_flags = RTE_DPAA_DRV_INTR_LSC,
 	.drv_type = FSL_DPAA_ETH,
 	.probe = rte_dpaa_probe,
 	.remove = rte_dpaa_remove,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH 15/16] bus/dpaa: enable set link status
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (14 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 14/16] bus/dpaa: enable link state interrupt Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 16/16] net/dpaa2: do not prefetch annotation for physical mode Hemant Agrawal
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
  17 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Enable the set link status API to start/stop the PHY
device from the application.
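
The dpaa_link_up()/dpaa_link_down() callbacks changed below call the new
rte_dpaa_update_link_status() wrapper. A minimal application-side sketch is
given here; it assumes these callbacks back the standard ethdev set-link
ops for the DPAA PMD, and uses an arbitrary port id purely as an example.

#include <stdio.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* Sketch: toggle the PHY of a DPAA port from application code */
static void toggle_link(uint16_t port_id)
{
	if (rte_eth_dev_set_link_down(port_id) < 0)
		printf("link down failed on port %u\n", port_id);

	if (rte_eth_dev_set_link_up(port_id) < 0)
		printf("link up failed on port %u\n", port_id);
}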

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/dpaa/base/qbman/process.c     | 35 ++++++++++++++++++++---
 drivers/bus/dpaa/include/process.h        |  3 ++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  1 +
 drivers/net/dpaa/dpaa_ethdev.c            | 14 +++++++--
 4 files changed, 47 insertions(+), 6 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
index 598b10661..8ab57f105 100644
--- a/drivers/bus/dpaa/base/qbman/process.c
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -317,7 +317,7 @@ int rte_dpaa_intr_enable(char *if_name, int efd)
 
 	ret = ioctl(fd, DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT, &args);
 	if (ret) {
-		perror("Failed to enable interrupt\n");
+		printf("Failed to enable interrupt: Not Supported\n");
 		return ret;
 	}
 
@@ -333,7 +333,7 @@ int rte_dpaa_intr_disable(char *if_name)
 
 	ret = ioctl(fd, DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT, &if_name);
 	if (ret) {
-		perror("Failed to disable interrupt\n");
+		printf("Failed to disable interrupt: Not Supported\n");
 		return ret;
 	}
 
@@ -356,9 +356,36 @@ int rte_dpaa_get_link_status(char *if_name)
 
 	ret = ioctl(fd, DPAA_IOCTL_GET_LINK_STATUS, &args);
 	if (ret) {
-		perror("Failed to get link status\n");
-		return ret;
+		printf("Failed to get link status: Not Supported\n");
+		return -errno;
 	}
 
 	return args.link_status;
 }
+
+#define DPAA_IOCTL_UPDATE_LINK_STATUS \
+	_IOW(DPAA_IOCTL_MAGIC, 0x11, struct usdpaa_ioctl_link_status_args)
+
+int rte_dpaa_update_link_status(char *if_name, int link_status)
+{
+	struct usdpaa_ioctl_link_status_args args;
+	int ret;
+
+	ret = check_fd();
+	if (ret)
+		return ret;
+
+	strcpy(args.if_name, if_name);
+	args.link_status = link_status;
+
+	ret = ioctl(fd, DPAA_IOCTL_UPDATE_LINK_STATUS, &args);
+	if (ret) {
+		if (errno == EINVAL)
+			printf("Failed to set link status: Not Supported\n");
+		else
+			perror("Failed to set link status");
+		return ret;
+	}
+
+	return 0;
+}
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index 312da1245..9f8c85895 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -94,4 +94,7 @@ struct usdpaa_ioctl_link_status_args {
 __rte_experimental
 int rte_dpaa_get_link_status(char *if_name);
 
+__rte_experimental
+int rte_dpaa_update_link_status(char *if_name, int link_status);
+
 #endif	/*  __PROCESS_H */
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index e8fa8f569..3eac85429 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -103,4 +103,5 @@ EXPERIMENTAL {
 	rte_dpaa_get_link_status;
 	rte_dpaa_intr_disable;
 	rte_dpaa_intr_enable;
+	rte_dpaa_update_link_status;
 };
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 28c6b1c17..c0a96dd47 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -972,17 +972,27 @@ dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 static int dpaa_link_down(struct rte_eth_dev *dev)
 {
+	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif;
+
 	PMD_INIT_FUNC_TRACE();
 
-	dpaa_eth_dev_stop(dev);
+	__fif = container_of(fif, struct __fman_if, __if);
+
+	rte_dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
 	return 0;
 }
 
 static int dpaa_link_up(struct rte_eth_dev *dev)
 {
+	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif;
+
 	PMD_INIT_FUNC_TRACE();
 
-	dpaa_eth_dev_start(dev);
+	__fif = container_of(fif, struct __fman_if, __if);
+
+	rte_dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
 	return 0;
 }
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH 16/16] net/dpaa2: do not prefetch annotation for physical mode
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (15 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 15/16] bus/dpaa: enable set link status Hemant Agrawal
@ 2020-03-02 14:58 ` Hemant Agrawal
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
  17 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-02 14:58 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

When IOVA is a physical address, do not prefetch the annotation
of the next frame, as there is a cost involved in converting the
physical address to a virtual address.
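
The shape of the change, condensed here from the hunks below for readability
(not standalone code): the next-frame annotation prefetch is now compiled in
only for the virtual-addressing build, where DPAA2_IOVA_TO_VADDR() is a plain
cast and thus free, whereas in the physical-IOVA build the address
translation itself would cost more than the prefetch saves.

#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
	if (dpaa2_svr_family != SVR_LX2160A) {
		const struct qbman_fd *next_fd =
			qbman_result_DQ_fd(dq_storage + 1);

		/* Prefetch the parse-results annotation of the next frame */
		rte_prefetch0(DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(next_fd) +
						  DPAA2_FD_PTA_SIZE + 16));
	}
#endif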

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  6 ++--
 drivers/net/dpaa2/dpaa2_rxtx.c          | 40 +++++++++++++++----------
 2 files changed, 28 insertions(+), 18 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index bde1441f4..6b07b628a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2020 NXP
  *
  */
 
@@ -403,8 +403,8 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
-#define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
-#define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
+#define DPAA2_VADDR_TO_IOVA(_vaddr) (phys_addr_t)(_vaddr)
+#define DPAA2_IOVA_TO_VADDR(_iova) (void *)(_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
 
 #endif /* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index d809e0f4b..4d024a85f 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2020 NXP
  *
  */
 
@@ -324,8 +324,8 @@ static inline struct rte_mbuf *__attribute__((hot))
 eth_fd_to_mbuf(const struct qbman_fd *fd,
 	       int port_id)
 {
-	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(
-		DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd)),
+	void *iova_addr = DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(iova_addr,
 		     rte_dpaa2_bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size);
 
 	/* need to repopulated some of the fields,
@@ -350,8 +350,7 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 		dpaa2_dev_rx_parse_new(mbuf, fd);
 	else
 		mbuf->packet_type = dpaa2_dev_rx_parse(mbuf,
-			(void *)((size_t)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd))
-			 + DPAA2_FD_PTA_SIZE));
+			(void *)((size_t)iova_addr + DPAA2_FD_PTA_SIZE));
 
 	DPAA2_PMD_DP_DEBUG("to mbuf - mbuf =%p, mbuf->buf_addr =%p, off = %d,"
 		"fd_off=%d fd =%" PRIx64 ", meta = %d  bpid =%d, len=%d\n",
@@ -518,7 +517,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int ret, num_rx = 0, pull_size;
 	uint8_t pending, status;
 	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
 	struct qbman_pull_desc pulldesc;
 	struct queue_storage_info_t *q_storage = dpaa2_q->q_storage;
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
@@ -617,12 +616,15 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 		fd = qbman_result_DQ_fd(dq_storage);
 
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
 		if (dpaa2_svr_family != SVR_LX2160A) {
-			next_fd = qbman_result_DQ_fd(dq_storage + 1);
+			const struct qbman_fd *next_fd =
+				qbman_result_DQ_fd(dq_storage + 1);
 			/* Prefetch Annotation address for the parse results */
-			rte_prefetch0((void *)(size_t)(DPAA2_GET_FD_ADDR(
-				      next_fd) + DPAA2_FD_PTA_SIZE + 16));
+			rte_prefetch0(DPAA2_IOVA_TO_VADDR((DPAA2_GET_FD_ADDR(
+				next_fd) + DPAA2_FD_PTA_SIZE + 16)));
 		}
+#endif
 
 		if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
 			bufs[num_rx] = eth_sg_fd_to_mbuf(fd, eth_data->port_id);
@@ -753,7 +755,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int ret, num_rx = 0, next_pull = nb_pkts, num_pulled;
 	uint8_t pending, status;
 	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
 	struct qbman_pull_desc pulldesc;
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
 
@@ -821,11 +823,19 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			}
 			fd = qbman_result_DQ_fd(dq_storage);
 
-			next_fd = qbman_result_DQ_fd(dq_storage + 1);
-			/* Prefetch Annotation address for the parse results */
-			rte_prefetch0(
-				(void *)(size_t)(DPAA2_GET_FD_ADDR(next_fd)
-					+ DPAA2_FD_PTA_SIZE + 16));
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+			if (dpaa2_svr_family != SVR_LX2160A) {
+				const struct qbman_fd *next_fd =
+					qbman_result_DQ_fd(dq_storage + 1);
+
+				/* Prefetch Annotation address for the parse
+				 * results.
+				 */
+				rte_prefetch0((DPAA2_IOVA_TO_VADDR(
+					DPAA2_GET_FD_ADDR(next_fd) +
+					DPAA2_FD_PTA_SIZE + 16)));
+			}
+#endif
 
 			if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
 				bufs[num_rx] = eth_sg_fd_to_mbuf(fd,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 11/16] net/dpaa: enable Tx queue taildrop
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 11/16] net/dpaa: enable Tx queue taildrop Hemant Agrawal
@ 2020-03-03 16:59   ` Ferruh Yigit
  2020-03-04  8:43     ` Hemant Agrawal (OSS)
  2020-03-04  8:49     ` David Marchand
  2020-03-03 17:02   ` Ferruh Yigit
  1 sibling, 2 replies; 109+ messages in thread
From: Ferruh Yigit @ 2020-03-03 16:59 UTC (permalink / raw)
  To: Hemant Agrawal
  Cc: dev, Gagandeep Singh, Jerin Jacob Kollanukkaran, David Marchand,
	Akhil Goyal

On 3/2/2020 2:58 PM, Hemant Agrawal wrote:
> From: Gagandeep Singh <g.singh@nxp.com>
> 
> Enable congestion handling/tail drop for TX queues.
> 
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>

I don't know why it revealed itself with this patch, but I am getting an error
with this patch [1]; this looks like a build dependency error.

While trying to link 'librte_pmd_dpaa_event.so', the dependent library
'librte_pmd_dpaa_sec.so' is not ready yet.

A dependency directive is fixing it [2], at least in my environment, but it may
need to be backported to old releases too.

Would you mind sending a patch to fix the issue?
I suggest merging it before this patchset.


[1]
/usr/bin/ld: cannot find -lrte_pmd_dpaa_sec
collect2: error: ld returned 1 exit status
make[9]: *** [.../mk/rte.lib.mk:100: librte_pmd_dpaa_event.so.20.0.2] Error 1


[2]
 diff --git a/drivers/Makefile b/drivers/Makefile
 index 46374ca69..c70bdf9cc 100644
 --- a/drivers/Makefile
 +++ b/drivers/Makefile
 @@ -21,7 +21,7 @@ DEPDIRS-compress := bus mempool
  DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += vdpa
  DEPDIRS-vdpa := common bus mempool
  DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
 -DEPDIRS-event := common bus mempool net
 +DEPDIRS-event := common bus mempool net crypto
  DIRS-$(CONFIG_RTE_LIBRTE_RAWDEV) += raw
  DEPDIRS-raw := common bus mempool net event

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 11/16] net/dpaa: enable Tx queue taildrop
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 11/16] net/dpaa: enable Tx queue taildrop Hemant Agrawal
  2020-03-03 16:59   ` Ferruh Yigit
@ 2020-03-03 17:02   ` Ferruh Yigit
  2020-03-05  6:49     ` Gagandeep Singh
  1 sibling, 1 reply; 109+ messages in thread
From: Ferruh Yigit @ 2020-03-03 17:02 UTC (permalink / raw)
  To: Hemant Agrawal; +Cc: dev, Gagandeep Singh, David Marchand, Neil Horman

On 3/2/2020 2:58 PM, Hemant Agrawal wrote:
> From: Gagandeep Singh <g.singh@nxp.com>
> 
> Enable congestion handling/tail drop for TX queues.
> 
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>

<...>

> diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
> index e6ca4361e..86f5811b0 100644
> --- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
> +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
> @@ -94,3 +94,10 @@ DPDK_20.0 {
>  
>  	local: *;
>  };
> +
> +EXPERIMENTAL {
> +	global:
> +
> +	qman_ern_poll_free;
> +	qman_ern_register_cb;
> +};

Aren't these bus APIs internal (between net/crypto/event drivers and bus
libraries)? And they are *not* for applications to use.

If they are only internal is there any point to make them experimental?

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 07/16] bus/fslmc: support portal migration
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 07/16] bus/fslmc: support portal migration Hemant Agrawal
@ 2020-03-03 17:43   ` Ferruh Yigit
  0 siblings, 0 replies; 109+ messages in thread
From: Ferruh Yigit @ 2020-03-03 17:43 UTC (permalink / raw)
  To: Hemant Agrawal; +Cc: dev, Nipun Gupta

On 3/2/2020 2:58 PM, Hemant Agrawal wrote:
> From: Nipun Gupta <nipun.gupta@nxp.com>
> 
> The patch adds support for portal migration by disabling stashing
> for the portals which is used in the non-affined threads, or on
> threads affined to multiple cores
> 
> Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>

<...>

> @@ -754,7 +856,7 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
>  			return -EBUSY;
>  	}
>  
> -	p = qbman_cena_write_start_wo_shadow(&s->sys,
> +	p = qbman_cinh_write_start_wo_shadow(&s->sys,
>  			QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
>  	memcpy(&p[1], &cl[1], 28);
>  	memcpy(&p[8], fd, sizeof(*fd));
> @@ -762,8 +864,44 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
>  
>  	/* Set the verb byte, have to substitute in the valid-bit */
>  	p[0] = cl[0] | s->eqcr.pi_vb;
> -	qbman_cena_write_complete_wo_shadow(&s->sys,
> +	s->eqcr.pi++;
> +	s->eqcr.pi &= full_mask;
> +	s->eqcr.available--;
> +	if (!(s->eqcr.pi & half_mask))
> +		s->eqcr.pi_vb ^= QB_VALID_BIT;
> +
> +	return 0;
> +}
> +
> +static int qbman_swp_enqueue_ring_mode_cinh_direct(
> +		struct qbman_swp *s,
> +		const struct qbman_eq_desc *d,
> +		const struct qbman_fd *fd)
> +{

This patch adds a second 'qbman_swp_enqueue_ring_mode_cinh_direct()'
function; it may be a git artifact.

The duplicated function seems to be removed later, but this patch is wrong and
needs fixing. Please make sure each patch in the series is functional and compiles fine.

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 11/16] net/dpaa: enable Tx queue taildrop
  2020-03-03 16:59   ` Ferruh Yigit
@ 2020-03-04  8:43     ` Hemant Agrawal (OSS)
  2020-03-04  8:49     ` David Marchand
  1 sibling, 0 replies; 109+ messages in thread
From: Hemant Agrawal (OSS) @ 2020-03-04  8:43 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: dev, Gagandeep Singh, Jerin Jacob Kollanukkaran, David Marchand,
	Akhil Goyal

Hi Ferruh,

> I don't know why it revealed itself with this patch, but I am getting an error
> with this patch [1], this looks like build dependency error.
> 
> While trying to link 'librte_pmd_dpaa_event.so', the dependent library
> 'librte_pmd_dpaa_sec.so' is not ready yet.
> 
> A dependency directive is fixing it [2], at least in my environment, but it may
> need to be backported to old releases too.
> 
> Would you mind sending a patch to fix the issue?
> I suggest merging it before this patchset.

[Hemant]  Thanks! We will send the patch.

> 
> 
> [1]
> /usr/bin/ld: cannot find -lrte_pmd_dpaa_sec
> collect2: error: ld returned 1 exit status
> make[9]: *** [.../mk/rte.lib.mk:100: librte_pmd_dpaa_event.so.20.0.2] Error 1
> 
> 
> [2]
>  diff --git a/drivers/Makefile b/drivers/Makefile
>  index 46374ca69..c70bdf9cc 100644
>  --- a/drivers/Makefile
>  +++ b/drivers/Makefile
>  @@ -21,7 +21,7 @@ DEPDIRS-compress := bus mempool
>   DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += vdpa
>   DEPDIRS-vdpa := common bus mempool
>   DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
>  -DEPDIRS-event := common bus mempool net
>  +DEPDIRS-event := common bus mempool net crypto
>   DIRS-$(CONFIG_RTE_LIBRTE_RAWDEV) += raw
>   DEPDIRS-raw := common bus mempool net event

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 11/16] net/dpaa: enable Tx queue taildrop
  2020-03-03 16:59   ` Ferruh Yigit
  2020-03-04  8:43     ` Hemant Agrawal (OSS)
@ 2020-03-04  8:49     ` David Marchand
  1 sibling, 0 replies; 109+ messages in thread
From: David Marchand @ 2020-03-04  8:49 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Hemant Agrawal, dev, Gagandeep Singh, Jerin Jacob Kollanukkaran,
	Akhil Goyal

On Tue, Mar 3, 2020 at 5:59 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 3/2/2020 2:58 PM, Hemant Agrawal wrote:
> > From: Gagandeep Singh <g.singh@nxp.com>
> >
> > Enable congestion handling/tail drop for TX queues.
> >
> > Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
>
> I don't know why it revealed itself with this patch, but I am getting an error
> with this patch [1], this looks like build dependency error.
>
> While trying to link 'librte_pmd_dpaa_event.so', the dependent library
> 'librte_pmd_dpaa_sec.so' is not ready yet.
>
> A dependency directive is fixing it [2], at least in my environment, but it may
> need to be backported to old releases too.
>
> Would you mind sending a patch to fix the issue?
> I suggest merging it before this patchset.
>
>
> [1]
> /usr/bin/ld: cannot find -lrte_pmd_dpaa_sec
> collect2: error: ld returned 1 exit status
> make[9]: *** [.../mk/rte.lib.mk:100: librte_pmd_dpaa_event.so.20.0.2] Error 1

Looks like what I had reported earlier.
http://inbox.dpdk.org/dev/CAJFAV8yQThD3JE8n8zxM8xiH=Rp3yomhzjH0p_8uzzXdiveuxA@mail.gmail.com/


>
>
> [2]
>  diff --git a/drivers/Makefile b/drivers/Makefile
>  index 46374ca69..c70bdf9cc 100644
>  --- a/drivers/Makefile
>  +++ b/drivers/Makefile
>  @@ -21,7 +21,7 @@ DEPDIRS-compress := bus mempool
>   DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += vdpa
>   DEPDIRS-vdpa := common bus mempool
>   DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
>  -DEPDIRS-event := common bus mempool net
>  +DEPDIRS-event := common bus mempool net crypto
>   DIRS-$(CONFIG_RTE_LIBRTE_RAWDEV) += raw
>   DEPDIRS-raw := common bus mempool net event
>

lgtm.
Thanks.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 11/16] net/dpaa: enable Tx queue taildrop
  2020-03-03 17:02   ` Ferruh Yigit
@ 2020-03-05  6:49     ` Gagandeep Singh
  2020-03-05 14:14       ` Ferruh Yigit
  0 siblings, 1 reply; 109+ messages in thread
From: Gagandeep Singh @ 2020-03-05  6:49 UTC (permalink / raw)
  To: Ferruh Yigit, Hemant Agrawal; +Cc: dev, David Marchand, Neil Horman

Hi Ferruh,
> > diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map
> b/drivers/bus/dpaa/rte_bus_dpaa_version.map
> > index e6ca4361e..86f5811b0 100644
> > --- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
> > +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
> > @@ -94,3 +94,10 @@ DPDK_20.0 {
> >
> >  	local: *;
> >  };
> > +
> > +EXPERIMENTAL {
> > +	global:
> > +
> > +	qman_ern_poll_free;
> > +	qman_ern_register_cb;
> > +};
> 
> Aren't these bus APIs internal (between net/crypto/event drivers and bus
> libraries)? And they are *not* for applications to use.
> 
> If they are only internal is there any point to make them experimental?

Yes, these are internal-only APIs. So, is it OK to add them directly into the DPDK_20.0 section?

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements
  2020-03-02 13:01 ` David Marchand
@ 2020-03-05  9:06   ` Hemant Agrawal (OSS)
  2020-03-05  9:09     ` David Marchand
  0 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal (OSS) @ 2020-03-05  9:06 UTC (permalink / raw)
  To: David Marchand; +Cc: Yigit, Ferruh, dev

Hi David,
> On Mon, Mar 2, 2020 at 10:26 AM Hemant Agrawal
> <hemant.agrawal@nxp.com> wrote:
> >
> > This patch series add various patches for enhancing and fixing NXP
> > fslmc bus, dpaa bus, and dpaax.
> >
> > - the main change is support to allow thread migration across lcores
> > - improving the multi-process support
> 
> This series triggers an ABI warning that must be investigated.
>https://travis-ci.com/ovsrobot/dpdk/jobs/292904119#L2233

[Hemant] 
As per the logs:

Variables changes summary: 1 Removed, 2 Changed, 0 Added variables
1 Removed variable:
  'dpaa2_portal_dqrr per_lcore_dpaa2_held_bufs'    {per_lcore_dpaa2_held_bufs@@DPDK_20.0}
2 Changed variables:
  [C]'dpaa2_io_portal_t dpaa2_io_portal[128]' was changed at dpaa2_hw_dpio.h:40:1: size of symbol changed from 5120 to 2048
  [C]'dpaa2_io_portal_t per_lcore__dpaa2_io' was changed at dpaa2_hw_dpio.h:20:1: size of symbol changed from 40 to 16

Error: ABI issue reported for 'abidiff --suppr devtools/libabigail.abignore --no-added-syms --headers-dir1 reference/usr/local/include --headers-dir2 install/usr/local/include reference/dump/librte_bus_fslmc.dump install/dump/librte_bus_fslmc.dump'

---------------

These changes are w.r.t modifications in internal structures and variables. They may be ignored.
> 
> --
> David Marchand


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements
  2020-03-05  9:06   ` Hemant Agrawal (OSS)
@ 2020-03-05  9:09     ` David Marchand
  2020-03-05  9:19       ` Hemant Agrawal (OSS)
  0 siblings, 1 reply; 109+ messages in thread
From: David Marchand @ 2020-03-05  9:09 UTC (permalink / raw)
  To: Hemant Agrawal (OSS); +Cc: Yigit, Ferruh, dev

On Thu, Mar 5, 2020 at 10:06 AM Hemant Agrawal (OSS)
<hemant.agrawal@oss.nxp.com> wrote:
>
> Hi David,
> > On Mon, Mar 2, 2020 at 10:26 AM Hemant Agrawal
> > <hemant.agrawal@nxp.com> wrote:
> > >
> > > This patch series add various patches for enhancing and fixing NXP
> > > fslmc bus, dpaa bus, and dpaax.
> > >
> > > - the main change is support to allow thread migration across lcores
> > > - improving the multi-process support
> >
> > This series triggers an ABI warning that must be investigated.
> >https://travis-ci.com/ovsrobot/dpdk/jobs/292904119#L2233
>
> [Hemant]
> As per the logs:
>
> Variables changes summary: 1 Removed, 2 Changed, 0 Added variables
> 1 Removed variable:
>   'dpaa2_portal_dqrr per_lcore_dpaa2_held_bufs'    {per_lcore_dpaa2_held_bufs@@DPDK_20.0}
> 2 Changed variables:
>   [C]'dpaa2_io_portal_t dpaa2_io_portal[128]' was changed at dpaa2_hw_dpio.h:40:1: size of symbol changed from 5120 to 2048
>   [C]'dpaa2_io_portal_t per_lcore__dpaa2_io' was changed at dpaa2_hw_dpio.h:20:1: size of symbol changed from 40 to 16
>
> Error: ABI issue reported for 'abidiff --suppr devtools/libabigail.abignore --no-added-syms --headers-dir1 reference/usr/local/include --headers-dir2 install/usr/local/include reference/dump/librte_bus_fslmc.dump install/dump/librte_bus_fslmc.dump'
>
> ---------------
>
> These changes are w.r.t modifications in internal structures and variables. They may be ignored.

The ABI check considers symbols exposed in headers available to final users.
If those are internal, why are the headers public?


-- 
David Marchand


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements
  2020-03-05  9:09     ` David Marchand
@ 2020-03-05  9:19       ` Hemant Agrawal (OSS)
  2020-03-06 10:12         ` David Marchand
  0 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal (OSS) @ 2020-03-05  9:19 UTC (permalink / raw)
  To: David Marchand, Hemant Agrawal (OSS); +Cc: Yigit, Ferruh, dev

Hi David,

> On Thu, Mar 5, 2020 at 10:06 AM Hemant Agrawal (OSS)
> <hemant.agrawal@oss.nxp.com> wrote:
> >
> > Hi David,
> > > On Mon, Mar 2, 2020 at 10:26 AM Hemant Agrawal
> > > <hemant.agrawal@nxp.com> wrote:
> > > >
> > > > This patch series add various patches for enhancing and fixing NXP
> > > > fslmc bus, dpaa bus, and dpaax.
> > > >
> > > > - the main change is support to allow thread migration across
> > > > lcores
> > > > - improving the multi-process support
> > >
> > > This series triggers an ABI warning that must be investigated.
> > >https://travis-ci.com/ovsrobot/dpdk/jobs/292904119#L2233
> >
> > [Hemant]
> > As per the logs:
> >
> > Variables changes summary: 1 Removed, 2 Changed, 0 Added variables
> > 1 Removed variable:
> >   'dpaa2_portal_dqrr per_lcore_dpaa2_held_bufs'
> {per_lcore_dpaa2_held_bufs@@DPDK_20.0}
> > 2 Changed variables:
> >   [C]'dpaa2_io_portal_t dpaa2_io_portal[128]' was changed at
> dpaa2_hw_dpio.h:40:1: size of symbol changed from 5120 to 2048
> >   [C]'dpaa2_io_portal_t per_lcore__dpaa2_io' was changed at
> > dpaa2_hw_dpio.h:20:1: size of symbol changed from 40 to 16
> >
> > Error: ABI issue reported for 'abidiff --suppr devtools/libabigail.abignore --
> no-added-syms --headers-dir1 reference/usr/local/include --headers-dir2
> install/usr/local/include reference/dump/librte_bus_fslmc.dump
> install/dump/librte_bus_fslmc.dump'
> >
> > ---------------
> >
> > These changes are w.r.t modifications in internal structures and variables.
> They may be ignored.
> 
> The ABI check considers symbol exposed in headers available to final users.
> If those are internal, why are the headers public?
> 

[Hemant] These symbols are not part of any public header files, but they are part of the *.map files so that they can be shared between different driver libs, i.e. bus_fslmc and net_dpaa2.

> 
> --
> David Marchand


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 11/16] net/dpaa: enable Tx queue taildrop
  2020-03-05  6:49     ` Gagandeep Singh
@ 2020-03-05 14:14       ` Ferruh Yigit
  0 siblings, 0 replies; 109+ messages in thread
From: Ferruh Yigit @ 2020-03-05 14:14 UTC (permalink / raw)
  To: Gagandeep Singh, Hemant Agrawal; +Cc: dev, David Marchand, Neil Horman

On 3/5/2020 6:49 AM, Gagandeep Singh wrote:
> Hi Ferruh,
>>> diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map
>> b/drivers/bus/dpaa/rte_bus_dpaa_version.map
>>> index e6ca4361e..86f5811b0 100644
>>> --- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
>>> +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
>>> @@ -94,3 +94,10 @@ DPDK_20.0 {
>>>
>>>  	local: *;
>>>  };
>>> +
>>> +EXPERIMENTAL {
>>> +	global:
>>> +
>>> +	qman_ern_poll_free;
>>> +	qman_ern_register_cb;
>>> +};
>>
>> Aren't these bus APIs internal (between net/crypto/event drivers and bus
>> libraries)? And they are *not* for applications to use.
>>
>> If they are only internal is there any point to make them experimental?
> 
> Yes, these are internal only APIS.  So, is it ok to add them directly into DPDK_20.0 section? 
> 

I think it is OK, but I don't know what the effect on the ABI check tools
will be.
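
For illustration only, folding the two symbols into the stable block of
drivers/bus/dpaa/rte_bus_dpaa_version.map would look roughly like the sketch
below (existing entries elided); whether the ABI checker accepts internal
symbols living there is exactly the open question above.

DPDK_20.0 {
	global:

	/* ... existing exported symbols ... */
	qman_ern_poll_free;
	qman_ern_register_cb;

	local: *;
};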

^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements
  2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                   ` (16 preceding siblings ...)
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 16/16] net/dpaa2: do not prefetch annotation for physical mode Hemant Agrawal
@ 2020-03-06  9:57 ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 01/16] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
                     ` (16 more replies)
  17 siblings, 17 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

This patch series add various patches for enhancing and fixing NXP
fslmc bus, dpaa bus, and dpaax.

- the main change is support to allow thread migration across lcores
- improving the multi-process support


v2: address review comments

Apeksha Gupta (1):
  bus/fslmc: fix dereferencing null pointer

Gagandeep Singh (2):
  bus/fslmc: combine thread specific variables
  net/dpaa: enable Tx queue taildrop

Hemant Agrawal (1):
  bus/fslmc: support handle portal alloc failure

Nipun Gupta (8):
  bus/fslmc: rework portal allocation to a per thread basis
  bus/fslmc: limit pthread destructor called for dpaa2 only
  bus/fslmc: support portal migration
  drivers: enhance portal allocation failure log
  bus/fslmc: rename the cinh read functions used for ls1088
  net/dpaa: return error on multiple mp config
  net/dpaa: update process specific device info
  net/dpaa2: do not prefetch annotation for physical mode

Rohit Raj (3):
  net/dpaa2: fix 10g port negotiation issue
  bus/dpaa: enable link state interrupt
  bus/dpaa: enable set link status

Sachin Saxena (1):
  net/dpaa: add 2.5G support

 drivers/bus/dpaa/base/fman/fman.c             |  10 +-
 drivers/bus/dpaa/base/fman/netcfg_layer.c     |   3 +-
 drivers/bus/dpaa/base/qbman/process.c         |  95 ++-
 drivers/bus/dpaa/base/qbman/qman.c            |  43 ++
 drivers/bus/dpaa/dpaa_bus.c                   |  28 +-
 drivers/bus/dpaa/include/fman.h               |   3 +
 drivers/bus/dpaa/include/fsl_qman.h           |  15 +
 drivers/bus/dpaa/include/process.h            |  23 +
 drivers/bus/dpaa/rte_bus_dpaa_version.map     |  11 +
 drivers/bus/dpaa/rte_dpaa_bus.h               |   6 +-
 drivers/bus/fslmc/Makefile                    |   1 +
 drivers/bus/fslmc/fslmc_bus.c                 |   2 -
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      | 284 ++++-----
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h      |  10 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |  14 +-
 .../fslmc/qbman/include/fsl_qbman_portal.h    |   8 +-
 drivers/bus/fslmc/qbman/qbman_debug.c         |   9 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        | 580 +++++++++++++++++-
 drivers/bus/fslmc/qbman/qbman_portal.h        |  19 +-
 drivers/bus/fslmc/qbman/qbman_sys.h           | 135 +++-
 drivers/bus/fslmc/rte_fslmc.h                 |  18 -
 drivers/common/dpaax/compat.h                 |   5 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |   8 +-
 drivers/event/dpaa2/dpaa2_eventdev.c          |   8 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |   1 +
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |  12 +-
 drivers/net/dpaa/dpaa_ethdev.c                | 417 +++++++++----
 drivers/net/dpaa/dpaa_ethdev.h                |   3 +-
 drivers/net/dpaa/dpaa_rxtx.c                  |  71 +++
 drivers/net/dpaa/dpaa_rxtx.h                  |   3 +
 drivers/net/dpaa2/dpaa2_ethdev.c              |   8 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |  56 +-
 drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c         |   8 +-
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c           |  12 +-
 34 files changed, 1564 insertions(+), 365 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 01/16] net/dpaa2: fix 10g port negotiation issue
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 02/16] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
                     ` (15 subsequent siblings)
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable, Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Fix the 10G port negotiation issue with another 10G/non-10G port.
Initialize the port link speed.

Fixes: c5acbb5ea20e ("net/dpaa2: support link status event")
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 2cde55e7c..4fc550a88 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -553,9 +553,6 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
 		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
 
-	/* update the current status */
-	dpaa2_dev_link_update(dev, 0);
-
 	return 0;
 }
 
@@ -1757,6 +1754,7 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	/* changing tx burst function to start enqueues */
 	dev->tx_pkt_burst = dpaa2_dev_tx;
 	dev->data->dev_link.link_status = state.up;
+	dev->data->dev_link.link_speed = state.rate;
 
 	if (state.up)
 		DPAA2_PMD_INFO("Port %d Link is Up", dev->data->port_id);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 02/16] bus/fslmc: fix dereferencing null pointer
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 01/16] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 03/16] bus/fslmc: combine thread specific variables Hemant Agrawal
                     ` (14 subsequent siblings)
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable, Apeksha Gupta

From: Apeksha Gupta <apeksha.gupta@nxp.com>

This patch fixes the null pointer dereferencing issue reported by
the NXP internal Coverity scan.
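
A minimal, self-contained sketch of the corrected pattern (hypothetical
names; the real code below uses qbman_swp_mc_complete() and struct
qbman_fq_query_np_rslt): test the pointer the call returns before copying
through it, instead of dereferencing it first and then testing the wrong
(destination) pointer.

#include <errno.h>

struct rslt { int verb; };

/* stand-in for the management-command completion call; may return NULL */
static struct rslt *mc_complete(void) { return NULL; }

static int query(struct rslt *r)
{
	struct rslt *var = mc_complete();

	if (!var)	/* check the returned pointer first */
		return -EIO;

	*r = *var;	/* only then copy the result out */
	return 0;
}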

Fixes: 6fef517e17cf ("bus/fslmc: add qman HW fq query count API")
Cc: stable@dpdk.org

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 0bb2ce880..34374ae4b 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -20,26 +20,27 @@ struct qbman_fq_query_desc {
 	uint8_t verb;
 	uint8_t reserved[3];
 	uint32_t fqid;
-	uint8_t reserved2[57];
+	uint8_t reserved2[56];
 };
 
 int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
 			 struct qbman_fq_query_np_rslt *r)
 {
 	struct qbman_fq_query_desc *p;
+	struct qbman_fq_query_np_rslt *var;
 
 	p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->fqid = fqid;
-	*r = *(struct qbman_fq_query_np_rslt *)qbman_swp_mc_complete(s, p,
-						QBMAN_FQ_QUERY_NP);
-	if (!r) {
+	var = qbman_swp_mc_complete(s, p, QBMAN_FQ_QUERY_NP);
+	if (!var) {
 		pr_err("qbman: Query FQID %d NP fields failed, no response\n",
 		       fqid);
 		return -EIO;
 	}
+	*r = *var;
 
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY_NP);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 03/16] bus/fslmc: combine thread specific variables
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 01/16] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 02/16] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 04/16] bus/fslmc: rework portal allocation to a per thread basis Hemant Agrawal
                     ` (13 subsequent siblings)
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

This is to reduce the thread-local storage.
Note that although these variables are part of the *.map file,
they are internal to the fslmc and dpaa2 drivers and not exposed
outside.
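
A minimal, generic sketch of the consolidation pattern (hypothetical names;
the real fields and accessor macros are in the hunks below): the per-thread
DQRR bookkeeping moves from its own thread-local variable into the portal
device structure that each thread already reaches through a single
thread-local pointer, so per-thread storage shrinks to that one pointer.

#include <stdint.h>

struct dqrr_state {			/* was: its own per-lcore variable */
	uint64_t held;
	uint8_t size;
};

struct portal_dev {
	int hw_id;
	struct dqrr_state held_bufs;	/* now embedded in the device */
};

/* the only thread-local object left: a pointer to the thread's portal */
static __thread struct portal_dev *my_portal;

#define MY_DQRR_HELD	(my_portal->held_bufs.held)
#define MY_DQRR_SIZE	(my_portal->held_bufs.size)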

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/bus/fslmc/fslmc_bus.c            |  2 --
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h |  7 +++++++
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h  |  8 ++++++++
 drivers/bus/fslmc/rte_fslmc.h            | 18 ------------------
 4 files changed, 15 insertions(+), 20 deletions(-)

diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index b3e964aa9..1f657822f 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -37,8 +37,6 @@ rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type)
 	return rte_fslmc_bus.device_count[device_type];
 }
 
-RTE_DEFINE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
-
 static void
 cleanup_fslmc_device_list(void)
 {
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 2829c9380..9da4af782 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -28,6 +28,13 @@ RTE_DECLARE_PER_LCORE(struct dpaa2_io_portal_t, _dpaa2_io);
 #define DPAA2_PER_LCORE_ETHRX_DPIO RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
 #define DPAA2_PER_LCORE_ETHRX_PORTAL DPAA2_PER_LCORE_ETHRX_DPIO->sw_portal
 
+#define DPAA2_PER_LCORE_DQRR_SIZE \
+	RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.dqrr_size
+#define DPAA2_PER_LCORE_DQRR_HELD \
+	RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.dqrr_held
+#define DPAA2_PER_LCORE_DQRR_MBUF(i) \
+	RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.mbuf[i]
+
 /* Variable to store DPAA2 DQRR size */
 extern uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index ab2b213f8..bde1441f4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -87,6 +87,13 @@ struct eqresp_metadata {
 	struct rte_mempool *mp;
 };
 
+#define DPAA2_PORTAL_DEQUEUE_DEPTH	32
+struct dpaa2_portal_dqrr {
+	struct rte_mbuf *mbuf[DPAA2_PORTAL_DEQUEUE_DEPTH];
+	uint64_t dqrr_held;
+	uint8_t dqrr_size;
+};
+
 struct dpaa2_dpio_dev {
 	TAILQ_ENTRY(dpaa2_dpio_dev) next;
 		/**< Pointer to Next device instance */
@@ -112,6 +119,7 @@ struct dpaa2_dpio_dev {
 	struct rte_intr_handle intr_handle; /* Interrupt related info */
 	int32_t	epoll_fd; /**< File descriptor created for interrupt polling */
 	int32_t hw_id; /**< An unique ID of this DPIO device instance */
+	struct dpaa2_portal_dqrr dpaa2_held_bufs;
 };
 
 struct dpaa2_dpbp_dev {
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 96ba8dc25..a59f0077e 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -137,24 +137,6 @@ struct rte_fslmc_bus {
 				/**< Count of all devices scanned */
 };
 
-#define DPAA2_PORTAL_DEQUEUE_DEPTH	32
-
-/* Create storage for dqrr entries per lcore */
-struct dpaa2_portal_dqrr {
-	struct rte_mbuf *mbuf[DPAA2_PORTAL_DEQUEUE_DEPTH];
-	uint64_t dqrr_held;
-	uint8_t dqrr_size;
-};
-
-RTE_DECLARE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
-
-#define DPAA2_PER_LCORE_DQRR_SIZE \
-	RTE_PER_LCORE(dpaa2_held_bufs).dqrr_size
-#define DPAA2_PER_LCORE_DQRR_HELD \
-	RTE_PER_LCORE(dpaa2_held_bufs).dqrr_held
-#define DPAA2_PER_LCORE_DQRR_MBUF(i) \
-	RTE_PER_LCORE(dpaa2_held_bufs).mbuf[i]
-
 /**
  * Register a DPAA2 driver.
  *
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 04/16] bus/fslmc: rework portal allocation to a per thread basis
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (2 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 03/16] bus/fslmc: combine thread specific variables Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 05/16] bus/fslmc: support handle portal alloc failure Hemant Agrawal
                     ` (12 subsequent siblings)
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

The patch reworks the portal allocation, which was previously done
on a per-lcore basis, to a per-thread basis. Users can now also
create their own threads and use DPAA2 portals for packet I/O.
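
A minimal usage sketch, assuming a dpaa2 port already configured and
started by the application (rx_thread(), the chosen core and queue 0
are illustrative, not part of the patch). With this rework the first
call into the PMD affines a portal for the calling thread, so a plain
pthread pinned to a single core can poll the device:

	#define _GNU_SOURCE
	#include <pthread.h>
	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	#define BURST 32

	static void *rx_thread(void *arg)
	{
		uint16_t port = *(uint16_t *)arg;
		struct rte_mbuf *pkts[BURST];
		cpu_set_t cpus;
		uint16_t i, nb;

		/* Pin to one core so the portal can keep stashing enabled */
		CPU_ZERO(&cpus);
		CPU_SET(3, &cpus);	/* core 3 is an arbitrary example */
		pthread_setaffinity_np(pthread_self(), sizeof(cpus), &cpus);

		for (;;) {
			nb = rte_eth_rx_burst(port, 0, pkts, BURST);
			for (i = 0; i < nb; i++)
				rte_pktmbuf_free(pkts[i]); /* placeholder */
		}
		return NULL;
	}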

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/Makefile               |   1 +
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 210 ++++++++++++-----------
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h |   3 -
 3 files changed, 114 insertions(+), 100 deletions(-)

diff --git a/drivers/bus/fslmc/Makefile b/drivers/bus/fslmc/Makefile
index 6d2286088..b38305fb4 100644
--- a/drivers/bus/fslmc/Makefile
+++ b/drivers/bus/fslmc/Makefile
@@ -18,6 +18,7 @@ CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
 CFLAGS += -I$(RTE_SDK)/drivers/common/dpaax
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common
+LDLIBS += -lpthread
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev
 LDLIBS += -lrte_common_dpaax
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 739ce434b..e765a382f 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -62,6 +62,9 @@ uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
 uint8_t dpaa2_eqcr_size;
 
+/* Variable to hold the portal_key, once created.*/
+static pthread_key_t dpaa2_portal_key;
+
 /*Stashing Macros default for LS208x*/
 static int dpaa2_core_cluster_base = 0x04;
 static int dpaa2_cluster_sz = 2;
@@ -87,6 +90,32 @@ static int dpaa2_cluster_sz = 2;
  * Cluster 4 (ID = x07) : CPU14, CPU15;
  */
 
+static int
+dpaa2_get_core_id(void)
+{
+	rte_cpuset_t cpuset;
+	int i, ret, cpu_id = -1;
+
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+		&cpuset);
+	if (ret) {
+		DPAA2_BUS_ERR("pthread_getaffinity_np() failed");
+		return ret;
+	}
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (CPU_ISSET(i, &cpuset)) {
+			if (cpu_id == -1)
+				cpu_id = i;
+			else
+				/* Multiple cpus are affined */
+				return -1;
+		}
+	}
+
+	return cpu_id;
+}
+
 static int
 dpaa2_core_cluster_sdest(int cpu_id)
 {
@@ -97,7 +126,7 @@ dpaa2_core_cluster_sdest(int cpu_id)
 
 #ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
 static void
-dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
+dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int cpu_id)
 {
 #define STRING_LEN	28
 #define COMMAND_LEN	50
@@ -130,7 +159,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
 		return;
 	}
 
-	cpu_mask = cpu_mask << dpaa2_cpu[lcoreid];
+	cpu_mask = cpu_mask << dpaa2_cpu[cpu_id];
 	snprintf(command, COMMAND_LEN, "echo %X > /proc/irq/%s/smp_affinity",
 		 cpu_mask, token);
 	ret = system(command);
@@ -144,7 +173,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
 	fclose(file);
 }
 
-static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
+static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
 {
 	struct epoll_event epoll_ev;
 	int eventfd, dpio_epoll_fd, ret;
@@ -181,36 +210,42 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
 	}
 	dpio_dev->epoll_fd = dpio_epoll_fd;
 
-	dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id, lcoreid);
+	dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id, cpu_id);
 
 	return 0;
 }
+
+static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
+{
+	int ret;
+
+	ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+	if (ret)
+		DPAA2_BUS_ERR("DPIO interrupt disable failed");
+
+	close(dpio_dev->epoll_fd);
+}
 #endif
 
 static int
-dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
+dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
 {
 	int sdest, ret;
 	int cpu_id;
 
 	/* Set the Stashing Destination */
-	if (lcoreid < 0) {
-		lcoreid = rte_get_master_lcore();
-		if (lcoreid < 0) {
-			DPAA2_BUS_ERR("Getting CPU Index failed");
-			return -1;
-		}
+	cpu_id = dpaa2_get_core_id();
+	if (cpu_id < 0) {
+		DPAA2_BUS_ERR("Thread not affined to a single core");
+		return -1;
 	}
 
-	cpu_id = dpaa2_cpu[lcoreid];
-
 	/* Set the STASH Destination depending on Current CPU ID.
 	 * Valid values of SDEST are 4,5,6,7. Where,
 	 */
-
 	sdest = dpaa2_core_cluster_sdest(cpu_id);
-	DPAA2_BUS_DEBUG("Portal= %d  CPU= %u lcore id =%u SDEST= %d",
-			dpio_dev->index, cpu_id, lcoreid, sdest);
+	DPAA2_BUS_DEBUG("Portal= %d  CPU= %u SDEST= %d",
+			dpio_dev->index, cpu_id, sdest);
 
 	ret = dpio_set_stashing_destination(dpio_dev->dpio, CMD_PRI_LOW,
 					    dpio_dev->token, sdest);
@@ -220,7 +255,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
 	}
 
 #ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
-	if (dpaa2_dpio_intr_init(dpio_dev, lcoreid)) {
+	if (dpaa2_dpio_intr_init(dpio_dev, cpu_id)) {
 		DPAA2_BUS_ERR("Interrupt registration failed for dpio");
 		return -1;
 	}
@@ -229,7 +264,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
 	return 0;
 }
 
-static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
+static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
 	int ret;
@@ -245,108 +280,74 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
 	DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
 			dpio_dev, dpio_dev->index, syscall(SYS_gettid));
 
-	ret = dpaa2_configure_stashing(dpio_dev, lcoreid);
-	if (ret)
+	ret = dpaa2_configure_stashing(dpio_dev);
+	if (ret) {
 		DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+		return NULL;
+	}
 
 	return dpio_dev;
 }
 
+static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
+{
+#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
+	dpaa2_dpio_intr_deinit(dpio_dev);
+#endif
+	if (dpio_dev)
+		rte_atomic16_clear(&dpio_dev->ref_count);
+}
+
 int
 dpaa2_affine_qbman_swp(void)
 {
-	unsigned int lcore_id = rte_lcore_id();
+	struct dpaa2_dpio_dev *dpio_dev;
 	uint64_t tid = syscall(SYS_gettid);
 
-	if (lcore_id == LCORE_ID_ANY)
-		lcore_id = rte_get_master_lcore();
-	/* if the core id is not supported */
-	else if (lcore_id >= RTE_MAX_LCORE)
-		return -1;
-
-	if (dpaa2_io_portal[lcore_id].dpio_dev) {
-		DPAA2_BUS_DP_INFO("DPAA Portal=%p (%d) is being shared"
-			    " between thread %" PRIu64 " and current "
-			    "%" PRIu64 "\n",
-			    dpaa2_io_portal[lcore_id].dpio_dev,
-			    dpaa2_io_portal[lcore_id].dpio_dev->index,
-			    dpaa2_io_portal[lcore_id].net_tid,
-			    tid);
-		RTE_PER_LCORE(_dpaa2_io).dpio_dev
-			= dpaa2_io_portal[lcore_id].dpio_dev;
-		rte_atomic16_inc(&dpaa2_io_portal
-				 [lcore_id].dpio_dev->ref_count);
-		dpaa2_io_portal[lcore_id].net_tid = tid;
-
-		DPAA2_BUS_DP_DEBUG("Old Portal=%p (%d) affined thread - "
-				   "%" PRIu64 "\n",
-			    dpaa2_io_portal[lcore_id].dpio_dev,
-			    dpaa2_io_portal[lcore_id].dpio_dev->index,
-			    tid);
-		return 0;
-	}
-
 	/* Populate the dpaa2_io_portal structure */
-	dpaa2_io_portal[lcore_id].dpio_dev = dpaa2_get_qbman_swp(lcore_id);
-
-	if (dpaa2_io_portal[lcore_id].dpio_dev) {
-		RTE_PER_LCORE(_dpaa2_io).dpio_dev
-			= dpaa2_io_portal[lcore_id].dpio_dev;
-		dpaa2_io_portal[lcore_id].net_tid = tid;
+	if (!RTE_PER_LCORE(_dpaa2_io).dpio_dev) {
+		dpio_dev = dpaa2_get_qbman_swp();
+		if (!dpio_dev) {
+			DPAA2_BUS_ERR("No software portal resource left");
+			return -1;
+		}
+		RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
 
-		return 0;
-	} else {
-		return -1;
+		DPAA2_BUS_INFO(
+			"DPAA Portal=%p (%d) is affined to thread %" PRIu64,
+			dpio_dev, dpio_dev->index, tid);
 	}
+	return 0;
 }
 
 int
 dpaa2_affine_qbman_ethrx_swp(void)
 {
-	unsigned int lcore_id = rte_lcore_id();
+	struct dpaa2_dpio_dev *dpio_dev;
 	uint64_t tid = syscall(SYS_gettid);
 
-	if (lcore_id == LCORE_ID_ANY)
-		lcore_id = rte_get_master_lcore();
-	/* if the core id is not supported */
-	else if (lcore_id >= RTE_MAX_LCORE)
-		return -1;
+	/* Populate the dpaa2_io_portal structure */
+	if (!RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev) {
+		dpio_dev = dpaa2_get_qbman_swp();
+		if (!dpio_dev) {
+			DPAA2_BUS_ERR("No software portal resource left");
+			return -1;
+		}
+		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
 
-	if (dpaa2_io_portal[lcore_id].ethrx_dpio_dev) {
-		DPAA2_BUS_DP_INFO(
-			"DPAA Portal=%p (%d) is being shared between thread"
-			" %" PRIu64 " and current %" PRIu64 "\n",
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev,
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev->index,
-			dpaa2_io_portal[lcore_id].sec_tid,
-			tid);
-		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
-			= dpaa2_io_portal[lcore_id].ethrx_dpio_dev;
-		rte_atomic16_inc(&dpaa2_io_portal
-				 [lcore_id].ethrx_dpio_dev->ref_count);
-		dpaa2_io_portal[lcore_id].sec_tid = tid;
-
-		DPAA2_BUS_DP_DEBUG(
-			"Old Portal=%p (%d) affined thread"
-			" - %" PRIu64 "\n",
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev,
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev->index,
-			tid);
-		return 0;
+		DPAA2_BUS_INFO(
+			"DPAA Portal=%p (%d) is affined for eth rx to thread %"
+			PRIu64, dpio_dev, dpio_dev->index, tid);
 	}
+	return 0;
+}
 
-	/* Populate the dpaa2_io_portal structure */
-	dpaa2_io_portal[lcore_id].ethrx_dpio_dev =
-		dpaa2_get_qbman_swp(lcore_id);
-
-	if (dpaa2_io_portal[lcore_id].ethrx_dpio_dev) {
-		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
-			= dpaa2_io_portal[lcore_id].ethrx_dpio_dev;
-		dpaa2_io_portal[lcore_id].sec_tid = tid;
-		return 0;
-	} else {
-		return -1;
-	}
+static void __attribute__((destructor(102))) dpaa2_portal_finish(void *arg)
+{
+	RTE_SET_USED(arg);
+
+	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).dpio_dev);
+	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev);
 }
 
 /*
@@ -398,6 +399,7 @@ dpaa2_create_dpio_device(int vdev_fd,
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info)};
 	struct qbman_swp_desc p_des;
 	struct dpio_attr attr;
+	int ret;
 	static int check_lcore_cpuset;
 
 	if (obj_info->num_regions < NUM_DPIO_REGIONS) {
@@ -547,12 +549,26 @@ dpaa2_create_dpio_device(int vdev_fd,
 
 	TAILQ_INSERT_TAIL(&dpio_dev_list, dpio_dev, next);
 
+	if (!dpaa2_portal_key) {
+		/* create the key, supplying a function that'll be invoked
+		 * when a portal affined thread will be deleted.
+		 */
+		ret = pthread_key_create(&dpaa2_portal_key,
+					 dpaa2_portal_finish);
+		if (ret) {
+			DPAA2_BUS_DEBUG("Unable to create pthread key (%d)",
+					ret);
+			goto err;
+		}
+	}
+
 	return 0;
 
 err:
 	if (dpio_dev->dpio) {
 		dpio_disable(dpio_dev->dpio, CMD_PRI_LOW, dpio_dev->token);
 		dpio_close(dpio_dev->dpio, CMD_PRI_LOW,  dpio_dev->token);
+		rte_free(dpio_dev->eqresp);
 		rte_free(dpio_dev->dpio);
 	}
 
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 9da4af782..8af0474a1 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -14,9 +14,6 @@
 struct dpaa2_io_portal_t {
 	struct dpaa2_dpio_dev *dpio_dev;
 	struct dpaa2_dpio_dev *ethrx_dpio_dev;
-	uint64_t net_tid;
-	uint64_t sec_tid;
-	void *eventdev;
 };
 
 /*! Global per thread DPIO portal */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 05/16] bus/fslmc: support handle portal alloc failure
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (3 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 04/16] bus/fslmc: rework portal allocation to a per thread basis Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-13 16:20     ` Ferruh Yigit
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 06/16] bus/fslmc: limit pthread destructor called for dpaa2 only Hemant Agrawal
                     ` (11 subsequent siblings)
  16 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Nipun Gupta, Hemant Agrawal

Add error handling for software portal allocation failure.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 28 ++++++++++++++----------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index e765a382f..1a1453ea3 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -264,6 +264,16 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
 	return 0;
 }
 
+static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
+{
+	if (dpio_dev) {
+#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
+		dpaa2_dpio_intr_deinit(dpio_dev);
+#endif
+		rte_atomic16_clear(&dpio_dev->ref_count);
+	}
+}
+
 static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
@@ -274,8 +284,10 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 		if (dpio_dev && rte_atomic16_test_and_set(&dpio_dev->ref_count))
 			break;
 	}
-	if (!dpio_dev)
+	if (!dpio_dev) {
+		DPAA2_BUS_ERR("No software portal resource left");
 		return NULL;
+	}
 
 	DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
 			dpio_dev, dpio_dev->index, syscall(SYS_gettid));
@@ -283,21 +295,13 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 	ret = dpaa2_configure_stashing(dpio_dev);
 	if (ret) {
 		DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+		rte_atomic16_clear(&dpio_dev->ref_count);
 		return NULL;
 	}
 
 	return dpio_dev;
 }
 
-static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
-{
-#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
-	dpaa2_dpio_intr_deinit(dpio_dev);
-#endif
-	if (dpio_dev)
-		rte_atomic16_clear(&dpio_dev->ref_count);
-}
-
 int
 dpaa2_affine_qbman_swp(void)
 {
@@ -308,7 +312,7 @@ dpaa2_affine_qbman_swp(void)
 	if (!RTE_PER_LCORE(_dpaa2_io).dpio_dev) {
 		dpio_dev = dpaa2_get_qbman_swp();
 		if (!dpio_dev) {
-			DPAA2_BUS_ERR("No software portal resource left");
+			DPAA2_BUS_ERR("Error in software portal allocation");
 			return -1;
 		}
 		RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
@@ -330,7 +334,7 @@ dpaa2_affine_qbman_ethrx_swp(void)
 	if (!RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev) {
 		dpio_dev = dpaa2_get_qbman_swp();
 		if (!dpio_dev) {
-			DPAA2_BUS_ERR("No software portal resource left");
+			DPAA2_BUS_ERR("Error in software portal allocation");
 			return -1;
 		}
 		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 06/16] bus/fslmc: limit pthread destructor called for dpaa2 only
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (4 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 05/16] bus/fslmc: support handle portal alloc failure Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 07/16] bus/fslmc: support portal migration Hemant Agrawal
                     ` (10 subsequent siblings)
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

The destructor was being called for non-dpaa2 threads as well.
Registering it as a pthread key destructor instead limits the cleanup
to threads that actually affined a DPAA2 portal.
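
For context, a stripped-down sketch of the pthread-key mechanism this
relies on (illustrative standalone program, not the driver code): the
destructor passed to pthread_key_create() only runs at exit of threads
that stored a non-NULL value with pthread_setspecific().

	#include <pthread.h>
	#include <stdio.h>

	static pthread_key_t key;

	static void finish(void *val)
	{
		/* Runs at thread exit only for threads that set a value */
		printf("cleanup for %p\n", val);
	}

	static void *worker(void *arg)
	{
		if (arg)	/* only the "portal user" registers a value */
			pthread_setspecific(key, arg);
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;
		int portal = 1;

		pthread_key_create(&key, finish);
		pthread_create(&a, NULL, worker, &portal); /* gets cleanup */
		pthread_create(&b, NULL, worker, NULL);    /* does not */
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		return 0;
	}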

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 1a1453ea3..054d45306 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -299,6 +299,13 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 		return NULL;
 	}
 
+	ret = pthread_setspecific(dpaa2_portal_key, (void *)dpio_dev);
+	if (ret) {
+		DPAA2_BUS_ERR("pthread_setspecific failed with ret: %d", ret);
+		dpaa2_put_qbman_swp(dpio_dev);
+		return NULL;
+	}
+
 	return dpio_dev;
 }
 
@@ -346,12 +353,14 @@ dpaa2_affine_qbman_ethrx_swp(void)
 	return 0;
 }
 
-static void __attribute__((destructor(102))) dpaa2_portal_finish(void *arg)
+static void dpaa2_portal_finish(void *arg)
 {
 	RTE_SET_USED(arg);
 
 	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).dpio_dev);
 	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev);
+
+	pthread_setspecific(dpaa2_portal_key, NULL);
 }
 
 /*
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 07/16] bus/fslmc: support portal migration
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (5 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 06/16] bus/fslmc: limit pthread destructor called for dpaa2 only Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 08/16] drivers: enhance portal allocation failure log Hemant Agrawal
                     ` (9 subsequent siblings)
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

The patch adds support for portal migration by disabling stashing
for the portals which are used in non-affined threads, or in threads
affined to multiple cores. Such portals fall back to cache-inhibited
register access, which stays correct if the thread migrates between
cores.
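
A rough sketch of the affinity check that selects between the two
modes (this mirrors dpaa2_get_core_id() in the patch; shown here as a
hypothetical standalone helper):

	#define _GNU_SOURCE
	#include <pthread.h>
	#include <sched.h>

	/* Return the single CPU this thread is pinned to, or -1 if it
	 * may migrate; in the latter case the portal is switched to
	 * cache-inhibited, non-stashed access via qbman_swp_update().
	 */
	static int thread_single_cpu(void)
	{
		cpu_set_t set;
		int i, cpu = -1;

		if (pthread_getaffinity_np(pthread_self(), sizeof(set), &set))
			return -1;

		for (i = 0; i < CPU_SETSIZE; i++) {
			if (!CPU_ISSET(i, &set))
				continue;
			if (cpu != -1)
				return -1;	/* more than one core */
			cpu = i;
		}
		return cpu;
	}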

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  83 +----
 .../fslmc/qbman/include/fsl_qbman_portal.h    |   8 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        | 340 +++++++++++++++++-
 drivers/bus/fslmc/qbman/qbman_portal.h        |  19 +-
 drivers/bus/fslmc/qbman/qbman_sys.h           | 135 ++++++-
 5 files changed, 502 insertions(+), 83 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 054d45306..2102d2981 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -53,10 +53,6 @@ static uint32_t io_space_count;
 /* Variable to store DPAA2 platform type */
 uint32_t dpaa2_svr_family;
 
-/* Physical core id for lcores running on dpaa2. */
-/* DPAA2 only support 1 lcore to 1 phy cpu mapping */
-static unsigned int dpaa2_cpu[RTE_MAX_LCORE];
-
 /* Variable to store DPAA2 DQRR size */
 uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
@@ -159,7 +155,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int cpu_id)
 		return;
 	}
 
-	cpu_mask = cpu_mask << dpaa2_cpu[cpu_id];
+	cpu_mask = cpu_mask << cpu_id;
 	snprintf(command, COMMAND_LEN, "echo %X > /proc/irq/%s/smp_affinity",
 		 cpu_mask, token);
 	ret = system(command);
@@ -228,17 +224,9 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
 #endif
 
 static int
-dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
+dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
 {
 	int sdest, ret;
-	int cpu_id;
-
-	/* Set the Stashing Destination */
-	cpu_id = dpaa2_get_core_id();
-	if (cpu_id < 0) {
-		DPAA2_BUS_ERR("Thread not affined to a single core");
-		return -1;
-	}
 
 	/* Set the STASH Destination depending on Current CPU ID.
 	 * Valid values of SDEST are 4,5,6,7. Where,
@@ -277,6 +265,7 @@ static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
 static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
+	int cpu_id;
 	int ret;
 
 	/* Get DPIO dev handle from list using index */
@@ -292,11 +281,19 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 	DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
 			dpio_dev, dpio_dev->index, syscall(SYS_gettid));
 
-	ret = dpaa2_configure_stashing(dpio_dev);
-	if (ret) {
-		DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
-		rte_atomic16_clear(&dpio_dev->ref_count);
-		return NULL;
+	/* Set the Stashing Destination */
+	cpu_id = dpaa2_get_core_id();
+	if (cpu_id < 0) {
+		DPAA2_BUS_WARN("Thread not affined to a single core");
+		if (dpaa2_svr_family != SVR_LX2160A)
+			qbman_swp_update(dpio_dev->sw_portal, 1);
+	} else {
+		ret = dpaa2_configure_stashing(dpio_dev, cpu_id);
+		if (ret) {
+			DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+			rte_atomic16_clear(&dpio_dev->ref_count);
+			return NULL;
+		}
 	}
 
 	ret = pthread_setspecific(dpaa2_portal_key, (void *)dpio_dev);
@@ -363,46 +360,6 @@ static void dpaa2_portal_finish(void *arg)
 	pthread_setspecific(dpaa2_portal_key, NULL);
 }
 
-/*
- * This checks for not supported lcore mappings as well as get the physical
- * cpuid for the lcore.
- * one lcore can only map to 1 cpu i.e. 1@10-14 not supported.
- * one cpu can be mapped to more than one lcores.
- */
-static int
-dpaa2_check_lcore_cpuset(void)
-{
-	unsigned int lcore_id, i;
-	int ret = 0;
-
-	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
-		dpaa2_cpu[lcore_id] = 0xffffffff;
-
-	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
-		rte_cpuset_t cpuset = rte_lcore_cpuset(lcore_id);
-
-		for (i = 0; i < CPU_SETSIZE; i++) {
-			if (!CPU_ISSET(i, &cpuset))
-				continue;
-			if (i >= RTE_MAX_LCORE) {
-				DPAA2_BUS_ERR("ERR:lcore map to core %u (>= %u) not supported",
-					i, RTE_MAX_LCORE);
-				ret = -1;
-				continue;
-			}
-			RTE_LOG(DEBUG, EAL, "lcore id = %u cpu=%u\n",
-				lcore_id, i);
-			if (dpaa2_cpu[lcore_id] != 0xffffffff) {
-				DPAA2_BUS_ERR("ERR:lcore map to multi-cpu not supported");
-				ret = -1;
-				continue;
-			}
-			dpaa2_cpu[lcore_id] = i;
-		}
-	}
-	return ret;
-}
-
 static int
 dpaa2_create_dpio_device(int vdev_fd,
 			 struct vfio_device_info *obj_info,
@@ -413,7 +370,6 @@ dpaa2_create_dpio_device(int vdev_fd,
 	struct qbman_swp_desc p_des;
 	struct dpio_attr attr;
 	int ret;
-	static int check_lcore_cpuset;
 
 	if (obj_info->num_regions < NUM_DPIO_REGIONS) {
 		DPAA2_BUS_ERR("Not sufficient number of DPIO regions");
@@ -433,13 +389,6 @@ dpaa2_create_dpio_device(int vdev_fd,
 	/* Using single portal  for all devices */
 	dpio_dev->mc_portal = rte_mcp_ptr_list[MC_PORTAL_INDEX];
 
-	if (!check_lcore_cpuset) {
-		check_lcore_cpuset = 1;
-
-		if (dpaa2_check_lcore_cpuset() < 0)
-			goto err;
-	}
-
 	dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
 				     RTE_CACHE_LINE_SIZE);
 	if (!dpio_dev->dpio) {
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
index 88f0a9968..0d6364d99 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014 Freescale Semiconductor, Inc.
- * Copyright 2015-2019 NXP
+ * Copyright 2015-2020 NXP
  *
  */
 #ifndef _FSL_QBMAN_PORTAL_H
@@ -43,6 +43,12 @@ extern uint32_t dpaa2_svr_family;
  */
 struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d);
 
+/**
+ * qbman_swp_update() - Update portal cacheability attributes.
+ * @p: the given qbman swp portal
+ */
+int qbman_swp_update(struct qbman_swp *p, int stash_off);
+
 /**
  * qbman_swp_finish() - Create and destroy a functional object representing
  * the given QBMan portal descriptor.
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index d4223bdc8..747ccfbac 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2020 NXP
  *
  */
 
@@ -82,6 +82,10 @@ qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
 static int
+qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd);
+static int
 qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
@@ -99,6 +103,12 @@ qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
 		uint32_t *flags,
 		int num_frames);
 static int
+qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd,
+		uint32_t *flags,
+		int num_frames);
+static int
 qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
@@ -118,6 +128,12 @@ qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
 		uint32_t *flags,
 		int num_frames);
 static int
+qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		struct qbman_fd **fd,
+		uint32_t *flags,
+		int num_frames);
+static int
 qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		struct qbman_fd **fd,
@@ -135,6 +151,11 @@ qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
 		const struct qbman_fd *fd,
 		int num_frames);
 static int
+qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd,
+		int num_frames);
+static int
 qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
@@ -143,9 +164,12 @@ qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
 static int
 qbman_swp_pull_direct(struct qbman_swp *s, struct qbman_pull_desc *d);
 static int
+qbman_swp_pull_cinh_direct(struct qbman_swp *s, struct qbman_pull_desc *d);
+static int
 qbman_swp_pull_mem_back(struct qbman_swp *s, struct qbman_pull_desc *d);
 
 const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s);
+const struct qbman_result *qbman_swp_dqrr_next_cinh_direct(struct qbman_swp *s);
 const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s);
 
 static int
@@ -153,6 +177,10 @@ qbman_swp_release_direct(struct qbman_swp *s,
 		const struct qbman_release_desc *d,
 		const uint64_t *buffers, unsigned int num_buffers);
 static int
+qbman_swp_release_cinh_direct(struct qbman_swp *s,
+		const struct qbman_release_desc *d,
+		const uint64_t *buffers, unsigned int num_buffers);
+static int
 qbman_swp_release_mem_back(struct qbman_swp *s,
 		const struct qbman_release_desc *d,
 		const uint64_t *buffers, unsigned int num_buffers);
@@ -327,6 +355,28 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
 	return p;
 }
 
+int qbman_swp_update(struct qbman_swp *p, int stash_off)
+{
+	const struct qbman_swp_desc *d = &p->desc;
+	struct qbman_swp_sys *s = &p->sys;
+	int ret;
+
+	/* Nothing needs to be done for QBMAN rev > 5000 with fast access */
+	if ((qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+			&& (d->cena_access_mode == qman_cena_fastest_access))
+		return 0;
+
+	ret = qbman_swp_sys_update(s, d, p->dqrr.dqrr_size, stash_off);
+	if (ret) {
+		pr_err("qbman_swp_sys_init() failed %d\n", ret);
+		return ret;
+	}
+
+	p->stash_off = stash_off;
+
+	return 0;
+}
+
 void qbman_swp_finish(struct qbman_swp *p)
 {
 #ifdef QBMAN_CHECKING
@@ -462,6 +512,27 @@ void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint8_t cmd_verb)
 #endif
 }
 
+void qbman_swp_mc_submit_cinh(struct qbman_swp *p, void *cmd, uint8_t cmd_verb)
+{
+	uint8_t *v = cmd;
+#ifdef QBMAN_CHECKING
+	QBMAN_BUG_ON(!(p->mc.check != swp_mc_can_submit));
+#endif
+	/* TBD: "|=" is going to hurt performance. Need to move as many fields
+	 * out of word zero, and for those that remain, the "OR" needs to occur
+	 * at the caller side. This debug check helps to catch cases where the
+	 * caller wants to OR but has forgotten to do so.
+	 */
+	QBMAN_BUG_ON((*v & cmd_verb) != *v);
+	dma_wmb();
+	*v = cmd_verb | p->mc.valid_bit;
+	qbman_cinh_write_complete(&p->sys, QBMAN_CENA_SWP_CR, cmd);
+	clean(cmd);
+#ifdef QBMAN_CHECKING
+	p->mc.check = swp_mc_can_poll;
+#endif
+}
+
 void *qbman_swp_mc_result(struct qbman_swp *p)
 {
 	uint32_t *ret, verb;
@@ -500,6 +571,27 @@ void *qbman_swp_mc_result(struct qbman_swp *p)
 	return ret;
 }
 
+void *qbman_swp_mc_result_cinh(struct qbman_swp *p)
+{
+	uint32_t *ret, verb;
+#ifdef QBMAN_CHECKING
+	QBMAN_BUG_ON(p->mc.check != swp_mc_can_poll);
+#endif
+	ret = qbman_cinh_read_shadow(&p->sys,
+			      QBMAN_CENA_SWP_RR(p->mc.valid_bit));
+	/* Remove the valid-bit -
+	 * command completed iff the rest is non-zero
+	 */
+	verb = ret[0] & ~QB_VALID_BIT;
+	if (!verb)
+		return NULL;
+	p->mc.valid_bit ^= QB_VALID_BIT;
+#ifdef QBMAN_CHECKING
+	p->mc.check = swp_mc_can_start;
+#endif
+	return ret;
+}
+
 /***********/
 /* Enqueue */
 /***********/
@@ -640,6 +732,16 @@ static inline void qbman_write_eqcr_am_rt_register(struct qbman_swp *p,
 				     QMAN_RT_MODE);
 }
 
+static void memcpy_byte_by_byte(void *to, const void *from, size_t n)
+{
+	const uint8_t *src = from;
+	volatile uint8_t *dest = to;
+	size_t i;
+
+	for (i = 0; i < n; i++)
+		dest[i] = src[i];
+}
+
 
 static int qbman_swp_enqueue_array_mode_direct(struct qbman_swp *s,
 					       const struct qbman_eq_desc *d,
@@ -754,7 +856,7 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
 			return -EBUSY;
 	}
 
-	p = qbman_cena_write_start_wo_shadow(&s->sys,
+	p = qbman_cinh_write_start_wo_shadow(&s->sys,
 			QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
 	memcpy(&p[1], &cl[1], 28);
 	memcpy(&p[8], fd, sizeof(*fd));
@@ -762,8 +864,6 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
 
 	/* Set the verb byte, have to substitute in the valid-bit */
 	p[0] = cl[0] | s->eqcr.pi_vb;
-	qbman_cena_write_complete_wo_shadow(&s->sys,
-			QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
 	s->eqcr.pi++;
 	s->eqcr.pi &= full_mask;
 	s->eqcr.available--;
@@ -815,7 +915,10 @@ static int qbman_swp_enqueue_ring_mode(struct qbman_swp *s,
 				       const struct qbman_eq_desc *d,
 				       const struct qbman_fd *fd)
 {
-	return qbman_swp_enqueue_ring_mode_ptr(s, d, fd);
+	if (!s->stash_off)
+		return qbman_swp_enqueue_ring_mode_ptr(s, d, fd);
+	else
+		return qbman_swp_enqueue_ring_mode_cinh_direct(s, d, fd);
 }
 
 int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
@@ -1025,7 +1128,12 @@ inline int qbman_swp_enqueue_multiple(struct qbman_swp *s,
 				      uint32_t *flags,
 				      int num_frames)
 {
-	return qbman_swp_enqueue_multiple_ptr(s, d, fd, flags, num_frames);
+	if (!s->stash_off)
+		return qbman_swp_enqueue_multiple_ptr(s, d, fd, flags,
+						num_frames);
+	else
+		return qbman_swp_enqueue_multiple_cinh_direct(s, d, fd, flags,
+						num_frames);
 }
 
 static int qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
@@ -1233,7 +1341,12 @@ inline int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
 					 uint32_t *flags,
 					 int num_frames)
 {
-	return qbman_swp_enqueue_multiple_fd_ptr(s, d, fd, flags, num_frames);
+	if (!s->stash_off)
+		return qbman_swp_enqueue_multiple_fd_ptr(s, d, fd, flags,
+					num_frames);
+	else
+		return qbman_swp_enqueue_multiple_fd_cinh_direct(s, d, fd,
+					flags, num_frames);
 }
 
 static int qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
@@ -1426,7 +1539,13 @@ inline int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s,
 					   const struct qbman_fd *fd,
 					   int num_frames)
 {
-	return qbman_swp_enqueue_multiple_desc_ptr(s, d, fd, num_frames);
+	if (!s->stash_off)
+		return qbman_swp_enqueue_multiple_desc_ptr(s, d, fd,
+					num_frames);
+	else
+		return qbman_swp_enqueue_multiple_desc_cinh_direct(s, d, fd,
+					num_frames);
+
 }
 
 /*************************/
@@ -1574,6 +1693,30 @@ static int qbman_swp_pull_direct(struct qbman_swp *s,
 	return 0;
 }
 
+static int qbman_swp_pull_cinh_direct(struct qbman_swp *s,
+				 struct qbman_pull_desc *d)
+{
+	uint32_t *p;
+	uint32_t *cl = qb_cl(d);
+
+	if (!atomic_dec_and_test(&s->vdq.busy)) {
+		atomic_inc(&s->vdq.busy);
+		return -EBUSY;
+	}
+
+	d->pull.tok = s->sys.idx + 1;
+	s->vdq.storage = (void *)(size_t)d->pull.rsp_addr_virt;
+	p = qbman_cinh_write_start_wo_shadow(&s->sys, QBMAN_CENA_SWP_VDQCR);
+	memcpy_byte_by_byte(&p[1], &cl[1], 12);
+
+	/* Set the verb byte, have to substitute in the valid-bit */
+	lwsync();
+	p[0] = cl[0] | s->vdq.valid_bit;
+	s->vdq.valid_bit ^= QB_VALID_BIT;
+
+	return 0;
+}
+
 static int qbman_swp_pull_mem_back(struct qbman_swp *s,
 				   struct qbman_pull_desc *d)
 {
@@ -1601,7 +1744,10 @@ static int qbman_swp_pull_mem_back(struct qbman_swp *s,
 
 inline int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d)
 {
-	return qbman_swp_pull_ptr(s, d);
+	if (!s->stash_off)
+		return qbman_swp_pull_ptr(s, d);
+	else
+		return qbman_swp_pull_cinh_direct(s, d);
 }
 
 /****************/
@@ -1638,7 +1784,10 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s)
  */
 inline const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *s)
 {
-	return qbman_swp_dqrr_next_ptr(s);
+	if (!s->stash_off)
+		return qbman_swp_dqrr_next_ptr(s);
+	else
+		return qbman_swp_dqrr_next_cinh_direct(s);
 }
 
 const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s)
@@ -1718,6 +1867,81 @@ const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s)
 	return p;
 }
 
+const struct qbman_result *qbman_swp_dqrr_next_cinh_direct(struct qbman_swp *s)
+{
+	uint32_t verb;
+	uint32_t response_verb;
+	uint32_t flags;
+	const struct qbman_result *p;
+
+	/* Before using valid-bit to detect if something is there, we have to
+	 * handle the case of the DQRR reset bug...
+	 */
+	if (s->dqrr.reset_bug) {
+		/* We pick up new entries by cache-inhibited producer index,
+		 * which means that a non-coherent mapping would require us to
+		 * invalidate and read *only* once that PI has indicated that
+		 * there's an entry here. The first trip around the DQRR ring
+		 * will be much less efficient than all subsequent trips around
+		 * it...
+		 */
+		uint8_t pi = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_DQPI) &
+			     QMAN_DQRR_PI_MASK;
+
+		/* there are new entries if pi != next_idx */
+		if (pi == s->dqrr.next_idx)
+			return NULL;
+
+		/* if next_idx is/was the last ring index, and 'pi' is
+		 * different, we can disable the workaround as all the ring
+		 * entries have now been DMA'd to so valid-bit checking is
+		 * repaired. Note: this logic needs to be based on next_idx
+		 * (which increments one at a time), rather than on pi (which
+		 * can burst and wrap-around between our snapshots of it).
+		 */
+		QBMAN_BUG_ON((s->dqrr.dqrr_size - 1) < 0);
+		if (s->dqrr.next_idx == (s->dqrr.dqrr_size - 1u)) {
+			pr_debug("DEBUG: next_idx=%d, pi=%d, clear reset bug\n",
+				 s->dqrr.next_idx, pi);
+			s->dqrr.reset_bug = 0;
+		}
+	}
+	p = qbman_cinh_read_wo_shadow(&s->sys,
+			QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
+
+	verb = p->dq.verb;
+
+	/* If the valid-bit isn't of the expected polarity, nothing there. Note,
+	 * in the DQRR reset bug workaround, we shouldn't need to skip these
+	 * check, because we've already determined that a new entry is available
+	 * and we've invalidated the cacheline before reading it, so the
+	 * valid-bit behaviour is repaired and should tell us what we already
+	 * knew from reading PI.
+	 */
+	if ((verb & QB_VALID_BIT) != s->dqrr.valid_bit)
+		return NULL;
+
+	/* There's something there. Move "next_idx" attention to the next ring
+	 * entry (and prefetch it) before returning what we found.
+	 */
+	s->dqrr.next_idx++;
+	if (s->dqrr.next_idx == s->dqrr.dqrr_size) {
+		s->dqrr.next_idx = 0;
+		s->dqrr.valid_bit ^= QB_VALID_BIT;
+	}
+	/* If this is the final response to a volatile dequeue command
+	 * indicate that the vdq is no longer busy
+	 */
+	flags = p->dq.stat;
+	response_verb = verb & QBMAN_RESPONSE_VERB_MASK;
+	if ((response_verb == QBMAN_RESULT_DQ) &&
+	    (flags & QBMAN_DQ_STAT_VOLATILE) &&
+	    (flags & QBMAN_DQ_STAT_EXPIRED))
+		atomic_inc(&s->vdq.busy);
+
+	return p;
+}
+
 const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s)
 {
 	uint32_t verb;
@@ -2096,6 +2320,37 @@ static int qbman_swp_release_direct(struct qbman_swp *s,
 	return 0;
 }
 
+static int qbman_swp_release_cinh_direct(struct qbman_swp *s,
+				    const struct qbman_release_desc *d,
+				    const uint64_t *buffers,
+				    unsigned int num_buffers)
+{
+	uint32_t *p;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t rar = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_RAR);
+
+	pr_debug("RAR=%08x\n", rar);
+	if (!RAR_SUCCESS(rar))
+		return -EBUSY;
+
+	QBMAN_BUG_ON(!num_buffers || (num_buffers > 7));
+
+	/* Start the release command */
+	p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				     QBMAN_CENA_SWP_RCR(RAR_IDX(rar)));
+
+	/* Copy the caller's buffer pointers to the command */
+	memcpy_byte_by_byte(&p[2], buffers, num_buffers * sizeof(uint64_t));
+
+	/* Set the verb byte, have to substitute in the valid-bit and the
+	 * number of buffers.
+	 */
+	lwsync();
+	p[0] = cl[0] | RAR_VB(rar) | num_buffers;
+
+	return 0;
+}
+
 static int qbman_swp_release_mem_back(struct qbman_swp *s,
 				      const struct qbman_release_desc *d,
 				      const uint64_t *buffers,
@@ -2134,7 +2389,11 @@ inline int qbman_swp_release(struct qbman_swp *s,
 			     const uint64_t *buffers,
 			     unsigned int num_buffers)
 {
-	return qbman_swp_release_ptr(s, d, buffers, num_buffers);
+	if (!s->stash_off)
+		return qbman_swp_release_ptr(s, d, buffers, num_buffers);
+	else
+		return qbman_swp_release_cinh_direct(s, d, buffers,
+						num_buffers);
 }
 
 /*******************/
@@ -2157,8 +2416,8 @@ struct qbman_acquire_rslt {
 	uint64_t buf[7];
 };
 
-int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
-		      unsigned int num_buffers)
+static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid,
+				uint64_t *buffers, unsigned int num_buffers)
 {
 	struct qbman_acquire_desc *p;
 	struct qbman_acquire_rslt *r;
@@ -2202,6 +2461,61 @@ int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
 	return (int)r->num;
 }
 
+static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
+			uint64_t *buffers, unsigned int num_buffers)
+{
+	struct qbman_acquire_desc *p;
+	struct qbman_acquire_rslt *r;
+
+	if (!num_buffers || (num_buffers > 7))
+		return -EINVAL;
+
+	/* Start the management command */
+	p = qbman_swp_mc_start(s);
+
+	if (!p)
+		return -EBUSY;
+
+	/* Encode the caller-provided attributes */
+	p->bpid = bpid;
+	p->num = num_buffers;
+
+	/* Complete the management command */
+	r = qbman_swp_mc_complete_cinh(s, p, QBMAN_MC_ACQUIRE);
+	if (!r) {
+		pr_err("qbman: acquire from BPID %d failed, no response\n",
+		       bpid);
+		return -EIO;
+	}
+
+	/* Decode the outcome */
+	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_MC_ACQUIRE);
+
+	/* Determine success or failure */
+	if (r->rslt != QBMAN_MC_RSLT_OK) {
+		pr_err("Acquire buffers from BPID 0x%x failed, code=0x%02x\n",
+		       bpid, r->rslt);
+		return -EIO;
+	}
+
+	QBMAN_BUG_ON(r->num > num_buffers);
+
+	/* Copy the acquired buffers to the caller's array */
+	u64_from_le32_copy(buffers, &r->buf[0], r->num);
+
+	return (int)r->num;
+}
+
+int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
+		      unsigned int num_buffers)
+{
+	if (!s->stash_off)
+		return qbman_swp_acquire_direct(s, bpid, buffers, num_buffers);
+	else
+		return qbman_swp_acquire_cinh_direct(s, bpid, buffers,
+					num_buffers);
+}
+
 /*****************/
 /* FQ management */
 /*****************/
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.h b/drivers/bus/fslmc/qbman/qbman_portal.h
index 3aaacae52..1cf791830 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/qbman_portal.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2020 NXP
  *
  */
 
@@ -102,6 +102,7 @@ struct qbman_swp {
 		uint32_t ci;
 		int available;
 	} eqcr;
+	uint8_t stash_off;
 };
 
 /* -------------------------- */
@@ -118,7 +119,9 @@ struct qbman_swp {
  */
 void *qbman_swp_mc_start(struct qbman_swp *p);
 void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint8_t cmd_verb);
+void qbman_swp_mc_submit_cinh(struct qbman_swp *p, void *cmd, uint8_t cmd_verb);
 void *qbman_swp_mc_result(struct qbman_swp *p);
+void *qbman_swp_mc_result_cinh(struct qbman_swp *p);
 
 /* Wraps up submit + poll-for-result */
 static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd,
@@ -135,6 +138,20 @@ static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd,
 	return cmd;
 }
 
+static inline void *qbman_swp_mc_complete_cinh(struct qbman_swp *swp, void *cmd,
+					  uint8_t cmd_verb)
+{
+	int loopvar = 1000;
+
+	qbman_swp_mc_submit_cinh(swp, cmd, cmd_verb);
+	do {
+		cmd = qbman_swp_mc_result_cinh(swp);
+	} while (!cmd && loopvar--);
+	QBMAN_BUG_ON(!loopvar);
+
+	return cmd;
+}
+
 /* ---------------------- */
 /* Descriptors/cachelines */
 /* ---------------------- */
diff --git a/drivers/bus/fslmc/qbman/qbman_sys.h b/drivers/bus/fslmc/qbman/qbman_sys.h
index 55449edf3..61f817c47 100644
--- a/drivers/bus/fslmc/qbman/qbman_sys.h
+++ b/drivers/bus/fslmc/qbman/qbman_sys.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2019 NXP
+ * Copyright 2019-2020 NXP
  */
 /* qbman_sys_decl.h and qbman_sys.h are the two platform-specific files in the
  * driver. They are only included via qbman_private.h, which is itself a
@@ -190,6 +190,34 @@ static inline void qbman_cinh_write(struct qbman_swp_sys *s, uint32_t offset,
 #endif
 }
 
+static inline void *qbman_cinh_write_start_wo_shadow(struct qbman_swp_sys *s,
+						     uint32_t offset)
+{
+#ifdef QBMAN_CINH_TRACE
+	pr_info("qbman_cinh_write_start(%p:%d:0x%03x)\n",
+		s->addr_cinh, s->idx, offset);
+#endif
+	QBMAN_BUG_ON(offset & 63);
+	return (s->addr_cinh + offset);
+}
+
+static inline void qbman_cinh_write_complete(struct qbman_swp_sys *s,
+					     uint32_t offset, void *cmd)
+{
+	const uint32_t *shadow = cmd;
+	int loop;
+#ifdef QBMAN_CINH_TRACE
+	pr_info("qbman_cinh_write_complete(%p:%d:0x%03x) %p\n",
+		s->addr_cinh, s->idx, offset, shadow);
+	hexdump(cmd, 64);
+#endif
+	for (loop = 15; loop >= 1; loop--)
+		__raw_writel(shadow[loop], s->addr_cinh +
+					 offset + loop * 4);
+	lwsync();
+	__raw_writel(shadow[0], s->addr_cinh + offset);
+}
+
 static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset)
 {
 	uint32_t reg = __raw_readl(s->addr_cinh + offset);
@@ -200,6 +228,35 @@ static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset)
 	return reg;
 }
 
+static inline void *qbman_cinh_read_shadow(struct qbman_swp_sys *s,
+					   uint32_t offset)
+{
+	uint32_t *shadow = (uint32_t *)(s->cena + offset);
+	unsigned int loop;
+#ifdef QBMAN_CINH_TRACE
+	pr_info(" %s (%p:%d:0x%03x) %p\n", __func__,
+		s->addr_cinh, s->idx, offset, shadow);
+#endif
+
+	for (loop = 0; loop < 16; loop++)
+		shadow[loop] = __raw_readl(s->addr_cinh + offset
+					+ loop * 4);
+#ifdef QBMAN_CINH_TRACE
+	hexdump(shadow, 64);
+#endif
+	return shadow;
+}
+
+static inline void *qbman_cinh_read_wo_shadow(struct qbman_swp_sys *s,
+					      uint32_t offset)
+{
+#ifdef QBMAN_CINH_TRACE
+	pr_info("qbman_cinh_read(%p:%d:0x%03x)\n",
+		s->addr_cinh, s->idx, offset);
+#endif
+	return s->addr_cinh + offset;
+}
+
 static inline void *qbman_cena_write_start(struct qbman_swp_sys *s,
 					   uint32_t offset)
 {
@@ -476,6 +533,82 @@ static inline int qbman_swp_sys_init(struct qbman_swp_sys *s,
 	return 0;
 }
 
+static inline int qbman_swp_sys_update(struct qbman_swp_sys *s,
+				     const struct qbman_swp_desc *d,
+				     uint8_t dqrr_size,
+				     int stash_off)
+{
+	uint32_t reg;
+	int i;
+	int cena_region_size = 4*1024;
+	uint8_t est = 1;
+#ifdef RTE_ARCH_64
+	uint8_t wn = CENA_WRITE_ENABLE;
+#else
+	uint8_t wn = CINH_WRITE_ENABLE;
+#endif
+
+	if (stash_off)
+		wn = CINH_WRITE_ENABLE;
+
+	QBMAN_BUG_ON(d->idx < 0);
+#ifdef QBMAN_CHECKING
+	/* We should never be asked to initialise for a portal that isn't in
+	 * the power-on state. (Ie. don't forget to reset portals when they are
+	 * decommissioned!)
+	 */
+	reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
+	QBMAN_BUG_ON(reg);
+#endif
+	if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+			&& (d->cena_access_mode == qman_cena_fastest_access))
+		memset(s->addr_cena, 0, cena_region_size);
+	else {
+		/* Invalidate the portal memory.
+		 * This ensures no stale cache lines
+		 */
+		for (i = 0; i < cena_region_size; i += 64)
+			dccivac(s->addr_cena + i);
+	}
+
+	if (dpaa2_svr_family == SVR_LS1080A)
+		est = 0;
+
+	if (s->eqcr_mode == qman_eqcr_vb_array) {
+		reg = qbman_set_swp_cfg(dqrr_size, wn,
+					0, 3, 2, 3, 1, 1, 1, 1, 1, 1);
+	} else {
+		if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000 &&
+			    (d->cena_access_mode == qman_cena_fastest_access))
+			reg = qbman_set_swp_cfg(dqrr_size, wn,
+						1, 3, 2, 0, 1, 1, 1, 1, 1, 1);
+		else
+			reg = qbman_set_swp_cfg(dqrr_size, wn,
+						est, 3, 2, 2, 1, 1, 1, 1, 1, 1);
+	}
+
+	if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+			&& (d->cena_access_mode == qman_cena_fastest_access))
+		reg |= 1 << SWP_CFG_CPBS_SHIFT | /* memory-backed mode */
+		       1 << SWP_CFG_VPM_SHIFT |  /* VDQCR read triggered mode */
+		       1 << SWP_CFG_CPM_SHIFT;   /* CR read triggered mode */
+
+	qbman_cinh_write(s, QBMAN_CINH_SWP_CFG, reg);
+	reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
+	if (!reg) {
+		pr_err("The portal %d is not enabled!\n", s->idx);
+		return -1;
+	}
+
+	if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+			&& (d->cena_access_mode == qman_cena_fastest_access)) {
+		qbman_cinh_write(s, QBMAN_CINH_SWP_EQCR_PI, QMAN_RT_MODE);
+		qbman_cinh_write(s, QBMAN_CINH_SWP_RCR_PI, QMAN_RT_MODE);
+	}
+
+	return 0;
+}
+
 static inline void qbman_swp_sys_finish(struct qbman_swp_sys *s)
 {
 	free(s->cena);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 08/16] drivers: enhance portal allocation failure log
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (6 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 07/16] bus/fslmc: support portal migration Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 09/16] bus/fslmc: rename the cinh read functions used for ls1088 Hemant Agrawal
                     ` (8 subsequent siblings)
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

The change adds the thread id to the log message printed when portal
allocation fails, so the failing thread can be identified.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  8 ++++++--
 drivers/event/dpaa2/dpaa2_eventdev.c        |  8 ++++++--
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c    | 12 +++++++++---
 drivers/net/dpaa2/dpaa2_ethdev.c            |  4 +++-
 drivers/net/dpaa2/dpaa2_rxtx.c              | 16 ++++++++++++----
 drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c       |  8 ++++++--
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c         | 12 +++++++++---
 7 files changed, 51 insertions(+), 17 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 6ed2701ab..cf08003cc 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1459,7 +1459,9 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 	if (!DPAA2_PER_LCORE_DPIO) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_SEC_ERR("Failure in affining portal");
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1641,7 +1643,9 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 	if (!DPAA2_PER_LCORE_DPIO) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_SEC_ERR("Failure in affining portal");
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 1833d659d..bb02ea9fb 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -74,7 +74,9 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
 		/* Affine current thread context to a qman portal */
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
-			DPAA2_EVENTDEV_ERR("Failure in affining portal");
+			DPAA2_EVENTDEV_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -273,7 +275,9 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[],
 		/* Affine current thread context to a qman portal */
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
-			DPAA2_EVENTDEV_ERR("Failure in affining portal");
+			DPAA2_EVENTDEV_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 48887beb7..fa9b53e64 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -69,7 +69,9 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_MEMPOOL_ERR("Failure in affining portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			goto err1;
 		}
 	}
@@ -198,7 +200,9 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
-			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return;
 		}
 	}
@@ -317,7 +321,9 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
-			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return ret;
 		}
 	}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 4fc550a88..4a61c6f78 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -887,7 +887,9 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return -EINVAL;
 		}
 	}
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 52d913d9e..d809e0f4b 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -760,7 +760,9 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal\n");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -872,7 +874,9 @@ uint16_t dpaa2_dev_tx_conf(void *queue)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal\n");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1011,7 +1015,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1272,7 +1278,9 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
index 997d1c873..7c21c6a52 100644
--- a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
+++ b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
@@ -70,7 +70,9 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_CMDIF_ERR("Failure in affining portal\n");
+			DPAA2_CMDIF_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -133,7 +135,9 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_CMDIF_ERR("Failure in affining portal\n");
+			DPAA2_CMDIF_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c90595400..d5202d652 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -666,7 +666,9 @@ dpdmai_dev_enqueue_multi(struct dpaa2_dpdmai_dev *dpdmai_dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -788,7 +790,9 @@ dpdmai_dev_dequeue_multijob_prefetch(
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -929,7 +933,9 @@ dpdmai_dev_dequeue_multijob_no_prefetch(
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 09/16] bus/fslmc: rename the cinh read functions used for ls1088
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (7 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 08/16] drivers: enhance portal allocation failure log Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 10/16] net/dpaa: return error on multiple mp config Hemant Agrawal
                     ` (7 subsequent siblings)
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

This patch renames the qbman I/O functions, as they only read from the
cinh register but write to the cena registers.

This makes way for adding functions which work purely in cinh mode.
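
The distinction, roughly (an illustrative snippet only, condensed from the
code added below; the real helpers keep the full EQCR bookkeeping):

    /* renamed *_cinh_read_* helpers: only the consumer index is read
     * through the cache-inhibited (cinh) area, the EQCR entry is still
     * written through the cena shadow area
     */
    s->eqcr.ci = qbman_cinh_read(&s->sys,
                    QBMAN_CINH_SWP_EQCR_CI) & full_mask;

    /* new pure-cinh helpers: the entry itself is also written through
     * the cache-inhibited area, byte by byte
     */
    p = qbman_cinh_write_start_wo_shadow(&s->sys,
                    QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
    memcpy_byte_by_byte(&p[1], &cl[1], 28);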

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 250 +++++++++++++++++++++++--
 1 file changed, 233 insertions(+), 17 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 747ccfbac..207faada3 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -78,7 +78,7 @@ qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
 static int
-qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_ring_mode_cinh_read_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
 static int
@@ -97,7 +97,7 @@ qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
 		uint32_t *flags,
 		int num_frames);
 static int
-qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_cinh_read_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
 		uint32_t *flags,
@@ -122,7 +122,7 @@ qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
 		uint32_t *flags,
 		int num_frames);
 static int
-qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_fd_cinh_read_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		struct qbman_fd **fd,
 		uint32_t *flags,
@@ -146,7 +146,7 @@ qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
 		const struct qbman_fd *fd,
 		int num_frames);
 static int
-qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_desc_cinh_read_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
 		int num_frames);
@@ -309,15 +309,15 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
 			&& (d->cena_access_mode == qman_cena_fastest_access)) {
 		p->eqcr.pi_ring_size = 32;
 		qbman_swp_enqueue_array_mode_ptr =
-				qbman_swp_enqueue_array_mode_mem_back;
+			qbman_swp_enqueue_array_mode_mem_back;
 		qbman_swp_enqueue_ring_mode_ptr =
-				qbman_swp_enqueue_ring_mode_mem_back;
+			qbman_swp_enqueue_ring_mode_mem_back;
 		qbman_swp_enqueue_multiple_ptr =
-				qbman_swp_enqueue_multiple_mem_back;
+			qbman_swp_enqueue_multiple_mem_back;
 		qbman_swp_enqueue_multiple_fd_ptr =
-				qbman_swp_enqueue_multiple_fd_mem_back;
+			qbman_swp_enqueue_multiple_fd_mem_back;
 		qbman_swp_enqueue_multiple_desc_ptr =
-				qbman_swp_enqueue_multiple_desc_mem_back;
+			qbman_swp_enqueue_multiple_desc_mem_back;
 		qbman_swp_pull_ptr = qbman_swp_pull_mem_back;
 		qbman_swp_dqrr_next_ptr = qbman_swp_dqrr_next_mem_back;
 		qbman_swp_release_ptr = qbman_swp_release_mem_back;
@@ -325,13 +325,13 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
 
 	if (dpaa2_svr_family == SVR_LS1080A) {
 		qbman_swp_enqueue_ring_mode_ptr =
-				qbman_swp_enqueue_ring_mode_cinh_direct;
+			qbman_swp_enqueue_ring_mode_cinh_read_direct;
 		qbman_swp_enqueue_multiple_ptr =
-				qbman_swp_enqueue_multiple_cinh_direct;
+			qbman_swp_enqueue_multiple_cinh_read_direct;
 		qbman_swp_enqueue_multiple_fd_ptr =
-				qbman_swp_enqueue_multiple_fd_cinh_direct;
+			qbman_swp_enqueue_multiple_fd_cinh_read_direct;
 		qbman_swp_enqueue_multiple_desc_ptr =
-				qbman_swp_enqueue_multiple_desc_cinh_direct;
+			qbman_swp_enqueue_multiple_desc_cinh_read_direct;
 	}
 
 	for (mask_size = p->eqcr.pi_ring_size; mask_size > 0; mask_size >>= 1)
@@ -835,7 +835,7 @@ static int qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s,
 	return 0;
 }
 
-static int qbman_swp_enqueue_ring_mode_cinh_direct(
+static int qbman_swp_enqueue_ring_mode_cinh_read_direct(
 		struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd)
@@ -873,6 +873,44 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
 	return 0;
 }
 
+static int qbman_swp_enqueue_ring_mode_cinh_direct(
+		struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd)
+{
+	uint32_t *p;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t eqcr_ci, full_mask, half_mask;
+
+	half_mask = (s->eqcr.pi_ci_mask>>1);
+	full_mask = s->eqcr.pi_ci_mask;
+	if (!s->eqcr.available) {
+		eqcr_ci = s->eqcr.ci;
+		s->eqcr.ci = qbman_cinh_read(&s->sys,
+				QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+				eqcr_ci, s->eqcr.ci);
+		if (!s->eqcr.available)
+			return -EBUSY;
+	}
+
+	p = qbman_cinh_write_start_wo_shadow(&s->sys,
+			QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
+	memcpy_byte_by_byte(&p[1], &cl[1], 28);
+	memcpy_byte_by_byte(&p[8], fd, sizeof(*fd));
+	lwsync();
+
+	/* Set the verb byte, have to substitute in the valid-bit */
+	p[0] = cl[0] | s->eqcr.pi_vb;
+	s->eqcr.pi++;
+	s->eqcr.pi &= full_mask;
+	s->eqcr.available--;
+	if (!(s->eqcr.pi & half_mask))
+		s->eqcr.pi_vb ^= QB_VALID_BIT;
+
+	return 0;
+}
+
 static int qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
 						const struct qbman_eq_desc *d,
 						const struct qbman_fd *fd)
@@ -999,7 +1037,7 @@ static int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
 	return num_enqueued;
 }
 
-static int qbman_swp_enqueue_multiple_cinh_direct(
+static int qbman_swp_enqueue_multiple_cinh_read_direct(
 		struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
@@ -1069,6 +1107,67 @@ static int qbman_swp_enqueue_multiple_cinh_direct(
 	return num_enqueued;
 }
 
+static int qbman_swp_enqueue_multiple_cinh_direct(
+		struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd,
+		uint32_t *flags,
+		int num_frames)
+{
+	uint32_t *p = NULL;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+	int i, num_enqueued = 0;
+
+	half_mask = (s->eqcr.pi_ci_mask>>1);
+	full_mask = s->eqcr.pi_ci_mask;
+	if (!s->eqcr.available) {
+		eqcr_ci = s->eqcr.ci;
+		s->eqcr.ci = qbman_cinh_read(&s->sys,
+				QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+				eqcr_ci, s->eqcr.ci);
+		if (!s->eqcr.available)
+			return 0;
+	}
+
+	eqcr_pi = s->eqcr.pi;
+	num_enqueued = (s->eqcr.available < num_frames) ?
+			s->eqcr.available : num_frames;
+	s->eqcr.available -= num_enqueued;
+	/* Fill in the EQCR ring */
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		memcpy_byte_by_byte(&p[1], &cl[1], 28);
+		memcpy_byte_by_byte(&p[8], &fd[i], sizeof(*fd));
+		eqcr_pi++;
+	}
+
+	lwsync();
+
+	/* Set the verb byte, have to substitute in the valid-bit */
+	eqcr_pi = s->eqcr.pi;
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		p[0] = cl[0] | s->eqcr.pi_vb;
+		if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
+			struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+
+			d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+				((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
+		}
+		eqcr_pi++;
+		if (!(eqcr_pi & half_mask))
+			s->eqcr.pi_vb ^= QB_VALID_BIT;
+	}
+
+	s->eqcr.pi = eqcr_pi & full_mask;
+
+	return num_enqueued;
+}
+
 static int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
 					       const struct qbman_eq_desc *d,
 					       const struct qbman_fd *fd,
@@ -1205,7 +1304,7 @@ static int qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
 	return num_enqueued;
 }
 
-static int qbman_swp_enqueue_multiple_fd_cinh_direct(
+static int qbman_swp_enqueue_multiple_fd_cinh_read_direct(
 		struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		struct qbman_fd **fd,
@@ -1275,6 +1374,67 @@ static int qbman_swp_enqueue_multiple_fd_cinh_direct(
 	return num_enqueued;
 }
 
+static int qbman_swp_enqueue_multiple_fd_cinh_direct(
+		struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		struct qbman_fd **fd,
+		uint32_t *flags,
+		int num_frames)
+{
+	uint32_t *p = NULL;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+	int i, num_enqueued = 0;
+
+	half_mask = (s->eqcr.pi_ci_mask>>1);
+	full_mask = s->eqcr.pi_ci_mask;
+	if (!s->eqcr.available) {
+		eqcr_ci = s->eqcr.ci;
+		s->eqcr.ci = qbman_cinh_read(&s->sys,
+				QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+				eqcr_ci, s->eqcr.ci);
+		if (!s->eqcr.available)
+			return 0;
+	}
+
+	eqcr_pi = s->eqcr.pi;
+	num_enqueued = (s->eqcr.available < num_frames) ?
+			s->eqcr.available : num_frames;
+	s->eqcr.available -= num_enqueued;
+	/* Fill in the EQCR ring */
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		memcpy_byte_by_byte(&p[1], &cl[1], 28);
+		memcpy_byte_by_byte(&p[8], fd[i], sizeof(struct qbman_fd));
+		eqcr_pi++;
+	}
+
+	lwsync();
+
+	/* Set the verb byte, have to substitute in the valid-bit */
+	eqcr_pi = s->eqcr.pi;
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		p[0] = cl[0] | s->eqcr.pi_vb;
+		if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
+			struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+
+			d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+				((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
+		}
+		eqcr_pi++;
+		if (!(eqcr_pi & half_mask))
+			s->eqcr.pi_vb ^= QB_VALID_BIT;
+	}
+
+	s->eqcr.pi = eqcr_pi & full_mask;
+
+	return num_enqueued;
+}
+
 static int qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s,
 						  const struct qbman_eq_desc *d,
 						  struct qbman_fd **fd,
@@ -1413,7 +1573,7 @@ static int qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
 	return num_enqueued;
 }
 
-static int qbman_swp_enqueue_multiple_desc_cinh_direct(
+static int qbman_swp_enqueue_multiple_desc_cinh_read_direct(
 		struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
@@ -1478,6 +1638,62 @@ static int qbman_swp_enqueue_multiple_desc_cinh_direct(
 	return num_enqueued;
 }
 
+static int qbman_swp_enqueue_multiple_desc_cinh_direct(
+		struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd,
+		int num_frames)
+{
+	uint32_t *p;
+	const uint32_t *cl;
+	uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+	int i, num_enqueued = 0;
+
+	half_mask = (s->eqcr.pi_ci_mask>>1);
+	full_mask = s->eqcr.pi_ci_mask;
+	if (!s->eqcr.available) {
+		eqcr_ci = s->eqcr.ci;
+		s->eqcr.ci = qbman_cinh_read(&s->sys,
+				QBMAN_CINH_SWP_EQCR_CI) & full_mask;
+		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+					eqcr_ci, s->eqcr.ci);
+		if (!s->eqcr.available)
+			return 0;
+	}
+
+	eqcr_pi = s->eqcr.pi;
+	num_enqueued = (s->eqcr.available < num_frames) ?
+			s->eqcr.available : num_frames;
+	s->eqcr.available -= num_enqueued;
+	/* Fill in the EQCR ring */
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		cl = qb_cl(&d[i]);
+		memcpy_byte_by_byte(&p[1], &cl[1], 28);
+		memcpy_byte_by_byte(&p[8], &fd[i], sizeof(*fd));
+		eqcr_pi++;
+	}
+
+	lwsync();
+
+	/* Set the verb byte, have to substitute in the valid-bit */
+	eqcr_pi = s->eqcr.pi;
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cinh_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		cl = qb_cl(&d[i]);
+		p[0] = cl[0] | s->eqcr.pi_vb;
+		eqcr_pi++;
+		if (!(eqcr_pi & half_mask))
+			s->eqcr.pi_vb ^= QB_VALID_BIT;
+	}
+
+	s->eqcr.pi = eqcr_pi & full_mask;
+
+	return num_enqueued;
+}
+
 static int qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
 					const struct qbman_eq_desc *d,
 					const struct qbman_fd *fd,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 10/16] net/dpaa: return error on multiple mp config
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (8 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 09/16] bus/fslmc: rename the cinh read functions used for ls1088 Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 11/16] net/dpaa: enable Tx queue taildrop Hemant Agrawal
                     ` (6 subsequent siblings)
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

Multiple buffer pools are not supported on a single device, so return an
error if a second pool is configured on the same interface.
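
Illustrative only -- what an application now sees when it tries to attach
two different mempools to the same dpaa port (port, descriptor and pool
variables below are placeholders):

    ret = rte_eth_rx_queue_setup(port_id, 0, nb_desc, socket_id,
                                 NULL, mp_a);  /* succeeds */
    ret = rte_eth_rx_queue_setup(port_id, 1, nb_desc, socket_id,
                                 NULL, mp_b);  /* now fails with -EINVAL */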

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/mempool/dpaa/dpaa_mempool.c | 1 +
 drivers/net/dpaa/dpaa_ethdev.c      | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 3a2528331..00db6f9bc 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -132,6 +132,7 @@ dpaa_mbuf_free_pool(struct rte_mempool *mp)
 		DPAA_MEMPOOL_INFO("BMAN pool freed for bpid =%d",
 				  bp_info->bpid);
 		rte_free(mp->pool_data);
+		bp_info->bp = NULL;
 		mp->pool_data = NULL;
 	}
 }
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index fce9ce2fe..0384532d2 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -587,6 +587,12 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	DPAA_PMD_INFO("Rx queue setup for queue index: %d fq_id (0x%x)",
 			queue_idx, rxq->fqid);
 
+	if (dpaa_intf->bp_info && dpaa_intf->bp_info->bp &&
+	    dpaa_intf->bp_info->mp != mp) {
+		DPAA_PMD_WARN("Multiple pools on same interface not supported");
+		return -EINVAL;
+	}
+
 	/* Max packet can fit in single buffer */
 	if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
 		;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 11/16] net/dpaa: enable Tx queue taildrop
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (9 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 10/16] net/dpaa: return error on multiple mp config Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 12/16] net/dpaa: add 2.5G support Hemant Agrawal
                     ` (5 subsequent siblings)
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

Enable congestion handling/tail drop for TX queues.
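
As the changes below show, the feature stays disabled by default and is
turned on per process through the DPAA_TX_TAILDROP_THRESHOLD environment
variable, interpreted as a frame-count threshold for the Tx congestion
group (0 or unset keeps taildrop off, values above UINT16_MAX fall back
to the default Rx threshold). For example, starting the application with
DPAA_TX_TAILDROP_THRESHOLD=512 in the environment creates one Tx CGR per
core with a 512-frame threshold, and the Tx burst then uses the slow
path, which polls the MR and frees the mbufs behind software-ERN'd
frames through the registered callback.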

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/bus/dpaa/base/qbman/qman.c        |  43 +++++++++
 drivers/bus/dpaa/include/fsl_qman.h       |  15 +++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |   2 +
 drivers/net/dpaa/dpaa_ethdev.c            | 111 ++++++++++++++++++++--
 drivers/net/dpaa/dpaa_ethdev.h            |   2 +-
 drivers/net/dpaa/dpaa_rxtx.c              |  71 ++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h              |   3 +
 7 files changed, 240 insertions(+), 7 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index b53eb9e5e..494aca1d0 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -40,6 +40,8 @@
 			spin_unlock(&__fq478->fqlock); \
 	} while (0)
 
+static qman_cb_free_mbuf qman_free_mbuf_cb;
+
 static inline void fq_set(struct qman_fq *fq, u32 mask)
 {
 	dpaa_set_bits(mask, &fq->flags);
@@ -790,6 +792,47 @@ static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq,
 	FQUNLOCK(fq);
 }
 
+void
+qman_ern_register_cb(qman_cb_free_mbuf cb)
+{
+	qman_free_mbuf_cb = cb;
+}
+
+
+void
+qman_ern_poll_free(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	u8 verb, num = 0;
+	const struct qm_mr_entry *msg;
+	const struct qm_fd *fd;
+	struct qm_mr_entry swapped_msg;
+
+	qm_mr_pvb_update(&p->p);
+	msg = qm_mr_current(&p->p);
+
+	while (msg != NULL) {
+		swapped_msg = *msg;
+		hw_fd_to_cpu(&swapped_msg.ern.fd);
+		verb = msg->ern.verb & QM_MR_VERB_TYPE_MASK;
+		fd = &swapped_msg.ern.fd;
+
+		if (unlikely(verb & 0x20)) {
+			printf("HW ERN notification, Nothing to do\n");
+		} else {
+			if ((fd->bpid & 0xff) != 0xff)
+				qman_free_mbuf_cb(fd);
+		}
+
+		num++;
+		qm_mr_next(&p->p);
+		qm_mr_pvb_update(&p->p);
+		msg = qm_mr_current(&p->p);
+	}
+
+	qm_mr_cci_consume(&p->p, num);
+}
+
 static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 {
 	const struct qm_mr_entry *msg;
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 4deea5e75..fc00417fb 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1152,6 +1152,10 @@ typedef void (*qman_cb_mr)(struct qman_portal *qm, struct qman_fq *fq,
 /* This callback type is used when handling DCP ERNs */
 typedef void (*qman_cb_dc_ern)(struct qman_portal *qm,
 				const struct qm_mr_entry *msg);
+
+/* This callback function will be used to free mbufs of ERN */
+typedef uint16_t (*qman_cb_free_mbuf)(const struct qm_fd *fd);
+
 /*
  * s/w-visible states. Ie. tentatively scheduled + truly scheduled + active +
  * held-active + held-suspended are just "sched". Things like "retired" will not
@@ -1778,6 +1782,17 @@ int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
 int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
 		       int frames_to_send);
 
+/**
+ * qman_ern_poll_free - Polling on MR and calling a callback function to free
+ * mbufs when SW ERNs received.
+ */
+void qman_ern_poll_free(void);
+
+/**
+ * qman_ern_register_cb - Register a callback function to free buffers.
+ */
+void qman_ern_register_cb(qman_cb_free_mbuf cb);
+
 /**
  * qman_enqueue_multi_fq - Enqueue multiple frames to their respective frame
  * queues.
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index e6ca4361e..ed319539c 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -61,6 +61,8 @@ DPDK_20.0 {
 	qman_enqueue;
 	qman_enqueue_multi;
 	qman_enqueue_multi_fq;
+	qman_ern_poll_free;
+	qman_ern_register_cb;
 	qman_fq_fqid;
 	qman_fq_portal_irqsource_add;
 	qman_fq_portal_irqsource_remove;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 0384532d2..2ae79c9f5 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2017-2019 NXP
+ *   Copyright 2017-2020 NXP
  *
  */
 /* System headers */
@@ -86,9 +86,12 @@ static int dpaa_push_mode_max_queue = DPAA_DEFAULT_PUSH_MODE_QUEUE;
 static int dpaa_push_queue_idx; /* Queue index which are in push mode*/
 
 
-/* Per FQ Taildrop in frame count */
+/* Per RX FQ Taildrop in frame count */
 static unsigned int td_threshold = CGR_RX_PERFQ_THRESH;
 
+/* Per TX FQ Taildrop in frame count, disabled by default */
+static unsigned int td_tx_threshold;
+
 struct rte_dpaa_xstats_name_off {
 	char name[RTE_ETH_XSTATS_NAME_SIZE];
 	uint32_t offset;
@@ -275,7 +278,11 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* Change tx callback to the real one */
-	dev->tx_pkt_burst = dpaa_eth_queue_tx;
+	if (dpaa_intf->cgr_tx)
+		dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+	else
+		dev->tx_pkt_burst = dpaa_eth_queue_tx;
+
 	fman_if_enable_rx(dpaa_intf->fif);
 
 	return 0;
@@ -869,6 +876,7 @@ int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	DPAA_PMD_INFO("Tx queue setup for queue index: %d fq_id (0x%x)",
 			queue_idx, dpaa_intf->tx_queues[queue_idx].fqid);
 	dev->data->tx_queues[queue_idx] = &dpaa_intf->tx_queues[queue_idx];
+
 	return 0;
 }
 
@@ -1239,9 +1247,19 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
 
 /* Initialise a Tx FQ */
 static int dpaa_tx_queue_init(struct qman_fq *fq,
-			      struct fman_if *fman_intf)
+			      struct fman_if *fman_intf,
+			      struct qman_cgr *cgr_tx)
 {
 	struct qm_mcc_initfq opts = {0};
+	struct qm_mcc_initcgr cgr_opts = {
+		.we_mask = QM_CGR_WE_CS_THRES |
+				QM_CGR_WE_CSTD_EN |
+				QM_CGR_WE_MODE,
+		.cgr = {
+			.cstd_en = QM_CGR_EN,
+			.mode = QMAN_CGR_MODE_FRAME
+		}
+	};
 	int ret;
 
 	ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID |
@@ -1260,6 +1278,27 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
 	opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
 	opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
 	DPAA_PMD_DEBUG("init tx fq %p, fqid 0x%x", fq, fq->fqid);
+
+	if (cgr_tx) {
+		/* Enable tail drop with cgr on this queue */
+		qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres,
+				      td_tx_threshold, 0);
+		cgr_tx->cb = NULL;
+		ret = qman_create_cgr(cgr_tx, QMAN_CGR_FLAG_USE_INIT,
+				      &cgr_opts);
+		if (ret) {
+			DPAA_PMD_WARN(
+				"tx taildrop init fail on tx fqid 0x%x(ret=%d)",
+				fq->fqid, ret);
+			goto without_cgr;
+		}
+		opts.we_mask |= QM_INITFQ_WE_CGID;
+		opts.fqd.cgid = cgr_tx->cgrid;
+		opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+		DPAA_PMD_DEBUG("Tx FQ tail drop enabled, threshold = %d\n",
+				td_tx_threshold);
+	}
+without_cgr:
 	ret = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &opts);
 	if (ret)
 		DPAA_PMD_ERR("init tx fqid 0x%x failed %d", fq->fqid, ret);
@@ -1312,6 +1351,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	struct fman_if *fman_intf;
 	struct fman_if_bpool *bp, *tmp_bp;
 	uint32_t cgrid[DPAA_MAX_NUM_PCD_QUEUES];
+	uint32_t cgrid_tx[MAX_DPAA_CORES];
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -1321,7 +1361,10 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 		eth_dev->dev_ops = &dpaa_devops;
 		/* Plugging of UCODE burst API not supported in Secondary */
 		eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
-		eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
+		if (dpaa_intf->cgr_tx)
+			eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+		else
+			eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
 #ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
 		qman_set_fq_lookup_table(
 				dpaa_intf->rx_queues->qman_fq_lookup_table);
@@ -1368,6 +1411,21 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 		return -ENOMEM;
 	}
 
+	memset(cgrid, 0, sizeof(cgrid));
+	memset(cgrid_tx, 0, sizeof(cgrid_tx));
+
+	/* if DPAA_TX_TAILDROP_THRESHOLD is set, use that value; if 0, it means
+	 * Tx tail drop is disabled.
+	 */
+	if (getenv("DPAA_TX_TAILDROP_THRESHOLD")) {
+		td_tx_threshold = atoi(getenv("DPAA_TX_TAILDROP_THRESHOLD"));
+		DPAA_PMD_DEBUG("Tail drop threshold env configured: %u",
+			       td_tx_threshold);
+		/* if a very large value is being configured */
+		if (td_tx_threshold > UINT16_MAX)
+			td_tx_threshold = CGR_RX_PERFQ_THRESH;
+	}
+
 	/* If congestion control is enabled globally*/
 	if (td_threshold) {
 		dpaa_intf->cgr_rx = rte_zmalloc(NULL,
@@ -1416,9 +1474,36 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 		goto free_rx;
 	}
 
+	/* If congestion control is enabled globally*/
+	if (td_tx_threshold) {
+		dpaa_intf->cgr_tx = rte_zmalloc(NULL,
+			sizeof(struct qman_cgr) * MAX_DPAA_CORES,
+			MAX_CACHELINE);
+		if (!dpaa_intf->cgr_tx) {
+			DPAA_PMD_ERR("Failed to alloc mem for cgr_tx\n");
+			ret = -ENOMEM;
+			goto free_rx;
+		}
+
+		ret = qman_alloc_cgrid_range(&cgrid_tx[0], MAX_DPAA_CORES,
+					     1, 0);
+		if (ret != MAX_DPAA_CORES) {
+			DPAA_PMD_WARN("insufficient CGRIDs available");
+			ret = -EINVAL;
+			goto free_rx;
+		}
+	} else {
+		dpaa_intf->cgr_tx = NULL;
+	}
+
+
 	for (loop = 0; loop < MAX_DPAA_CORES; loop++) {
+		if (dpaa_intf->cgr_tx)
+			dpaa_intf->cgr_tx[loop].cgrid = cgrid_tx[loop];
+
 		ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
-					 fman_intf);
+			fman_intf,
+			dpaa_intf->cgr_tx ? &dpaa_intf->cgr_tx[loop] : NULL);
 		if (ret)
 			goto free_tx;
 		dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
@@ -1495,6 +1580,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 
 free_rx:
 	rte_free(dpaa_intf->cgr_rx);
+	rte_free(dpaa_intf->cgr_tx);
 	rte_free(dpaa_intf->rx_queues);
 	dpaa_intf->rx_queues = NULL;
 	dpaa_intf->nb_rx_queues = 0;
@@ -1535,6 +1621,17 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
 	rte_free(dpaa_intf->cgr_rx);
 	dpaa_intf->cgr_rx = NULL;
 
+	/* Release TX congestion Groups */
+	if (dpaa_intf->cgr_tx) {
+		for (loop = 0; loop < MAX_DPAA_CORES; loop++)
+			qman_delete_cgr(&dpaa_intf->cgr_tx[loop]);
+
+		qman_release_cgrid_range(dpaa_intf->cgr_tx[loop].cgrid,
+					 MAX_DPAA_CORES);
+		rte_free(dpaa_intf->cgr_tx);
+		dpaa_intf->cgr_tx = NULL;
+	}
+
 	rte_free(dpaa_intf->rx_queues);
 	dpaa_intf->rx_queues = NULL;
 
@@ -1640,6 +1737,8 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
 	eth_dev->device = &dpaa_dev->device;
 	dpaa_dev->eth_dev = eth_dev;
 
+	qman_ern_register_cb(dpaa_free_mbuf);
+
 	/* Invoke PMD device initialization function */
 	diag = dpaa_dev_init(eth_dev);
 	if (diag == 0) {
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index da06f1faa..3eab029fd 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -110,6 +110,7 @@ struct dpaa_if {
 	struct qman_fq *rx_queues;
 	struct qman_cgr *cgr_rx;
 	struct qman_fq *tx_queues;
+	struct qman_cgr *cgr_tx;
 	struct qman_fq debug_queues[2];
 	uint16_t nb_rx_queues;
 	uint16_t nb_tx_queues;
@@ -181,5 +182,4 @@ dpaa_rx_cb_atomic(void *event,
 		  struct qman_fq *fq,
 		  const struct qm_dqrr_entry *dqrr,
 		  void **bufs);
-
 #endif
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 5dba1db8b..5dedff25e 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -398,6 +398,69 @@ dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
 	return mbuf;
 }
 
+uint16_t
+dpaa_free_mbuf(const struct qm_fd *fd)
+{
+	struct rte_mbuf *mbuf;
+	struct dpaa_bp_info *bp_info;
+	uint8_t format;
+	void *ptr;
+
+	bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	format = (fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
+	if (unlikely(format == qm_fd_sg)) {
+		struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
+		struct qm_sg_entry *sgt, *sg_temp;
+		void *vaddr, *sg_vaddr;
+		int i = 0;
+		uint16_t fd_offset = fd->offset;
+
+		vaddr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd));
+		if (!vaddr) {
+			DPAA_PMD_ERR("unable to convert physical address");
+			return -1;
+		}
+		sgt = vaddr + fd_offset;
+		sg_temp = &sgt[i++];
+		hw_sg_to_cpu(sg_temp);
+		temp = (struct rte_mbuf *)
+			((char *)vaddr - bp_info->meta_data_size);
+		sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
+						qm_sg_entry_get64(sg_temp));
+
+		first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						bp_info->meta_data_size);
+		first_seg->nb_segs = 1;
+		prev_seg = first_seg;
+		while (i < DPAA_SGT_MAX_ENTRIES) {
+			sg_temp = &sgt[i++];
+			hw_sg_to_cpu(sg_temp);
+			sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
+						qm_sg_entry_get64(sg_temp));
+			cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						      bp_info->meta_data_size);
+			first_seg->nb_segs += 1;
+			prev_seg->next = cur_seg;
+			if (sg_temp->final) {
+				cur_seg->next = NULL;
+				break;
+			}
+			prev_seg = cur_seg;
+		}
+
+		rte_pktmbuf_free_seg(temp);
+		rte_pktmbuf_free_seg(first_seg);
+		return 0;
+	}
+
+	ptr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd));
+	mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
+
+	rte_pktmbuf_free(mbuf);
+
+	return 0;
+}
+
 /* Specific for LS1043 */
 void
 dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
@@ -1011,6 +1074,14 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	return sent;
 }
 
+uint16_t
+dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	qman_ern_poll_free();
+
+	return dpaa_eth_queue_tx(q, bufs, nb_bufs);
+}
+
 uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
 			      struct rte_mbuf **bufs __rte_unused,
 		uint16_t nb_bufs __rte_unused)
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 75b093c1e..d41add704 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -254,6 +254,8 @@ struct annotations_t {
 
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
+uint16_t dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs,
+				uint16_t nb_bufs);
 uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
 
 uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
@@ -266,6 +268,7 @@ int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 			   struct qm_fd *fd,
 			   uint32_t bpid);
 
+uint16_t dpaa_free_mbuf(const struct qm_fd *fd);
 void dpaa_rx_cb(struct qman_fq **fq,
 		struct qm_dqrr_entry **dqrr, void **bufs, int num_bufs);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 12/16] net/dpaa: add 2.5G support
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (10 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 11/16] net/dpaa: enable Tx queue taildrop Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 13/16] net/dpaa: update process specific device info Hemant Agrawal
                     ` (4 subsequent siblings)
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Sachin Saxena, Gagandeep Singh

From: Sachin Saxena <sachin.saxena@nxp.com>

Handle 2.5 Gbps Ethernet ports as well.
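
Illustrative only -- on an "sgmii-2500" port an application querying the
device is now expected to see (macro names as in rte_ethdev.h):

    rte_eth_dev_info_get(port_id, &dev_info);
    /* dev_info.speed_capa == ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G */

    rte_eth_link_get_nowait(port_id, &link);
    /* link.link_speed == ETH_SPEED_NUM_2_5G */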

Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/bus/dpaa/base/fman/fman.c         | 6 ++++--
 drivers/bus/dpaa/base/fman/netcfg_layer.c | 3 ++-
 drivers/bus/dpaa/include/fman.h           | 1 +
 drivers/net/dpaa/dpaa_ethdev.c            | 9 ++++++++-
 4 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 6d77a7e39..ae26041ca 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -263,7 +263,7 @@ fman_if_init(const struct device_node *dpa_node)
 		fman_dealloc_bufs_mask_hi = 0;
 		fman_dealloc_bufs_mask_lo = 0;
 	}
-	/* Is the MAC node 1G, 10G? */
+	/* Is the MAC node 1G, 2.5G, 10G? */
 	__if->__if.is_memac = 0;
 
 	if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
@@ -279,7 +279,9 @@ fman_if_init(const struct device_node *dpa_node)
 			/* Right now forcing memac to 1g in case of error*/
 			__if->__if.mac_type = fman_mac_1g;
 		} else {
-			if (strstr(char_prop, "sgmii"))
+			if (strstr(char_prop, "sgmii-2500"))
+				__if->__if.mac_type = fman_mac_2_5g;
+			else if (strstr(char_prop, "sgmii"))
 				__if->__if.mac_type = fman_mac_1g;
 			else if (strstr(char_prop, "rgmii")) {
 				__if->__if.mac_type = fman_mac_1g;
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index 36eca88cd..b7009f229 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -44,7 +44,8 @@ dump_netcfg(struct netcfg_info *cfg_ptr)
 
 		printf("\n+ Fman %d, MAC %d (%s);\n",
 		       __if->fman_idx, __if->mac_idx,
-		       (__if->mac_type == fman_mac_1g) ? "1G" : "10G");
+		       (__if->mac_type == fman_mac_1g) ? "1G" :
+		       (__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G");
 
 		printf("\tmac_addr: %02x:%02x:%02x:%02x:%02x:%02x\n",
 		       (&__if->mac_addr)->addr_bytes[0],
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index c02d32d22..12e598b2d 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -71,6 +71,7 @@ TAILQ_HEAD(rte_fman_if_list, __fman_if);
 enum fman_mac_type {
 	fman_offline = 0,
 	fman_mac_1g,
+	fman_mac_2_5g,
 	fman_mac_10g,
 };
 
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 2ae79c9f5..1d23fc674 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -356,8 +356,13 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 
 	if (dpaa_intf->fif->mac_type == fman_mac_1g) {
 		dev_info->speed_capa = ETH_LINK_SPEED_1G;
+	} else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) {
+		dev_info->speed_capa = ETH_LINK_SPEED_1G
+					| ETH_LINK_SPEED_2_5G;
 	} else if (dpaa_intf->fif->mac_type == fman_mac_10g) {
-		dev_info->speed_capa = (ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G);
+		dev_info->speed_capa = ETH_LINK_SPEED_1G
+					| ETH_LINK_SPEED_2_5G
+					| ETH_LINK_SPEED_10G;
 	} else {
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
 			     dpaa_intf->name, dpaa_intf->fif->mac_type);
@@ -384,6 +389,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 
 	if (dpaa_intf->fif->mac_type == fman_mac_1g)
 		link->link_speed = ETH_SPEED_NUM_1G;
+	else if (dpaa_intf->fif->mac_type == fman_mac_2_5g)
+		link->link_speed = ETH_SPEED_NUM_2_5G;
 	else if (dpaa_intf->fif->mac_type == fman_mac_10g)
 		link->link_speed = ETH_SPEED_NUM_10G;
 	else
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 13/16] net/dpaa: update process specific device info
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (11 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 12/16] net/dpaa: add 2.5G support Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 14/16] bus/dpaa: enable link state interrupt Hemant Agrawal
                     ` (3 subsequent siblings)
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

For DPAA devices, the memory maps stored in the FMAN interface
information are per process. Store them in the device's process-specific
area (process_private).
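
Illustrative before/after (condensed from the diff below) of how the
ethdev ops now reach the FMAN interface:

    /* before: fman_if pointer kept in the shared per-port private data */
    struct dpaa_if *dpaa_intf = dev->data->dev_private;
    fman_if_enable_rx(dpaa_intf->fif);

    /* after: each process uses its own mapping via process_private */
    fman_if_enable_rx(dev->process_private);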

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c | 207 ++++++++++++++++-----------------
 drivers/net/dpaa/dpaa_ethdev.h |   1 -
 2 files changed, 102 insertions(+), 106 deletions(-)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 1d23fc674..abe247acd 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -149,7 +149,6 @@ dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts)
 static int
 dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 	uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
 				+ VLAN_TAG_SIZE;
 	uint32_t buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
@@ -185,7 +184,7 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
-	fman_if_set_maxfrm(dpaa_intf->fif, frame_size);
+	fman_if_set_maxfrm(dev->process_private, frame_size);
 
 	return 0;
 }
@@ -193,7 +192,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 static int
 dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 	struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
 	uint64_t rx_offloads = eth_conf->rxmode.offloads;
 	uint64_t tx_offloads = eth_conf->txmode.offloads;
@@ -232,14 +230,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 			max_len = DPAA_MAX_RX_PKT_LEN;
 		}
 
-		fman_if_set_maxfrm(dpaa_intf->fif, max_len);
+		fman_if_set_maxfrm(dev->process_private, max_len);
 		dev->data->mtu = max_len
 			- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
 	}
 
 	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
 		DPAA_PMD_DEBUG("enabling scatter mode");
-		fman_if_set_sg(dpaa_intf->fif, 1);
+		fman_if_set_sg(dev->process_private, 1);
 		dev->data->scattered_rx = 1;
 	}
 
@@ -283,18 +281,18 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 	else
 		dev->tx_pkt_burst = dpaa_eth_queue_tx;
 
-	fman_if_enable_rx(dpaa_intf->fif);
+	fman_if_enable_rx(dev->process_private);
 
 	return 0;
 }
 
 static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct fman_if *fif = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_disable_rx(dpaa_intf->fif);
+	fman_if_disable_rx(fif);
 	dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
 }
 
@@ -342,6 +340,7 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct fman_if *fif = dev->process_private;
 
 	DPAA_PMD_DEBUG(": %s", dpaa_intf->name);
 
@@ -354,18 +353,18 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_vmdq_pools = ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 
-	if (dpaa_intf->fif->mac_type == fman_mac_1g) {
+	if (fif->mac_type == fman_mac_1g) {
 		dev_info->speed_capa = ETH_LINK_SPEED_1G;
-	} else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) {
+	} else if (fif->mac_type == fman_mac_2_5g) {
 		dev_info->speed_capa = ETH_LINK_SPEED_1G
 					| ETH_LINK_SPEED_2_5G;
-	} else if (dpaa_intf->fif->mac_type == fman_mac_10g) {
+	} else if (fif->mac_type == fman_mac_10g) {
 		dev_info->speed_capa = ETH_LINK_SPEED_1G
 					| ETH_LINK_SPEED_2_5G
 					| ETH_LINK_SPEED_10G;
 	} else {
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
-			     dpaa_intf->name, dpaa_intf->fif->mac_type);
+			     dpaa_intf->name, fif->mac_type);
 		return -EINVAL;
 	}
 
@@ -384,18 +383,19 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 	struct rte_eth_link *link = &dev->data->dev_link;
+	struct fman_if *fif = dev->process_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dpaa_intf->fif->mac_type == fman_mac_1g)
+	if (fif->mac_type == fman_mac_1g)
 		link->link_speed = ETH_SPEED_NUM_1G;
-	else if (dpaa_intf->fif->mac_type == fman_mac_2_5g)
+	else if (fif->mac_type == fman_mac_2_5g)
 		link->link_speed = ETH_SPEED_NUM_2_5G;
-	else if (dpaa_intf->fif->mac_type == fman_mac_10g)
+	else if (fif->mac_type == fman_mac_10g)
 		link->link_speed = ETH_SPEED_NUM_10G;
 	else
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
-			     dpaa_intf->name, dpaa_intf->fif->mac_type);
+			     dpaa_intf->name, fif->mac_type);
 
 	link->link_status = dpaa_intf->valid;
 	link->link_duplex = ETH_LINK_FULL_DUPLEX;
@@ -406,21 +406,17 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 static int dpaa_eth_stats_get(struct rte_eth_dev *dev,
 			       struct rte_eth_stats *stats)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_stats_get(dpaa_intf->fif, stats);
+	fman_if_stats_get(dev->process_private, stats);
 	return 0;
 }
 
 static int dpaa_eth_stats_reset(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_stats_reset(dpaa_intf->fif);
+	fman_if_stats_reset(dev->process_private);
 
 	return 0;
 }
@@ -429,7 +425,6 @@ static int
 dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 		    unsigned int n)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 	unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
 	uint64_t values[sizeof(struct dpaa_if_stats) / 8];
 
@@ -439,7 +434,7 @@ dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 	if (xstats == NULL)
 		return 0;
 
-	fman_if_stats_get_all(dpaa_intf->fif, values,
+	fman_if_stats_get_all(dev->process_private, values,
 			      sizeof(struct dpaa_if_stats) / 8);
 
 	for (i = 0; i < num; i++) {
@@ -476,15 +471,13 @@ dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 	uint64_t values_copy[sizeof(struct dpaa_if_stats) / 8];
 
 	if (!ids) {
-		struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 		if (n < stat_cnt)
 			return stat_cnt;
 
 		if (!values)
 			return 0;
 
-		fman_if_stats_get_all(dpaa_intf->fif, values_copy,
+		fman_if_stats_get_all(dev->process_private, values_copy,
 				      sizeof(struct dpaa_if_stats) / 8);
 
 		for (i = 0; i < stat_cnt; i++)
@@ -533,44 +526,36 @@ dpaa_xstats_get_names_by_id(
 
 static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_promiscuous_enable(dpaa_intf->fif);
+	fman_if_promiscuous_enable(dev->process_private);
 
 	return 0;
 }
 
 static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_promiscuous_disable(dpaa_intf->fif);
+	fman_if_promiscuous_disable(dev->process_private);
 
 	return 0;
 }
 
 static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_set_mcast_filter_table(dpaa_intf->fif);
+	fman_if_set_mcast_filter_table(dev->process_private);
 
 	return 0;
 }
 
 static int dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_reset_mcast_filter_table(dpaa_intf->fif);
+	fman_if_reset_mcast_filter_table(dev->process_private);
 
 	return 0;
 }
@@ -583,6 +568,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    struct rte_mempool *mp)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct fman_if *fif = dev->process_private;
 	struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
 	struct qm_mcc_initfq opts = {0};
 	u32 flags = 0;
@@ -645,22 +631,22 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		icp.iciof = DEFAULT_ICIOF;
 		icp.iceof = DEFAULT_RX_ICEOF;
 		icp.icsz = DEFAULT_ICSZ;
-		fman_if_set_ic_params(dpaa_intf->fif, &icp);
+		fman_if_set_ic_params(fif, &icp);
 
 		fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE;
-		fman_if_set_fdoff(dpaa_intf->fif, fd_offset);
+		fman_if_set_fdoff(fif, fd_offset);
 
 		/* Buffer pool size should be equal to Dataroom Size*/
 		bp_size = rte_pktmbuf_data_room_size(mp);
-		fman_if_set_bp(dpaa_intf->fif, mp->size,
+		fman_if_set_bp(fif, mp->size,
 			       dpaa_intf->bp_info->bpid, bp_size);
 		dpaa_intf->valid = 1;
 		DPAA_PMD_DEBUG("if:%s fd_offset = %d offset = %d",
 				dpaa_intf->name, fd_offset,
-				fman_if_get_fdoff(dpaa_intf->fif));
+				fman_if_get_fdoff(fif));
 	}
 	DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
-		fman_if_get_sg_enable(dpaa_intf->fif),
+		fman_if_get_sg_enable(fif),
 		dev->data->dev_conf.rxmode.max_rx_pkt_len);
 	/* checking if push mode only, no error check for now */
 	if (!rxq->is_static &&
@@ -952,11 +938,12 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
 		return 0;
 	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
 		 fc_conf->mode == RTE_FC_FULL) {
-		fman_if_set_fc_threshold(dpaa_intf->fif, fc_conf->high_water,
+		fman_if_set_fc_threshold(dev->process_private,
+					 fc_conf->high_water,
 					 fc_conf->low_water,
-				dpaa_intf->bp_info->bpid);
+					 dpaa_intf->bp_info->bpid);
 		if (fc_conf->pause_time)
-			fman_if_set_fc_quanta(dpaa_intf->fif,
+			fman_if_set_fc_quanta(dev->process_private,
 					      fc_conf->pause_time);
 	}
 
@@ -992,10 +979,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
 		fc_conf->autoneg = net_fc->autoneg;
 		return 0;
 	}
-	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	ret = fman_if_get_fc_threshold(dev->process_private);
 	if (ret) {
 		fc_conf->mode = RTE_FC_TX_PAUSE;
-		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+		fc_conf->pause_time =
+			fman_if_get_fc_quanta(dev->process_private);
 	} else {
 		fc_conf->mode = RTE_FC_NONE;
 	}
@@ -1010,11 +998,11 @@ dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
 			     __rte_unused uint32_t pool)
 {
 	int ret;
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, index);
+	ret = fman_if_add_mac_addr(dev->process_private,
+				   addr->addr_bytes, index);
 
 	if (ret)
 		RTE_LOG(ERR, PMD, "error: Adding the MAC ADDR failed:"
@@ -1026,11 +1014,9 @@ static void
 dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
 			  uint32_t index)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-
 	PMD_INIT_FUNC_TRACE();
 
-	fman_if_clear_mac_addr(dpaa_intf->fif, index);
+	fman_if_clear_mac_addr(dev->process_private, index);
 }
 
 static int
@@ -1038,11 +1024,10 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
 		       struct rte_ether_addr *addr)
 {
 	int ret;
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 
 	PMD_INIT_FUNC_TRACE();
 
-	ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, 0);
+	ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0);
 	if (ret)
 		RTE_LOG(ERR, PMD, "error: Setting the MAC ADDR failed %d", ret);
 
@@ -1145,7 +1130,6 @@ int
 rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on)
 {
 	struct rte_eth_dev *dev;
-	struct dpaa_if *dpaa_intf;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV);
 
@@ -1154,17 +1138,16 @@ rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on)
 	if (!is_dpaa_supported(dev))
 		return -ENOTSUP;
 
-	dpaa_intf = dev->data->dev_private;
-
 	if (on)
-		fman_if_loopback_enable(dpaa_intf->fif);
+		fman_if_loopback_enable(dev->process_private);
 	else
-		fman_if_loopback_disable(dpaa_intf->fif);
+		fman_if_loopback_disable(dev->process_private);
 
 	return 0;
 }
 
-static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
+static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
+			       struct fman_if *fman_intf)
 {
 	struct rte_eth_fc_conf *fc_conf;
 	int ret;
@@ -1180,10 +1163,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf)
 		}
 	}
 	fc_conf = dpaa_intf->fc_conf;
-	ret = fman_if_get_fc_threshold(dpaa_intf->fif);
+	ret = fman_if_get_fc_threshold(fman_intf);
 	if (ret) {
 		fc_conf->mode = RTE_FC_TX_PAUSE;
-		fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif);
+		fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
 	} else {
 		fc_conf->mode = RTE_FC_NONE;
 	}
@@ -1345,6 +1328,39 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
 }
 #endif
 
+/* Initialise a network interface */
+static int
+dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev)
+{
+	struct rte_dpaa_device *dpaa_device;
+	struct fm_eth_port_cfg *cfg;
+	struct dpaa_if *dpaa_intf;
+	struct fman_if *fman_intf;
+	int dev_id;
+
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
+	dev_id = dpaa_device->id.dev_id;
+	cfg = &dpaa_netcfg->port_cfg[dev_id];
+	fman_intf = cfg->fman_if;
+	eth_dev->process_private = fman_intf;
+
+	/* Plugging of UCODE burst API not supported in Secondary */
+	dpaa_intf = eth_dev->data->dev_private;
+	eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
+	if (dpaa_intf->cgr_tx)
+		eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+	else
+		eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+	qman_set_fq_lookup_table(
+		dpaa_intf->rx_queues->qman_fq_lookup_table);
+#endif
+
+	return 0;
+}
+
 /* Initialise a network interface */
 static int
 dpaa_dev_init(struct rte_eth_dev *eth_dev)
@@ -1362,23 +1378,6 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	dpaa_intf = eth_dev->data->dev_private;
-	/* For secondary processes, the primary has done all the work */
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
-		eth_dev->dev_ops = &dpaa_devops;
-		/* Plugging of UCODE burst API not supported in Secondary */
-		eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
-		if (dpaa_intf->cgr_tx)
-			eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
-		else
-			eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
-		qman_set_fq_lookup_table(
-				dpaa_intf->rx_queues->qman_fq_lookup_table);
-#endif
-		return 0;
-	}
-
 	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
 	dev_id = dpaa_device->id.dev_id;
 	dpaa_intf = eth_dev->data->dev_private;
@@ -1388,7 +1387,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	dpaa_intf->name = dpaa_device->name;
 
 	/* save fman_if & cfg in the interface struture */
-	dpaa_intf->fif = fman_intf;
+	eth_dev->process_private = fman_intf;
 	dpaa_intf->ifid = dev_id;
 	dpaa_intf->cfg = cfg;
 
@@ -1457,7 +1456,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 		if (default_q)
 			fqid = cfg->rx_def;
 		else
-			fqid = DPAA_PCD_FQID_START + dpaa_intf->fif->mac_idx *
+			fqid = DPAA_PCD_FQID_START + fman_intf->mac_idx *
 				DPAA_PCD_FQID_MULTIPLIER + loop;
 
 		if (dpaa_intf->cgr_rx)
@@ -1529,7 +1528,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	DPAA_PMD_DEBUG("All frame queues created");
 
 	/* Get the initial configuration for flow control */
-	dpaa_fc_set_default(dpaa_intf);
+	dpaa_fc_set_default(dpaa_intf, fman_intf);
 
 	/* reset bpool list, initialize bpool dynamically */
 	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
@@ -1682,6 +1681,13 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
 			return -ENOMEM;
 		eth_dev->device = &dpaa_dev->device;
 		eth_dev->dev_ops = &dpaa_devops;
+
+		ret = dpaa_dev_init_secondary(eth_dev);
+		if (ret != 0) {
+			RTE_LOG(ERR, PMD, "secondary dev init failed\n");
+			return ret;
+		}
+
 		rte_eth_dev_probing_finish(eth_dev);
 		return 0;
 	}
@@ -1718,29 +1724,20 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
 		}
 	}
 
-	/* In case of secondary process, the device is already configured
-	 * and no further action is required, except portal initialization
-	 * and verifying secondary attachment to port name.
-	 */
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
-		eth_dev = rte_eth_dev_attach_secondary(dpaa_dev->name);
-		if (!eth_dev)
-			return -ENOMEM;
-	} else {
-		eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
-		if (eth_dev == NULL)
-			return -ENOMEM;
+	eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
+	if (!eth_dev)
+		return -ENOMEM;
 
-		eth_dev->data->dev_private = rte_zmalloc(
-						"ethdev private structure",
-						sizeof(struct dpaa_if),
-						RTE_CACHE_LINE_SIZE);
-		if (!eth_dev->data->dev_private) {
-			DPAA_PMD_ERR("Cannot allocate memzone for port data");
-			rte_eth_dev_release_port(eth_dev);
-			return -ENOMEM;
-		}
+	eth_dev->data->dev_private = rte_zmalloc(
+					"ethdev private structure",
+					sizeof(struct dpaa_if),
+					RTE_CACHE_LINE_SIZE);
+	if (!eth_dev->data->dev_private) {
+		DPAA_PMD_ERR("Cannot allocate memzone for port data");
+		rte_eth_dev_release_port(eth_dev);
+		return -ENOMEM;
 	}
+
 	eth_dev->device = &dpaa_dev->device;
 	dpaa_dev->eth_dev = eth_dev;
 
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 3eab029fd..72a9c5910 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -115,7 +115,6 @@ struct dpaa_if {
 	uint16_t nb_rx_queues;
 	uint16_t nb_tx_queues;
 	uint32_t ifid;
-	struct fman_if *fif;
 	struct dpaa_bp_info *bp_info;
 	struct rte_eth_fc_conf *fc_conf;
 };
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 14/16] bus/dpaa: enable link state interrupt
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (12 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 13/16] net/dpaa: update process specific device info Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 15/16] bus/dpaa: enable set link status Hemant Agrawal
                     ` (2 subsequent siblings)
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

APIs to enable/disable the link state interrupt and to get the link
state are defined using ioctl calls.
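
Illustrative only -- a sketch of how the new bus helpers are meant to be
used by the PMD (error handling omitted; the interface name passed in is
assumed to be the fman node name saved by the bus, and the eventfd is the
one the bus stores in the device interrupt handle):

    int efd = dpaa_dev->intr_handle.fd;          /* eventfd from bus probe */

    rte_dpaa_intr_enable(__fif->node_name, efd); /* arm link-state IRQ */
    link_status = rte_dpaa_get_link_status(__fif->node_name);
    rte_dpaa_intr_disable(__fif->node_name);     /* on stop/close */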

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/dpaa/base/fman/fman.c         |  4 +-
 drivers/bus/dpaa/base/qbman/process.c     | 68 ++++++++++++++++++-
 drivers/bus/dpaa/dpaa_bus.c               | 28 +++++++-
 drivers/bus/dpaa/include/fman.h           |  2 +
 drivers/bus/dpaa/include/process.h        | 20 ++++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  8 +++
 drivers/bus/dpaa/rte_dpaa_bus.h           |  6 +-
 drivers/common/dpaax/compat.h             |  5 +-
 drivers/net/dpaa/dpaa_ethdev.c            | 82 ++++++++++++++++++++++-
 9 files changed, 217 insertions(+), 6 deletions(-)

diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index ae26041ca..33be9e5d7 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2010-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2020 NXP
  *
  */
 
@@ -185,6 +185,8 @@ fman_if_init(const struct device_node *dpa_node)
 	}
 	memset(__if, 0, sizeof(*__if));
 	INIT_LIST_HEAD(&__if->__if.bpool_list);
+	strlcpy(__if->node_name, dpa_node->name, IF_NAME_MAX_LEN - 1);
+	__if->node_name[IF_NAME_MAX_LEN - 1] = '\0';
 	strlcpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
 	__if->node_path[PATH_MAX - 1] = '\0';
 
diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
index 2c23c98df..598b10661 100644
--- a/drivers/bus/dpaa/base/qbman/process.c
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2011-2016 Freescale Semiconductor Inc.
- * Copyright 2017 NXP
+ * Copyright 2017,2020 NXP
  *
  */
 #include <assert.h>
@@ -296,3 +296,69 @@ int bman_free_raw_portal(struct dpaa_raw_portal *portal)
 
 	return process_portal_free(&input);
 }
+
+#define DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT \
+	_IOW(DPAA_IOCTL_MAGIC, 0x0E, struct usdpaa_ioctl_link_status)
+
+#define DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT \
+	_IOW(DPAA_IOCTL_MAGIC, 0x0F, char*)
+
+int rte_dpaa_intr_enable(char *if_name, int efd)
+{
+	struct usdpaa_ioctl_link_status args;
+
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	args.efd = (uint32_t)efd;
+	strcpy(args.if_name, if_name);
+
+	ret = ioctl(fd, DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT, &args);
+	if (ret) {
+		perror("Failed to enable interrupt\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+int rte_dpaa_intr_disable(char *if_name)
+{
+	int ret = check_fd();
+
+	if (ret)
+		return ret;
+
+	ret = ioctl(fd, DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT, &if_name);
+	if (ret) {
+		perror("Failed to disable interrupt\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+#define DPAA_IOCTL_GET_LINK_STATUS \
+	_IOWR(DPAA_IOCTL_MAGIC, 0x10, struct usdpaa_ioctl_link_status_args)
+
+int rte_dpaa_get_link_status(char *if_name)
+{
+	int ret = check_fd();
+	struct usdpaa_ioctl_link_status_args args;
+
+	if (ret)
+		return ret;
+
+	strcpy(args.if_name, if_name);
+	args.link_status = 0;
+
+	ret = ioctl(fd, DPAA_IOCTL_GET_LINK_STATUS, &args);
+	if (ret) {
+		perror("Failed to get link status\n");
+		return ret;
+	}
+
+	return args.link_status;
+}
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index f27820db3..2dedb138d 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017-2019 NXP
+ *   Copyright 2017-2020 NXP
  *
  */
 /* System headers */
@@ -13,6 +13,7 @@
 #include <pthread.h>
 #include <sys/types.h>
 #include <sys/syscall.h>
+#include <sys/eventfd.h>
 
 #include <rte_byteorder.h>
 #include <rte_common.h>
@@ -545,6 +546,23 @@ rte_dpaa_bus_dev_build(void)
 	return 0;
 }
 
+static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle)
+{
+	int fd;
+
+	fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+	if (fd < 0) {
+		DPAA_BUS_ERR("Cannot set up eventfd, error %i (%s)",
+			     errno, strerror(errno));
+		return errno;
+	}
+
+	intr_handle->fd = fd;
+	intr_handle->type = RTE_INTR_HANDLE_EXT;
+
+	return 0;
+}
+
 static int
 rte_dpaa_bus_probe(void)
 {
@@ -592,6 +610,14 @@ rte_dpaa_bus_probe(void)
 		fclose(svr_file);
 	}
 
+	TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
+		if (dev->device_type == FSL_DPAA_ETH) {
+			ret = rte_dpaa_setup_intr(&dev->intr_handle);
+			if (ret)
+				DPAA_PMD_ERR("Error setting up interrupt.\n");
+		}
+	}
+
 	/* And initialize the PA->VA translation table */
 	dpaax_iova_table_populate();
 
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 12e598b2d..d90f2f5fc 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -2,6 +2,7 @@
  *
  * Copyright 2010-2012 Freescale Semiconductor, Inc.
  * All rights reserved.
+ * Copyright 2019-2020 NXP
  *
  */
 
@@ -361,6 +362,7 @@ struct fman_if_ic_params {
  */
 struct __fman_if {
 	struct fman_if __if;
+	char node_name[IF_NAME_MAX_LEN];
 	char node_path[PATH_MAX];
 	uint64_t regs_size;
 	void *ccsr_map;
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index d9ec94ee2..312da1245 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -2,6 +2,7 @@
  *
  * Copyright 2010-2011 Freescale Semiconductor, Inc.
  * All rights reserved.
+ * Copyright 2020 NXP
  *
  */
 
@@ -74,4 +75,23 @@ struct dpaa_ioctl_irq_map {
 int process_portal_irq_map(int fd,  struct dpaa_ioctl_irq_map *irq);
 int process_portal_irq_unmap(int fd);
 
+struct usdpaa_ioctl_link_status {
+	char            if_name[IF_NAME_MAX_LEN];
+	uint32_t        efd;
+};
+
+__rte_experimental
+int rte_dpaa_intr_enable(char *if_name, int efd);
+
+__rte_experimental
+int rte_dpaa_intr_disable(char *if_name);
+
+struct usdpaa_ioctl_link_status_args {
+	/* network device node name */
+	char    if_name[IF_NAME_MAX_LEN];
+	int     link_status;
+};
+__rte_experimental
+int rte_dpaa_get_link_status(char *if_name);
+
 #endif	/*  __PROCESS_H */
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index ed319539c..bf70e6656 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -96,3 +96,11 @@ DPDK_20.0 {
 
 	local: *;
 };
+
+EXPERIMENTAL {
+	global:
+
+	rte_dpaa_get_link_status;
+	rte_dpaa_intr_disable;
+	rte_dpaa_intr_enable;
+};
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 373aca978..f385f6de8 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017-2019 NXP
+ *   Copyright 2017-2020 NXP
  *
  */
 #ifndef __RTE_DPAA_BUS_H__
@@ -30,6 +30,9 @@
 #define SVR_LS1046A_FAMILY	0x87070000
 #define SVR_MASK		0xffff0000
 
+/** Device driver supports link state interrupt */
+#define RTE_DPAA_DRV_INTR_LSC  0x0008
+
 #define RTE_DEV_TO_DPAA_CONST(ptr) \
 	container_of(ptr, const struct rte_dpaa_device, device)
 
@@ -88,6 +91,7 @@ struct rte_dpaa_driver {
 	TAILQ_ENTRY(rte_dpaa_driver) next;
 	struct rte_driver driver;
 	struct rte_dpaa_bus *dpaa_bus;
+	uint32_t drv_flags;                 /**< Flags for controlling device.*/
 	enum rte_dpaa_type drv_type;
 	rte_dpaa_probe_t probe;
 	rte_dpaa_remove_t remove;
diff --git a/drivers/common/dpaax/compat.h b/drivers/common/dpaax/compat.h
index 12c9d9917..78e16fa2f 100644
--- a/drivers/common/dpaax/compat.h
+++ b/drivers/common/dpaax/compat.h
@@ -2,7 +2,7 @@
  *
  * Copyright 2011 Freescale Semiconductor, Inc.
  * All rights reserved.
- * Copyright 2019 NXP
+ * Copyright 2019-2020 NXP
  *
  */
 
@@ -390,4 +390,7 @@ static inline unsigned long get_zeroed_page(gfp_t __foo __rte_unused)
 #define atomic_dec_return(v)    rte_atomic32_sub_return(v, 1)
 #define atomic_sub_and_test(i, v) (rte_atomic32_sub_return(v, i) == 0)
 
+/* Interface name len*/
+#define IF_NAME_MAX_LEN 16
+
 #endif /* __COMPAT_H */
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index abe247acd..28c6b1c17 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -45,6 +45,7 @@
 #include <fsl_qman.h>
 #include <fsl_bman.h>
 #include <fsl_fman.h>
+#include <process.h>
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
@@ -131,6 +132,11 @@ static struct rte_dpaa_driver rte_dpaa_pmd;
 static int
 dpaa_eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info);
 
+static int dpaa_eth_link_update(struct rte_eth_dev *dev,
+				int wait_to_complete __rte_unused);
+
+static void dpaa_interrupt_handler(void *param);
+
 static inline void
 dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts)
 {
@@ -195,9 +201,18 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 	struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
 	uint64_t rx_offloads = eth_conf->rxmode.offloads;
 	uint64_t tx_offloads = eth_conf->txmode.offloads;
+	struct rte_device *rdev = dev->device;
+	struct rte_dpaa_device *dpaa_dev;
+	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif;
+	struct rte_intr_handle *intr_handle;
 
 	PMD_INIT_FUNC_TRACE();
 
+	dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+	intr_handle = &dpaa_dev->intr_handle;
+	__fif = container_of(fif, struct __fman_if, __if);
+
 	/* Rx offloads which are enabled by default */
 	if (dev_rx_offloads_nodis & ~rx_offloads) {
 		DPAA_PMD_INFO(
@@ -241,6 +256,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 		dev->data->scattered_rx = 1;
 	}
 
+	/* if the interrupts were configured on this devices*/
+	if (intr_handle && intr_handle->fd &&
+	    dev->data->dev_conf.intr_conf.lsc != 0)
+		rte_intr_callback_register(intr_handle, dpaa_interrupt_handler,
+					   (void *)dev);
+
+	rte_dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+
 	return 0;
 }
 
@@ -269,6 +292,25 @@ dpaa_supported_ptypes_get(struct rte_eth_dev *dev)
 	return NULL;
 }
 
+static void dpaa_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = param;
+	struct rte_device *rdev = dev->device;
+	struct rte_dpaa_device *dpaa_dev;
+	struct rte_intr_handle *intr_handle;
+	uint64_t buf;
+	int bytes_read;
+
+	dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+	intr_handle = &dpaa_dev->intr_handle;
+
+	bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+	if (bytes_read < 0)
+		DPAA_PMD_ERR("Error reading eventfd\n");
+	dpaa_eth_link_update(dev, 0);
+	_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+}
+
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -298,9 +340,27 @@ static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
 
 static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 {
+	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif;
+	struct rte_device *rdev = dev->device;
+	struct rte_dpaa_device *dpaa_dev;
+	struct rte_intr_handle *intr_handle;
+
 	PMD_INIT_FUNC_TRACE();
 
+	dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+	intr_handle = &dpaa_dev->intr_handle;
+	__fif = container_of(fif, struct __fman_if, __if);
+
 	dpaa_eth_dev_stop(dev);
+
+	rte_dpaa_intr_disable(__fif->node_name);
+
+	if (intr_handle && intr_handle->fd &&
+	    dev->data->dev_conf.intr_conf.lsc != 0)
+		rte_intr_callback_unregister(intr_handle,
+					     dpaa_interrupt_handler,
+					     (void *)dev);
 }
 
 static int
@@ -384,6 +444,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 	struct rte_eth_link *link = &dev->data->dev_link;
 	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif = container_of(fif, struct __fman_if, __if);
+	int ret;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -397,9 +459,23 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
 			     dpaa_intf->name, fif->mac_type);
 
-	link->link_status = dpaa_intf->valid;
+	ret = rte_dpaa_get_link_status(__fif->node_name);
+	if (ret < 0) {
+		if (ret == -EINVAL) {
+			DPAA_PMD_DEBUG("Using default link status-No Support");
+			ret = 1;
+		} else {
+			DPAA_PMD_ERR("rte_dpaa_get_link_status %d", ret);
+			return ret;
+		}
+	}
+
+	link->link_status = ret;
 	link->link_duplex = ETH_LINK_FULL_DUPLEX;
 	link->link_autoneg = ETH_LINK_AUTONEG;
+
+	DPAA_PMD_INFO("Port %d Link is %s\n", dev->data->port_id,
+		      link->link_status ? "Up" : "Down");
 	return 0;
 }
 
@@ -1743,6 +1819,9 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
 
 	qman_ern_register_cb(dpaa_free_mbuf);
 
+	if (dpaa_drv->drv_flags & RTE_DPAA_DRV_INTR_LSC)
+		eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
+
 	/* Invoke PMD device initialization function */
 	diag = dpaa_dev_init(eth_dev);
 	if (diag == 0) {
@@ -1770,6 +1849,7 @@ rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
 }
 
 static struct rte_dpaa_driver rte_dpaa_pmd = {
+	.drv_flags = RTE_DPAA_DRV_INTR_LSC,
 	.drv_type = FSL_DPAA_ETH,
 	.probe = rte_dpaa_probe,
 	.remove = rte_dpaa_remove,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 15/16] bus/dpaa: enable set link status
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (13 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 14/16] bus/dpaa: enable link state interrupt Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 16/16] net/dpaa2: do not prefetch annotation for physical mode Hemant Agrawal
  2020-05-04 12:41   ` [dpdk-dev] [PATCH v3 0/8] NXP DPAAx fixes and enhancements Hemant Agrawal
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Enable the set link status API to start/stop the PHY
device from the application.
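
For reference, a minimal sketch of how an application reaches this path via
the generic ethdev API (illustrative only, not part of this patch):

#include <rte_ethdev.h>

/* Bring the PHY of a port up or down from the application. */
static int
toggle_phy(uint16_t port_id, int up)
{
	/* These resolve to the PMD's .dev_set_link_up/.dev_set_link_down ops,
	 * which for the dpaa PMD are the dpaa_link_up()/dpaa_link_down()
	 * functions touched below.
	 */
	return up ? rte_eth_dev_set_link_up(port_id) :
		    rte_eth_dev_set_link_down(port_id);
}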

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/bus/dpaa/base/qbman/process.c     | 35 ++++++++++++++++++++---
 drivers/bus/dpaa/include/process.h        |  3 ++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  1 +
 drivers/net/dpaa/dpaa_ethdev.c            | 14 +++++++--
 4 files changed, 47 insertions(+), 6 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
index 598b10661..8ab57f105 100644
--- a/drivers/bus/dpaa/base/qbman/process.c
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -317,7 +317,7 @@ int rte_dpaa_intr_enable(char *if_name, int efd)
 
 	ret = ioctl(fd, DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT, &args);
 	if (ret) {
-		perror("Failed to enable interrupt\n");
+		printf("Failed to enable interrupt: Not Supported\n");
 		return ret;
 	}
 
@@ -333,7 +333,7 @@ int rte_dpaa_intr_disable(char *if_name)
 
 	ret = ioctl(fd, DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT, &if_name);
 	if (ret) {
-		perror("Failed to disable interrupt\n");
+		printf("Failed to disable interrupt: Not Supported\n");
 		return ret;
 	}
 
@@ -356,9 +356,36 @@ int rte_dpaa_get_link_status(char *if_name)
 
 	ret = ioctl(fd, DPAA_IOCTL_GET_LINK_STATUS, &args);
 	if (ret) {
-		perror("Failed to get link status\n");
-		return ret;
+		printf("Failed to get link status: Not Supported\n");
+		return -errno;
 	}
 
 	return args.link_status;
 }
+
+#define DPAA_IOCTL_UPDATE_LINK_STATUS \
+	_IOW(DPAA_IOCTL_MAGIC, 0x11, struct usdpaa_ioctl_link_status_args)
+
+int rte_dpaa_update_link_status(char *if_name, int link_status)
+{
+	struct usdpaa_ioctl_link_status_args args;
+	int ret;
+
+	ret = check_fd();
+	if (ret)
+		return ret;
+
+	strcpy(args.if_name, if_name);
+	args.link_status = link_status;
+
+	ret = ioctl(fd, DPAA_IOCTL_UPDATE_LINK_STATUS, &args);
+	if (ret) {
+		if (errno == EINVAL)
+			printf("Failed to set link status: Not Supported\n");
+		else
+			perror("Failed to set link status");
+		return ret;
+	}
+
+	return 0;
+}
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index 312da1245..9f8c85895 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -94,4 +94,7 @@ struct usdpaa_ioctl_link_status_args {
 __rte_experimental
 int rte_dpaa_get_link_status(char *if_name);
 
+__rte_experimental
+int rte_dpaa_update_link_status(char *if_name, int link_status);
+
 #endif	/*  __PROCESS_H */
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index bf70e6656..146f29556 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -103,4 +103,5 @@ EXPERIMENTAL {
 	rte_dpaa_get_link_status;
 	rte_dpaa_intr_disable;
 	rte_dpaa_intr_enable;
+	rte_dpaa_update_link_status;
 };
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 28c6b1c17..c0a96dd47 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -972,17 +972,27 @@ dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 static int dpaa_link_down(struct rte_eth_dev *dev)
 {
+	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif;
+
 	PMD_INIT_FUNC_TRACE();
 
-	dpaa_eth_dev_stop(dev);
+	__fif = container_of(fif, struct __fman_if, __if);
+
+	rte_dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
 	return 0;
 }
 
 static int dpaa_link_up(struct rte_eth_dev *dev)
 {
+	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif;
+
 	PMD_INIT_FUNC_TRACE();
 
-	dpaa_eth_dev_start(dev);
+	__fif = container_of(fif, struct __fman_if, __if);
+
+	rte_dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
 	return 0;
 }
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v2 16/16] net/dpaa2: do not prefetch annotation for physical mode
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (14 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 15/16] bus/dpaa: enable set link status Hemant Agrawal
@ 2020-03-06  9:57   ` Hemant Agrawal
  2020-05-04 12:41   ` [dpdk-dev] [PATCH v3 0/8] NXP DPAAx fixes and enhancements Hemant Agrawal
  16 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-03-06  9:57 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

When IOVA is a physical address, do not prefetch the annotation
of the next frame, as there is a cost involved in converting
the physical address to a virtual address.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  6 ++--
 drivers/net/dpaa2/dpaa2_rxtx.c          | 40 +++++++++++++++----------
 2 files changed, 28 insertions(+), 18 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index bde1441f4..6b07b628a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2020 NXP
  *
  */
 
@@ -403,8 +403,8 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
-#define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
-#define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
+#define DPAA2_VADDR_TO_IOVA(_vaddr) (phys_addr_t)(_vaddr)
+#define DPAA2_IOVA_TO_VADDR(_iova) (void *)(_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
 
 #endif /* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index d809e0f4b..4d024a85f 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2020 NXP
  *
  */
 
@@ -324,8 +324,8 @@ static inline struct rte_mbuf *__attribute__((hot))
 eth_fd_to_mbuf(const struct qbman_fd *fd,
 	       int port_id)
 {
-	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(
-		DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd)),
+	void *iova_addr = DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(iova_addr,
 		     rte_dpaa2_bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size);
 
 	/* need to repopulated some of the fields,
@@ -350,8 +350,7 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 		dpaa2_dev_rx_parse_new(mbuf, fd);
 	else
 		mbuf->packet_type = dpaa2_dev_rx_parse(mbuf,
-			(void *)((size_t)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd))
-			 + DPAA2_FD_PTA_SIZE));
+			(void *)((size_t)iova_addr + DPAA2_FD_PTA_SIZE));
 
 	DPAA2_PMD_DP_DEBUG("to mbuf - mbuf =%p, mbuf->buf_addr =%p, off = %d,"
 		"fd_off=%d fd =%" PRIx64 ", meta = %d  bpid =%d, len=%d\n",
@@ -518,7 +517,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int ret, num_rx = 0, pull_size;
 	uint8_t pending, status;
 	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
 	struct qbman_pull_desc pulldesc;
 	struct queue_storage_info_t *q_storage = dpaa2_q->q_storage;
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
@@ -617,12 +616,15 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 		fd = qbman_result_DQ_fd(dq_storage);
 
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
 		if (dpaa2_svr_family != SVR_LX2160A) {
-			next_fd = qbman_result_DQ_fd(dq_storage + 1);
+			const struct qbman_fd *next_fd =
+				qbman_result_DQ_fd(dq_storage + 1);
 			/* Prefetch Annotation address for the parse results */
-			rte_prefetch0((void *)(size_t)(DPAA2_GET_FD_ADDR(
-				      next_fd) + DPAA2_FD_PTA_SIZE + 16));
+			rte_prefetch0(DPAA2_IOVA_TO_VADDR((DPAA2_GET_FD_ADDR(
+				next_fd) + DPAA2_FD_PTA_SIZE + 16)));
 		}
+#endif
 
 		if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
 			bufs[num_rx] = eth_sg_fd_to_mbuf(fd, eth_data->port_id);
@@ -753,7 +755,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int ret, num_rx = 0, next_pull = nb_pkts, num_pulled;
 	uint8_t pending, status;
 	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
 	struct qbman_pull_desc pulldesc;
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
 
@@ -821,11 +823,19 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			}
 			fd = qbman_result_DQ_fd(dq_storage);
 
-			next_fd = qbman_result_DQ_fd(dq_storage + 1);
-			/* Prefetch Annotation address for the parse results */
-			rte_prefetch0(
-				(void *)(size_t)(DPAA2_GET_FD_ADDR(next_fd)
-					+ DPAA2_FD_PTA_SIZE + 16));
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+			if (dpaa2_svr_family != SVR_LX2160A) {
+				const struct qbman_fd *next_fd =
+					qbman_result_DQ_fd(dq_storage + 1);
+
+				/* Prefetch Annotation address for the parse
+				 * results.
+				 */
+				rte_prefetch0((DPAA2_IOVA_TO_VADDR(
+					DPAA2_GET_FD_ADDR(next_fd) +
+					DPAA2_FD_PTA_SIZE + 16)));
+			}
+#endif
 
 			if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
 				bufs[num_rx] = eth_sg_fd_to_mbuf(fd,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements
  2020-03-05  9:19       ` Hemant Agrawal (OSS)
@ 2020-03-06 10:12         ` David Marchand
  2020-03-10 10:36           ` Dodji Seketeli
  0 siblings, 1 reply; 109+ messages in thread
From: David Marchand @ 2020-03-06 10:12 UTC (permalink / raw)
  To: Hemant Agrawal (OSS)
  Cc: Yigit, Ferruh, dev, Neil Horman, Thomas Monjalon, Dodji Seketeli

On Thu, Mar 5, 2020 at 10:19 AM Hemant Agrawal (OSS)
<hemant.agrawal@oss.nxp.com> wrote:
> > On Thu, Mar 5, 2020 at 10:06 AM Hemant Agrawal (OSS)
> > <hemant.agrawal@oss.nxp.com> wrote:
> > >
> > > Hi David,
> > > > On Mon, Mar 2, 2020 at 10:26 AM Hemant Agrawal
> > > > <hemant.agrawal@nxp.com> wrote:
> > > > >
> > > > > This patch series add various patches for enhancing and fixing NXP
> > > > > fslmc bus, dpaa bus, and dpaax.
> > > > >
> > > > > - the main change is support to allow thread migration across
> > > > > lcores
> > > > > - improving the multi-process support
> > > >
> > > > This series triggers an ABI warning that must be investigated.
> > > >https://travis-ci.com/ovsrobot/dpdk/jobs/292904119#L2233
> > >
> > > [Hemant]
> > > As per the logs:
> > >
> > > Variables changes summary: 1 Removed, 2 Changed, 0 Added variables
> > > 1 Removed variable:
> > >   'dpaa2_portal_dqrr per_lcore_dpaa2_held_bufs'
> > {per_lcore_dpaa2_held_bufs@@DPDK_20.0}
> > > 2 Changed variables:
> > >   [C]'dpaa2_io_portal_t dpaa2_io_portal[128]' was changed at
> > dpaa2_hw_dpio.h:40:1: size of symbol changed from 5120 to 2048
> > >   [C]'dpaa2_io_portal_t per_lcore__dpaa2_io' was changed at
> > > dpaa2_hw_dpio.h:20:1: size of symbol changed from 40 to 16
> > >
> > > Error: ABI issue reported for 'abidiff --suppr devtools/libabigail.abignore --
> > no-added-syms --headers-dir1 reference/usr/local/include --headers-dir2
> > install/usr/local/include reference/dump/librte_bus_fslmc.dump
> > install/dump/librte_bus_fslmc.dump'
> > >
> > > ---------------
> > >
> > > These changes are w.r.t modifications in internal structures and variables.
> > They may be ignored.
> >
> > The ABI check considers symbol exposed in headers available to final users.
> > If those are internal, why are the headers public?
> >
>
> [Hemant] These symbols are not part of any public header files,  but they are part of *.map files to share them between different driver libs i.e bus_fslmc and net_dpaa2

I would expect libabigail to skip those symbols, so there is something
I have missed in how --headers-dirX work.


Anyway, all of those symbols in dpaa are part of the driver ABI.
We are still missing a way to mark internal symbols.
Neil had posted a framework for this
http://patchwork.dpdk.org/project/dpdk/list/?series=5004.

In order to get this series passing the checks, I recommend NXP
rebasing Neil scripts (I will help reviewing this part), then mark all
those symbols as internal in its drivers.
Other vendor will convert their drivers later, as there is no need at
the moment.

Thanks.

-- 
David Marchand


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 05/16] bus/fslmc: support handle portal alloc failure
  2020-03-02 14:58 ` [dpdk-dev] [PATCH 05/16] bus/fslmc: support handle portal alloc failure Hemant Agrawal
@ 2020-03-09 17:00   ` Ferruh Yigit
  2020-03-09 17:04     ` Ferruh Yigit
  0 siblings, 1 reply; 109+ messages in thread
From: Ferruh Yigit @ 2020-03-09 17:00 UTC (permalink / raw)
  To: Hemant Agrawal; +Cc: dev, Nipun Gupta

On 3/2/2020 2:58 PM, Hemant Agrawal wrote:
> Add the error handling on failure.
> 
> Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>


Hi,

This commit seems to be doing the following:
- Fix 'dpaa2_put_qbman_swp()', which previously may reference the 'dpio_dev' when
it is null; but the function is introduced in this patchset, so why not fix it in
the first place where it is introduced?

- Updates some log messages

- Adds a new log message

- add 'rte_atomic16_clear()' on error in 'dpaa2_get_qbman_swp()'

I assume the title "support handle portal alloc failure" refers to the last one
but can you please give more details in the commit log to clarify it. Also if
this is fixing a defect can you please reflect it in the commit title/log?

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 05/16] bus/fslmc: support handle portal alloc failure
  2020-03-09 17:00   ` Ferruh Yigit
@ 2020-03-09 17:04     ` Ferruh Yigit
  0 siblings, 0 replies; 109+ messages in thread
From: Ferruh Yigit @ 2020-03-09 17:04 UTC (permalink / raw)
  To: Hemant Agrawal; +Cc: dev, Nipun Gupta

On 3/9/2020 5:00 PM, Ferruh Yigit wrote:
> On 3/2/2020 2:58 PM, Hemant Agrawal wrote:
>> Add the error handling on failure.
>>
>> Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> 
> 
> Hi,
> 
> This commit seems to be doing the following:
> - Fix 'dpaa2_put_qbman_swp()', which previously may reference the 'dpio_dev' when
> it is null; but the function is introduced in this patchset, so why not fix it in
> the first place where it is introduced?
> 
> - Updates some log messages
> 
> - Adds a new log message
> 
> - add 'rte_atomic16_clear()' on error in 'dpaa2_get_qbman_swp()'
> 
> I assume the title "support handle portal alloc failure" refers to the last one
> but can you please give more details in the commit log to clarify it. Also if
> this is fixing a defect can you please reflect it in the commit title/log?
> 
> Thanks,
> ferruh
> 

The comment was for v2, put here by mistake ...

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements
  2020-03-06 10:12         ` David Marchand
@ 2020-03-10 10:36           ` Dodji Seketeli
  2020-04-07 10:25             ` Hemant Agrawal
  0 siblings, 1 reply; 109+ messages in thread
From: Dodji Seketeli @ 2020-03-10 10:36 UTC (permalink / raw)
  To: David Marchand
  Cc: Hemant Agrawal (OSS), Yigit, Ferruh, dev, Neil Horman, Thomas Monjalon

Hello,

David Marchand <david.marchand@redhat.com> writes:

> On Thu, Mar 5, 2020 at 10:19 AM Hemant Agrawal (OSS)
> <hemant.agrawal@oss.nxp.com> wrote:
>> > On Thu, Mar 5, 2020 at 10:06 AM Hemant Agrawal (OSS)
>> > <hemant.agrawal@oss.nxp.com> wrote:
>> > >
>> > > Hi David,
>> > > > On Mon, Mar 2, 2020 at 10:26 AM Hemant Agrawal
>> > > > <hemant.agrawal@nxp.com> wrote:
>> > > > >
>> > > > > This patch series add various patches for enhancing and fixing NXP
>> > > > > fslmc bus, dpaa bus, and dpaax.
>> > > > >
>> > > > > - the main change is support to allow thread migration across
>> > > > > lcores
>> > > > > - improving the multi-process support
>> > > >
>> > > > This series triggers an ABI warning that must be investigated.
>> > > >https://travis-ci.com/ovsrobot/dpdk/jobs/292904119#L2233
>> > >
>> > > [Hemant]
>> > > As per the logs:
>> > >
>> > > Variables changes summary: 1 Removed, 2 Changed, 0 Added variables
>> > > 1 Removed variable:
>> > >   'dpaa2_portal_dqrr per_lcore_dpaa2_held_bufs'
>> > {per_lcore_dpaa2_held_bufs@@DPDK_20.0}
>> > > 2 Changed variables:
>> > >   [C]'dpaa2_io_portal_t dpaa2_io_portal[128]' was changed at
>> > dpaa2_hw_dpio.h:40:1: size of symbol changed from 5120 to 2048
>> > >   [C]'dpaa2_io_portal_t per_lcore__dpaa2_io' was changed at
>> > > dpaa2_hw_dpio.h:20:1: size of symbol changed from 40 to 16
>> > >
>> > > Error: ABI issue reported for 'abidiff --suppr devtools/libabigail.abignore --
>> > no-added-syms --headers-dir1 reference/usr/local/include --headers-dir2
>> > install/usr/local/include reference/dump/librte_bus_fslmc.dump
>> > install/dump/librte_bus_fslmc.dump'
>> > >
>> > > ---------------
>> > >
>> > > These changes are w.r.t modifications in internal structures and variables.
>> > They may be ignored.
>> >
>> > The ABI check considers symbol exposed in headers available to final users.
>> > If those are internal, why are the headers public?
>> >
>>
>> [Hemant] These symbols are not part of any public header files, but
>> they are part of *.map files to share them between different driver
>> libs i.e bus_fslmc and net_dpaa2
>
> I would expect libabigail to skip those symbols, so there is something
> I have missed in how --headers-dirX work.

In libabigail speak, we make a difference between *ELF symbols* and
types.

--header-dirX is about telling the tool what the public *types* are.  As
you rightfully implied, types that are defined in files that are not
found in the directories specified by --header-dirX are considered to be
private types and are thus not shown in the ABI change report.

ELF symbols however are a different matter.  Header files don't usually
define ELF symbols, be they variable or function symbols.  Header files
can at most declare variables or functions that would be actually
defined elsewhere in source code, leading to the definition of ELF
variable or function symbols in the final binary.  At this point, we
aren't talking about types anymore, as the ELF format doesn't know what
types (in C or any other language) are.  So --header-dirX don't deal with
ELF symbols.

And from what I understand from the message quoted above, the changes we
are talking about have to do with ELF variable symbols whose size has
changed.  So in practice, these are global arrays (exposed at the
binary level as an ELF variable symbol of a given size) with public
visibility whose size has changed.

So my guess would be that if you guys don't want these arrays to be part of
the binary interface of this library, they should probably be declared
static at the C level and accessed through some accessor function or
something like that.  At least, that's my humble uninformed opinion.

In the meantime, the tooling can be taught to ignore changes to these
ELF symbols, as you guys all know already.


> Anyway, all of those symbols in dpaa are part of the driver ABI.
> We are still missing a way to mark internal symbols.
> Neil had posted a framework for this
> http://patchwork.dpdk.org/project/dpdk/list/?series=5004.
>
> In order to get this series passing the checks, I recommend NXP
> rebasing Neil scripts (I will help reviewing this part), then mark all
> those symbols as internal in its drivers.
> Other vendor will convert their drivers later, as there is no need at
> the moment.
>
> Thanks.

Cheers,

-- 
		Dodji


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v2 05/16] bus/fslmc: support handle portal alloc failure
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 05/16] bus/fslmc: support handle portal alloc failure Hemant Agrawal
@ 2020-03-13 16:20     ` Ferruh Yigit
  0 siblings, 0 replies; 109+ messages in thread
From: Ferruh Yigit @ 2020-03-13 16:20 UTC (permalink / raw)
  To: Hemant Agrawal, dev; +Cc: Nipun Gupta

On 3/6/2020 9:57 AM, Hemant Agrawal wrote:
> Add the error handling on failure.
> 
> Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>

[Copy/paste from v1 comment which put there by mistake]

Hi,

This commit seems to be doing the following:
- Fix 'dpaa2_put_qbman_swp()', which previously may reference the 'dpio_dev' when
it is null; but the function is introduced in this patchset, so why not fix it in
the first place where it is introduced?

- Updates some log messages

- Adds a new log message

- add 'rte_atomic16_clear()' on error in 'dpaa2_get_qbman_swp()'

I assume the title "support handle portal alloc failure" refers to the last one
but can you please give more details in the commit log to clarify it. Also if
this is fixing a defect can you please reflect it in the commit title/log?

Thanks,
ferruh


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements
  2020-03-10 10:36           ` Dodji Seketeli
@ 2020-04-07 10:25             ` Hemant Agrawal
  2020-04-07 12:20               ` Thomas Monjalon
  2020-04-08  7:20               ` Dodji Seketeli
  0 siblings, 2 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-04-07 10:25 UTC (permalink / raw)
  To: Dodji Seketeli, David Marchand
  Cc: Hemant Agrawal (OSS), Yigit, Ferruh, dev, Neil Horman, Thomas Monjalon

HI Dodji,
> 
> David Marchand <david.marchand@redhat.com> writes:
> 
> > On Thu, Mar 5, 2020 at 10:19 AM Hemant Agrawal (OSS)
> > <hemant.agrawal@oss.nxp.com> wrote:
> >> > On Thu, Mar 5, 2020 at 10:06 AM Hemant Agrawal (OSS)
> >> > <hemant.agrawal@oss.nxp.com> wrote:
> >> > >
> >> > > Hi David,
> >> > > > On Mon, Mar 2, 2020 at 10:26 AM Hemant Agrawal
> >> > > > <hemant.agrawal@nxp.com> wrote:
> >> > > > >
> >> > > > > This patch series add various patches for enhancing and
> >> > > > > fixing NXP fslmc bus, dpaa bus, and dpaax.
> >> > > > >
> >> > > > > - the main change is support to allow thread migration across
> >> > > > > lcores
> >> > > > > - improving the multi-process support
> >> > > >
> >> > > > This series triggers an ABI warning that must be investigated.
> >> > > >https://travis-ci.com/ovsrobot/dpdk/jobs/292904119#L2233
> >> > >
> >> > > [Hemant]
> >> > > As per the logs:
> >> > >
> >> > > Variables changes summary: 1 Removed, 2 Changed, 0 Added
> >> > > variables
> >> > > 1 Removed variable:
> >> > >   'dpaa2_portal_dqrr per_lcore_dpaa2_held_bufs'
> >> > {per_lcore_dpaa2_held_bufs@@DPDK_20.0}
> >> > > 2 Changed variables:
> >> > >   [C]'dpaa2_io_portal_t dpaa2_io_portal[128]' was changed at
> >> > dpaa2_hw_dpio.h:40:1: size of symbol changed from 5120 to 2048
> >> > >   [C]'dpaa2_io_portal_t per_lcore__dpaa2_io' was changed at
> >> > > dpaa2_hw_dpio.h:20:1: size of symbol changed from 40 to 16
> >> > >
> >> > > Error: ABI issue reported for 'abidiff --suppr
> >> > > devtools/libabigail.abignore --
> >> > no-added-syms --headers-dir1 reference/usr/local/include
> >> > --headers-dir2 install/usr/local/include
> >> > reference/dump/librte_bus_fslmc.dump
> >> > install/dump/librte_bus_fslmc.dump'
> >> > >
> >> > > ---------------
> >> > >
> >> > > These changes are w.r.t modifications in internal structures and
> variables.
> >> > They may be ignored.
> >> >
> >> > The ABI check considers symbol exposed in headers available to final
> users.
> >> > If those are internal, why are the headers public?
> >> >
> >>
> >> [Hemant] These symbols are not part of any public header files, but
> >> they are part of *.map files to share them between different driver
> >> libs i.e bus_fslmc and net_dpaa2
> >
> > I would expect libabigail to skip those symbols, so there is something
> > I have missed in how --headers-dirX work.
> 
> In libabigail speak, we make a difference between *ELF symbols* and types.
> 
> --header-dirX is about telling the tool what the public *types* are.  As you
> rightfully implied, types that are defined in files that are not found in the
> directories specified by --header-dirX are considered to be private types and
> are thus not shown in the ABI change report.
> 
> ELF symbols however are a different matter.  Header files don't usually define
> ELF symbols, be they variable or function symbols.  Header files can at most
> declare variables or functions that would be actually defined elsewhere in
> source code, leading to the definition of ELF variable or function symbols in the
> final binary.  At this point, we aren't talking about types anymore, as the ELF
> format doesn't know what types (in C or any other language) are.  So --header-
> dirX don't deal with ELF symbols.
> 
> And from what I understand from the message quoted above, the changes we
> are talking about have to do with ELF variable symbols whose size has
> changed.  So in practice, these are global arrays (exposed at the binary level
> as an ELF variable symbol of a given size) with public visibility whose size has
> changed.
> 
> So my guess would be that if you guys don't want these arrays to be part of the
> binary interface of this library, they should probably be declared static at the C
> level and accessed through some accessor function or something like that.  At
> least, that's my humble uninformed opinion.

[Hemant] Actually some of these are in the datapath; there is a performance impact in accessing them via function calls.
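
Roughly, and only as a sketch of the existing pattern (see dpaa2_hw_dpio.h),
the per-lcore symbol is defined and used like this:

#include <rte_per_lcore.h>

/* expands to a __thread variable whose ELF symbol is per_lcore__dpaa2_io;
 * dpaa2_io_portal_t comes from the driver-internal dpaa2_hw_dpio.h
 */
RTE_DEFINE_PER_LCORE(dpaa2_io_portal_t, _dpaa2_io);

/* datapath access is a direct TLS load, with no function call */
#define DPAA2_PER_LCORE_DPIO (RTE_PER_LCORE(_dpaa2_io).dpio_dev)

so routing every such access through an exported accessor function would add a
call on the fast path.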

> 
> In the meantime, the tooling can be taught to ignore changes to these ELF
> symbols, as you guys all know already.
> 
[Hemant] Will you please help me with adding an entry to libabigail.abignore?
I tried doing the following, but it is not helping:
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -2,10 +2,15 @@
         symbol_version = EXPERIMENTAL
 [suppress_variable]
         symbol_version = EXPERIMENTAL
+       name = per_lcore__dpaa2_io
+       name = dpaa2_io_portal

 ; Explicit ignore for driver-only ABI
 [suppress_type]
         name = rte_cryptodev_ops
+       name = dpaa2_io_portal_t
> 
> > Anyway, all of those symbols in dpaa are part of the driver ABI.
> > We are still missing a way to mark internal symbols.
> > Neil had posted a framework for this
> >
> > http://patchwork.dpdk.org/project/dpdk/list/?series=5004.
> >
> > In order to get this series passing the checks, I recommend NXP
> > rebasing Neil scripts (I will help reviewing this part), then mark all
> > those symbols as internal in its drivers.
> > Other vendor will convert their drivers later, as there is no need at
> > the moment.
> >
[Hemant] I have commented on Neil's series.  It needs more changes in existing code. An approach like __rte_experimental will work better.
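
To give an idea of the direction (only a sketch; the exact syntax depends on how
Neil's series lands), the affected symbols would move into an INTERNAL block in
rte_bus_fslmc_version.map, mirroring the EXPERIMENTAL block used today:

INTERNAL {
	global:

	dpaa2_io_portal;
	per_lcore__dpaa2_io;
};

and their declarations would carry an __rte_internal tag, analogous to how
__rte_experimental is used today.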

> > Thanks.
> 

Regards,
Hemant

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements
  2020-04-07 10:25             ` Hemant Agrawal
@ 2020-04-07 12:20               ` Thomas Monjalon
  2020-04-08  7:20               ` Dodji Seketeli
  1 sibling, 0 replies; 109+ messages in thread
From: Thomas Monjalon @ 2020-04-07 12:20 UTC (permalink / raw)
  To: Hemant Agrawal (OSS), Hemant Agrawal
  Cc: Dodji Seketeli, David Marchand, dev, Yigit, Ferruh, dev, Neil Horman

07/04/2020 12:25, Hemant Agrawal:
> > In the mean time, the tooling can be tought to ignore changes to these ELF
> > symbols, as you you guys all know already.
> > 
> [Hemant] will you please help me about adding entry to libagigail.abignore 
> I tried doing following, but it is not helping
> 
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -2,10 +2,15 @@
>          symbol_version = EXPERIMENTAL
>  [suppress_variable]
>          symbol_version = EXPERIMENTAL
> +       name = per_lcore__dpaa2_io
> +       name = dpaa2_io_portal
> 
>  ; Explicit ignore for driver-only ABI
>  [suppress_type]
>          name = rte_cryptodev_ops
> +       name = dpaa2_io_portal_t
> > 
> > > Anyway, all of those symbols in dpaa are part of the driver ABI.
> > > We are still missing a way to mark internal symbols.
> > > Neil had posted a framework for this
> > >
> > > http://patchwork.dpdk.org/project/dpdk/list/?series=5004.
> > >
> > > In order to get this series passing the checks, I recommend NXP
> > > rebasing Neil scripts (I will help reviewing this part), then mark all
> > > those symbols as internal in its drivers.
> > > Other vendor will convert their drivers later, as there is no need at
> > > the moment.
> > >
> [Hemant] I have commented on Neil's series.
> It needs more changes in existing code.
> An approach like __rte_experimental will work better.

I guess you mean __rte_internal?

Please Hemant don't wait for someone else filling the gap.
If __rte_internal is the right approach, please complete and use it.




^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements
  2020-04-07 10:25             ` Hemant Agrawal
  2020-04-07 12:20               ` Thomas Monjalon
@ 2020-04-08  7:20               ` Dodji Seketeli
  2020-04-08  7:52                 ` Dodji Seketeli
  1 sibling, 1 reply; 109+ messages in thread
From: Dodji Seketeli @ 2020-04-08  7:20 UTC (permalink / raw)
  To: Hemant Agrawal
  Cc: Dodji Seketeli, David Marchand, Hemant Agrawal (OSS),
	Yigit, Ferruh, dev, Neil Horman, Thomas Monjalon

Hello Hemant,

Hemant Agrawal <hemant.agrawal@nxp.com> writes:

[...]

>> >> > > [Hemant]
>> >> > > As per the logs:
>> >> > >
>> >> > > Variables changes summary: 1 Removed, 2 Changed, 0 Added
>> >> > > variables
>> >> > > 1 Removed variable:
>> >> > >   'dpaa2_portal_dqrr per_lcore_dpaa2_held_bufs'
>> >> > {per_lcore_dpaa2_held_bufs@@DPDK_20.0}
>> >> > > 2 Changed variables:
>> >> > >   [C]'dpaa2_io_portal_t dpaa2_io_portal[128]' was changed at
>> >> > dpaa2_hw_dpio.h:40:1: size of symbol changed from 5120 to 2048
>> >> > >   [C]'dpaa2_io_portal_t per_lcore__dpaa2_io' was changed at
>> >> > > dpaa2_hw_dpio.h:20:1: size of symbol changed from 40 to 16
>> >> > >
>> >> > > Error: ABI issue reported for 'abidiff --suppr
>> >> > > devtools/libabigail.abignore --
>> >> > no-added-syms --headers-dir1 reference/usr/local/include
>> >> > --headers-dir2 install/usr/local/include
>> >> > reference/dump/librte_bus_fslmc.dump
>> >> > install/dump/librte_bus_fslmc.dump'

[...]

>> In the meantime, the tooling can be taught to ignore changes to these ELF
>> symbols, as you guys all know already.
>> 
> [Hemant] Will you please help me with adding an entry to libabigail.abignore?
> I tried doing the following, but it is not helping:
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -2,10 +2,15 @@
>          symbol_version = EXPERIMENTAL
>  [suppress_variable]
>          symbol_version = EXPERIMENTAL
> +       name = per_lcore__dpaa2_io
> +       name = dpaa2_io_portal
>
>  ; Explicit ignore for driver-only ABI
>  [suppress_type]
>          name = rte_cryptodev_ops
> +       name = dpaa2_io_portal_t

So, I understand you want the tooling to ignore changes to the global
arrays dpaa2_io_portal and per_lcore__dpaa2_io, right?

If that is correct, then here are the entries you should add to the
libabigail.abignore file (please make sure the comments I have added
there are accurate):

        [suppress_variable]
          # This global variable is exported by the binary but is not part of
          # the logical ABI.  In a perfect world, that variable should not be
          # global, and we should access it via an accessor function.  We do
          # that right now because of performance concerns.
          name = dpaa2_io_portal

        [suppress_variable]
          # This global variable is exported by the binary but is not part of
          # the logical ABI.  In a perfect world, that variable should not be
          # global, and we should access it via an accessor function.  We do
          # that right now because of performance concerns.
          name = per_lcore__dpaa2_io

I hope this helps.

Cheers,

-- 
		Dodji


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements
  2020-04-08  7:20               ` Dodji Seketeli
@ 2020-04-08  7:52                 ` Dodji Seketeli
  2020-04-08 12:35                   ` Thomas Monjalon
  0 siblings, 1 reply; 109+ messages in thread
From: Dodji Seketeli @ 2020-04-08  7:52 UTC (permalink / raw)
  To: Dodji Seketeli
  Cc: Hemant Agrawal, David Marchand, Hemant Agrawal (OSS),
	Yigit, Ferruh, dev, Neil Horman, Thomas Monjalon

Hello Thomas, Hemant,

Thomas Monjalon <thomas@monjalon.net> writes:

> 07/04/2020 12:25, Hemant Agrawal:

[...]

>> [Hemant] I have commented on Neil's series.
>> It needs more changes in existing code.
>> An approach like __rte_experimental will work better.
>
> I guess you mean __rte_internal?
>
> Please Hemant don't wait for someone else filling the gap.
> If __rte_internal is the right approach, please complete and use it.

Just so that I understand, is __rte_internal an ELF version that the
symbols per_lcore_dpaa2_held_bufs, dpaa2_io_portal and
per_lcore__dpaa2_io should have in the binary?

If that is the case, then it seems to me that the __rte_internal
approach that you are suggesting would be a much better approach than
the one I replied to Hemant about below.

I didn't mean to tell Hemant what approach he should take :-) I was just
trying to help him get the syntax of a libabigail suppression
specification right.

Sorry for the confusion I might have induced.

Dodji Seketeli <dseketel@redhat.com> writes:

> Hello Hemant,
>
> Hemant Agrawal <hemant.agrawal@nxp.com> writes:
>
> [...]
>
>>> >> > > [Hemant]
>>> >> > > As per the logs:
>>> >> > >
>>> >> > > Variables changes summary: 1 Removed, 2 Changed, 0 Added
>>> >> > > variables
>>> >> > > 1 Removed variable:
>>> >> > >   'dpaa2_portal_dqrr per_lcore_dpaa2_held_bufs'
>>> >> > {per_lcore_dpaa2_held_bufs@@DPDK_20.0}
>>> >> > > 2 Changed variables:
>>> >> > >   [C]'dpaa2_io_portal_t dpaa2_io_portal[128]' was changed at
>>> >> > dpaa2_hw_dpio.h:40:1: size of symbol changed from 5120 to 2048
>>> >> > >   [C]'dpaa2_io_portal_t per_lcore__dpaa2_io' was changed at
>>> >> > > dpaa2_hw_dpio.h:20:1: size of symbol changed from 40 to 16
>>> >> > >
>>> >> > > Error: ABI issue reported for 'abidiff --suppr
>>> >> > > devtools/libabigail.abignore --
>>> >> > no-added-syms --headers-dir1 reference/usr/local/include
>>> >> > --headers-dir2 install/usr/local/include
>>> >> > reference/dump/librte_bus_fslmc.dump
>>> >> > install/dump/librte_bus_fslmc.dump'
>
> [...]
>
>>> In the meantime, the tooling can be taught to ignore changes to these ELF
>>> symbols, as you guys all know already.
>>> 
>> [Hemant] Will you please help me with adding an entry to libabigail.abignore?
>> I tried doing the following, but it is not helping:
>> --- a/devtools/libabigail.abignore
>> +++ b/devtools/libabigail.abignore
>> @@ -2,10 +2,15 @@
>>          symbol_version = EXPERIMENTAL
>>  [suppress_variable]
>>          symbol_version = EXPERIMENTAL
>> +       name = per_lcore__dpaa2_io
>> +       name = dpaa2_io_portal
>>
>>  ; Explicit ignore for driver-only ABI
>>  [suppress_type]
>>          name = rte_cryptodev_ops
>> +       name = dpaa2_io_portal_t
>
> So, I understand you want the tooling to ignore changes to the global
> arrays dpaa2_io_portal and per_lcore__dpaa2_io, right?
>
> If that is correct, then here are the entries you should add to the
> libabigail.abignore file (please make sure the comments I have added
> there are accurate):
>
>         [suppress_variable]
>           # This global variable is exported by the binary but is not part of
>           # the logical ABI.  In a perfect world, that variable should not be
>           # global, and we should access it via an accessor function.  We do
>           # that right now because of performance concerns.
>           name = dpaa2_io_portal
>
>         [suppress_variable]
>           # This global variable is exported by the binary but is not part of
>           # the logical ABI.  In a perfect world, that variable should not be
>           # global, and we should access it via an accessor function.  We do
>           # that right now because of performance concerns.
>           name = per_lcore__dpaa2_io
>
> I hope this helps.
>
> Cheers,

-- 
		Dodji


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements
  2020-04-08  7:52                 ` Dodji Seketeli
@ 2020-04-08 12:35                   ` Thomas Monjalon
  0 siblings, 0 replies; 109+ messages in thread
From: Thomas Monjalon @ 2020-04-08 12:35 UTC (permalink / raw)
  To: Dodji Seketeli
  Cc: Hemant Agrawal, David Marchand, Hemant Agrawal (OSS),
	Yigit, Ferruh, dev, Neil Horman

08/04/2020 09:52, Dodji Seketeli:
> Hello Thomas, Hemant,
> 
> Thomas Monjalon <thomas@monjalon.net> writes:
> > 07/04/2020 12:25, Hemant Agrawal:
> 
> [...]
> 
> >> [Hemant] I have commented on Neil's series.
> >> It needs more changes in existing code.
> >> An approach like __rte_experimental will work better.
> >
> > I guess you mean __rte_internal?
> >
> > Please Hemant don't wait for someone else filling the gap.
> > If __rte_internal is the right approach, please complete and use it.
> 
> Just so that I understand, is __rte_internal an ELF version that the
> symbols per_lcore_dpaa2_held_bufs, dpaa2_io_portal and
> per_lcore__dpaa2_io should have in the binary?

Correct

> If that is the case, then it seems to me that the __rte_internal
> approach that you are suggesting would be a much better approach that
> the one I replied to Hemant about below.

Yes I think we all agree, just waiting for the patch to be ready.

> I didn't mean to tell Hemant what approach he should take :-) I was just
> trying to help him get the syntax of a libabigail suppression
> specification right.
> 
> Sorry for the confusion I might have induced.

No problem, thanks for the help.




^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v3 0/8] NXP DPAAx fixes and enhancements
  2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
                     ` (15 preceding siblings ...)
  2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 16/16] net/dpaa2: do not prefetch annotation for physical mode Hemant Agrawal
@ 2020-05-04 12:41   ` Hemant Agrawal
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 1/8] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
                       ` (8 more replies)
  16 siblings, 9 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-04 12:41 UTC (permalink / raw)
  To: dev, ferruh.yigit

v3: Limiting the patches to avoid ABI breakage.


Apeksha Gupta (1):
  bus/fslmc: fix dereferencing null pointer

Hemant Agrawal (2):
  net/dpaa2: add default Rx params in devinfo
  net/dpaa2: reduce prints in queue count functions

Jun Yang (1):
  net/dpaa2: use cong group id for multiple tcs

Nipun Gupta (3):
  net/dpaa2: do not prefetch annotation for physical mode
  drivers: dpaa2 enhance portal alloc failure log
  net/dpaa2: support UDP dst port based muxing

Rohit Raj (1):
  net/dpaa2: fix 10g port negotiation issue

 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h     |  6 +--
 drivers/bus/fslmc/qbman/qbman_debug.c       |  9 ++--
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  8 ++-
 drivers/event/dpaa2/dpaa2_eventdev.c        |  8 ++-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c    | 12 +++--
 drivers/net/dpaa/dpaa_ethdev.c              |  4 ++
 drivers/net/dpaa/dpaa_ethdev.h              |  1 +
 drivers/net/dpaa2/dpaa2_ethdev.c            | 32 ++++++++----
 drivers/net/dpaa2/dpaa2_ethdev.h            |  2 +
 drivers/net/dpaa2/dpaa2_mux.c               | 24 ++++++++-
 drivers/net/dpaa2/dpaa2_rxtx.c              | 56 ++++++++++++++-------
 drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c       |  8 ++-
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c         | 12 +++--
 13 files changed, 134 insertions(+), 48 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v3 1/8] bus/fslmc: fix dereferencing null pointer
  2020-05-04 12:41   ` [dpdk-dev] [PATCH v3 0/8] NXP DPAAx fixes and enhancements Hemant Agrawal
@ 2020-05-04 12:41     ` Hemant Agrawal
  2020-05-06 21:08       ` Ferruh Yigit
  2020-05-06 21:14       ` Ferruh Yigit
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 2/8] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
                       ` (7 subsequent siblings)
  8 siblings, 2 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-04 12:41 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: stable, Apeksha Gupta

From: Apeksha Gupta <apeksha.gupta@nxp.com>

This patch fixes a null pointer dereferencing issue
reported by NXP's internal Coverity scan.

Fixes: 6fef517e17cf ("bus/fslmc: add qman HW fq query count API")
Cc: stable@dpdk.org

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 0bb2ce880f..34374ae4b6 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -20,26 +20,27 @@ struct qbman_fq_query_desc {
 	uint8_t verb;
 	uint8_t reserved[3];
 	uint32_t fqid;
-	uint8_t reserved2[57];
+	uint8_t reserved2[56];
 };
 
 int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
 			 struct qbman_fq_query_np_rslt *r)
 {
 	struct qbman_fq_query_desc *p;
+	struct qbman_fq_query_np_rslt *var;
 
 	p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->fqid = fqid;
-	*r = *(struct qbman_fq_query_np_rslt *)qbman_swp_mc_complete(s, p,
-						QBMAN_FQ_QUERY_NP);
-	if (!r) {
+	var = qbman_swp_mc_complete(s, p, QBMAN_FQ_QUERY_NP);
+	if (!var) {
 		pr_err("qbman: Query FQID %d NP fields failed, no response\n",
 		       fqid);
 		return -EIO;
 	}
+	*r = *var;
 
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY_NP);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v3 2/8] net/dpaa2: fix 10g port negotiation issue
  2020-05-04 12:41   ` [dpdk-dev] [PATCH v3 0/8] NXP DPAAx fixes and enhancements Hemant Agrawal
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 1/8] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
@ 2020-05-04 12:41     ` Hemant Agrawal
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 3/8] net/dpaa2: do not prefetch annotation for physical mode Hemant Agrawal
                       ` (6 subsequent siblings)
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-04 12:41 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: stable, Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Fix the 10G port negotiation issue with another 10G/non-10G port.
Initialize the port link speed.

Fixes: c5acbb5ea20e ("net/dpaa2: support link status event")
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 2cde55e7cc..4fc550a885 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -553,9 +553,6 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
 		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
 
-	/* update the current status */
-	dpaa2_dev_link_update(dev, 0);
-
 	return 0;
 }
 
@@ -1757,6 +1754,7 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	/* changing tx burst function to start enqueues */
 	dev->tx_pkt_burst = dpaa2_dev_tx;
 	dev->data->dev_link.link_status = state.up;
+	dev->data->dev_link.link_speed = state.rate;
 
 	if (state.up)
 		DPAA2_PMD_INFO("Port %d Link is Up", dev->data->port_id);
-- 
2.17.1
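
A rough application-side check of what this fix changes, assuming the standard ethdev
link API (the helper name and output format below are illustrative only): once the PMD
fills in dev_link.link_speed, rte_eth_link_get_nowait() reports the negotiated rate
instead of 0.

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the link state of one port; illustrative helper, not driver code. */
static void print_link(uint16_t port_id)
{
	struct rte_eth_link link = { 0 };

	rte_eth_link_get_nowait(port_id, &link);

	if (link.link_status)
		printf("port %u: up, %u Mbps, %s-duplex\n", port_id,
		       link.link_speed,
		       link.link_duplex == ETH_LINK_FULL_DUPLEX ? "full" : "half");
	else
		printf("port %u: down\n", port_id);
}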


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v3 3/8] net/dpaa2: do not prefetch annotation for physical mode
  2020-05-04 12:41   ` [dpdk-dev] [PATCH v3 0/8] NXP DPAAx fixes and enhancements Hemant Agrawal
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 1/8] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 2/8] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
@ 2020-05-04 12:41     ` Hemant Agrawal
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 4/8] net/dpaa2: add default Rx params in devinfo Hemant Agrawal
                       ` (5 subsequent siblings)
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-04 12:41 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

When the IOVA is a physical address, do not prefetch the annotation
of the next frame, as there is a cost involved in converting the
physical address to a virtual address.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  6 ++--
 drivers/net/dpaa2/dpaa2_rxtx.c          | 40 +++++++++++++++----------
 2 files changed, 28 insertions(+), 18 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 44d3d49c7a..368fe7c688 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2020 NXP
  *
  */
 
@@ -395,8 +395,8 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
-#define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
-#define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
+#define DPAA2_VADDR_TO_IOVA(_vaddr) (phys_addr_t)(_vaddr)
+#define DPAA2_IOVA_TO_VADDR(_iova) (void *)(_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
 
 #endif /* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 703f0549ad..89a8221cb8 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2020 NXP
  *
  */
 
@@ -324,8 +324,8 @@ static inline struct rte_mbuf *__rte_hot
 eth_fd_to_mbuf(const struct qbman_fd *fd,
 	       int port_id)
 {
-	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(
-		DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd)),
+	void *iova_addr = DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(iova_addr,
 		     rte_dpaa2_bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size);
 
 	/* need to repopulated some of the fields,
@@ -350,8 +350,7 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 		dpaa2_dev_rx_parse_new(mbuf, fd);
 	else
 		mbuf->packet_type = dpaa2_dev_rx_parse(mbuf,
-			(void *)((size_t)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd))
-			 + DPAA2_FD_PTA_SIZE));
+			(void *)((size_t)iova_addr + DPAA2_FD_PTA_SIZE));
 
 	DPAA2_PMD_DP_DEBUG("to mbuf - mbuf =%p, mbuf->buf_addr =%p, off = %d,"
 		"fd_off=%d fd =%" PRIx64 ", meta = %d  bpid =%d, len=%d\n",
@@ -518,7 +517,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int ret, num_rx = 0, pull_size;
 	uint8_t pending, status;
 	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
 	struct qbman_pull_desc pulldesc;
 	struct queue_storage_info_t *q_storage = dpaa2_q->q_storage;
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
@@ -617,12 +616,15 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 		fd = qbman_result_DQ_fd(dq_storage);
 
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
 		if (dpaa2_svr_family != SVR_LX2160A) {
-			next_fd = qbman_result_DQ_fd(dq_storage + 1);
+			const struct qbman_fd *next_fd =
+				qbman_result_DQ_fd(dq_storage + 1);
 			/* Prefetch Annotation address for the parse results */
-			rte_prefetch0((void *)(size_t)(DPAA2_GET_FD_ADDR(
-				      next_fd) + DPAA2_FD_PTA_SIZE + 16));
+			rte_prefetch0(DPAA2_IOVA_TO_VADDR((DPAA2_GET_FD_ADDR(
+				next_fd) + DPAA2_FD_PTA_SIZE + 16)));
 		}
+#endif
 
 		if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
 			bufs[num_rx] = eth_sg_fd_to_mbuf(fd, eth_data->port_id);
@@ -753,7 +755,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int ret, num_rx = 0, next_pull = nb_pkts, num_pulled;
 	uint8_t pending, status;
 	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
 	struct qbman_pull_desc pulldesc;
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
 
@@ -819,11 +821,19 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			}
 			fd = qbman_result_DQ_fd(dq_storage);
 
-			next_fd = qbman_result_DQ_fd(dq_storage + 1);
-			/* Prefetch Annotation address for the parse results */
-			rte_prefetch0(
-				(void *)(size_t)(DPAA2_GET_FD_ADDR(next_fd)
-					+ DPAA2_FD_PTA_SIZE + 16));
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+			if (dpaa2_svr_family != SVR_LX2160A) {
+				const struct qbman_fd *next_fd =
+					qbman_result_DQ_fd(dq_storage + 1);
+
+				/* Prefetch Annotation address for the parse
+				 * results.
+				 */
+				rte_prefetch0((DPAA2_IOVA_TO_VADDR(
+					DPAA2_GET_FD_ADDR(next_fd) +
+					DPAA2_FD_PTA_SIZE + 16)));
+			}
+#endif
 
 			if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
 				bufs[num_rx] = eth_sg_fd_to_mbuf(fd,
-- 
2.17.1
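
As an illustration of the build-time gating used above, here is a sketch with made-up
EXAMPLE_* and example_* names (only rte_prefetch0() is the real DPDK helper): the
prefetch is worthwhile only when the IOVA-to-VA conversion is a plain cast; in the
physical-IOVA build the conversion would need a phys-to-virt lookup, so the hint is
compiled out entirely.

#include <stdint.h>
#include <rte_prefetch.h>

#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
/* Virtual addressing: IOVA == VA, the conversion is a free cast,
 * so prefetching the next frame's annotation is a net win.
 */
#define EXAMPLE_IOVA_TO_VADDR(iova) ((void *)(uintptr_t)(iova))

static inline void
example_prefetch_annotation(uint64_t next_frame_iova, uint64_t pta_size)
{
	rte_prefetch0(EXAMPLE_IOVA_TO_VADDR(next_frame_iova + pta_size + 16));
}
#else
/* Physical addressing: converting the IOVA to a VA costs more than the
 * prefetch saves, so the helper compiles away to nothing.
 */
static inline void
example_prefetch_annotation(uint64_t next_frame_iova, uint64_t pta_size)
{
	(void)next_frame_iova;
	(void)pta_size;
}
#endif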


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v3 4/8] net/dpaa2: add default Rx params in devinfo
  2020-05-04 12:41   ` [dpdk-dev] [PATCH v3 0/8] NXP DPAAx fixes and enhancements Hemant Agrawal
                       ` (2 preceding siblings ...)
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 3/8] net/dpaa2: do not prefetch annotation for physical mode Hemant Agrawal
@ 2020-05-04 12:41     ` Hemant Agrawal
  2020-05-06 21:29       ` Ferruh Yigit
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 5/8] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
                       ` (4 subsequent siblings)
  8 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-04 12:41 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Hemant Agrawal

This patch adds default/preferred rx/tx params in dev info,
especially the advertised burst size.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c   |  4 ++++
 drivers/net/dpaa/dpaa_ethdev.h   |  1 +
 drivers/net/dpaa2/dpaa2_ethdev.c | 16 ++++++++++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h |  2 ++
 4 files changed, 23 insertions(+)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 5f81968d80..56eb5ec47c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -363,6 +363,10 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 					dev_tx_offloads_nodis;
 	dev_info->default_rxportconf.burst_size = DPAA_DEF_RX_BURST_SIZE;
 	dev_info->default_txportconf.burst_size = DPAA_DEF_TX_BURST_SIZE;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_txportconf.ring_size = CGR_TX_CGR_THRESH;
+	dev_info->default_rxportconf.ring_size = CGR_RX_PERFQ_THRESH;
 
 	return 0;
 }
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index da06f1faa1..af9fc2105d 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -42,6 +42,7 @@
 
 /* RX queue tail drop threshold (CGR Based) in frame count */
 #define CGR_RX_PERFQ_THRESH 256
+#define CGR_TX_CGR_THRESH 512
 
 /*max mac filter for memac(8) including primary mac addr*/
 #define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 4fc550a885..b70a2ac01c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -275,6 +275,22 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_vmdq_pools = ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
 
+	dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
+	/* same is rx size for best perf */
+	dev_info->default_txportconf.burst_size = dpaa2_dqrr_size;
+
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_txportconf.ring_size = CONG_ENTER_TX_THRESHOLD;
+	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
+
+	if (dpaa2_svr_family == SVR_LX2160A) {
+		dev_info->speed_capa |= ETH_LINK_SPEED_25G |
+				ETH_LINK_SPEED_40G |
+				ETH_LINK_SPEED_50G |
+				ETH_LINK_SPEED_100G;
+	}
+
 	return 0;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 31dca8c7b6..2c49a7f01f 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -24,6 +24,8 @@
 #define MAX_TX_QUEUES		16
 #define MAX_DPNI		8
 
+#define DPAA2_RX_DEFAULT_NBDESC 512
+
 /*default tc to be used for ,congestion, distribution etc configuration. */
 #define DPAA2_DEF_TC		0
 
-- 
2.17.1
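
To show why advertising these values matters, here is a hypothetical application-side
snippet (standard ethdev API assumed; the helper name and the fallback constants are
arbitrary): an application with no strong preference of its own can pick up the PMD's
preferred ring and burst sizes from dev_info instead of hard-coding them.

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

static int
setup_rx_with_preferred_params(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	uint16_t nb_rxd, burst;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* Use the PMD's preferred sizes when it advertises them (non-zero) */
	nb_rxd = dev_info.default_rxportconf.ring_size ?
		 dev_info.default_rxportconf.ring_size : 1024;
	burst = dev_info.default_rxportconf.burst_size ?
		dev_info.default_rxportconf.burst_size : 32;

	printf("port %u: ring_size=%u burst_size=%u nb_queues=%u\n",
	       port_id, (unsigned int)nb_rxd, (unsigned int)burst,
	       (unsigned int)dev_info.default_rxportconf.nb_queues);

	return rte_eth_rx_queue_setup(port_id, 0, nb_rxd,
				      rte_eth_dev_socket_id(port_id),
				      NULL, mb_pool);
}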


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v3 5/8] drivers: dpaa2 enhance portal alloc failure log
  2020-05-04 12:41   ` [dpdk-dev] [PATCH v3 0/8] NXP DPAAx fixes and enhancements Hemant Agrawal
                       ` (3 preceding siblings ...)
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 4/8] net/dpaa2: add default Rx params in devinfo Hemant Agrawal
@ 2020-05-04 12:41     ` Hemant Agrawal
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 6/8] net/dpaa2: support UDP dst port based muxing Hemant Agrawal
                       ` (3 subsequent siblings)
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-04 12:41 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

The change adds printing of the thread id when a portal allocation
failure occurs.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  8 ++++++--
 drivers/event/dpaa2/dpaa2_eventdev.c        |  8 ++++++--
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c    | 12 +++++++++---
 drivers/net/dpaa2/dpaa2_ethdev.c            |  4 +++-
 drivers/net/dpaa2/dpaa2_rxtx.c              | 16 ++++++++++++----
 drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c       |  8 ++++++--
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c         | 12 +++++++++---
 7 files changed, 51 insertions(+), 17 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 0919f3bf47..256a9a1955 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1459,7 +1459,9 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 	if (!DPAA2_PER_LCORE_DPIO) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_SEC_ERR("Failure in affining portal");
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1641,7 +1643,9 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 	if (!DPAA2_PER_LCORE_DPIO) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_SEC_ERR("Failure in affining portal");
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 2be6e12f66..a196ad4c64 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -74,7 +74,9 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
 		/* Affine current thread context to a qman portal */
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
-			DPAA2_EVENTDEV_ERR("Failure in affining portal");
+			DPAA2_EVENTDEV_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -273,7 +275,9 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[],
 		/* Affine current thread context to a qman portal */
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
-			DPAA2_EVENTDEV_ERR("Failure in affining portal");
+			DPAA2_EVENTDEV_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 48887beb7e..fa9b53e64d 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -69,7 +69,9 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_MEMPOOL_ERR("Failure in affining portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			goto err1;
 		}
 	}
@@ -198,7 +200,9 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
-			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return;
 		}
 	}
@@ -317,7 +321,9 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
-			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return ret;
 		}
 	}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index b70a2ac01c..817e9e0316 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -903,7 +903,9 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return -EINVAL;
 		}
 	}
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 89a8221cb8..630f8c73c7 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -762,7 +762,9 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal\n");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -882,7 +884,9 @@ uint16_t dpaa2_dev_tx_conf(void *queue)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal\n");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1021,7 +1025,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1282,7 +1288,9 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
index 997d1c8739..7c21c6a528 100644
--- a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
+++ b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
@@ -70,7 +70,9 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_CMDIF_ERR("Failure in affining portal\n");
+			DPAA2_CMDIF_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -133,7 +135,9 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_CMDIF_ERR("Failure in affining portal\n");
+			DPAA2_CMDIF_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c905954004..d5202d6522 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -666,7 +666,9 @@ dpdmai_dev_enqueue_multi(struct dpaa2_dpdmai_dev *dpdmai_dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -788,7 +790,9 @@ dpdmai_dev_dequeue_multijob_prefetch(
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -929,7 +933,9 @@ dpdmai_dev_dequeue_multijob_no_prefetch(
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v3 6/8] net/dpaa2: support UDP dst port based muxing
  2020-05-04 12:41   ` [dpdk-dev] [PATCH v3 0/8] NXP DPAAx fixes and enhancements Hemant Agrawal
                       ` (4 preceding siblings ...)
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 5/8] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
@ 2020-05-04 12:41     ` Hemant Agrawal
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 7/8] net/dpaa2: reduce prints in queue count functions Hemant Agrawal
                       ` (2 subsequent siblings)
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-04 12:41 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

This change adds DPDMUX support to bifurcate traffic on
the basis of UDP destination port.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index af90adb828..9ac8806faf 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2020 NXP
  */
 
 #include <sys/queue.h>
@@ -99,6 +99,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	case RTE_FLOW_ITEM_TYPE_IPV4:
 	{
 		const struct rte_flow_item_ipv4 *spec;
+
 		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_IP;
 		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_IP_PROTO;
 		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
@@ -113,10 +114,31 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	}
 	break;
 
+	case RTE_FLOW_ITEM_TYPE_UDP:
+	{
+		const struct rte_flow_item_udp *spec;
+		uint16_t udp_dst_port;
+
+		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_UDP;
+		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
+		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
+		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
+		kg_cfg.num_extracts = 1;
+
+		spec = (const struct rte_flow_item_udp *)pattern[0]->spec;
+		udp_dst_port = rte_constant_bswap16(spec->hdr.dst_port);
+		memcpy((void *)key_iova, (const void *)&udp_dst_port,
+							sizeof(rte_be16_t));
+		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
+		key_size = sizeof(uint16_t);
+	}
+	break;
+
 	case RTE_FLOW_ITEM_TYPE_ETH:
 	{
 		const struct rte_flow_item_eth *spec;
 		uint16_t eth_type;
+
 		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_ETH;
 		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_ETH_TYPE;
 		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-- 
2.17.1
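
A rough sketch of how this could be exercised from the application side. The exact
rte_pmd_dpaa2_mux_flow_create() prototype and the action encoding (a VF-style action
carrying the destination interface id) are assumptions to be checked against
rte_pmd_dpaa2.h in the tree; only the UDP destination-port pattern mirrors what the
parser added here consumes.

#include <stdint.h>
#include <rte_flow.h>
#include <rte_byteorder.h>
#include <rte_pmd_dpaa2.h>

/* Steer frames with the given UDP destination port to dpdmux interface
 * 'dest_if'; everything else follows the dpdmux default rule.
 */
static struct rte_flow *
mux_steer_udp_dport(uint32_t dpdmux_id, uint16_t udp_dst_port, uint16_t dest_if)
{
	struct rte_flow_item_udp spec = {
		.hdr.dst_port = rte_cpu_to_be_16(udp_dst_port),
	};
	struct rte_flow_item_udp mask = {
		.hdr.dst_port = RTE_BE16(0xffff),	/* match the full port */
	};
	struct rte_flow_action_vf vf = { .id = dest_if };	/* assumed encoding */
	struct rte_flow_item item = {
		.type = RTE_FLOW_ITEM_TYPE_UDP,
		.spec = &spec,
		.mask = &mask,
	};
	struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_VF,
		.conf = &vf,
	};
	struct rte_flow_item *pattern[] = { &item };
	struct rte_flow_action *actions[] = { &action };

	return rte_pmd_dpaa2_mux_flow_create(dpdmux_id, pattern, actions);
}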


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v3 7/8] net/dpaa2: reduce prints in queue count functions
  2020-05-04 12:41   ` [dpdk-dev] [PATCH v3 0/8] NXP DPAAx fixes and enhancements Hemant Agrawal
                       ` (5 preceding siblings ...)
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 6/8] net/dpaa2: support UDP dst port based muxing Hemant Agrawal
@ 2020-05-04 12:41     ` Hemant Agrawal
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 8/8] net/dpaa2: use cong group id for multiple tcs Hemant Agrawal
  2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-04 12:41 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Hemant Agrawal

Change these prints to DP (datapath) debug level, as they are impacting l3fwd-power apps.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 817e9e0316..fd766a2184 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -898,8 +898,6 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	struct qbman_fq_query_np_rslt state;
 	uint32_t frame_cnt = 0;
 
-	PMD_INIT_FUNC_TRACE();
-
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
@@ -915,7 +913,7 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
 		frame_cnt = qbman_fq_state_frame_count(&state);
-		DPAA2_PMD_DEBUG("RX frame count for q(%d) is %u",
+		DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
 				rx_queue_id, frame_cnt);
 	}
 	return frame_cnt;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v3 8/8] net/dpaa2: use cong group id for multiple tcs
  2020-05-04 12:41   ` [dpdk-dev] [PATCH v3 0/8] NXP DPAAx fixes and enhancements Hemant Agrawal
                       ` (6 preceding siblings ...)
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 7/8] net/dpaa2: reduce prints in queue count functions Hemant Agrawal
@ 2020-05-04 12:41     ` Hemant Agrawal
  2020-05-06 21:38       ` Ferruh Yigit
  2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
  8 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-04 12:41 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Flow id may not work when used with multiple TCs.
The CGID will be provided in the INDEX field.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index fd766a2184..1bab3b064c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -676,7 +676,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 						DPNI_CP_CONGESTION_GROUP,
 						DPNI_QUEUE_RX,
 						dpaa2_q->tc_index,
-						flow_id, &taildrop);
+						dpaa2_q->cgid, &taildrop);
 		} else {
 			/*enabling per rx queue congestion control */
 			taildrop.threshold = CONG_THRESHOLD_RX_BYTES_Q;
@@ -703,7 +703,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
 					DPNI_CP_CONGESTION_GROUP, DPNI_QUEUE_RX,
 					dpaa2_q->tc_index,
-					flow_id, &taildrop);
+					dpaa2_q->cgid, &taildrop);
 		} else {
 			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
 					DPNI_CP_QUEUE, DPNI_QUEUE_RX,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/8] bus/fslmc: fix dereferencing null pointer
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 1/8] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
@ 2020-05-06 21:08       ` Ferruh Yigit
  2020-05-06 21:09         ` Ferruh Yigit
  2020-05-06 21:14       ` Ferruh Yigit
  1 sibling, 1 reply; 109+ messages in thread
From: Ferruh Yigit @ 2020-05-06 21:08 UTC (permalink / raw)
  To: Hemant Agrawal, dev; +Cc: stable, Apeksha Gupta

On 5/4/2020 1:41 PM, Hemant Agrawal wrote:
> From: Apeksha Gupta <apeksha.gupta@nxp.com>
> 
> This patch fixes the NXP internal Coverity reported
> null pointer dereferencing issue.

What is the Coverity issue number? Can you please put it into the commit log in the
"Coverity issue: ###" format.

> 
> Fixes: 6fef517e17cf ("bus/fslmc: add qman HW fq query count API")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/8] bus/fslmc: fix dereferencing null pointer
  2020-05-06 21:08       ` Ferruh Yigit
@ 2020-05-06 21:09         ` Ferruh Yigit
  0 siblings, 0 replies; 109+ messages in thread
From: Ferruh Yigit @ 2020-05-06 21:09 UTC (permalink / raw)
  To: Hemant Agrawal, dev; +Cc: stable, Apeksha Gupta

On 5/6/2020 10:08 PM, Ferruh Yigit wrote:
> On 5/4/2020 1:41 PM, Hemant Agrawal wrote:
>> From: Apeksha Gupta <apeksha.gupta@nxp.com>
>>
>> This patch fixes the NXP internal Coverity reported
>> null pointer dereferencing issue.
> 
> What is the coverity issue number? Can you please put it into commit log as
> "Coverity issue: ###" format.

Ahh, it says internal Coverity; we need the number only for public Coverity. Isn't the
same issue reported on public Coverity? If so, we need that number.

> 
>>
>> Fixes: 6fef517e17cf ("bus/fslmc: add qman HW fq query count API")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> 


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/8] bus/fslmc: fix dereferencing null pointer
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 1/8] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
  2020-05-06 21:08       ` Ferruh Yigit
@ 2020-05-06 21:14       ` Ferruh Yigit
  1 sibling, 0 replies; 109+ messages in thread
From: Ferruh Yigit @ 2020-05-06 21:14 UTC (permalink / raw)
  To: Hemant Agrawal, dev; +Cc: stable, Apeksha Gupta

On 5/4/2020 1:41 PM, Hemant Agrawal wrote:
> From: Apeksha Gupta <apeksha.gupta@nxp.com>
> 
> This patch fixes the NXP internal Coverity reported
> null pointer dereferencing issue.
> 
> Fixes: 6fef517e17cf ("bus/fslmc: add qman HW fq query count API")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
>  drivers/bus/fslmc/qbman/qbman_debug.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
> index 0bb2ce880f..34374ae4b6 100644
> --- a/drivers/bus/fslmc/qbman/qbman_debug.c
> +++ b/drivers/bus/fslmc/qbman/qbman_debug.c
> @@ -20,26 +20,27 @@ struct qbman_fq_query_desc {
>  	uint8_t verb;
>  	uint8_t reserved[3];
>  	uint32_t fqid;
> -	uint8_t reserved2[57];
> +	uint8_t reserved2[56];

Is decreasing the 'reserved2' size related to the null pointer de-referencing? This
looks unrelated.

>  };
>  
>  int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
>  			 struct qbman_fq_query_np_rslt *r)
>  {
>  	struct qbman_fq_query_desc *p;
> +	struct qbman_fq_query_np_rslt *var;
>  
>  	p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
>  	if (!p)
>  		return -EBUSY;
>  
>  	p->fqid = fqid;
> -	*r = *(struct qbman_fq_query_np_rslt *)qbman_swp_mc_complete(s, p,
> -						QBMAN_FQ_QUERY_NP);
> -	if (!r) {
> +	var = qbman_swp_mc_complete(s, p, QBMAN_FQ_QUERY_NP);
> +	if (!var) {
>  		pr_err("qbman: Query FQID %d NP fields failed, no response\n",
>  		       fqid);
>  		return -EIO;
>  	}
> +	*r = *var;
>  
>  	/* Decode the outcome */
>  	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY_NP);
> 


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/8] net/dpaa2: add default Rx params in devinfo
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 4/8] net/dpaa2: add default Rx params in devinfo Hemant Agrawal
@ 2020-05-06 21:29       ` Ferruh Yigit
  2020-05-07  5:35         ` Hemant Agrawal (OSS)
  0 siblings, 1 reply; 109+ messages in thread
From: Ferruh Yigit @ 2020-05-06 21:29 UTC (permalink / raw)
  To: Hemant Agrawal, dev

On 5/4/2020 1:41 PM, Hemant Agrawal wrote:
> This patch adds default/preferred rx/tx params in dev info,
> especially the advertised burst size.
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  drivers/net/dpaa/dpaa_ethdev.c   |  4 ++++
>  drivers/net/dpaa/dpaa_ethdev.h   |  1 +
>  drivers/net/dpaa2/dpaa2_ethdev.c | 16 ++++++++++++++++
>  drivers/net/dpaa2/dpaa2_ethdev.h |  2 ++
>  4 files changed, 23 insertions(+)
> 
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
> index 5f81968d80..56eb5ec47c 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -363,6 +363,10 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
>  					dev_tx_offloads_nodis;
>  	dev_info->default_rxportconf.burst_size = DPAA_DEF_RX_BURST_SIZE;
>  	dev_info->default_txportconf.burst_size = DPAA_DEF_TX_BURST_SIZE;
> +	dev_info->default_rxportconf.nb_queues = 1;
> +	dev_info->default_txportconf.nb_queues = 1;
> +	dev_info->default_txportconf.ring_size = CGR_TX_CGR_THRESH;
> +	dev_info->default_rxportconf.ring_size = CGR_RX_PERFQ_THRESH;
>  
>  	return 0;
>  }
> diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
> index da06f1faa1..af9fc2105d 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.h
> +++ b/drivers/net/dpaa/dpaa_ethdev.h
> @@ -42,6 +42,7 @@
>  
>  /* RX queue tail drop threshold (CGR Based) in frame count */
>  #define CGR_RX_PERFQ_THRESH 256
> +#define CGR_TX_CGR_THRESH 512
>  
>  /*max mac filter for memac(8) including primary mac addr*/
>  #define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index 4fc550a885..b70a2ac01c 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -275,6 +275,22 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  	dev_info->max_vmdq_pools = ETH_16_POOLS;
>  	dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
>  
> +	dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
> +	/* same is rx size for best perf */
> +	dev_info->default_txportconf.burst_size = dpaa2_dqrr_size;
> +
> +	dev_info->default_rxportconf.nb_queues = 1;
> +	dev_info->default_txportconf.nb_queues = 1;
> +	dev_info->default_txportconf.ring_size = CONG_ENTER_TX_THRESHOLD;
> +	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
> +
> +	if (dpaa2_svr_family == SVR_LX2160A) {
> +		dev_info->speed_capa |= ETH_LINK_SPEED_25G |
> +				ETH_LINK_SPEED_40G |
> +				ETH_LINK_SPEED_50G |
> +				ETH_LINK_SPEED_100G;
> +	}

'speed_capa' is not a default param, but anyway the "Speed capabilities" feature
of the PMD seems to be marked as 'P'; does it change with this update? What is missing
for full support?

> +
>  	return 0;
>  }
>  
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
> index 31dca8c7b6..2c49a7f01f 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.h
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.h
> @@ -24,6 +24,8 @@
>  #define MAX_TX_QUEUES		16
>  #define MAX_DPNI		8
>  
> +#define DPAA2_RX_DEFAULT_NBDESC 512
> +
>  /*default tc to be used for ,congestion, distribution etc configuration. */
>  #define DPAA2_DEF_TC		0
>  
> 


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v3 8/8] net/dpaa2: use cong group id for multiple tcs
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 8/8] net/dpaa2: use cong group id for multiple tcs Hemant Agrawal
@ 2020-05-06 21:38       ` Ferruh Yigit
  2020-05-07  5:37         ` Hemant Agrawal (OSS)
  0 siblings, 1 reply; 109+ messages in thread
From: Ferruh Yigit @ 2020-05-06 21:38 UTC (permalink / raw)
  To: Hemant Agrawal, dev; +Cc: Jun Yang

On 5/4/2020 1:41 PM, Hemant Agrawal wrote:
> From: Jun Yang <jun.yang@nxp.com>
> 
> Flow id may not work when used with multiple TCs.
> The CGID will be provided in the INDEX field.

Hi Jun,

Can you please provide more information in the commit log: why this change is done,
is it to fix something, and if so, what is broken with the original code and why using
"cong group id" helps instead of "flow_id", etc.

Thanks,
ferruh

> 
> Signed-off-by: Jun Yang <jun.yang@nxp.com>
> ---
>  drivers/net/dpaa2/dpaa2_ethdev.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index fd766a2184..1bab3b064c 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -676,7 +676,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  						DPNI_CP_CONGESTION_GROUP,
>  						DPNI_QUEUE_RX,
>  						dpaa2_q->tc_index,
> -						flow_id, &taildrop);
> +						dpaa2_q->cgid, &taildrop);
>  		} else {
>  			/*enabling per rx queue congestion control */
>  			taildrop.threshold = CONG_THRESHOLD_RX_BYTES_Q;
> @@ -703,7 +703,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
>  					DPNI_CP_CONGESTION_GROUP, DPNI_QUEUE_RX,
>  					dpaa2_q->tc_index,
> -					flow_id, &taildrop);
> +					dpaa2_q->cgid, &taildrop);
>  		} else {
>  			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
>  					DPNI_CP_QUEUE, DPNI_QUEUE_RX,
> 


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/8] net/dpaa2: add default Rx params in devinfo
  2020-05-06 21:29       ` Ferruh Yigit
@ 2020-05-07  5:35         ` Hemant Agrawal (OSS)
  0 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal (OSS) @ 2020-05-07  5:35 UTC (permalink / raw)
  To: Ferruh Yigit, dev

Hi Ferruh,

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, May 7, 2020 3:00 AM
> To: Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org
> Subject: Re: [PATCH v3 4/8] net/dpaa2: add default Rx params in devinfo
> 
> On 5/4/2020 1:41 PM, Hemant Agrawal wrote:
> > This patch adds default/preferred rx/tx params in dev info, especially
> > the advertised burst size.
> >
> > +	if (dpaa2_svr_family == SVR_LX2160A) {
> > +		dev_info->speed_capa |= ETH_LINK_SPEED_25G |
> > +				ETH_LINK_SPEED_40G |
> > +				ETH_LINK_SPEED_50G |
> > +				ETH_LINK_SPEED_100G;
> > +	}
> 
> 'speed_capa' is not a default param, but anyway the "Speed capabilities"
> feature of the PMD seems to be marked as 'P'; does it change with this update?
> What is missing for full support?
> 
[Hemant]  I missed updating the docs. I will update the doc to Y now.


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v3 8/8] net/dpaa2: use cong group id for multiple tcs
  2020-05-06 21:38       ` Ferruh Yigit
@ 2020-05-07  5:37         ` Hemant Agrawal (OSS)
  0 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal (OSS) @ 2020-05-07  5:37 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: Jun Yang

Hi Ferruh,

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Ferruh Yigit
> Sent: Thursday, May 7, 2020 3:09 AM
> To: Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org
> Cc: Jun Yang <jun.yang@nxp.com>
> Subject: Re: [dpdk-dev] [PATCH v3 8/8] net/dpaa2: use cong group id for
> multiple tcs
> 
> On 5/4/2020 1:41 PM, Hemant Agrawal wrote:
> > From: Jun Yang <jun.yang@nxp.com>
> >
> > Flow id may not work when used with multiple TCs.
> > The CGID will be provided in the INDEX field.
> 
> Hi Jun,
> 
> Can you please provide more information in the commit log: why this change is
> done, is it to fix something, and if so, what is broken with the original code and
> why using "cong group id" helps instead of "flow_id", etc.
> 
[Hemant]  Yes, this should be a bug fix with a proper explanation. I will send a v2 for it.

> Thanks,
> ferruh
> 
> >
> > Signed-off-by: Jun Yang <jun.yang@nxp.com>
> > ---
> >  drivers/net/dpaa2/dpaa2_ethdev.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c
> > b/drivers/net/dpaa2/dpaa2_ethdev.c
> > index fd766a2184..1bab3b064c 100644
> > --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> > +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> > @@ -676,7 +676,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev
> *dev,
> >
> 	DPNI_CP_CONGESTION_GROUP,
> >  						DPNI_QUEUE_RX,
> >  						dpaa2_q->tc_index,
> > -						flow_id, &taildrop);
> > +						dpaa2_q->cgid, &taildrop);
> >  		} else {
> >  			/*enabling per rx queue congestion control */
> >  			taildrop.threshold = CONG_THRESHOLD_RX_BYTES_Q;
> @@ -703,7 +703,7
> > @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv-
> >token,
> >  					DPNI_CP_CONGESTION_GROUP,
> DPNI_QUEUE_RX,
> >  					dpaa2_q->tc_index,
> > -					flow_id, &taildrop);
> > +					dpaa2_q->cgid, &taildrop);
> >  		} else {
> >  			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv-
> >token,
> >  					DPNI_CP_QUEUE, DPNI_QUEUE_RX,
> >


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements
  2020-05-04 12:41   ` [dpdk-dev] [PATCH v3 0/8] NXP DPAAx fixes and enhancements Hemant Agrawal
                       ` (7 preceding siblings ...)
  2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 8/8] net/dpaa2: use cong group id for multiple tcs Hemant Agrawal
@ 2020-05-07 10:46     ` Hemant Agrawal
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 1/9] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
                         ` (10 more replies)
  8 siblings, 11 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-07 10:46 UTC (permalink / raw)
  To: dev, ferruh.yigit

v4: addressed the review comments
v3: limited the patches to avoid ABI breakage.

Apeksha Gupta (1):
  bus/fslmc: fix dereferencing null pointer

Hemant Agrawal (3):
  net/dpaa2: add default values for Rx params in info
  net/dpaa2: reduce prints in queue count functions
  bus/fslmc: fix the size of qman fq desc

Jun Yang (1):
  net/dpaa2: fix cong group id for multiple tcs

Nipun Gupta (3):
  net/dpaa2: do not prefetch annotation for physical mode
  drivers: dpaa2 enhance portal alloc failure log
  net/dpaa2: support UDP dst port based muxing

Rohit Raj (1):
  net/dpaa2: fix 10g port negotiation issue

 doc/guides/nics/features/dpaa2.ini          |  2 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h     |  6 +--
 drivers/bus/fslmc/qbman/qbman_debug.c       |  9 ++--
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  8 ++-
 drivers/event/dpaa2/dpaa2_eventdev.c        |  8 ++-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c    | 12 +++--
 drivers/net/dpaa/dpaa_ethdev.c              |  4 ++
 drivers/net/dpaa/dpaa_ethdev.h              |  1 +
 drivers/net/dpaa2/dpaa2_ethdev.c            | 32 ++++++++----
 drivers/net/dpaa2/dpaa2_ethdev.h            |  2 +
 drivers/net/dpaa2/dpaa2_mux.c               | 24 ++++++++-
 drivers/net/dpaa2/dpaa2_rxtx.c              | 56 ++++++++++++++-------
 drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c       |  8 ++-
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c         | 12 +++--
 14 files changed, 135 insertions(+), 49 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v4 1/9] bus/fslmc: fix dereferencing null pointer
  2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
@ 2020-05-07 10:46       ` Hemant Agrawal
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 2/9] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
                         ` (9 subsequent siblings)
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-07 10:46 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: stable, Apeksha Gupta

From: Apeksha Gupta <apeksha.gupta@nxp.com>

This patch fixes the NXP internal Coverity reported
null pointer dereferencing issue.

Fixes: 6fef517e17cf ("bus/fslmc: add qman HW fq query count API")
Cc: stable@dpdk.org

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 0bb2ce880f..4cd0923acb 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -27,19 +27,20 @@ int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
 			 struct qbman_fq_query_np_rslt *r)
 {
 	struct qbman_fq_query_desc *p;
+	struct qbman_fq_query_np_rslt *var;
 
 	p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->fqid = fqid;
-	*r = *(struct qbman_fq_query_np_rslt *)qbman_swp_mc_complete(s, p,
-						QBMAN_FQ_QUERY_NP);
-	if (!r) {
+	var = qbman_swp_mc_complete(s, p, QBMAN_FQ_QUERY_NP);
+	if (!var) {
 		pr_err("qbman: Query FQID %d NP fields failed, no response\n",
 		       fqid);
 		return -EIO;
 	}
+	*r = *var;
 
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY_NP);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v4 2/9] net/dpaa2: fix 10g port negotiation issue
  2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 1/9] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
@ 2020-05-07 10:46       ` Hemant Agrawal
  2020-05-07 14:36         ` Ferruh Yigit
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 3/9] net/dpaa2: do not prefetch annotation for physical mode Hemant Agrawal
                         ` (8 subsequent siblings)
  10 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-07 10:46 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: stable, Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Fixed the 10G port negotiation issue with another 10G/non-10G port.
Initialize the port link speed.

Fixes: c5acbb5ea20e ("net/dpaa2: support link status event")
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 2cde55e7cc..4fc550a885 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -553,9 +553,6 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
 		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
 
-	/* update the current status */
-	dpaa2_dev_link_update(dev, 0);
-
 	return 0;
 }
 
@@ -1757,6 +1754,7 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	/* changing tx burst function to start enqueues */
 	dev->tx_pkt_burst = dpaa2_dev_tx;
 	dev->data->dev_link.link_status = state.up;
+	dev->data->dev_link.link_speed = state.rate;
 
 	if (state.up)
 		DPAA2_PMD_INFO("Port %d Link is Up", dev->data->port_id);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v4 3/9] net/dpaa2: do not prefetch annotation for physical mode
  2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 1/9] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 2/9] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
@ 2020-05-07 10:46       ` Hemant Agrawal
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 4/9] net/dpaa2: add default values for Rx params in info Hemant Agrawal
                         ` (7 subsequent siblings)
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-07 10:46 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

When the IOVA is a physical address, do not prefetch the annotation
of the next frame, as there is a cost involved in converting the
physical address to a virtual address.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  6 ++--
 drivers/net/dpaa2/dpaa2_rxtx.c          | 40 +++++++++++++++----------
 2 files changed, 28 insertions(+), 18 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 44d3d49c7a..368fe7c688 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2020 NXP
  *
  */
 
@@ -395,8 +395,8 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
-#define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
-#define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
+#define DPAA2_VADDR_TO_IOVA(_vaddr) (phys_addr_t)(_vaddr)
+#define DPAA2_IOVA_TO_VADDR(_iova) (void *)(_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
 
 #endif /* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 703f0549ad..89a8221cb8 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2020 NXP
  *
  */
 
@@ -324,8 +324,8 @@ static inline struct rte_mbuf *__rte_hot
 eth_fd_to_mbuf(const struct qbman_fd *fd,
 	       int port_id)
 {
-	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(
-		DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd)),
+	void *iova_addr = DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(iova_addr,
 		     rte_dpaa2_bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size);
 
 	/* need to repopulated some of the fields,
@@ -350,8 +350,7 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 		dpaa2_dev_rx_parse_new(mbuf, fd);
 	else
 		mbuf->packet_type = dpaa2_dev_rx_parse(mbuf,
-			(void *)((size_t)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd))
-			 + DPAA2_FD_PTA_SIZE));
+			(void *)((size_t)iova_addr + DPAA2_FD_PTA_SIZE));
 
 	DPAA2_PMD_DP_DEBUG("to mbuf - mbuf =%p, mbuf->buf_addr =%p, off = %d,"
 		"fd_off=%d fd =%" PRIx64 ", meta = %d  bpid =%d, len=%d\n",
@@ -518,7 +517,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int ret, num_rx = 0, pull_size;
 	uint8_t pending, status;
 	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
 	struct qbman_pull_desc pulldesc;
 	struct queue_storage_info_t *q_storage = dpaa2_q->q_storage;
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
@@ -617,12 +616,15 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 		fd = qbman_result_DQ_fd(dq_storage);
 
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
 		if (dpaa2_svr_family != SVR_LX2160A) {
-			next_fd = qbman_result_DQ_fd(dq_storage + 1);
+			const struct qbman_fd *next_fd =
+				qbman_result_DQ_fd(dq_storage + 1);
 			/* Prefetch Annotation address for the parse results */
-			rte_prefetch0((void *)(size_t)(DPAA2_GET_FD_ADDR(
-				      next_fd) + DPAA2_FD_PTA_SIZE + 16));
+			rte_prefetch0(DPAA2_IOVA_TO_VADDR((DPAA2_GET_FD_ADDR(
+				next_fd) + DPAA2_FD_PTA_SIZE + 16)));
 		}
+#endif
 
 		if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
 			bufs[num_rx] = eth_sg_fd_to_mbuf(fd, eth_data->port_id);
@@ -753,7 +755,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int ret, num_rx = 0, next_pull = nb_pkts, num_pulled;
 	uint8_t pending, status;
 	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
 	struct qbman_pull_desc pulldesc;
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
 
@@ -819,11 +821,19 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			}
 			fd = qbman_result_DQ_fd(dq_storage);
 
-			next_fd = qbman_result_DQ_fd(dq_storage + 1);
-			/* Prefetch Annotation address for the parse results */
-			rte_prefetch0(
-				(void *)(size_t)(DPAA2_GET_FD_ADDR(next_fd)
-					+ DPAA2_FD_PTA_SIZE + 16));
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+			if (dpaa2_svr_family != SVR_LX2160A) {
+				const struct qbman_fd *next_fd =
+					qbman_result_DQ_fd(dq_storage + 1);
+
+				/* Prefetch Annotation address for the parse
+				 * results.
+				 */
+				rte_prefetch0((DPAA2_IOVA_TO_VADDR(
+					DPAA2_GET_FD_ADDR(next_fd) +
+					DPAA2_FD_PTA_SIZE + 16)));
+			}
+#endif
 
 			if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
 				bufs[num_rx] = eth_sg_fd_to_mbuf(fd,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v4 4/9] net/dpaa2: add default values for Rx params in info
  2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                         ` (2 preceding siblings ...)
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 3/9] net/dpaa2: do not prefetch annotation for physical mode Hemant Agrawal
@ 2020-05-07 10:46       ` Hemant Agrawal
  2020-05-07 14:30         ` Ferruh Yigit
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 5/9] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
                         ` (6 subsequent siblings)
  10 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-07 10:46 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Hemant Agrawal

This patch adds default/preferred rx/tx params in dev info,
especially the advertised burst size.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/nics/features/dpaa2.ini |  2 +-
 drivers/net/dpaa/dpaa_ethdev.c     |  4 ++++
 drivers/net/dpaa/dpaa_ethdev.h     |  1 +
 drivers/net/dpaa2/dpaa2_ethdev.c   | 16 ++++++++++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h   |  2 ++
 5 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/dpaa2.ini b/doc/guides/nics/features/dpaa2.ini
index 6ebbab4b80..c2214fbd50 100644
--- a/doc/guides/nics/features/dpaa2.ini
+++ b/doc/guides/nics/features/dpaa2.ini
@@ -4,7 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
-Speed capabilities   = P
+Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Queue start/stop     = Y
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 5f81968d80..56eb5ec47c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -363,6 +363,10 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 					dev_tx_offloads_nodis;
 	dev_info->default_rxportconf.burst_size = DPAA_DEF_RX_BURST_SIZE;
 	dev_info->default_txportconf.burst_size = DPAA_DEF_TX_BURST_SIZE;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_txportconf.ring_size = CGR_TX_CGR_THRESH;
+	dev_info->default_rxportconf.ring_size = CGR_RX_PERFQ_THRESH;
 
 	return 0;
 }
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index da06f1faa1..af9fc2105d 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -42,6 +42,7 @@
 
 /* RX queue tail drop threshold (CGR Based) in frame count */
 #define CGR_RX_PERFQ_THRESH 256
+#define CGR_TX_CGR_THRESH 512
 
 /*max mac filter for memac(8) including primary mac addr*/
 #define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 4fc550a885..b70a2ac01c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -275,6 +275,22 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_vmdq_pools = ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
 
+	dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
+	/* same is rx size for best perf */
+	dev_info->default_txportconf.burst_size = dpaa2_dqrr_size;
+
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_txportconf.ring_size = CONG_ENTER_TX_THRESHOLD;
+	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
+
+	if (dpaa2_svr_family == SVR_LX2160A) {
+		dev_info->speed_capa |= ETH_LINK_SPEED_25G |
+				ETH_LINK_SPEED_40G |
+				ETH_LINK_SPEED_50G |
+				ETH_LINK_SPEED_100G;
+	}
+
 	return 0;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 31dca8c7b6..2c49a7f01f 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -24,6 +24,8 @@
 #define MAX_TX_QUEUES		16
 #define MAX_DPNI		8
 
+#define DPAA2_RX_DEFAULT_NBDESC 512
+
 /*default tc to be used for ,congestion, distribution etc configuration. */
 #define DPAA2_DEF_TC		0
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v4 5/9] drivers: dpaa2 enhance portal alloc failure log
  2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                         ` (3 preceding siblings ...)
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 4/9] net/dpaa2: add default values for Rx params in info Hemant Agrawal
@ 2020-05-07 10:46       ` Hemant Agrawal
  2020-05-07 14:31         ` Ferruh Yigit
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 6/9] net/dpaa2: support UDP dst port based muxing Hemant Agrawal
                         ` (5 subsequent siblings)
  10 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-07 10:46 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

The change adds printing of the thread id when a portal allocation
failure occurs.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  8 ++++++--
 drivers/event/dpaa2/dpaa2_eventdev.c        |  8 ++++++--
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c    | 12 +++++++++---
 drivers/net/dpaa2/dpaa2_ethdev.c            |  4 +++-
 drivers/net/dpaa2/dpaa2_rxtx.c              | 16 ++++++++++++----
 drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c       |  8 ++++++--
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c         | 12 +++++++++---
 7 files changed, 51 insertions(+), 17 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 0919f3bf47..256a9a1955 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1459,7 +1459,9 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 	if (!DPAA2_PER_LCORE_DPIO) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_SEC_ERR("Failure in affining portal");
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1641,7 +1643,9 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 	if (!DPAA2_PER_LCORE_DPIO) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_SEC_ERR("Failure in affining portal");
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 2be6e12f66..a196ad4c64 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -74,7 +74,9 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
 		/* Affine current thread context to a qman portal */
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
-			DPAA2_EVENTDEV_ERR("Failure in affining portal");
+			DPAA2_EVENTDEV_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -273,7 +275,9 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[],
 		/* Affine current thread context to a qman portal */
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
-			DPAA2_EVENTDEV_ERR("Failure in affining portal");
+			DPAA2_EVENTDEV_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 48887beb7e..fa9b53e64d 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -69,7 +69,9 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_MEMPOOL_ERR("Failure in affining portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			goto err1;
 		}
 	}
@@ -198,7 +200,9 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
-			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return;
 		}
 	}
@@ -317,7 +321,9 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
-			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return ret;
 		}
 	}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index b70a2ac01c..817e9e0316 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -903,7 +903,9 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return -EINVAL;
 		}
 	}
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 89a8221cb8..630f8c73c7 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -762,7 +762,9 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal\n");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -882,7 +884,9 @@ uint16_t dpaa2_dev_tx_conf(void *queue)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal\n");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1021,7 +1025,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1282,7 +1288,9 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
index 997d1c8739..7c21c6a528 100644
--- a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
+++ b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
@@ -70,7 +70,9 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_CMDIF_ERR("Failure in affining portal\n");
+			DPAA2_CMDIF_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -133,7 +135,9 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_CMDIF_ERR("Failure in affining portal\n");
+			DPAA2_CMDIF_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c905954004..d5202d6522 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -666,7 +666,9 @@ dpdmai_dev_enqueue_multi(struct dpaa2_dpdmai_dev *dpdmai_dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -788,7 +790,9 @@ dpdmai_dev_dequeue_multijob_prefetch(
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -929,7 +933,9 @@ dpdmai_dev_dequeue_multijob_no_prefetch(
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v4 6/9] net/dpaa2: support UDP dst port based muxing
  2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                         ` (4 preceding siblings ...)
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 5/9] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
@ 2020-05-07 10:46       ` Hemant Agrawal
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 7/9] net/dpaa2: reduce prints in queue count functions Hemant Agrawal
                         ` (4 subsequent siblings)
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-07 10:46 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

This change adds DPDMUX support to bifurcate traffic on
the basis of UDP destination port.
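
For reference, below is a minimal sketch of how an application could program
such a rule. It is illustrative only: the dpdmux object id, the destination
interface id carried in the VF action and the UDP port value are placeholders,
and the exact prototype of rte_pmd_dpaa2_mux_flow_create() should be taken
from rte_pmd_dpaa2.h rather than from this sketch.

#include <rte_byteorder.h>
#include <rte_flow.h>
#include <rte_pmd_dpaa2.h>

/* Steer UDP packets with a given destination port to dpdmux interface 1
 * (placeholder); all other traffic keeps following the default path.
 */
static int
setup_udp_dst_mux(uint32_t dpdmux_id, uint16_t dst_port)
{
	struct rte_flow_item_udp spec = {
		.hdr.dst_port = rte_cpu_to_be_16(dst_port),
	};
	struct rte_flow_item_udp mask = {
		.hdr.dst_port = RTE_BE16(0xffff),
	};
	struct rte_flow_action_vf vf = { .id = 1 };	/* placeholder if-id */
	struct rte_flow_item item = {
		.type = RTE_FLOW_ITEM_TYPE_UDP,
		.spec = &spec,
		.mask = &mask,
	};
	struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_VF,
		.conf = &vf,
	};
	struct rte_flow_item *pattern[] = { &item };
	struct rte_flow_action *actions[] = { &action };

	if (rte_pmd_dpaa2_mux_flow_create(dpdmux_id, pattern, actions) == NULL)
		return -1;
	return 0;
}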

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index af90adb828..9ac8806faf 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2020 NXP
  */
 
 #include <sys/queue.h>
@@ -99,6 +99,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	case RTE_FLOW_ITEM_TYPE_IPV4:
 	{
 		const struct rte_flow_item_ipv4 *spec;
+
 		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_IP;
 		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_IP_PROTO;
 		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
@@ -113,10 +114,31 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	}
 	break;
 
+	case RTE_FLOW_ITEM_TYPE_UDP:
+	{
+		const struct rte_flow_item_udp *spec;
+		uint16_t udp_dst_port;
+
+		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_UDP;
+		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
+		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
+		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
+		kg_cfg.num_extracts = 1;
+
+		spec = (const struct rte_flow_item_udp *)pattern[0]->spec;
+		udp_dst_port = rte_constant_bswap16(spec->hdr.dst_port);
+		memcpy((void *)key_iova, (const void *)&udp_dst_port,
+							sizeof(rte_be16_t));
+		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
+		key_size = sizeof(uint16_t);
+	}
+	break;
+
 	case RTE_FLOW_ITEM_TYPE_ETH:
 	{
 		const struct rte_flow_item_eth *spec;
 		uint16_t eth_type;
+
 		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_ETH;
 		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_ETH_TYPE;
 		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v4 7/9] net/dpaa2: reduce prints in queue count functions
  2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                         ` (5 preceding siblings ...)
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 6/9] net/dpaa2: support UDP dst port based muxing Hemant Agrawal
@ 2020-05-07 10:46       ` Hemant Agrawal
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 8/9] net/dpaa2: fix cong group id for multiple tcs Hemant Agrawal
                         ` (3 subsequent siblings)
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-07 10:46 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Hemant Agrawal

Change these prints to DP (datapath) debug level, as they are impacting the
l3fwd-power application.
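
For context, a per-call print here lands in the application fast path. A
minimal sketch of the kind of polling loop that hits it is shown below; the
port/queue ids are placeholders, and whether a given application (such as
l3fwd-power) samples the queue occupancy in exactly this way is an assumption.

#include <rte_ethdev.h>

/* Sample the Rx queue backlog to drive a power/frequency heuristic.
 * Any unconditional log inside the rx_queue_count path would be emitted
 * on every call of this helper.
 */
static uint32_t
poll_rx_backlog(uint16_t port_id, uint16_t queue_id)
{
	int used = rte_eth_rx_queue_count(port_id, queue_id);

	if (used < 0)	/* not supported or invalid queue */
		return 0;
	return (uint32_t)used;
}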

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 817e9e0316..fd766a2184 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -898,8 +898,6 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	struct qbman_fq_query_np_rslt state;
 	uint32_t frame_cnt = 0;
 
-	PMD_INIT_FUNC_TRACE();
-
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
@@ -915,7 +913,7 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
 		frame_cnt = qbman_fq_state_frame_count(&state);
-		DPAA2_PMD_DEBUG("RX frame count for q(%d) is %u",
+		DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
 				rx_queue_id, frame_cnt);
 	}
 	return frame_cnt;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v4 8/9] net/dpaa2: fix cong group id for multiple tcs
  2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                         ` (6 preceding siblings ...)
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 7/9] net/dpaa2: reduce prints in queue count functions Hemant Agrawal
@ 2020-05-07 10:46       ` Hemant Agrawal
  2020-05-07 14:33         ` Ferruh Yigit
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 9/9] bus/fslmc: fix the size of qman fq desc Hemant Agrawal
                         ` (2 subsequent siblings)
  10 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-07 10:46 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: stable, Jun Yang

From: Jun Yang <jun.yang@nxp.com>

Flow id will not work when used with multiple traffic
classes. The CGID shall be provided in the INDEX field.

Fixes: 13b856ac02a8 ("net/dpaa2: support taildrop on frame count basis")
Cc: stable@dpdk.org

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index fd766a2184..1bab3b064c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -676,7 +676,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 						DPNI_CP_CONGESTION_GROUP,
 						DPNI_QUEUE_RX,
 						dpaa2_q->tc_index,
-						flow_id, &taildrop);
+						dpaa2_q->cgid, &taildrop);
 		} else {
 			/*enabling per rx queue congestion control */
 			taildrop.threshold = CONG_THRESHOLD_RX_BYTES_Q;
@@ -703,7 +703,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
 					DPNI_CP_CONGESTION_GROUP, DPNI_QUEUE_RX,
 					dpaa2_q->tc_index,
-					flow_id, &taildrop);
+					dpaa2_q->cgid, &taildrop);
 		} else {
 			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
 					DPNI_CP_QUEUE, DPNI_QUEUE_RX,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v4 9/9] bus/fslmc: fix the size of qman fq desc
  2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                         ` (7 preceding siblings ...)
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 8/9] net/dpaa2: fix cong group id for multiple tcs Hemant Agrawal
@ 2020-05-07 10:46       ` Hemant Agrawal
  2020-05-08 12:59       ` [dpdk-dev] [PATCH v5 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
  2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-07 10:46 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: stable, Hemant Agrawal

Correct the qman_fq_desc as per the HW-defined size: with verb (1 byte) +
reserved (3) + fqid (4), the trailing reserved field must be 56 bytes to fill
the 64-byte descriptor, not 57.

Fixes: 6fef517e17cf ("bus/fslmc: add qman HW fq query count API")
Cc: stable@dpdk.org

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 4cd0923acb..34374ae4b6 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -20,7 +20,7 @@ struct qbman_fq_query_desc {
 	uint8_t verb;
 	uint8_t reserved[3];
 	uint32_t fqid;
-	uint8_t reserved2[57];
+	uint8_t reserved2[56];
 };
 
 int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v4 4/9] net/dpaa2: add default values for Rx params in info
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 4/9] net/dpaa2: add default values for Rx params in info Hemant Agrawal
@ 2020-05-07 14:30         ` Ferruh Yigit
  0 siblings, 0 replies; 109+ messages in thread
From: Ferruh Yigit @ 2020-05-07 14:30 UTC (permalink / raw)
  To: Hemant Agrawal, dev

On 5/7/2020 11:46 AM, Hemant Agrawal wrote:
> This patch adds default/preferred rx/tx params in dev info,
> specially the advertised burst size.
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  doc/guides/nics/features/dpaa2.ini |  2 +-
>  drivers/net/dpaa/dpaa_ethdev.c     |  4 ++++
>  drivers/net/dpaa/dpaa_ethdev.h     |  1 +
>  drivers/net/dpaa2/dpaa2_ethdev.c   | 16 ++++++++++++++++
>  drivers/net/dpaa2/dpaa2_ethdev.h   |  2 ++
>  5 files changed, 24 insertions(+), 1 deletion(-)
> 
> diff --git a/doc/guides/nics/features/dpaa2.ini b/doc/guides/nics/features/dpaa2.ini
> index 6ebbab4b80..c2214fbd50 100644
> --- a/doc/guides/nics/features/dpaa2.ini
> +++ b/doc/guides/nics/features/dpaa2.ini
> @@ -4,7 +4,7 @@
>  ; Refer to default.ini for the full list of available PMD features.
>  ;
>  [Features]
> -Speed capabilities   = P
> +Speed capabilities   = Y
>  Link status          = Y
>  Link status event    = Y
>  Queue start/stop     = Y
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
> index 5f81968d80..56eb5ec47c 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -363,6 +363,10 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
>  					dev_tx_offloads_nodis;
>  	dev_info->default_rxportconf.burst_size = DPAA_DEF_RX_BURST_SIZE;
>  	dev_info->default_txportconf.burst_size = DPAA_DEF_TX_BURST_SIZE;
> +	dev_info->default_rxportconf.nb_queues = 1;
> +	dev_info->default_txportconf.nb_queues = 1;
> +	dev_info->default_txportconf.ring_size = CGR_TX_CGR_THRESH;
> +	dev_info->default_rxportconf.ring_size = CGR_RX_PERFQ_THRESH;
>  
>  	return 0;
>  }
> diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
> index da06f1faa1..af9fc2105d 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.h
> +++ b/drivers/net/dpaa/dpaa_ethdev.h
> @@ -42,6 +42,7 @@
>  
>  /* RX queue tail drop threshold (CGR Based) in frame count */
>  #define CGR_RX_PERFQ_THRESH 256
> +#define CGR_TX_CGR_THRESH 512
>  
>  /*max mac filter for memac(8) including primary mac addr*/
>  #define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index 4fc550a885..b70a2ac01c 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -275,6 +275,22 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  	dev_info->max_vmdq_pools = ETH_16_POOLS;
>  	dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
>  
> +	dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
> +	/* same is rx size for best perf */
> +	dev_info->default_txportconf.burst_size = dpaa2_dqrr_size;
> +
> +	dev_info->default_rxportconf.nb_queues = 1;
> +	dev_info->default_txportconf.nb_queues = 1;
> +	dev_info->default_txportconf.ring_size = CONG_ENTER_TX_THRESHOLD;
> +	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
> +
> +	if (dpaa2_svr_family == SVR_LX2160A) {
> +		dev_info->speed_capa |= ETH_LINK_SPEED_25G |
> +				ETH_LINK_SPEED_40G |
> +				ETH_LINK_SPEED_50G |
> +				ETH_LINK_SPEED_100G;
> +	}

Can you please split the speed_capa changes into another patch? In case we
need a fix related to it later, it would be confusing to have to refer to the
default-values patch.

> +
>  	return 0;
>  }
>  
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
> index 31dca8c7b6..2c49a7f01f 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.h
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.h
> @@ -24,6 +24,8 @@
>  #define MAX_TX_QUEUES		16
>  #define MAX_DPNI		8
>  
> +#define DPAA2_RX_DEFAULT_NBDESC 512
> +
>  /*default tc to be used for ,congestion, distribution etc configuration. */
>  #define DPAA2_DEF_TC		0
>  
> 


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v4 5/9] drivers: dpaa2 enhance portal alloc failure log
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 5/9] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
@ 2020-05-07 14:31         ` Ferruh Yigit
  0 siblings, 0 replies; 109+ messages in thread
From: Ferruh Yigit @ 2020-05-07 14:31 UTC (permalink / raw)
  To: Hemant Agrawal, dev; +Cc: Nipun Gupta

On 5/7/2020 11:46 AM, Hemant Agrawal wrote:
> From: Nipun Gupta <nipun.gupta@nxp.com>
> 
> The change adds printing the thread id when portal allocation
> failure occurs

It not only adds the tid, it also changes the log itself.

> 
> Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
> ---
>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  8 ++++++--
>  drivers/event/dpaa2/dpaa2_eventdev.c        |  8 ++++++--
>  drivers/mempool/dpaa2/dpaa2_hw_mempool.c    | 12 +++++++++---
>  drivers/net/dpaa2/dpaa2_ethdev.c            |  4 +++-
>  drivers/net/dpaa2/dpaa2_rxtx.c              | 16 ++++++++++++----
>  drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c       |  8 ++++++--
>  drivers/raw/dpaa2_qdma/dpaa2_qdma.c         | 12 +++++++++---
>  7 files changed, 51 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> index 0919f3bf47..256a9a1955 100644
> --- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> @@ -1459,7 +1459,9 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
>  	if (!DPAA2_PER_LCORE_DPIO) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret) {
> -			DPAA2_SEC_ERR("Failure in affining portal");
> +			DPAA2_SEC_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return 0;
>  		}
>  	}
> @@ -1641,7 +1643,9 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
>  	if (!DPAA2_PER_LCORE_DPIO) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret) {
> -			DPAA2_SEC_ERR("Failure in affining portal");
> +			DPAA2_SEC_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return 0;
>  		}
>  	}
> diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
> index 2be6e12f66..a196ad4c64 100644
> --- a/drivers/event/dpaa2/dpaa2_eventdev.c
> +++ b/drivers/event/dpaa2/dpaa2_eventdev.c
> @@ -74,7 +74,9 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
>  		/* Affine current thread context to a qman portal */
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret < 0) {
> -			DPAA2_EVENTDEV_ERR("Failure in affining portal");
> +			DPAA2_EVENTDEV_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return 0;
>  		}
>  	}
> @@ -273,7 +275,9 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[],
>  		/* Affine current thread context to a qman portal */
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret < 0) {
> -			DPAA2_EVENTDEV_ERR("Failure in affining portal");
> +			DPAA2_EVENTDEV_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return 0;
>  		}
>  	}
> diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
> index 48887beb7e..fa9b53e64d 100644
> --- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
> +++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
> @@ -69,7 +69,9 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
>  	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret) {
> -			DPAA2_MEMPOOL_ERR("Failure in affining portal");
> +			DPAA2_MEMPOOL_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			goto err1;
>  		}
>  	}
> @@ -198,7 +200,9 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
>  	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret != 0) {
> -			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
> +			DPAA2_MEMPOOL_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return;
>  		}
>  	}
> @@ -317,7 +321,9 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
>  	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret != 0) {
> -			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
> +			DPAA2_MEMPOOL_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return ret;
>  		}
>  	}
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index b70a2ac01c..817e9e0316 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -903,7 +903,9 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
>  	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret) {
> -			DPAA2_PMD_ERR("Failure in affining portal");
> +			DPAA2_PMD_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return -EINVAL;
>  		}
>  	}
> diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
> index 89a8221cb8..630f8c73c7 100644
> --- a/drivers/net/dpaa2/dpaa2_rxtx.c
> +++ b/drivers/net/dpaa2/dpaa2_rxtx.c
> @@ -762,7 +762,9 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret) {
> -			DPAA2_PMD_ERR("Failure in affining portal\n");
> +			DPAA2_PMD_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return 0;
>  		}
>  	}
> @@ -882,7 +884,9 @@ uint16_t dpaa2_dev_tx_conf(void *queue)
>  	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret) {
> -			DPAA2_PMD_ERR("Failure in affining portal\n");
> +			DPAA2_PMD_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return 0;
>  		}
>  	}
> @@ -1021,7 +1025,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret) {
> -			DPAA2_PMD_ERR("Failure in affining portal");
> +			DPAA2_PMD_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return 0;
>  		}
>  	}
> @@ -1282,7 +1288,9 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret) {
> -			DPAA2_PMD_ERR("Failure in affining portal");
> +			DPAA2_PMD_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return 0;
>  		}
>  	}
> diff --git a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
> index 997d1c8739..7c21c6a528 100644
> --- a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
> +++ b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
> @@ -70,7 +70,9 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev,
>  	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret) {
> -			DPAA2_CMDIF_ERR("Failure in affining portal\n");
> +			DPAA2_CMDIF_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return 0;
>  		}
>  	}
> @@ -133,7 +135,9 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
>  	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret) {
> -			DPAA2_CMDIF_ERR("Failure in affining portal\n");
> +			DPAA2_CMDIF_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return 0;
>  		}
>  	}
> diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
> index c905954004..d5202d6522 100644
> --- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
> +++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
> @@ -666,7 +666,9 @@ dpdmai_dev_enqueue_multi(struct dpaa2_dpdmai_dev *dpdmai_dev,
>  	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret) {
> -			DPAA2_QDMA_ERR("Failure in affining portal");
> +			DPAA2_QDMA_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return 0;
>  		}
>  	}
> @@ -788,7 +790,9 @@ dpdmai_dev_dequeue_multijob_prefetch(
>  	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret) {
> -			DPAA2_QDMA_ERR("Failure in affining portal");
> +			DPAA2_QDMA_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return 0;
>  		}
>  	}
> @@ -929,7 +933,9 @@ dpdmai_dev_dequeue_multijob_no_prefetch(
>  	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>  		ret = dpaa2_affine_qbman_swp();
>  		if (ret) {
> -			DPAA2_QDMA_ERR("Failure in affining portal");
> +			DPAA2_QDMA_ERR(
> +				"Failed to allocate IO portal, tid: %d\n",
> +				rte_gettid());
>  			return 0;
>  		}
>  	}
> 


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v4 8/9] net/dpaa2: fix cong group id for multiple tcs
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 8/9] net/dpaa2: fix cong group id for multiple tcs Hemant Agrawal
@ 2020-05-07 14:33         ` Ferruh Yigit
  0 siblings, 0 replies; 109+ messages in thread
From: Ferruh Yigit @ 2020-05-07 14:33 UTC (permalink / raw)
  To: Hemant Agrawal, dev; +Cc: stable, Jun Yang

On 5/7/2020 11:46 AM, Hemant Agrawal wrote:
> From: Jun Yang <jun.yang@nxp.com>
> 
> Flow id will not work when used with multiple traffic
> classes. The CGID shall be provided in the INDEX field.

Can you please add more detail? The same was also asked in the previous
version [1]. Btw, what does "flow id won't work" mean, and what are cong and cgid?

Thanks.

[1]

Can you please provide more information in the commit log: why this change is
done, whether it fixes something, what is broken in the original code, and why
using "cong group id" instead of "flow_id" helps, etc.


> 
> Fixes: 13b856ac02a8 ("net/dpaa2: support taildrop on frame count basis")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Jun Yang <jun.yang@nxp.com>
> ---
>  drivers/net/dpaa2/dpaa2_ethdev.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index fd766a2184..1bab3b064c 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -676,7 +676,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  						DPNI_CP_CONGESTION_GROUP,
>  						DPNI_QUEUE_RX,
>  						dpaa2_q->tc_index,
> -						flow_id, &taildrop);
> +						dpaa2_q->cgid, &taildrop);
>  		} else {
>  			/*enabling per rx queue congestion control */
>  			taildrop.threshold = CONG_THRESHOLD_RX_BYTES_Q;
> @@ -703,7 +703,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
>  					DPNI_CP_CONGESTION_GROUP, DPNI_QUEUE_RX,
>  					dpaa2_q->tc_index,
> -					flow_id, &taildrop);
> +					dpaa2_q->cgid, &taildrop);
>  		} else {
>  			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
>  					DPNI_CP_QUEUE, DPNI_QUEUE_RX,
> 


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v4 2/9] net/dpaa2: fix 10g port negotiation issue
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 2/9] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
@ 2020-05-07 14:36         ` Ferruh Yigit
  0 siblings, 0 replies; 109+ messages in thread
From: Ferruh Yigit @ 2020-05-07 14:36 UTC (permalink / raw)
  To: Hemant Agrawal, dev; +Cc: stable, Rohit Raj

On 5/7/2020 11:46 AM, Hemant Agrawal wrote:
> From: Rohit Raj <rohit.raj@nxp.com>
> 

s/10g/10G in the title

> Fixed 10g port negotiation issue with another 10G/non 10G port.

It would be good to explain how it is fixed.

> Initialize the port link speed.
> 
> Fixes: c5acbb5ea20e ("net/dpaa2: support link status event")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
> ---
>  drivers/net/dpaa2/dpaa2_ethdev.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index 2cde55e7cc..4fc550a885 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -553,9 +553,6 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
>  	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
>  		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
>  
> -	/* update the current status */
> -	dpaa2_dev_link_update(dev, 0);
> -
>  	return 0;
>  }
>  
> @@ -1757,6 +1754,7 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
>  	/* changing tx burst function to start enqueues */
>  	dev->tx_pkt_burst = dpaa2_dev_tx;
>  	dev->data->dev_link.link_status = state.up;
> +	dev->data->dev_link.link_speed = state.rate;
>  
>  	if (state.up)
>  		DPAA2_PMD_INFO("Port %d Link is Up", dev->data->port_id);
> 


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v5 0/9] NXP DPAAx fixes and enhancements
  2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                         ` (8 preceding siblings ...)
  2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 9/9] bus/fslmc: fix the size of qman fq desc Hemant Agrawal
@ 2020-05-08 12:59       ` Hemant Agrawal
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 1/9] net/dpaa2: fix 10G port negotiation issue Hemant Agrawal
                           ` (8 more replies)
  2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
  10 siblings, 9 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 12:59 UTC (permalink / raw)
  To: dev, ferruh.yigit

v5: split the default param patch and enhance commit details
v4: address the review comments
v3: Limiting the patches to avoid ABI breakage.

Hemant Agrawal (4):
  net/dpaa2: add default values for Rx params in info
  net/dpaa2: reduce prints in queue count functions
  bus/fslmc: fix the size of qman fq desc
  net/dpaa2: add the support for additional link speeds

Jun Yang (1):
  net/dpaa2: fix cong group id for multiple tcs

Nipun Gupta (3):
  net/dpaa2: do not prefetch annotation for physical mode
  drivers: dpaa2 enhance portal alloc failure log
  net/dpaa2: support UDP dst port based muxing

Rohit Raj (1):
  net/dpaa2: fix 10G port negotiation issue

 doc/guides/nics/features/dpaa2.ini          |  2 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h     |  6 +--
 drivers/bus/fslmc/qbman/qbman_debug.c       |  2 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  8 ++-
 drivers/event/dpaa2/dpaa2_eventdev.c        |  8 ++-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c    | 12 +++--
 drivers/net/dpaa/dpaa_ethdev.c              |  4 ++
 drivers/net/dpaa/dpaa_ethdev.h              |  1 +
 drivers/net/dpaa2/dpaa2_ethdev.c            | 32 ++++++++----
 drivers/net/dpaa2/dpaa2_ethdev.h            |  2 +
 drivers/net/dpaa2/dpaa2_mux.c               | 24 ++++++++-
 drivers/net/dpaa2/dpaa2_rxtx.c              | 56 ++++++++++++++-------
 drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c       |  8 ++-
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c         | 12 +++--
 14 files changed, 131 insertions(+), 46 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v5 1/9] net/dpaa2: fix 10G port negotiation issue
  2020-05-08 12:59       ` [dpdk-dev] [PATCH v5 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
@ 2020-05-08 12:59         ` Hemant Agrawal
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 2/9] net/dpaa2: do not prefetch annotation for physical mode Hemant Agrawal
                           ` (7 subsequent siblings)
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 12:59 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: stable, Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Fixed 10G port negotiation issue with another 10G/non 10G port.

When running testpmd with 10G interfaces on 10BaseT interface
on LS2088ARDB, the ports were showing link as down.

This was identified to be caused by the setting of link as down
during config.
Also, the line rate was not being updated in device link params,
thus having the incorrect link speed in status (as 0).
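
As a quick way to verify the fix, the link parameters can be read back after
the port is started; a minimal sketch (the port id is a placeholder):

#include <stdio.h>
#include <string.h>

#include <rte_ethdev.h>

/* After this fix the reported speed should be the negotiated line rate
 * instead of 0.
 */
static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	memset(&link, 0, sizeof(link));
	rte_eth_link_get_nowait(port_id, &link);
	printf("port %u: link %s, %u Mbps\n", (unsigned int)port_id,
	       link.link_status ? "up" : "down",
	       (unsigned int)link.link_speed);
}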

Fixes: c5acbb5ea20e ("net/dpaa2: support link status event")
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 2cde55e7cc..4fc550a885 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -553,9 +553,6 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
 		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
 
-	/* update the current status */
-	dpaa2_dev_link_update(dev, 0);
-
 	return 0;
 }
 
@@ -1757,6 +1754,7 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	/* changing tx burst function to start enqueues */
 	dev->tx_pkt_burst = dpaa2_dev_tx;
 	dev->data->dev_link.link_status = state.up;
+	dev->data->dev_link.link_speed = state.rate;
 
 	if (state.up)
 		DPAA2_PMD_INFO("Port %d Link is Up", dev->data->port_id);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v5 2/9] net/dpaa2: do not prefetch annotation for physical mode
  2020-05-08 12:59       ` [dpdk-dev] [PATCH v5 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 1/9] net/dpaa2: fix 10G port negotiation issue Hemant Agrawal
@ 2020-05-08 12:59         ` Hemant Agrawal
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 3/9] net/dpaa2: add default values for Rx params in info Hemant Agrawal
                           ` (6 subsequent siblings)
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 12:59 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

When IOVA is a physical address, do not prefetch the annotation
of the next frame, as there is a cost involved in converting the
physical address to a virtual address.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  6 ++--
 drivers/net/dpaa2/dpaa2_rxtx.c          | 40 +++++++++++++++----------
 2 files changed, 28 insertions(+), 18 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 44d3d49c7a..368fe7c688 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2020 NXP
  *
  */
 
@@ -395,8 +395,8 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
-#define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
-#define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
+#define DPAA2_VADDR_TO_IOVA(_vaddr) (phys_addr_t)(_vaddr)
+#define DPAA2_IOVA_TO_VADDR(_iova) (void *)(_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
 
 #endif /* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 703f0549ad..89a8221cb8 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2020 NXP
  *
  */
 
@@ -324,8 +324,8 @@ static inline struct rte_mbuf *__rte_hot
 eth_fd_to_mbuf(const struct qbman_fd *fd,
 	       int port_id)
 {
-	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(
-		DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd)),
+	void *iova_addr = DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(iova_addr,
 		     rte_dpaa2_bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size);
 
 	/* need to repopulated some of the fields,
@@ -350,8 +350,7 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 		dpaa2_dev_rx_parse_new(mbuf, fd);
 	else
 		mbuf->packet_type = dpaa2_dev_rx_parse(mbuf,
-			(void *)((size_t)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd))
-			 + DPAA2_FD_PTA_SIZE));
+			(void *)((size_t)iova_addr + DPAA2_FD_PTA_SIZE));
 
 	DPAA2_PMD_DP_DEBUG("to mbuf - mbuf =%p, mbuf->buf_addr =%p, off = %d,"
 		"fd_off=%d fd =%" PRIx64 ", meta = %d  bpid =%d, len=%d\n",
@@ -518,7 +517,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int ret, num_rx = 0, pull_size;
 	uint8_t pending, status;
 	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
 	struct qbman_pull_desc pulldesc;
 	struct queue_storage_info_t *q_storage = dpaa2_q->q_storage;
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
@@ -617,12 +616,15 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 		fd = qbman_result_DQ_fd(dq_storage);
 
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
 		if (dpaa2_svr_family != SVR_LX2160A) {
-			next_fd = qbman_result_DQ_fd(dq_storage + 1);
+			const struct qbman_fd *next_fd =
+				qbman_result_DQ_fd(dq_storage + 1);
 			/* Prefetch Annotation address for the parse results */
-			rte_prefetch0((void *)(size_t)(DPAA2_GET_FD_ADDR(
-				      next_fd) + DPAA2_FD_PTA_SIZE + 16));
+			rte_prefetch0(DPAA2_IOVA_TO_VADDR((DPAA2_GET_FD_ADDR(
+				next_fd) + DPAA2_FD_PTA_SIZE + 16)));
 		}
+#endif
 
 		if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
 			bufs[num_rx] = eth_sg_fd_to_mbuf(fd, eth_data->port_id);
@@ -753,7 +755,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int ret, num_rx = 0, next_pull = nb_pkts, num_pulled;
 	uint8_t pending, status;
 	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
 	struct qbman_pull_desc pulldesc;
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
 
@@ -819,11 +821,19 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			}
 			fd = qbman_result_DQ_fd(dq_storage);
 
-			next_fd = qbman_result_DQ_fd(dq_storage + 1);
-			/* Prefetch Annotation address for the parse results */
-			rte_prefetch0(
-				(void *)(size_t)(DPAA2_GET_FD_ADDR(next_fd)
-					+ DPAA2_FD_PTA_SIZE + 16));
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+			if (dpaa2_svr_family != SVR_LX2160A) {
+				const struct qbman_fd *next_fd =
+					qbman_result_DQ_fd(dq_storage + 1);
+
+				/* Prefetch Annotation address for the parse
+				 * results.
+				 */
+				rte_prefetch0((DPAA2_IOVA_TO_VADDR(
+					DPAA2_GET_FD_ADDR(next_fd) +
+					DPAA2_FD_PTA_SIZE + 16)));
+			}
+#endif
 
 			if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
 				bufs[num_rx] = eth_sg_fd_to_mbuf(fd,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v5 3/9] net/dpaa2: add default values for Rx params in info
  2020-05-08 12:59       ` [dpdk-dev] [PATCH v5 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 1/9] net/dpaa2: fix 10G port negotiation issue Hemant Agrawal
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 2/9] net/dpaa2: do not prefetch annotation for physical mode Hemant Agrawal
@ 2020-05-08 12:59         ` Hemant Agrawal
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 4/9] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
                           ` (5 subsequent siblings)
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 12:59 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Hemant Agrawal

This patch adds default/preferred Rx/Tx params in dev info,
especially the advertised burst size.
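
For reference, a minimal sketch of how an application can consume these hints
through the standard ethdev info query (the fallback values are placeholders):

#include <rte_ethdev.h>

/* Use the PMD's preferred Rx ring size and burst size when advertised,
 * falling back to application defaults otherwise.
 */
static int
pick_rx_params(uint16_t port_id, uint16_t *nb_desc, uint16_t *burst)
{
	struct rte_eth_dev_info dev_info;
	int ret = rte_eth_dev_info_get(port_id, &dev_info);

	if (ret != 0)
		return ret;

	*nb_desc = dev_info.default_rxportconf.ring_size ?
			dev_info.default_rxportconf.ring_size : 1024;
	*burst = dev_info.default_rxportconf.burst_size ?
			dev_info.default_rxportconf.burst_size : 32;
	return 0;
}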

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c   | 4 ++++
 drivers/net/dpaa/dpaa_ethdev.h   | 1 +
 drivers/net/dpaa2/dpaa2_ethdev.c | 9 +++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h | 2 ++
 4 files changed, 16 insertions(+)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 5f81968d80..56eb5ec47c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -363,6 +363,10 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 					dev_tx_offloads_nodis;
 	dev_info->default_rxportconf.burst_size = DPAA_DEF_RX_BURST_SIZE;
 	dev_info->default_txportconf.burst_size = DPAA_DEF_TX_BURST_SIZE;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_txportconf.ring_size = CGR_TX_CGR_THRESH;
+	dev_info->default_rxportconf.ring_size = CGR_RX_PERFQ_THRESH;
 
 	return 0;
 }
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index da06f1faa1..af9fc2105d 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -42,6 +42,7 @@
 
 /* RX queue tail drop threshold (CGR Based) in frame count */
 #define CGR_RX_PERFQ_THRESH 256
+#define CGR_TX_CGR_THRESH 512
 
 /*max mac filter for memac(8) including primary mac addr*/
 #define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 4fc550a885..9817c9324b 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -275,6 +275,15 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_vmdq_pools = ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
 
+	dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
+	/* same is rx size for best perf */
+	dev_info->default_txportconf.burst_size = dpaa2_dqrr_size;
+
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_txportconf.ring_size = CONG_ENTER_TX_THRESHOLD;
+	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
+
 	return 0;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 31dca8c7b6..2c49a7f01f 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -24,6 +24,8 @@
 #define MAX_TX_QUEUES		16
 #define MAX_DPNI		8
 
+#define DPAA2_RX_DEFAULT_NBDESC 512
+
 /*default tc to be used for ,congestion, distribution etc configuration. */
 #define DPAA2_DEF_TC		0
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v5 4/9] drivers: dpaa2 enhance portal alloc failure log
  2020-05-08 12:59       ` [dpdk-dev] [PATCH v5 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                           ` (2 preceding siblings ...)
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 3/9] net/dpaa2: add default values for Rx params in info Hemant Agrawal
@ 2020-05-08 12:59         ` Hemant Agrawal
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 5/9] net/dpaa2: support UDP dst port based muxing Hemant Agrawal
                           ` (4 subsequent siblings)
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 12:59 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

Update the portal allocation failure log to print the thread id
as well.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  8 ++++++--
 drivers/event/dpaa2/dpaa2_eventdev.c        |  8 ++++++--
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c    | 12 +++++++++---
 drivers/net/dpaa2/dpaa2_ethdev.c            |  4 +++-
 drivers/net/dpaa2/dpaa2_rxtx.c              | 16 ++++++++++++----
 drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c       |  8 ++++++--
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c         | 12 +++++++++---
 7 files changed, 51 insertions(+), 17 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 0919f3bf47..256a9a1955 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1459,7 +1459,9 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 	if (!DPAA2_PER_LCORE_DPIO) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_SEC_ERR("Failure in affining portal");
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1641,7 +1643,9 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 	if (!DPAA2_PER_LCORE_DPIO) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_SEC_ERR("Failure in affining portal");
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 2be6e12f66..a196ad4c64 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -74,7 +74,9 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
 		/* Affine current thread context to a qman portal */
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
-			DPAA2_EVENTDEV_ERR("Failure in affining portal");
+			DPAA2_EVENTDEV_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -273,7 +275,9 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[],
 		/* Affine current thread context to a qman portal */
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
-			DPAA2_EVENTDEV_ERR("Failure in affining portal");
+			DPAA2_EVENTDEV_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 48887beb7e..fa9b53e64d 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -69,7 +69,9 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_MEMPOOL_ERR("Failure in affining portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			goto err1;
 		}
 	}
@@ -198,7 +200,9 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
-			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return;
 		}
 	}
@@ -317,7 +321,9 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
-			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return ret;
 		}
 	}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 9817c9324b..0be61cda04 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -896,7 +896,9 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return -EINVAL;
 		}
 	}
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 89a8221cb8..630f8c73c7 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -762,7 +762,9 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal\n");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -882,7 +884,9 @@ uint16_t dpaa2_dev_tx_conf(void *queue)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal\n");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1021,7 +1025,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1282,7 +1288,9 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
index 997d1c8739..7c21c6a528 100644
--- a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
+++ b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
@@ -70,7 +70,9 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_CMDIF_ERR("Failure in affining portal\n");
+			DPAA2_CMDIF_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -133,7 +135,9 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_CMDIF_ERR("Failure in affining portal\n");
+			DPAA2_CMDIF_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c905954004..d5202d6522 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -666,7 +666,9 @@ dpdmai_dev_enqueue_multi(struct dpaa2_dpdmai_dev *dpdmai_dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -788,7 +790,9 @@ dpdmai_dev_dequeue_multijob_prefetch(
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -929,7 +933,9 @@ dpdmai_dev_dequeue_multijob_no_prefetch(
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v5 5/9] net/dpaa2: support UDP dst port based muxing
  2020-05-08 12:59       ` [dpdk-dev] [PATCH v5 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                           ` (3 preceding siblings ...)
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 4/9] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
@ 2020-05-08 12:59         ` Hemant Agrawal
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 6/9] net/dpaa2: reduce prints in queue count functions Hemant Agrawal
                           ` (3 subsequent siblings)
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 12:59 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

This change adds DPDMUX support to bifurcate traffic on
the basis of UDP destination port.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index af90adb828..9ac8806faf 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2020 NXP
  */
 
 #include <sys/queue.h>
@@ -99,6 +99,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	case RTE_FLOW_ITEM_TYPE_IPV4:
 	{
 		const struct rte_flow_item_ipv4 *spec;
+
 		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_IP;
 		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_IP_PROTO;
 		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
@@ -113,10 +114,31 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	}
 	break;
 
+	case RTE_FLOW_ITEM_TYPE_UDP:
+	{
+		const struct rte_flow_item_udp *spec;
+		uint16_t udp_dst_port;
+
+		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_UDP;
+		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
+		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
+		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
+		kg_cfg.num_extracts = 1;
+
+		spec = (const struct rte_flow_item_udp *)pattern[0]->spec;
+		udp_dst_port = rte_constant_bswap16(spec->hdr.dst_port);
+		memcpy((void *)key_iova, (const void *)&udp_dst_port,
+							sizeof(rte_be16_t));
+		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
+		key_size = sizeof(uint16_t);
+	}
+	break;
+
 	case RTE_FLOW_ITEM_TYPE_ETH:
 	{
 		const struct rte_flow_item_eth *spec;
 		uint16_t eth_type;
+
 		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_ETH;
 		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_ETH_TYPE;
 		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v5 6/9] net/dpaa2: reduce prints in queue count functions
  2020-05-08 12:59       ` [dpdk-dev] [PATCH v5 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                           ` (4 preceding siblings ...)
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 5/9] net/dpaa2: support UDP dst port based muxing Hemant Agrawal
@ 2020-05-08 12:59         ` Hemant Agrawal
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 7/9] net/dpaa2: fix cong group id for multiple tcs Hemant Agrawal
                           ` (2 subsequent siblings)
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 12:59 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Hemant Agrawal

Change these logs to datapath (DP) debug level, as they impact the
l3fwd-power application.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 0be61cda04..08f9832eb8 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -891,8 +891,6 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	struct qbman_fq_query_np_rslt state;
 	uint32_t frame_cnt = 0;
 
-	PMD_INIT_FUNC_TRACE();
-
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
@@ -908,7 +906,7 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
 		frame_cnt = qbman_fq_state_frame_count(&state);
-		DPAA2_PMD_DEBUG("RX frame count for q(%d) is %u",
+		DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
 				rx_queue_id, frame_cnt);
 	}
 	return frame_cnt;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v5 7/9] net/dpaa2: fix cong group id for multiple tcs
  2020-05-08 12:59       ` [dpdk-dev] [PATCH v5 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                           ` (5 preceding siblings ...)
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 6/9] net/dpaa2: reduce prints in queue count functions Hemant Agrawal
@ 2020-05-08 12:59         ` Hemant Agrawal
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 8/9] bus/fslmc: fix the size of qman fq desc Hemant Agrawal
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 9/9] net/dpaa2: add the support for additional link speeds Hemant Agrawal
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 12:59 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: stable, Jun Yang

From: Jun Yang <jun.yang@nxp.com>

When using a single TC, the flow id is the same as the congestion group id.
However, in the case of multiple traffic classes, the same flow id values
are reused within each traffic class, which causes incorrect traffic
behavior, e.g. halting of traffic.
This patch changes the code to use the cgid as the index, which works
for single as well as multiple traffic classes.

Fixes: 13b856ac02a8 ("net/dpaa2: support taildrop on frame count basis")
Cc: stable@dpdk.org

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 08f9832eb8..d9960b01f7 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -669,7 +669,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 						DPNI_CP_CONGESTION_GROUP,
 						DPNI_QUEUE_RX,
 						dpaa2_q->tc_index,
-						flow_id, &taildrop);
+						dpaa2_q->cgid, &taildrop);
 		} else {
 			/*enabling per rx queue congestion control */
 			taildrop.threshold = CONG_THRESHOLD_RX_BYTES_Q;
@@ -696,7 +696,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
 					DPNI_CP_CONGESTION_GROUP, DPNI_QUEUE_RX,
 					dpaa2_q->tc_index,
-					flow_id, &taildrop);
+					dpaa2_q->cgid, &taildrop);
 		} else {
 			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
 					DPNI_CP_QUEUE, DPNI_QUEUE_RX,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v5 8/9] bus/fslmc: fix the size of qman fq desc
  2020-05-08 12:59       ` [dpdk-dev] [PATCH v5 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                           ` (6 preceding siblings ...)
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 7/9] net/dpaa2: fix cong group id for multiple tcs Hemant Agrawal
@ 2020-05-08 12:59         ` Hemant Agrawal
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 9/9] net/dpaa2: add the support for additional link speeds Hemant Agrawal
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 12:59 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: stable, Hemant Agrawal

Correct the qman fq query descriptor as per the HW-defined size.
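
For reference, a minimal sketch of the size arithmetic behind the fix
(the 64-byte management-command size is an assumption here, not stated
in the patch):

#include <stdint.h>

/* Hypothetical compile-time check: 1 (verb) + 3 (reserved) + 4 (fqid)
 * + 56 (reserved2) = 64 bytes, assuming the QBMan software-portal
 * management command is one 64-byte cacheline. The previous
 * reserved2[57] made the descriptor 65 bytes.
 */
struct qbman_fq_query_desc {        /* copied from qbman_debug.c */
        uint8_t verb;
        uint8_t reserved[3];
        uint32_t fqid;
        uint8_t reserved2[56];
};

_Static_assert(sizeof(struct qbman_fq_query_desc) == 64,
               "fq query descriptor must match the 64-byte HW command");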

Fixes: 6fef517e17cf ("bus/fslmc: add qman HW fq query count API")
Cc: stable@dpdk.org

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 4cd0923acb..34374ae4b6 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -20,7 +20,7 @@ struct qbman_fq_query_desc {
 	uint8_t verb;
 	uint8_t reserved[3];
 	uint32_t fqid;
-	uint8_t reserved2[57];
+	uint8_t reserved2[56];
 };
 
 int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v5 9/9] net/dpaa2: add the support for additional link speeds
  2020-05-08 12:59       ` [dpdk-dev] [PATCH v5 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                           ` (7 preceding siblings ...)
  2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 8/9] bus/fslmc: fix the size of qman fq desc Hemant Agrawal
@ 2020-05-08 12:59         ` Hemant Agrawal
  8 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 12:59 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Hemant Agrawal

This patch adds support for the additional link speeds
supported by LX2160A platforms.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/nics/features/dpaa2.ini | 2 +-
 drivers/net/dpaa2/dpaa2_ethdev.c   | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/dpaa2.ini b/doc/guides/nics/features/dpaa2.ini
index 6ebbab4b80..c2214fbd50 100644
--- a/doc/guides/nics/features/dpaa2.ini
+++ b/doc/guides/nics/features/dpaa2.ini
@@ -4,7 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
-Speed capabilities   = P
+Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Queue start/stop     = Y
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index d9960b01f7..1bab3b064c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -284,6 +284,13 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->default_txportconf.ring_size = CONG_ENTER_TX_THRESHOLD;
 	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
 
+	if (dpaa2_svr_family == SVR_LX2160A) {
+		dev_info->speed_capa |= ETH_LINK_SPEED_25G |
+				ETH_LINK_SPEED_40G |
+				ETH_LINK_SPEED_50G |
+				ETH_LINK_SPEED_100G;
+	}
+
 	return 0;
 }
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement
  2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
                         ` (9 preceding siblings ...)
  2020-05-08 12:59       ` [dpdk-dev] [PATCH v5 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
@ 2020-05-08 13:02       ` Hemant Agrawal
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 01/10] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
                           ` (10 more replies)
  10 siblings, 11 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 13:02 UTC (permalink / raw)
  To: dev, ferruh.yigit

v6: missed one patch
v5: split the default param patch and enhance commit details
v4: address the review comments
v3: Limiting the patches to avoid ABI breakage.

Apeksha Gupta (1):
  bus/fslmc: fix dereferencing null pointer

Hemant Agrawal (4):
  net/dpaa2: add default values for Rx params in info
  net/dpaa2: reduce prints in queue count functions
  bus/fslmc: fix the size of qman fq desc
  net/dpaa2: add the support for additional link speeds

Jun Yang (1):
  net/dpaa2: fix cong group id for multiple tcs

Nipun Gupta (3):
  net/dpaa2: do not prefetch annotaion for physical mode
  drivers: dpaa2 enhance portal alloc failure log
  net/dpaa2: support UDP dst port based muxing

Rohit Raj (1):
  net/dpaa2: fix 10G port negotiation issue

 doc/guides/nics/features/dpaa2.ini          |  2 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h     |  6 +--
 drivers/bus/fslmc/qbman/qbman_debug.c       |  9 ++--
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  8 ++-
 drivers/event/dpaa2/dpaa2_eventdev.c        |  8 ++-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c    | 12 +++--
 drivers/net/dpaa/dpaa_ethdev.c              |  4 ++
 drivers/net/dpaa/dpaa_ethdev.h              |  1 +
 drivers/net/dpaa2/dpaa2_ethdev.c            | 32 ++++++++----
 drivers/net/dpaa2/dpaa2_ethdev.h            |  2 +
 drivers/net/dpaa2/dpaa2_mux.c               | 24 ++++++++-
 drivers/net/dpaa2/dpaa2_rxtx.c              | 56 ++++++++++++++-------
 drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c       |  8 ++-
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c         | 12 +++--
 14 files changed, 135 insertions(+), 49 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v6 01/10] bus/fslmc: fix dereferencing null pointer
  2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
@ 2020-05-08 13:02         ` Hemant Agrawal
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 02/10] net/dpaa2: fix 10G port negotiation issue Hemant Agrawal
                           ` (9 subsequent siblings)
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 13:02 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: stable, Apeksha Gupta

From: Apeksha Gupta <apeksha.gupta@nxp.com>

This patch fixes a null pointer dereferencing issue reported by NXP's
internal Coverity scan. The result of qbman_swp_mc_complete() was
dereferenced before being checked, and the NULL check was applied to
the output pointer 'r' instead of the returned pointer.

Fixes: 6fef517e17cf ("bus/fslmc: add qman HW fq query count API")
Cc: stable@dpdk.org

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 0bb2ce880f..4cd0923acb 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -27,19 +27,20 @@ int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
 			 struct qbman_fq_query_np_rslt *r)
 {
 	struct qbman_fq_query_desc *p;
+	struct qbman_fq_query_np_rslt *var;
 
 	p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->fqid = fqid;
-	*r = *(struct qbman_fq_query_np_rslt *)qbman_swp_mc_complete(s, p,
-						QBMAN_FQ_QUERY_NP);
-	if (!r) {
+	var = qbman_swp_mc_complete(s, p, QBMAN_FQ_QUERY_NP);
+	if (!var) {
 		pr_err("qbman: Query FQID %d NP fields failed, no response\n",
 		       fqid);
 		return -EIO;
 	}
+	*r = *var;
 
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY_NP);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v6 02/10] net/dpaa2: fix 10G port negotiation issue
  2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 01/10] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
@ 2020-05-08 13:02         ` Hemant Agrawal
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 03/10] net/dpaa2: do not prefetch annotaion for physical mode Hemant Agrawal
                           ` (8 subsequent siblings)
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 13:02 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: stable, Rohit Raj

From: Rohit Raj <rohit.raj@nxp.com>

Fix the 10G port negotiation issue with another 10G/non-10G port.

When running testpmd with 10G interfaces on 10BaseT interface
on LS2088ARDB, the ports were showing link as down.

This was identified to be caused by the link being set down during
configuration.
Also, the line rate was not being updated in the device link params,
so the link status reported an incorrect speed (0).
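
A minimal sketch (not from the patch) of how an application can verify
the reported speed after this change; the ethdev link API names are the
ones in use for this DPDK release:

#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical check: print the negotiated link speed, which this fix
 * makes non-zero by copying state.rate into dev_link.link_speed.
 */
static void
print_link_speed(uint16_t port_id)
{
        struct rte_eth_link link;

        memset(&link, 0, sizeof(link));
        rte_eth_link_get_nowait(port_id, &link);
        printf("port %u: link %s, %u Mbps\n", port_id,
               link.link_status ? "up" : "down", link.link_speed);
}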

Fixes: c5acbb5ea20e ("net/dpaa2: support link status event")
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 2cde55e7cc..4fc550a885 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -553,9 +553,6 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
 		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
 
-	/* update the current status */
-	dpaa2_dev_link_update(dev, 0);
-
 	return 0;
 }
 
@@ -1757,6 +1754,7 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	/* changing tx burst function to start enqueues */
 	dev->tx_pkt_burst = dpaa2_dev_tx;
 	dev->data->dev_link.link_status = state.up;
+	dev->data->dev_link.link_speed = state.rate;
 
 	if (state.up)
 		DPAA2_PMD_INFO("Port %d Link is Up", dev->data->port_id);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v6 03/10] net/dpaa2: do not prefetch annotaion for physical mode
  2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 01/10] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 02/10] net/dpaa2: fix 10G port negotiation issue Hemant Agrawal
@ 2020-05-08 13:02         ` Hemant Agrawal
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 04/10] net/dpaa2: add default values for Rx params in info Hemant Agrawal
                           ` (7 subsequent siblings)
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 13:02 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

When the IOVA is a physical address, do not prefetch the annotation
of the next frame, as there is a cost involved in converting the
physical address to a virtual address.
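
As a generic illustration of the idea (the driver itself uses the
compile-time RTE_LIBRTE_DPAA2_USE_PHYS_IOVA switch shown in the diff
below, not this run-time check):

#include <stdint.h>
#include <rte_eal.h>
#include <rte_prefetch.h>

/* Hypothetical sketch: only prefetch when the descriptor address is
 * already a virtual address, so no PA-to-VA translation is needed.
 */
static inline void
maybe_prefetch_annotation(uint64_t addr)
{
        if (rte_eal_iova_mode() == RTE_IOVA_VA)
                rte_prefetch0((void *)(uintptr_t)addr);
}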

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  6 ++--
 drivers/net/dpaa2/dpaa2_rxtx.c          | 40 +++++++++++++++----------
 2 files changed, 28 insertions(+), 18 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 44d3d49c7a..368fe7c688 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2020 NXP
  *
  */
 
@@ -395,8 +395,8 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
-#define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
-#define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
+#define DPAA2_VADDR_TO_IOVA(_vaddr) (phys_addr_t)(_vaddr)
+#define DPAA2_IOVA_TO_VADDR(_iova) (void *)(_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
 
 #endif /* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 703f0549ad..89a8221cb8 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2016-2019 NXP
+ *   Copyright 2016-2020 NXP
  *
  */
 
@@ -324,8 +324,8 @@ static inline struct rte_mbuf *__rte_hot
 eth_fd_to_mbuf(const struct qbman_fd *fd,
 	       int port_id)
 {
-	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(
-		DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd)),
+	void *iova_addr = DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(iova_addr,
 		     rte_dpaa2_bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size);
 
 	/* need to repopulated some of the fields,
@@ -350,8 +350,7 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 		dpaa2_dev_rx_parse_new(mbuf, fd);
 	else
 		mbuf->packet_type = dpaa2_dev_rx_parse(mbuf,
-			(void *)((size_t)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd))
-			 + DPAA2_FD_PTA_SIZE));
+			(void *)((size_t)iova_addr + DPAA2_FD_PTA_SIZE));
 
 	DPAA2_PMD_DP_DEBUG("to mbuf - mbuf =%p, mbuf->buf_addr =%p, off = %d,"
 		"fd_off=%d fd =%" PRIx64 ", meta = %d  bpid =%d, len=%d\n",
@@ -518,7 +517,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int ret, num_rx = 0, pull_size;
 	uint8_t pending, status;
 	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
 	struct qbman_pull_desc pulldesc;
 	struct queue_storage_info_t *q_storage = dpaa2_q->q_storage;
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
@@ -617,12 +616,15 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 		fd = qbman_result_DQ_fd(dq_storage);
 
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
 		if (dpaa2_svr_family != SVR_LX2160A) {
-			next_fd = qbman_result_DQ_fd(dq_storage + 1);
+			const struct qbman_fd *next_fd =
+				qbman_result_DQ_fd(dq_storage + 1);
 			/* Prefetch Annotation address for the parse results */
-			rte_prefetch0((void *)(size_t)(DPAA2_GET_FD_ADDR(
-				      next_fd) + DPAA2_FD_PTA_SIZE + 16));
+			rte_prefetch0(DPAA2_IOVA_TO_VADDR((DPAA2_GET_FD_ADDR(
+				next_fd) + DPAA2_FD_PTA_SIZE + 16)));
 		}
+#endif
 
 		if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
 			bufs[num_rx] = eth_sg_fd_to_mbuf(fd, eth_data->port_id);
@@ -753,7 +755,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int ret, num_rx = 0, next_pull = nb_pkts, num_pulled;
 	uint8_t pending, status;
 	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
 	struct qbman_pull_desc pulldesc;
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
 
@@ -819,11 +821,19 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			}
 			fd = qbman_result_DQ_fd(dq_storage);
 
-			next_fd = qbman_result_DQ_fd(dq_storage + 1);
-			/* Prefetch Annotation address for the parse results */
-			rte_prefetch0(
-				(void *)(size_t)(DPAA2_GET_FD_ADDR(next_fd)
-					+ DPAA2_FD_PTA_SIZE + 16));
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+			if (dpaa2_svr_family != SVR_LX2160A) {
+				const struct qbman_fd *next_fd =
+					qbman_result_DQ_fd(dq_storage + 1);
+
+				/* Prefetch Annotation address for the parse
+				 * results.
+				 */
+				rte_prefetch0((DPAA2_IOVA_TO_VADDR(
+					DPAA2_GET_FD_ADDR(next_fd) +
+					DPAA2_FD_PTA_SIZE + 16)));
+			}
+#endif
 
 			if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
 				bufs[num_rx] = eth_sg_fd_to_mbuf(fd,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v6 04/10] net/dpaa2: add default values for Rx params in info
  2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
                           ` (2 preceding siblings ...)
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 03/10] net/dpaa2: do not prefetch annotaion for physical mode Hemant Agrawal
@ 2020-05-08 13:02         ` Hemant Agrawal
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 05/10] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
                           ` (6 subsequent siblings)
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 13:02 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Hemant Agrawal

This patch adds default/preferred Rx/Tx params in dev info,
especially the advertised burst size.
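
For context, a minimal sketch (not part of the patch) of how an
application can pick up these preferred values through the standard
ethdev info query:

#include <rte_ethdev.h>

/* Hypothetical helper: fall back to the PMD-preferred ring sizes and
 * burst size when the application has no explicit configuration.
 */
static int
apply_pmd_defaults(uint16_t port_id, uint16_t *nb_rxd, uint16_t *nb_txd,
                   uint16_t *burst)
{
        struct rte_eth_dev_info dev_info;
        int ret = rte_eth_dev_info_get(port_id, &dev_info);

        if (ret != 0)
                return ret;
        if (dev_info.default_rxportconf.ring_size)
                *nb_rxd = dev_info.default_rxportconf.ring_size;
        if (dev_info.default_txportconf.ring_size)
                *nb_txd = dev_info.default_txportconf.ring_size;
        if (dev_info.default_rxportconf.burst_size)
                *burst = dev_info.default_rxportconf.burst_size;
        return 0;
}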

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c   | 4 ++++
 drivers/net/dpaa/dpaa_ethdev.h   | 1 +
 drivers/net/dpaa2/dpaa2_ethdev.c | 9 +++++++++
 drivers/net/dpaa2/dpaa2_ethdev.h | 2 ++
 4 files changed, 16 insertions(+)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 5f81968d80..56eb5ec47c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -363,6 +363,10 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 					dev_tx_offloads_nodis;
 	dev_info->default_rxportconf.burst_size = DPAA_DEF_RX_BURST_SIZE;
 	dev_info->default_txportconf.burst_size = DPAA_DEF_TX_BURST_SIZE;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_txportconf.ring_size = CGR_TX_CGR_THRESH;
+	dev_info->default_rxportconf.ring_size = CGR_RX_PERFQ_THRESH;
 
 	return 0;
 }
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index da06f1faa1..af9fc2105d 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -42,6 +42,7 @@
 
 /* RX queue tail drop threshold (CGR Based) in frame count */
 #define CGR_RX_PERFQ_THRESH 256
+#define CGR_TX_CGR_THRESH 512
 
 /*max mac filter for memac(8) including primary mac addr*/
 #define DPAA_MAX_MAC_FILTER (MEMAC_NUM_OF_PADDRS + 1)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 4fc550a885..9817c9324b 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -275,6 +275,15 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_vmdq_pools = ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
 
+	dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
+	/* same is rx size for best perf */
+	dev_info->default_txportconf.burst_size = dpaa2_dqrr_size;
+
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_txportconf.ring_size = CONG_ENTER_TX_THRESHOLD;
+	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
+
 	return 0;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 31dca8c7b6..2c49a7f01f 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -24,6 +24,8 @@
 #define MAX_TX_QUEUES		16
 #define MAX_DPNI		8
 
+#define DPAA2_RX_DEFAULT_NBDESC 512
+
 /*default tc to be used for ,congestion, distribution etc configuration. */
 #define DPAA2_DEF_TC		0
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v6 05/10] drivers: dpaa2 enhance portal alloc failure log
  2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
                           ` (3 preceding siblings ...)
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 04/10] net/dpaa2: add default values for Rx params in info Hemant Agrawal
@ 2020-05-08 13:02         ` Hemant Agrawal
  2020-05-08 16:07           ` Ferruh Yigit
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 06/10] net/dpaa2: support UDP dst port based muxing Hemant Agrawal
                           ` (5 subsequent siblings)
  10 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 13:02 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

Update the portal allocation failure log to print the thread id
as well.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  8 ++++++--
 drivers/event/dpaa2/dpaa2_eventdev.c        |  8 ++++++--
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c    | 12 +++++++++---
 drivers/net/dpaa2/dpaa2_ethdev.c            |  4 +++-
 drivers/net/dpaa2/dpaa2_rxtx.c              | 16 ++++++++++++----
 drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c       |  8 ++++++--
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c         | 12 +++++++++---
 7 files changed, 51 insertions(+), 17 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 0919f3bf47..256a9a1955 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1459,7 +1459,9 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 	if (!DPAA2_PER_LCORE_DPIO) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_SEC_ERR("Failure in affining portal");
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1641,7 +1643,9 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 	if (!DPAA2_PER_LCORE_DPIO) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_SEC_ERR("Failure in affining portal");
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 2be6e12f66..a196ad4c64 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -74,7 +74,9 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
 		/* Affine current thread context to a qman portal */
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
-			DPAA2_EVENTDEV_ERR("Failure in affining portal");
+			DPAA2_EVENTDEV_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -273,7 +275,9 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[],
 		/* Affine current thread context to a qman portal */
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
-			DPAA2_EVENTDEV_ERR("Failure in affining portal");
+			DPAA2_EVENTDEV_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 48887beb7e..fa9b53e64d 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -69,7 +69,9 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_MEMPOOL_ERR("Failure in affining portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			goto err1;
 		}
 	}
@@ -198,7 +200,9 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
-			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return;
 		}
 	}
@@ -317,7 +321,9 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
-			DPAA2_MEMPOOL_ERR("Failed to allocate IO portal");
+			DPAA2_MEMPOOL_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return ret;
 		}
 	}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 9817c9324b..0be61cda04 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -896,7 +896,9 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return -EINVAL;
 		}
 	}
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 89a8221cb8..630f8c73c7 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -762,7 +762,9 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal\n");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -882,7 +884,9 @@ uint16_t dpaa2_dev_tx_conf(void *queue)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal\n");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1021,7 +1025,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -1282,7 +1288,9 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failure in affining portal");
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
index 997d1c8739..7c21c6a528 100644
--- a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
+++ b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
@@ -70,7 +70,9 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_CMDIF_ERR("Failure in affining portal\n");
+			DPAA2_CMDIF_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -133,7 +135,9 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_CMDIF_ERR("Failure in affining portal\n");
+			DPAA2_CMDIF_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c905954004..d5202d6522 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -666,7 +666,9 @@ dpdmai_dev_enqueue_multi(struct dpaa2_dpdmai_dev *dpdmai_dev,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -788,7 +790,9 @@ dpdmai_dev_dequeue_multijob_prefetch(
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
@@ -929,7 +933,9 @@ dpdmai_dev_dequeue_multijob_no_prefetch(
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR("Failure in affining portal");
+			DPAA2_QDMA_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
 			return 0;
 		}
 	}
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v6 06/10] net/dpaa2: support UDP dst port based muxing
  2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
                           ` (4 preceding siblings ...)
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 05/10] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
@ 2020-05-08 13:02         ` Hemant Agrawal
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 07/10] net/dpaa2: reduce prints in queue count functions Hemant Agrawal
                           ` (4 subsequent siblings)
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 13:02 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Nipun Gupta

From: Nipun Gupta <nipun.gupta@nxp.com>

This change adds DPDMUX support to bifurcate traffic on
the basis of UDP destination port.

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 drivers/net/dpaa2/dpaa2_mux.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index af90adb828..9ac8806faf 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2020 NXP
  */
 
 #include <sys/queue.h>
@@ -99,6 +99,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	case RTE_FLOW_ITEM_TYPE_IPV4:
 	{
 		const struct rte_flow_item_ipv4 *spec;
+
 		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_IP;
 		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_IP_PROTO;
 		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
@@ -113,10 +114,31 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	}
 	break;
 
+	case RTE_FLOW_ITEM_TYPE_UDP:
+	{
+		const struct rte_flow_item_udp *spec;
+		uint16_t udp_dst_port;
+
+		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_UDP;
+		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_UDP_PORT_DST;
+		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
+		kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
+		kg_cfg.num_extracts = 1;
+
+		spec = (const struct rte_flow_item_udp *)pattern[0]->spec;
+		udp_dst_port = rte_constant_bswap16(spec->hdr.dst_port);
+		memcpy((void *)key_iova, (const void *)&udp_dst_port,
+							sizeof(rte_be16_t));
+		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
+		key_size = sizeof(uint16_t);
+	}
+	break;
+
 	case RTE_FLOW_ITEM_TYPE_ETH:
 	{
 		const struct rte_flow_item_eth *spec;
 		uint16_t eth_type;
+
 		kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_ETH;
 		kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_ETH_TYPE;
 		kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v6 07/10] net/dpaa2: reduce prints in queue count functions
  2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
                           ` (5 preceding siblings ...)
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 06/10] net/dpaa2: support UDP dst port based muxing Hemant Agrawal
@ 2020-05-08 13:02         ` Hemant Agrawal
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 08/10] net/dpaa2: fix cong group id for multiple tcs Hemant Agrawal
                           ` (3 subsequent siblings)
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 13:02 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Hemant Agrawal

Change these logs to datapath (DP) debug level, as they impact the
l3fwd-power application.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 0be61cda04..08f9832eb8 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -891,8 +891,6 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	struct qbman_fq_query_np_rslt state;
 	uint32_t frame_cnt = 0;
 
-	PMD_INIT_FUNC_TRACE();
-
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
@@ -908,7 +906,7 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 	if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
 		frame_cnt = qbman_fq_state_frame_count(&state);
-		DPAA2_PMD_DEBUG("RX frame count for q(%d) is %u",
+		DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
 				rx_queue_id, frame_cnt);
 	}
 	return frame_cnt;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v6 08/10] net/dpaa2: fix cong group id for multiple tcs
  2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
                           ` (6 preceding siblings ...)
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 07/10] net/dpaa2: reduce prints in queue count functions Hemant Agrawal
@ 2020-05-08 13:02         ` Hemant Agrawal
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 09/10] bus/fslmc: fix the size of qman fq desc Hemant Agrawal
                           ` (2 subsequent siblings)
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 13:02 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: stable, Jun Yang

From: Jun Yang <jun.yang@nxp.com>

When using a single TC, the flow id is the same as the congestion group id.
However, in the case of multiple traffic classes, the same flow id values
are reused within each traffic class, which causes incorrect traffic
behavior, e.g. halting of traffic.
This patch changes the code to use the cgid as the index, which works
for single as well as multiple traffic classes.

Fixes: 13b856ac02a8 ("net/dpaa2: support taildrop on frame count basis")
Cc: stable@dpdk.org

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 08f9832eb8..d9960b01f7 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -669,7 +669,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 						DPNI_CP_CONGESTION_GROUP,
 						DPNI_QUEUE_RX,
 						dpaa2_q->tc_index,
-						flow_id, &taildrop);
+						dpaa2_q->cgid, &taildrop);
 		} else {
 			/*enabling per rx queue congestion control */
 			taildrop.threshold = CONG_THRESHOLD_RX_BYTES_Q;
@@ -696,7 +696,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
 					DPNI_CP_CONGESTION_GROUP, DPNI_QUEUE_RX,
 					dpaa2_q->tc_index,
-					flow_id, &taildrop);
+					dpaa2_q->cgid, &taildrop);
 		} else {
 			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
 					DPNI_CP_QUEUE, DPNI_QUEUE_RX,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v6 09/10] bus/fslmc: fix the size of qman fq desc
  2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
                           ` (7 preceding siblings ...)
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 08/10] net/dpaa2: fix cong group id for multiple tcs Hemant Agrawal
@ 2020-05-08 13:02         ` Hemant Agrawal
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 10/10] net/dpaa2: add the support for additional link speeds Hemant Agrawal
  2020-05-08 13:08         ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal (OSS)
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 13:02 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: stable, Hemant Agrawal

Correct the qman fq query descriptor as per the HW-defined size.

Fixes: 6fef517e17cf ("bus/fslmc: add qman HW fq query count API")
Cc: stable@dpdk.org

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 4cd0923acb..34374ae4b6 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -20,7 +20,7 @@ struct qbman_fq_query_desc {
 	uint8_t verb;
 	uint8_t reserved[3];
 	uint32_t fqid;
-	uint8_t reserved2[57];
+	uint8_t reserved2[56];
 };
 
 int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* [dpdk-dev] [PATCH v6 10/10] net/dpaa2: add the support for additional link speeds
  2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
                           ` (8 preceding siblings ...)
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 09/10] bus/fslmc: fix the size of qman fq desc Hemant Agrawal
@ 2020-05-08 13:02         ` Hemant Agrawal
  2020-05-08 13:08         ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal (OSS)
  10 siblings, 0 replies; 109+ messages in thread
From: Hemant Agrawal @ 2020-05-08 13:02 UTC (permalink / raw)
  To: dev, ferruh.yigit; +Cc: Hemant Agrawal

This patch adds support for the additional link speeds
supported by LX2160A platforms.
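
A minimal sketch (not part of the patch) of how an application can
inspect the advertised capability bits; ETH_LINK_SPEED_* are the macro
names in use at the time of this series:

#include <rte_ethdev.h>

/* Hypothetical check: report whether a port advertises 100G capability. */
static int
port_supports_100g(uint16_t port_id)
{
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                return 0;
        return (dev_info.speed_capa & ETH_LINK_SPEED_100G) != 0;
}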

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 doc/guides/nics/features/dpaa2.ini | 2 +-
 drivers/net/dpaa2/dpaa2_ethdev.c   | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/dpaa2.ini b/doc/guides/nics/features/dpaa2.ini
index 6ebbab4b80..c2214fbd50 100644
--- a/doc/guides/nics/features/dpaa2.ini
+++ b/doc/guides/nics/features/dpaa2.ini
@@ -4,7 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
-Speed capabilities   = P
+Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Queue start/stop     = Y
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index d9960b01f7..1bab3b064c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -284,6 +284,13 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->default_txportconf.ring_size = CONG_ENTER_TX_THRESHOLD;
 	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
 
+	if (dpaa2_svr_family == SVR_LX2160A) {
+		dev_info->speed_capa |= ETH_LINK_SPEED_25G |
+				ETH_LINK_SPEED_40G |
+				ETH_LINK_SPEED_50G |
+				ETH_LINK_SPEED_100G;
+	}
+
 	return 0;
 }
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement
  2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
                           ` (9 preceding siblings ...)
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 10/10] net/dpaa2: add the support for additional link speeds Hemant Agrawal
@ 2020-05-08 13:08         ` Hemant Agrawal (OSS)
  2020-05-08 19:32           ` Ferruh Yigit
  10 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal (OSS) @ 2020-05-08 13:08 UTC (permalink / raw)
  To: Hemant Agrawal, dev, ferruh.yigit

Series-Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v6 05/10] drivers: dpaa2 enhance portal alloc failure log
  2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 05/10] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
@ 2020-05-08 16:07           ` Ferruh Yigit
  0 siblings, 0 replies; 109+ messages in thread
From: Ferruh Yigit @ 2020-05-08 16:07 UTC (permalink / raw)
  To: Hemant Agrawal, dev; +Cc: Nipun Gupta, dpdk-techboard

On 5/8/2020 2:02 PM, Hemant Agrawal wrote:
> From: Nipun Gupta <nipun.gupta@nxp.com>
> 
> Update the portal allocation failure log to print the thread id
> as well.
> 
> Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>

Off the topic.

This is the patch 70000 in patchwork!
Thanks to everyone who contributed!

https://patches.dpdk.org/patch/70000/

The historical numbers from DPDK patchwork:
70000 - May    8, 2020 (224 days) [ 7 months, 11 days / 32 weeks ]
60000 - Sept. 27, 2019 (248 days)
50000 - Jan.  22, 2019 (253 days)
40000 - May   14, 2018 (217 days)
30000 - Oct.   9, 2017 (258 days)
20000 - Jan.  25, 2017 (372 days)
10000 - Jan.  20, 2016 (645 days)
00001 - April 16, 2014

Again around ~250 days per 10K patches, but it seems we got a little faster recently.

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement
  2020-05-08 13:08         ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal (OSS)
@ 2020-05-08 19:32           ` Ferruh Yigit
  0 siblings, 0 replies; 109+ messages in thread
From: Ferruh Yigit @ 2020-05-08 19:32 UTC (permalink / raw)
  To: Hemant Agrawal (OSS), Hemant Agrawal, dev

On 5/8/2020 2:08 PM, Hemant Agrawal (OSS) wrote:
> Series-Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> 

Series applied to dpdk-next-net/master, thanks.

^ permalink raw reply	[flat|nested] 109+ messages in thread

end of thread, other threads:[~2020-05-08 19:32 UTC | newest]

Thread overview: 109+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-02 14:58 [dpdk-dev] [PATCH 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
2020-03-02 13:01 ` David Marchand
2020-03-05  9:06   ` Hemant Agrawal (OSS)
2020-03-05  9:09     ` David Marchand
2020-03-05  9:19       ` Hemant Agrawal (OSS)
2020-03-06 10:12         ` David Marchand
2020-03-10 10:36           ` Dodji Seketeli
2020-04-07 10:25             ` Hemant Agrawal
2020-04-07 12:20               ` Thomas Monjalon
2020-04-08  7:20               ` Dodji Seketeli
2020-04-08  7:52                 ` Dodji Seketeli
2020-04-08 12:35                   ` Thomas Monjalon
2020-03-02 14:58 ` [dpdk-dev] [PATCH 01/16] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
2020-03-02 14:58 ` [dpdk-dev] [PATCH 02/16] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
2020-03-02 14:58 ` [dpdk-dev] [PATCH 03/16] bus/fslmc: combine thread specific variables Hemant Agrawal
2020-03-02 14:58 ` [dpdk-dev] [PATCH 04/16] bus/fslmc: rework portal allocation to a per thread basis Hemant Agrawal
2020-03-02 14:58 ` [dpdk-dev] [PATCH 05/16] bus/fslmc: support handle portal alloc failure Hemant Agrawal
2020-03-09 17:00   ` Ferruh Yigit
2020-03-09 17:04     ` Ferruh Yigit
2020-03-02 14:58 ` [dpdk-dev] [PATCH 06/16] bus/fslmc: limit pthread destructor called for dpaa2 only Hemant Agrawal
2020-03-02 14:58 ` [dpdk-dev] [PATCH 07/16] bus/fslmc: support portal migration Hemant Agrawal
2020-03-03 17:43   ` Ferruh Yigit
2020-03-02 14:58 ` [dpdk-dev] [PATCH 08/16] drivers: enhance portal allocation failure log Hemant Agrawal
2020-03-02 14:58 ` [dpdk-dev] [PATCH 09/16] bus/fslmc: rename the cinh read functions used for ls1088 Hemant Agrawal
2020-03-02 14:58 ` [dpdk-dev] [PATCH 10/16] net/dpaa: return error on multiple mp config Hemant Agrawal
2020-03-02 14:58 ` [dpdk-dev] [PATCH 11/16] net/dpaa: enable Tx queue taildrop Hemant Agrawal
2020-03-03 16:59   ` Ferruh Yigit
2020-03-04  8:43     ` Hemant Agrawal (OSS)
2020-03-04  8:49     ` David Marchand
2020-03-03 17:02   ` Ferruh Yigit
2020-03-05  6:49     ` Gagandeep Singh
2020-03-05 14:14       ` Ferruh Yigit
2020-03-02 14:58 ` [dpdk-dev] [PATCH 12/16] net/dpaa: add 2.5G support Hemant Agrawal
2020-03-02 14:58 ` [dpdk-dev] [PATCH 13/16] net/dpaa: update process specific device info Hemant Agrawal
2020-03-02 14:58 ` [dpdk-dev] [PATCH 14/16] bus/dpaa: enable link state interrupt Hemant Agrawal
2020-03-02 14:58 ` [dpdk-dev] [PATCH 15/16] bus/dpaa: enable set link status Hemant Agrawal
2020-03-02 14:58 ` [dpdk-dev] [PATCH 16/16] net/dpaa2: do not prefetch annotaion for physical mode Hemant Agrawal
2020-03-06  9:57 ` [dpdk-dev] [PATCH v2 00/16] NXP DPAAx fixes and enhancements Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 01/16] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 02/16] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 03/16] bus/fslmc: combine thread specific variables Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 04/16] bus/fslmc: rework portal allocation to a per thread basis Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 05/16] bus/fslmc: support handle portal alloc failure Hemant Agrawal
2020-03-13 16:20     ` Ferruh Yigit
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 06/16] bus/fslmc: limit pthread destructor called for dpaa2 only Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 07/16] bus/fslmc: support portal migration Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 08/16] drivers: enhance portal allocation failure log Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 09/16] bus/fslmc: rename the cinh read functions used for ls1088 Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 10/16] net/dpaa: return error on multiple mp config Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 11/16] net/dpaa: enable Tx queue taildrop Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 12/16] net/dpaa: add 2.5G support Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 13/16] net/dpaa: update process specific device info Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 14/16] bus/dpaa: enable link state interrupt Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 15/16] bus/dpaa: enable set link status Hemant Agrawal
2020-03-06  9:57   ` [dpdk-dev] [PATCH v2 16/16] net/dpaa2: do not prefetch annotaion for physical mode Hemant Agrawal
2020-05-04 12:41   ` [dpdk-dev] [PATCH v3 0/8] NXP DPAAx fixes and enhancements Hemant Agrawal
2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 1/8] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
2020-05-06 21:08       ` Ferruh Yigit
2020-05-06 21:09         ` Ferruh Yigit
2020-05-06 21:14       ` Ferruh Yigit
2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 2/8] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 3/8] net/dpaa2: do not prefetch annotaion for physical mode Hemant Agrawal
2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 4/8] net/dpaa2: add default Rx params in devinfo Hemant Agrawal
2020-05-06 21:29       ` Ferruh Yigit
2020-05-07  5:35         ` Hemant Agrawal (OSS)
2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 5/8] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 6/8] net/dpaa2: support UDP dst port based muxing Hemant Agrawal
2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 7/8] net/dpaa2: reduce prints in queue count functions Hemant Agrawal
2020-05-04 12:41     ` [dpdk-dev] [PATCH v3 8/8] net/dpaa2: use cong group id for multiple tcs Hemant Agrawal
2020-05-06 21:38       ` Ferruh Yigit
2020-05-07  5:37         ` Hemant Agrawal (OSS)
2020-05-07 10:46     ` [dpdk-dev] [PATCH v4 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 1/9] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 2/9] net/dpaa2: fix 10g port negotiation issue Hemant Agrawal
2020-05-07 14:36         ` Ferruh Yigit
2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 3/9] net/dpaa2: do not prefetch annotaion for physical mode Hemant Agrawal
2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 4/9] net/dpaa2: add default values for Rx params in info Hemant Agrawal
2020-05-07 14:30         ` Ferruh Yigit
2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 5/9] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
2020-05-07 14:31         ` Ferruh Yigit
2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 6/9] net/dpaa2: support UDP dst port based muxing Hemant Agrawal
2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 7/9] net/dpaa2: reduce prints in queue count functions Hemant Agrawal
2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 8/9] net/dpaa2: fix cong group id for multiple tcs Hemant Agrawal
2020-05-07 14:33         ` Ferruh Yigit
2020-05-07 10:46       ` [dpdk-dev] [PATCH v4 9/9] bus/fslmc: fix the size of qman fq desc Hemant Agrawal
2020-05-08 12:59       ` [dpdk-dev] [PATCH v5 0/9] NXP DPAAx fixes and enhancements Hemant Agrawal
2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 1/9] net/dpaa2: fix 10G port negotiation issue Hemant Agrawal
2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 2/9] net/dpaa2: do not prefetch annotaion for physical mode Hemant Agrawal
2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 3/9] net/dpaa2: add default values for Rx params in info Hemant Agrawal
2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 4/9] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 5/9] net/dpaa2: support UDP dst port based muxing Hemant Agrawal
2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 6/9] net/dpaa2: reduce prints in queue count functions Hemant Agrawal
2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 7/9] net/dpaa2: fix cong group id for multiple tcs Hemant Agrawal
2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 8/9] bus/fslmc: fix the size of qman fq desc Hemant Agrawal
2020-05-08 12:59         ` [dpdk-dev] [PATCH v5 9/9] net/dpaa2: add the support for additional link speeds Hemant Agrawal
2020-05-08 13:02       ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal
2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 01/10] bus/fslmc: fix dereferencing null pointer Hemant Agrawal
2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 02/10] net/dpaa2: fix 10G port negotiation issue Hemant Agrawal
2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 03/10] net/dpaa2: do not prefetch annotaion for physical mode Hemant Agrawal
2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 04/10] net/dpaa2: add default values for Rx params in info Hemant Agrawal
2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 05/10] drivers: dpaa2 enhance portal alloc failure log Hemant Agrawal
2020-05-08 16:07           ` Ferruh Yigit
2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 06/10] net/dpaa2: support UDP dst port based muxing Hemant Agrawal
2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 07/10] net/dpaa2: reduce prints in queue count functions Hemant Agrawal
2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 08/10] net/dpaa2: fix cong group id for multiple tcs Hemant Agrawal
2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 09/10] bus/fslmc: fix the size of qman fq desc Hemant Agrawal
2020-05-08 13:02         ` [dpdk-dev] [PATCH v6 10/10] net/dpaa2: add the support for additional link speeds Hemant Agrawal
2020-05-08 13:08         ` [dpdk-dev] [PATCH v6 00/10] NXP DPAAx fixes and enhancement Hemant Agrawal (OSS)
2020-05-08 19:32           ` Ferruh Yigit
