DPDK patches and discussions
* [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support
@ 2020-03-12 11:18 Nithin Dabilpuram
  2020-03-12 11:18 ` [dpdk-dev] [PATCH 01/11] net/octeontx2: setup link config based on BP level Nithin Dabilpuram
                   ` (13 more replies)
  0 siblings, 14 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-03-12 11:18 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas, dev

Add support for the traffic management API to the OCTEON TX2 PMD.
This support applies to the CN96xx C0 silicon version.

This series depends on http://patchwork.dpdk.org/patch/66344/

Depends-on: series-66344

Krzysztof Kanas (3):
  net/octeontx2: add tm node suspend and resume cb
  net/octeontx2: add tx queue ratelimit callback
  net/octeontx2: add tm capability callbacks

Nithin Dabilpuram (8):
  net/octeontx2: setup link config based on BP level
  net/octeontx2: restructure tm helper functions
  net/octeontx2: add dynamic topology update support
  net/octeontx2: add tm node add and delete cb
  net/octeontx2: add tm hierarchy commit callback
  net/octeontx2: add tm stats and shaper profile cbs
  net/octeontx2: add tm dynamic topology update cb
  net/octeontx2: add tm debug support

 doc/guides/nics/features/octeontx2.ini    |    1 +
 doc/guides/nics/octeontx2.rst             |   15 +
 drivers/common/octeontx2/otx2_dev.h       |    9 +
 drivers/net/octeontx2/otx2_ethdev.c       |    5 +-
 drivers/net/octeontx2/otx2_ethdev.h       |    3 +
 drivers/net/octeontx2/otx2_ethdev_debug.c |  274 +++
 drivers/net/octeontx2/otx2_tm.c           | 2660 ++++++++++++++++++++++++-----
 drivers/net/octeontx2/otx2_tm.h           |  101 +-
 8 files changed, 2588 insertions(+), 480 deletions(-)
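
The callbacks added by this series are driven through the generic rte_tm
API. Below is a minimal sketch of how an application might exercise them;
the profile/node IDs, the 10 Gbps rate and the two-level shape of the
hierarchy are hypothetical (how deep a hierarchy the PMD accepts is
driver-specific), and only standard rte_tm calls are used:

#include <string.h>
#include <rte_tm.h>

static int
tm_setup_example(uint16_t port_id, uint16_t nb_txq)
{
	struct rte_tm_shaper_params sp;
	struct rte_tm_node_params np;
	struct rte_tm_error err;
	uint32_t q;

	/* Shaper profile: 10 Gbps peak (rte_tm rates are in bytes/sec) */
	memset(&sp, 0, sizeof(sp));
	sp.peak.rate = 10000000000ULL / 8;
	sp.peak.size = 256 * 1024;
	if (rte_tm_shaper_profile_add(port_id, 1, &sp, &err))
		return -1;

	/* Root node without a parent, shaped by profile 1 */
	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = 1;
	np.nonleaf.n_sp_priorities = 1;
	if (rte_tm_node_add(port_id, 100, RTE_TM_NODE_ID_NULL, 0, 1,
			    RTE_TM_NODE_LEVEL_ID_ANY, &np, &err))
		return -1;

	/* One leaf per Tx queue; leaf node id == queue id by convention */
	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
	for (q = 0; q < nb_txq; q++)
		if (rte_tm_node_add(port_id, q, 100, 0, 1,
				    RTE_TM_NODE_LEVEL_ID_ANY, &np, &err))
			return -1;

	/* Apply the topology (the hierarchy commit callback of patch 05) */
	return rte_tm_hierarchy_commit(port_id, 1, &err);
}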

-- 
2.8.4



* [dpdk-dev] [PATCH 01/11] net/octeontx2: setup link config based on BP level
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
@ 2020-03-12 11:18 ` Nithin Dabilpuram
  2020-03-12 11:18 ` [dpdk-dev] [PATCH 02/11] net/octeontx2: restructure tm helper functions Nithin Dabilpuram
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-03-12 11:18 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas, dev

Configure NIX_AF_TL3_TL2X_LINKX_CFG using the schq at the
level determined by NIX_AF_PSE_CHANNEL_LEVEL[BP_LEVEL].

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/octeontx2/otx2_ethdev.h |  1 +
 drivers/net/octeontx2/otx2_tm.c     | 16 +++++++++++++++-
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e5684f9..b7d5386 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -304,6 +304,7 @@ struct otx2_eth_dev {
 	/* Contiguous queues */
 	uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
 	uint16_t otx2_tm_root_lvl;
+	uint16_t link_cfg_lvl;
 	uint16_t tm_flags;
 	uint16_t tm_leaf_cnt;
 	struct otx2_nix_tm_node_list node_list;
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index ba615ce..2364e03 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -437,6 +437,16 @@ populate_tm_registers(struct otx2_eth_dev *dev,
 		*reg++ = NIX_AF_TL3X_SCHEDULE(schq);
 		*regval++ = (strict_schedul_prio << 24) | rr_quantum;
 		req->num_regs++;
+
+		/* Link configuration */
+		if (!otx2_dev_is_sdp(dev) &&
+		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
+			*reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
+						nix_get_link(dev));
+			*regval++ = BIT_ULL(12) | nix_get_relchan(dev);
+			req->num_regs++;
+		}
+
 		if (pir.rate && pir.burst) {
 			*reg++ = NIX_AF_TL3X_PIR(schq);
 			*regval++ = shaper2regval(&pir) | 1;
@@ -471,7 +481,10 @@ populate_tm_registers(struct otx2_eth_dev *dev,
 		else
 			*regval++ = (strict_schedul_prio << 24) | rr_quantum;
 		req->num_regs++;
-		if (!otx2_dev_is_sdp(dev)) {
+
+		/* Link configuration */
+		if (!otx2_dev_is_sdp(dev) &&
+		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
 			*reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
 						nix_get_link(dev));
 			*regval++ = BIT_ULL(12) | nix_get_relchan(dev);
@@ -1144,6 +1157,7 @@ nix_tm_send_txsch_alloc_msg(struct otx2_eth_dev *dev)
 		return rc;
 
 	nix_tm_copy_rsp_to_dev(dev, rsp);
+	dev->link_cfg_lvl = rsp->link_cfg_lvl;
 
 	nix_tm_assign_hw_id(dev);
 	return 0;
-- 
2.8.4
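
The link configuration hunks above follow the driver's usual mailbox
pattern for scheduler register writes. Distilled from the diff (all
identifiers come from the code above; mbox, dev, schq and rc are assumed
to be in scope):

	struct nix_txschq_config *req;

	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
	req->lvl = NIX_TXSCH_LVL_TL3;

	/* Queue one reg/regval pair per slot, bump num_regs each time */
	req->reg[req->num_regs] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
						nix_get_link(dev));
	req->regval[req->num_regs] = BIT_ULL(12) | nix_get_relchan(dev);
	req->num_regs++;

	rc = otx2_mbox_process(mbox);	/* send to AF, wait for response */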



* [dpdk-dev] [PATCH 02/11] net/octeontx2: restructure tm helper functions
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
  2020-03-12 11:18 ` [dpdk-dev] [PATCH 01/11] net/octeontx2: setup link config based on BP level Nithin Dabilpuram
@ 2020-03-12 11:18 ` Nithin Dabilpuram
  2020-03-12 11:18 ` [dpdk-dev] [PATCH 03/11] net/octeontx2: add dynamic topology update support Nithin Dabilpuram
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-03-12 11:18 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas, dev

Restructure the traffic manager helper functions by splitting them
into separate sets of register configurations: shaping, scheduling
and topology.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 689 ++++++++++++++++++++++------------------
 drivers/net/octeontx2/otx2_tm.h |  85 ++---
 2 files changed, 417 insertions(+), 357 deletions(-)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 2364e03..057297a 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -94,52 +94,50 @@ nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
 }
 
 static inline uint64_t
-shaper_rate_to_nix(uint64_t cclk_hz, uint64_t cclk_ticks,
-		   uint64_t value, uint64_t *exponent_p,
+shaper_rate_to_nix(uint64_t value, uint64_t *exponent_p,
 		   uint64_t *mantissa_p, uint64_t *div_exp_p)
 {
 	uint64_t div_exp, exponent, mantissa;
 
 	/* Boundary checks */
-	if (value < MIN_SHAPER_RATE(cclk_hz, cclk_ticks) ||
-	    value > MAX_SHAPER_RATE(cclk_hz, cclk_ticks))
+	if (value < MIN_SHAPER_RATE ||
+	    value > MAX_SHAPER_RATE)
 		return 0;
 
-	if (value <= SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, 0)) {
+	if (value <= SHAPER_RATE(0, 0, 0)) {
 		/* Calculate rate div_exp and mantissa using
 		 * the following formula:
 		 *
-		 * value = (cclk_hz * (256 + mantissa)
-		 *              / ((cclk_ticks << div_exp) * 256)
+		 * value = (2E6 * (256 + mantissa)
+		 *              / ((1 << div_exp) * 256))
 		 */
 		div_exp = 0;
 		exponent = 0;
 		mantissa = MAX_RATE_MANTISSA;
 
-		while (value < (cclk_hz / (cclk_ticks << div_exp)))
+		while (value < (NIX_SHAPER_RATE_CONST / (1 << div_exp)))
 			div_exp += 1;
 
 		while (value <
-		       ((cclk_hz * (256 + mantissa)) /
-			((cclk_ticks << div_exp) * 256)))
+		       ((NIX_SHAPER_RATE_CONST * (256 + mantissa)) /
+			((1 << div_exp) * 256)))
 			mantissa -= 1;
 	} else {
 		/* Calculate rate exponent and mantissa using
 		 * the following formula:
 		 *
-		 * value = (cclk_hz * ((256 + mantissa) << exponent)
-		 *              / (cclk_ticks * 256)
+		 * value = (2E6 * ((256 + mantissa) << exponent)) / 256
 		 *
 		 */
 		div_exp = 0;
 		exponent = MAX_RATE_EXPONENT;
 		mantissa = MAX_RATE_MANTISSA;
 
-		while (value < (cclk_hz * (1 << exponent)) / cclk_ticks)
+		while (value < (NIX_SHAPER_RATE_CONST * (1 << exponent)))
 			exponent -= 1;
 
-		while (value < (cclk_hz * ((256 + mantissa) << exponent)) /
-		       (cclk_ticks * 256))
+		while (value < ((NIX_SHAPER_RATE_CONST *
+				((256 + mantissa) << exponent)) / 256))
 			mantissa -= 1;
 	}
 
@@ -155,20 +153,7 @@ shaper_rate_to_nix(uint64_t cclk_hz, uint64_t cclk_ticks,
 		*mantissa_p = mantissa;
 
 	/* Calculate real rate value */
-	return SHAPER_RATE(cclk_hz, cclk_ticks, exponent, mantissa, div_exp);
-}
-
-static inline uint64_t
-lx_shaper_rate_to_nix(uint64_t cclk_hz, uint32_t hw_lvl,
-		      uint64_t value, uint64_t *exponent,
-		      uint64_t *mantissa, uint64_t *div_exp)
-{
-	if (hw_lvl == NIX_TXSCH_LVL_TL1)
-		return shaper_rate_to_nix(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS,
-					  value, exponent, mantissa, div_exp);
-	else
-		return shaper_rate_to_nix(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS,
-					  value, exponent, mantissa, div_exp);
+	return SHAPER_RATE(exponent, mantissa, div_exp);
 }
 
 static inline uint64_t
@@ -207,329 +192,394 @@ shaper_burst_to_nix(uint64_t value, uint64_t *exponent_p,
 	return SHAPER_BURST(exponent, mantissa);
 }
 
-static int
-configure_shaper_cir_pir_reg(struct otx2_eth_dev *dev,
-			     struct otx2_nix_tm_node *tm_node,
-			     struct shaper_params *cir,
-			     struct shaper_params *pir)
-{
-	uint32_t shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
-	struct otx2_nix_tm_shaper_profile *shaper_profile = NULL;
-	struct rte_tm_shaper_params *param;
-
-	shaper_profile_id = tm_node->params.shaper_profile_id;
-
-	shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
-	if (shaper_profile) {
-		param = &shaper_profile->profile;
-		/* Calculate CIR exponent and mantissa */
-		if (param->committed.rate)
-			cir->rate = lx_shaper_rate_to_nix(CCLK_HZ,
-							  tm_node->hw_lvl_id,
-							  param->committed.rate,
-							  &cir->exponent,
-							  &cir->mantissa,
-							  &cir->div_exp);
-
-		/* Calculate PIR exponent and mantissa */
-		if (param->peak.rate)
-			pir->rate = lx_shaper_rate_to_nix(CCLK_HZ,
-							  tm_node->hw_lvl_id,
-							  param->peak.rate,
-							  &pir->exponent,
-							  &pir->mantissa,
-							  &pir->div_exp);
-
-		/* Calculate CIR burst exponent and mantissa */
-		if (param->committed.size)
-			cir->burst = shaper_burst_to_nix(param->committed.size,
-							 &cir->burst_exponent,
-							 &cir->burst_mantissa);
-
-		/* Calculate PIR burst exponent and mantissa */
-		if (param->peak.size)
-			pir->burst = shaper_burst_to_nix(param->peak.size,
-							 &pir->burst_exponent,
-							 &pir->burst_mantissa);
-	}
-
-	return 0;
-}
-
-static int
-send_tm_reqval(struct otx2_mbox *mbox, struct nix_txschq_config *req)
+static void
+shaper_config_to_nix(struct otx2_nix_tm_shaper_profile *profile,
+		     struct shaper_params *cir,
+		     struct shaper_params *pir)
 {
-	int rc;
-
-	if (req->num_regs > MAX_REGS_PER_MBOX_MSG)
-		return -ERANGE;
-
-	rc = otx2_mbox_process(mbox);
-	if (rc)
-		return rc;
-
-	req->num_regs = 0;
-	return 0;
+	struct rte_tm_shaper_params *param;
+
+	if (!profile)
+		return;
+
+	param = &profile->params;
+
+	/* Calculate CIR exponent and mantissa */
+	if (param->committed.rate)
+		cir->rate = shaper_rate_to_nix(param->committed.rate,
+					       &cir->exponent,
+					       &cir->mantissa,
+					       &cir->div_exp);
+
+	/* Calculate PIR exponent and mantissa */
+	if (param->peak.rate)
+		pir->rate = shaper_rate_to_nix(param->peak.rate,
+					       &pir->exponent,
+					       &pir->mantissa,
+					       &pir->div_exp);
+
+	/* Calculate CIR burst exponent and mantissa */
+	if (param->committed.size)
+		cir->burst = shaper_burst_to_nix(param->committed.size,
+						 &cir->burst_exponent,
+						 &cir->burst_mantissa);
+
+	/* Calculate PIR burst exponent and mantissa */
+	if (param->peak.size)
+		pir->burst = shaper_burst_to_nix(param->peak.size,
+						 &pir->burst_exponent,
+						 &pir->burst_mantissa);
 }
 
 static int
-populate_tm_registers(struct otx2_eth_dev *dev,
-		      struct otx2_nix_tm_node *tm_node)
+populate_tm_tl1_default(struct otx2_eth_dev *dev, uint32_t schq)
 {
-	uint64_t strict_schedul_prio, rr_prio;
 	struct otx2_mbox *mbox = dev->mbox;
-	volatile uint64_t *reg, *regval;
-	uint64_t parent = 0, child = 0;
-	struct shaper_params cir, pir;
 	struct nix_txschq_config *req;
+
+	/*
+	 * Default config for TL1.
+	 * For VF this is always ignored.
+	 */
+
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = NIX_TXSCH_LVL_TL1;
+
+	/* Set DWRR quantum */
+	req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq);
+	req->regval[0] = TXSCH_TL1_DFLT_RR_QTM;
+	req->num_regs++;
+
+	req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq);
+	req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1);
+	req->num_regs++;
+
+	req->reg[2] = NIX_AF_TL1X_CIR(schq);
+	req->regval[2] = 0;
+	req->num_regs++;
+
+	return otx2_mbox_process(mbox);
+}
+
+static uint8_t
+prepare_tm_sched_reg(struct otx2_eth_dev *dev,
+		     struct otx2_nix_tm_node *tm_node,
+		     volatile uint64_t *reg, volatile uint64_t *regval)
+{
+	uint64_t strict_prio = tm_node->priority;
+	uint32_t hw_lvl = tm_node->hw_lvl;
+	uint32_t schq = tm_node->hw_id;
 	uint64_t rr_quantum;
-	uint32_t hw_lvl;
-	uint32_t schq;
-	int rc;
+	uint8_t k = 0;
+
+	rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
+
+	/* For children to root, strict prio is default if either
+	 * device root is TL2 or TL1 Static Priority is disabled.
+	 */
+	if (hw_lvl == NIX_TXSCH_LVL_TL2 &&
+	    (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
+	     dev->tm_flags & NIX_TM_TL1_NO_SP))
+		strict_prio = TXSCH_TL1_DFLT_RR_PRIO;
+
+	otx2_tm_dbg("Schedule config node %s(%u) lvl %u id %u, "
+		     "prio 0x%" PRIx64 ", rr_quantum 0x%" PRIx64 " (%p)",
+		     nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
+		     tm_node->id, strict_prio, rr_quantum, tm_node);
+
+	switch (hw_lvl) {
+	case NIX_TXSCH_LVL_SMQ:
+		reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
+		regval[k] = (strict_prio << 24) | rr_quantum;
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
+		regval[k] = (strict_prio << 24) | rr_quantum;
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
+		regval[k] = (strict_prio << 24) | rr_quantum;
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL2:
+		reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
+		regval[k] = (strict_prio << 24) | rr_quantum;
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL1:
+		reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
+		regval[k] = rr_quantum;
+		k++;
+
+		break;
+	}
+
+	return k;
+}
+
+static uint8_t
+prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
+		      struct otx2_nix_tm_shaper_profile *profile,
+		      volatile uint64_t *reg, volatile uint64_t *regval)
+{
+	struct shaper_params cir, pir;
+	uint32_t schq = tm_node->hw_id;
+	uint8_t k = 0;
 
 	memset(&cir, 0, sizeof(cir));
 	memset(&pir, 0, sizeof(pir));
+	shaper_config_to_nix(profile, &cir, &pir);
 
-	/* Skip leaf nodes */
-	if (tm_node->hw_lvl_id == NIX_TXSCH_LVL_CNT)
-		return 0;
+	otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, "
+		    "pir %" PRIu64 "(%" PRIu64 "B),"
+		     " cir %" PRIu64 "(%" PRIu64 "B) (%p)",
+		     nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
+		     tm_node->id, pir.rate, pir.burst,
+		     cir.rate, cir.burst, tm_node);
+
+	switch (tm_node->hw_lvl) {
+	case NIX_TXSCH_LVL_SMQ:
+		/* Configure PIR, CIR */
+		reg[k] = NIX_AF_MDQX_PIR(schq);
+		regval[k] = (pir.rate && pir.burst) ?
+				(shaper2regval(&pir) | 1) : 0;
+		k++;
+
+		reg[k] = NIX_AF_MDQX_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+
+		/* Configure RED ALG */
+		reg[k] = NIX_AF_MDQX_SHAPE(schq);
+		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		/* Configure PIR, CIR */
+		reg[k] = NIX_AF_TL4X_PIR(schq);
+		regval[k] = (pir.rate && pir.burst) ?
+				(shaper2regval(&pir) | 1) : 0;
+		k++;
+
+		reg[k] = NIX_AF_TL4X_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+
+		/* Configure RED algo */
+		reg[k] = NIX_AF_TL4X_SHAPE(schq);
+		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		/* Configure PIR, CIR */
+		reg[k] = NIX_AF_TL3X_PIR(schq);
+		regval[k] = (pir.rate && pir.burst) ?
+				(shaper2regval(&pir) | 1) : 0;
+		k++;
+
+		reg[k] = NIX_AF_TL3X_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+
+		/* Configure RED algo */
+		reg[k] = NIX_AF_TL3X_SHAPE(schq);
+		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL2:
+		/* Configure PIR, CIR */
+		reg[k] = NIX_AF_TL2X_PIR(schq);
+		regval[k] = (pir.rate && pir.burst) ?
+				(shaper2regval(&pir) | 1) : 0;
+		k++;
+
+		reg[k] = NIX_AF_TL2X_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+
+		/* Configure RED algo */
+		reg[k] = NIX_AF_TL2X_SHAPE(schq);
+		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL1:
+		/* Configure CIR */
+		reg[k] = NIX_AF_TL1X_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+		break;
+	}
+
+	return k;
+}
+
+static int
+populate_tm_reg(struct otx2_eth_dev *dev,
+		struct otx2_nix_tm_node *tm_node)
+{
+	struct otx2_nix_tm_shaper_profile *profile;
+	uint64_t regval_mask[MAX_REGS_PER_MBOX_MSG];
+	uint64_t regval[MAX_REGS_PER_MBOX_MSG];
+	uint64_t reg[MAX_REGS_PER_MBOX_MSG];
+	struct otx2_mbox *mbox = dev->mbox;
+	uint64_t parent = 0, child = 0;
+	uint32_t hw_lvl, rr_prio, schq;
+	struct nix_txschq_config *req;
+	int rc = -EFAULT;
+	uint8_t k = 0;
+
+	memset(regval_mask, 0, sizeof(regval_mask));
+	profile = nix_tm_shaper_profile_search(dev,
+					tm_node->params.shaper_profile_id);
+	rr_prio = tm_node->rr_prio;
+	hw_lvl = tm_node->hw_lvl;
+	schq = tm_node->hw_id;
 
 	/* Root node will not have a parent node */
-	if (tm_node->hw_lvl_id == dev->otx2_tm_root_lvl)
+	if (hw_lvl == dev->otx2_tm_root_lvl)
 		parent = tm_node->parent_hw_id;
 	else
 		parent = tm_node->parent->hw_id;
 
 	/* Do we need this trigger to configure TL1 */
 	if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
-	    tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) {
-		schq = parent;
-		/*
-		 * Default config for TL1.
-		 * For VF this is always ignored.
-		 */
-
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = NIX_TXSCH_LVL_TL1;
-
-		/* Set DWRR quantum */
-		req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq);
-		req->regval[0] = TXSCH_TL1_DFLT_RR_QTM;
-		req->num_regs++;
-
-		req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq);
-		req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1);
-		req->num_regs++;
-
-		req->reg[2] = NIX_AF_TL1X_CIR(schq);
-		req->regval[2] = 0;
-		req->num_regs++;
-
-		rc = send_tm_reqval(mbox, req);
+	    hw_lvl == dev->otx2_tm_root_lvl) {
+		rc = populate_tm_tl1_default(dev, parent);
 		if (rc)
 			goto error;
 	}
 
-	if (tm_node->hw_lvl_id != NIX_TXSCH_LVL_SMQ)
+	if (hw_lvl != NIX_TXSCH_LVL_SMQ)
 		child = find_prio_anchor(dev, tm_node->id);
 
-	rr_prio = tm_node->rr_prio;
-	hw_lvl = tm_node->hw_lvl_id;
-	strict_schedul_prio = tm_node->priority;
-	schq = tm_node->hw_id;
-	rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX) /
-		MAX_SCHED_WEIGHT;
-
-	configure_shaper_cir_pir_reg(dev, tm_node, &cir, &pir);
-
-	otx2_tm_dbg("Configure node %p, lvl %u hw_lvl %u, id %u, hw_id %u,"
-		     "parent_hw_id %" PRIx64 ", pir %" PRIx64 ", cir %" PRIx64,
-		     tm_node, tm_node->level_id, hw_lvl,
-		     tm_node->id, schq, parent, pir.rate, cir.rate);
-
-	rc = -EFAULT;
-
+	/* Override default rr_prio when TL1
+	 * Static Priority is disabled
+	 */
+	if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
+	    dev->tm_flags & NIX_TM_TL1_NO_SP) {
+		rr_prio = TXSCH_TL1_DFLT_RR_PRIO;
+		child = 0;
+	}
+
+	otx2_tm_dbg("Topology config node %s(%u)->%s(%lu) lvl %u, id %u"
+		    " prio_anchor %lu rr_prio %u (%p)", nix_hwlvl2str(hw_lvl),
+		    schq, nix_hwlvl2str(hw_lvl + 1), parent, tm_node->lvl,
+		    tm_node->id, child, rr_prio, tm_node);
+
+	/* Prepare Topology and Link config */
 	switch (hw_lvl) {
 	case NIX_TXSCH_LVL_SMQ:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		reg = req->reg;
-		regval = req->regval;
-		req->num_regs = 0;
 
 		/* Set xoff which will be cleared later */
-		*reg++ = NIX_AF_SMQX_CFG(schq);
-		*regval++ = BIT_ULL(50) | ((uint64_t)NIX_MAX_VTAG_INS << 36) |
-				(NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
-		req->num_regs++;
-		*reg++ = NIX_AF_MDQX_PARENT(schq);
-		*regval++ = parent << 16;
-		req->num_regs++;
-		*reg++ = NIX_AF_MDQX_SCHEDULE(schq);
-		*regval++ = (strict_schedul_prio << 24) | rr_quantum;
-		req->num_regs++;
-		if (pir.rate && pir.burst) {
-			*reg++ = NIX_AF_MDQX_PIR(schq);
-			*regval++ = shaper2regval(&pir) | 1;
-			req->num_regs++;
-		}
+		reg[k] = NIX_AF_SMQX_CFG(schq);
+		regval[k] = BIT_ULL(50);
+		regval_mask[k] = ~BIT_ULL(50);
+		k++;
 
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_MDQX_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
-		}
+		/* Parent and schedule conf */
+		reg[k] = NIX_AF_MDQX_PARENT(schq);
+		regval[k] = parent << 16;
+		k++;
 
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	case NIX_TXSCH_LVL_TL4:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		req->num_regs = 0;
-		reg = req->reg;
-		regval = req->regval;
+		/* Parent and schedule conf */
+		reg[k] = NIX_AF_TL4X_PARENT(schq);
+		regval[k] = parent << 16;
+		k++;
+
+		reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
+		regval[k] = (child << 32) | (rr_prio << 1);
+		k++;
 
-		*reg++ = NIX_AF_TL4X_PARENT(schq);
-		*regval++ = parent << 16;
-		req->num_regs++;
-		*reg++ = NIX_AF_TL4X_TOPOLOGY(schq);
-		*regval++ = (child << 32) | (rr_prio << 1);
-		req->num_regs++;
-		*reg++ = NIX_AF_TL4X_SCHEDULE(schq);
-		*regval++ = (strict_schedul_prio << 24) | rr_quantum;
-		req->num_regs++;
-		if (pir.rate && pir.burst) {
-			*reg++ = NIX_AF_TL4X_PIR(schq);
-			*regval++ = shaper2regval(&pir) | 1;
-			req->num_regs++;
-		}
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_TL4X_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
-		}
 		/* Configure TL4 to send to SDP channel instead of CGX/LBK */
 		if (otx2_dev_is_sdp(dev)) {
-			*reg++ = NIX_AF_TL4X_SDP_LINK_CFG(schq);
-			*regval++ = BIT_ULL(12);
-			req->num_regs++;
+			reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
+			regval[k] = BIT_ULL(12);
+			k++;
 		}
-
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	case NIX_TXSCH_LVL_TL3:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		req->num_regs = 0;
-		reg = req->reg;
-		regval = req->regval;
+		/* Parent and schedule conf */
+		reg[k] = NIX_AF_TL3X_PARENT(schq);
+		regval[k] = parent << 16;
+		k++;
 
-		*reg++ = NIX_AF_TL3X_PARENT(schq);
-		*regval++ = parent << 16;
-		req->num_regs++;
-		*reg++ = NIX_AF_TL3X_TOPOLOGY(schq);
-		*regval++ = (child << 32) | (rr_prio << 1);
-		req->num_regs++;
-		*reg++ = NIX_AF_TL3X_SCHEDULE(schq);
-		*regval++ = (strict_schedul_prio << 24) | rr_quantum;
-		req->num_regs++;
+		reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
+		regval[k] = (child << 32) | (rr_prio << 1);
+		k++;
 
 		/* Link configuration */
 		if (!otx2_dev_is_sdp(dev) &&
 		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
-			*reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
+			reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
 						nix_get_link(dev));
-			*regval++ = BIT_ULL(12) | nix_get_relchan(dev);
-			req->num_regs++;
+			regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
+			k++;
 		}
 
-		if (pir.rate && pir.burst) {
-			*reg++ = NIX_AF_TL3X_PIR(schq);
-			*regval++ = shaper2regval(&pir) | 1;
-			req->num_regs++;
-		}
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_TL3X_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
-		}
-
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	case NIX_TXSCH_LVL_TL2:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		req->num_regs = 0;
-		reg = req->reg;
-		regval = req->regval;
+		/* Parent and schedule conf */
+		reg[k] = NIX_AF_TL2X_PARENT(schq);
+		regval[k] = parent << 16;
+		k++;
 
-		*reg++ = NIX_AF_TL2X_PARENT(schq);
-		*regval++ = parent << 16;
-		req->num_regs++;
-		*reg++ = NIX_AF_TL2X_TOPOLOGY(schq);
-		*regval++ = (child << 32) | (rr_prio << 1);
-		req->num_regs++;
-		*reg++ = NIX_AF_TL2X_SCHEDULE(schq);
-		if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2)
-			*regval++ = (1 << 24) | rr_quantum;
-		else
-			*regval++ = (strict_schedul_prio << 24) | rr_quantum;
-		req->num_regs++;
+		reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
+		regval[k] = (child << 32) | (rr_prio << 1);
+		k++;
 
 		/* Link configuration */
 		if (!otx2_dev_is_sdp(dev) &&
 		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
-			*reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
+			reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
 						nix_get_link(dev));
-			*regval++ = BIT_ULL(12) | nix_get_relchan(dev);
-			req->num_regs++;
-		}
-		if (pir.rate && pir.burst) {
-			*reg++ = NIX_AF_TL2X_PIR(schq);
-			*regval++ = shaper2regval(&pir) | 1;
-			req->num_regs++;
-		}
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_TL2X_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
+			regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
+			k++;
 		}
 
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	case NIX_TXSCH_LVL_TL1:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		req->num_regs = 0;
-		reg = req->reg;
-		regval = req->regval;
+		reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
+		regval[k] = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
+		k++;
 
-		*reg++ = NIX_AF_TL1X_SCHEDULE(schq);
-		*regval++ = rr_quantum;
-		req->num_regs++;
-		*reg++ = NIX_AF_TL1X_TOPOLOGY(schq);
-		*regval++ = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
-		req->num_regs++;
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_TL1X_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
-		}
-
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	}
 
+	/* Prepare schedule config */
+	k += prepare_tm_sched_reg(dev, tm_node, &reg[k], &regval[k]);
+
+	/* Prepare shaping config */
+	k += prepare_tm_shaper_reg(tm_node, profile, &reg[k], &regval[k]);
+
+	if (!k)
+		return 0;
+
+	/* Copy and send config mbox */
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = hw_lvl;
+	req->num_regs = k;
+
+	otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
+	otx2_mbox_memcpy(req->regval, regval, sizeof(uint64_t) * k);
+	otx2_mbox_memcpy(req->regval_mask, regval_mask, sizeof(uint64_t) * k);
+
+	rc = otx2_mbox_process(mbox);
+	if (rc)
+		goto error;
+
 	return 0;
 error:
 	otx2_err("Txschq cfg request failed for node %p, rc=%d", tm_node, rc);
@@ -541,13 +591,14 @@ static int
 nix_tm_txsch_reg_config(struct otx2_eth_dev *dev)
 {
 	struct otx2_nix_tm_node *tm_node;
-	uint32_t lvl;
+	uint32_t hw_lvl;
 	int rc = 0;
 
-	for (lvl = 0; lvl < (uint32_t)dev->otx2_tm_root_lvl + 1; lvl++) {
+	for (hw_lvl = 0; hw_lvl <= dev->otx2_tm_root_lvl; hw_lvl++) {
 		TAILQ_FOREACH(tm_node, &dev->node_list, node) {
-			if (tm_node->hw_lvl_id == lvl) {
-				rc = populate_tm_registers(dev, tm_node);
+			if (tm_node->hw_lvl == hw_lvl &&
+			    tm_node->hw_lvl != NIX_TXSCH_LVL_CNT) {
+				rc = populate_tm_reg(dev, tm_node);
 				if (rc)
 					goto exit;
 			}
@@ -637,8 +688,8 @@ nix_tm_update_parent_info(struct otx2_eth_dev *dev)
 static int
 nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 			uint32_t parent_node_id, uint32_t priority,
-			uint32_t weight, uint16_t hw_lvl_id,
-			uint16_t level_id, bool user,
+			uint32_t weight, uint16_t hw_lvl,
+			uint16_t lvl, bool user,
 			struct rte_tm_node_params *params)
 {
 	struct otx2_nix_tm_shaper_profile *shaper_profile;
@@ -655,8 +706,8 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 	if (!tm_node)
 		return -ENOMEM;
 
-	tm_node->level_id = level_id;
-	tm_node->hw_lvl_id = hw_lvl_id;
+	tm_node->lvl = lvl;
+	tm_node->hw_lvl = hw_lvl;
 
 	tm_node->id = node_id;
 	tm_node->priority = priority;
@@ -935,18 +986,18 @@ nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
 			continue;
 
 		if (nix_tm_have_tl1_access(dev) &&
-		    tm_node->hw_lvl_id ==  NIX_TXSCH_LVL_TL1)
+		    tm_node->hw_lvl ==  NIX_TXSCH_LVL_TL1)
 			skip_node = true;
 
 		otx2_tm_dbg("Free hwres for node %u, hwlvl %u, hw_id %u (%p)",
-			    tm_node->id,  tm_node->hw_lvl_id,
+			    tm_node->id,  tm_node->hw_lvl,
 			    tm_node->hw_id, tm_node);
 		/* Free specific HW resource if requested */
 		if (!skip_node && flags_mask &&
 		    tm_node->flags & NIX_TM_NODE_HWRES) {
 			req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
 			req->flags = 0;
-			req->schq_lvl = tm_node->hw_lvl_id;
+			req->schq_lvl = tm_node->hw_lvl;
 			req->schq = tm_node->hw_id;
 			rc = otx2_mbox_process(mbox);
 			if (rc)
@@ -1010,17 +1061,17 @@ nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
 	uint32_t l_id, schq_index;
 
 	otx2_tm_dbg("Assign hw id for child node %u, lvl %u, hw_lvl %u (%p)",
-		    child->id, child->level_id, child->hw_lvl_id, child);
+		    child->id, child->lvl, child->hw_lvl, child);
 
 	child->flags |= NIX_TM_NODE_HWRES;
 
 	/* Process root nodes */
 	if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
-	    child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) {
+	    child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
 		int idx = 0;
 		uint32_t tschq_con_index;
 
-		l_id = child->hw_lvl_id;
+		l_id = child->hw_lvl;
 		tschq_con_index = dev->txschq_contig_index[l_id];
 		hw_id = dev->txschq_contig_list[l_id][tschq_con_index];
 		child->hw_id = hw_id;
@@ -1032,10 +1083,10 @@ nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
 		return 0;
 	}
 	if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL1 &&
-	    child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) {
+	    child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
 		uint32_t tschq_con_index;
 
-		l_id = child->hw_lvl_id;
+		l_id = child->hw_lvl;
 		tschq_con_index = dev->txschq_index[l_id];
 		hw_id = dev->txschq_list[l_id][tschq_con_index];
 		child->hw_id = hw_id;
@@ -1044,7 +1095,7 @@ nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
 	}
 
 	/* Process children with parents */
-	l_id = child->hw_lvl_id;
+	l_id = child->hw_lvl;
 	schq_index = dev->txschq_index[l_id];
 	schq_con_index = dev->txschq_contig_index[l_id];
 
@@ -1069,8 +1120,8 @@ nix_tm_assign_hw_id(struct otx2_eth_dev *dev)
 
 	for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) {
 		TAILQ_FOREACH(parent, &dev->node_list, node) {
-			child_hw_lvl = parent->hw_lvl_id - 1;
-			if (parent->hw_lvl_id != i)
+			child_hw_lvl = parent->hw_lvl - 1;
+			if (parent->hw_lvl != i)
 				continue;
 			TAILQ_FOREACH(child, &dev->node_list, node) {
 				if (!child->parent)
@@ -1087,7 +1138,7 @@ nix_tm_assign_hw_id(struct otx2_eth_dev *dev)
 			 * Explicitly assign id to parent node if it
 			 * doesn't have a parent
 			 */
-			if (parent->hw_lvl_id == dev->otx2_tm_root_lvl)
+			if (parent->hw_lvl == dev->otx2_tm_root_lvl)
 				nix_tm_assign_id_to_node(dev, parent, NULL);
 		}
 	}
@@ -1102,7 +1153,7 @@ nix_tm_count_req_schq(struct otx2_eth_dev *dev,
 	uint8_t contig_count;
 
 	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
-		if (lvl == tm_node->hw_lvl_id) {
+		if (lvl == tm_node->hw_lvl) {
 			req->schq[lvl - 1] += tm_node->rr_num;
 			if (tm_node->max_prio != UINT32_MAX) {
 				contig_count = tm_node->max_prio + 1;
@@ -1111,7 +1162,7 @@ nix_tm_count_req_schq(struct otx2_eth_dev *dev,
 		}
 		if (lvl == dev->otx2_tm_root_lvl &&
 		    dev->otx2_tm_root_lvl && lvl == NIX_TXSCH_LVL_TL2 &&
-		    tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) {
+		    tm_node->hw_lvl == dev->otx2_tm_root_lvl) {
 			req->schq_contig[dev->otx2_tm_root_lvl]++;
 		}
 	}
@@ -1192,7 +1243,7 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 			continue;
 
 		/* Enable xmit on sq */
-		if (tm_node->level_id != OTX2_TM_LVL_QUEUE) {
+		if (tm_node->lvl != OTX2_TM_LVL_QUEUE) {
 			tm_node->flags |= NIX_TM_NODE_ENABLED;
 			continue;
 		}
@@ -1210,8 +1261,7 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 		txq = eth_dev->data->tx_queues[sq];
 
 		smq = tm_node->parent->hw_id;
-		rr_quantum = (tm_node->weight *
-			      NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT;
+		rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
 
 		rc = nix_tm_sw_xon(txq, smq, rr_quantum);
 		if (rc)
@@ -1332,6 +1382,7 @@ void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
 
 int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
 {
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct otx2_eth_dev  *dev = otx2_eth_pmd_priv(eth_dev);
 	uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
 	int rc;
@@ -1347,6 +1398,13 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
 	nix_tm_clear_shaper_profiles(dev);
 	dev->tm_flags = NIX_TM_DEFAULT_TREE;
 
+	/* Disable TL1 Static Priority when VF's are enabled
+	 * as otherwise VF's TL2 reallocation will be needed
+	 * at runtime to support a specific topology of PF.
+	 */
+	if (pci_dev->max_vfs)
+		dev->tm_flags |= NIX_TM_TL1_NO_SP;
+
 	rc = nix_tm_prepare_default_tree(eth_dev);
 	if (rc != 0)
 		return rc;
@@ -1397,15 +1455,14 @@ otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 		tm_node = nix_tm_node_search(dev, sq, true);
 
 	/* Check if we found a valid leaf node */
-	if (!tm_node || tm_node->level_id != OTX2_TM_LVL_QUEUE ||
+	if (!tm_node || tm_node->lvl != OTX2_TM_LVL_QUEUE ||
 	    !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) {
 		return -EIO;
 	}
 
 	/* Get SMQ Id of leaf node's parent */
 	*smq = tm_node->parent->hw_id;
-	*rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX)
-		/ MAX_SCHED_WEIGHT;
+	*rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
 
 	rc = nix_smq_xoff(dev, *smq, false);
 	if (rc)
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 4712b09..ad7727e 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -10,6 +10,7 @@
 #include <rte_tm_driver.h>
 
 #define NIX_TM_DEFAULT_TREE	BIT_ULL(0)
+#define NIX_TM_TL1_NO_SP	BIT_ULL(3)
 
 struct otx2_eth_dev;
 
@@ -27,16 +28,18 @@ struct otx2_nix_tm_node {
 	uint32_t hw_id;
 	uint32_t priority;
 	uint32_t weight;
-	uint16_t level_id;
-	uint16_t hw_lvl_id;
+	uint16_t lvl;
+	uint16_t hw_lvl;
 	uint32_t rr_prio;
 	uint32_t rr_num;
 	uint32_t max_prio;
 	uint32_t parent_hw_id;
-	uint32_t flags;
+	uint32_t flags:16;
 #define NIX_TM_NODE_HWRES	BIT_ULL(0)
 #define NIX_TM_NODE_ENABLED	BIT_ULL(1)
 #define NIX_TM_NODE_USER	BIT_ULL(2)
+	/* Shaper algorithm for RED state @NIX_REDALG_E */
+	uint32_t red_algo:2;
 	struct otx2_nix_tm_node *parent;
 	struct rte_tm_node_params params;
 };
@@ -45,7 +48,7 @@ struct otx2_nix_tm_shaper_profile {
 	TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper;
 	uint32_t shaper_profile_id;
 	uint32_t reference_count;
-	struct rte_tm_shaper_params profile;
+	struct rte_tm_shaper_params params; /* Rate in bits/sec */
 };
 
 struct shaper_params {
@@ -63,6 +66,10 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 
 #define MAX_SCHED_WEIGHT ((uint8_t)~0)
 #define NIX_TM_RR_QUANTUM_MAX (BIT_ULL(24) - 1)
+#define NIX_TM_WEIGHT_TO_RR_QUANTUM(__weight)			\
+		((((__weight) & MAX_SCHED_WEIGHT) *             \
+		  NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT)
+
 
 /* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT  */
 /* = NIX_MAX_HW_MTU */
@@ -73,52 +80,27 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 #define MAX_RATE_EXPONENT 0xf
 #define MAX_RATE_MANTISSA 0xff
 
-/** NIX rate limiter time-wheel resolution */
-#define L1_TIME_WHEEL_CCLK_TICKS 240
-#define LX_TIME_WHEEL_CCLK_TICKS 860
+#define NIX_SHAPER_RATE_CONST ((uint64_t)2E6)
 
-#define CCLK_HZ 1000000000
-
-/* NIX rate calculation
- *	CCLK = coprocessor-clock frequency in MHz
- *	CCLK_TICKS = rate limiter time-wheel resolution
- *
+/* NIX rate calculation in Bits/Sec
  *	PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA])
  *		<< NIX_*_PIR[RATE_EXPONENT]) / 256
- *	PIR = (CCLK / (CCLK_TICKS << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
- *		* PIR_ADD
+ *	PIR = (2E6 * PIR_ADD / (1 << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
  *
  *	CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA])
  *		<< NIX_*_CIR[RATE_EXPONENT]) / 256
- *	CIR = (CCLK / (CCLK_TICKS << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
- *		* CIR_ADD
+ *	CIR = (2E6 * CIR_ADD / (1 << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
  */
-#define SHAPER_RATE(cclk_hz, cclk_ticks, \
-			exponent, mantissa, div_exp) \
-	(((uint64_t)(cclk_hz) * ((256 + (mantissa)) << (exponent))) \
-		/ (((cclk_ticks) << (div_exp)) * 256))
+#define SHAPER_RATE(exponent, mantissa, div_exp) \
+	((NIX_SHAPER_RATE_CONST * ((256 + (mantissa)) << (exponent)))\
+		/ (((1ull << (div_exp)) * 256)))
 
-#define L1_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \
-	SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS, \
-			exponent, mantissa, div_exp)
+/* 96xx rate limits in Bits/Sec */
+#define MIN_SHAPER_RATE \
+	SHAPER_RATE(0, 0, MAX_RATE_DIV_EXP)
 
-#define LX_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \
-	SHAPER_RATE(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS, \
-			exponent, mantissa, div_exp)
-
-/* Shaper rate limits */
-#define MIN_SHAPER_RATE(cclk_hz, cclk_ticks) \
-	SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, MAX_RATE_DIV_EXP)
-
-#define MAX_SHAPER_RATE(cclk_hz, cclk_ticks) \
-	SHAPER_RATE(cclk_hz, cclk_ticks, MAX_RATE_EXPONENT, \
-			MAX_RATE_MANTISSA, 0)
-
-#define MIN_L1_SHAPER_RATE(cclk_hz) \
-	MIN_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS)
-
-#define MAX_L1_SHAPER_RATE(cclk_hz) \
-	MAX_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS)
+#define MAX_SHAPER_RATE \
+	SHAPER_RATE(MAX_RATE_EXPONENT, MAX_RATE_MANTISSA, 0)
 
 /** TM Shaper - low level operations */
 
@@ -150,4 +132,25 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 #define TXSCH_TL1_DFLT_RR_QTM  ((1 << 24) - 1)
 #define TXSCH_TL1_DFLT_RR_PRIO 1
 
+static inline const char *
+nix_hwlvl2str(uint32_t hw_lvl)
+{
+	switch (hw_lvl) {
+	case NIX_TXSCH_LVL_MDQ:
+		return "SMQ/MDQ";
+	case NIX_TXSCH_LVL_TL4:
+		return "TL4";
+	case NIX_TXSCH_LVL_TL3:
+		return "TL3";
+	case NIX_TXSCH_LVL_TL2:
+		return "TL2";
+	case NIX_TXSCH_LVL_TL1:
+		return "TL1";
+	default:
+		break;
+	}
+
+	return "???";
+}
+
 #endif /* __OTX2_TM_H__ */
-- 
2.8.4
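
As a numeric sanity check of the constant-based formula introduced above,
the sketch below reproduces SHAPER_RATE() from otx2_tm.h standalone; the
three encodings evaluated are illustrative:

#include <stdio.h>
#include <inttypes.h>

#define NIX_SHAPER_RATE_CONST ((uint64_t)2E6)
#define SHAPER_RATE(exponent, mantissa, div_exp) \
	((NIX_SHAPER_RATE_CONST * ((256 + (mantissa)) << (exponent))) \
		/ (((1ull << (div_exp)) * 256)))

int main(void)
{
	/* Base encoding (0, 0, 0): 2000000, i.e. 2 Mbps */
	printf("%" PRIu64 "\n", SHAPER_RATE(0, 0, 0));
	/* Max encoding (0xf, 0xff, 0): ~130.8 Gbps */
	printf("%" PRIu64 "\n", SHAPER_RATE(0xf, 0xff, 0));
	/* Min encoding (0, 0, 0xf): ~61 bps */
	printf("%" PRIu64 "\n", SHAPER_RATE(0, 0, 0xf));
	return 0;
}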



* [dpdk-dev] [PATCH 03/11] net/octeontx2: add dynamic topology update support
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
  2020-03-12 11:18 ` [dpdk-dev] [PATCH 01/11] net/octeontx2: setup link config based on BP level Nithin Dabilpuram
  2020-03-12 11:18 ` [dpdk-dev] [PATCH 02/11] net/octeontx2: restructure tm helper functions Nithin Dabilpuram
@ 2020-03-12 11:18 ` Nithin Dabilpuram
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 04/11] net/octeontx2: add tm node add and delete cb Nithin Dabilpuram
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-03-12 11:18 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas, dev

Modify resource allocation and freeing logic to support
dynamic topology commit while traffic is flowing.
This patch also modifies SQ flush to time out based on the minimum
configured shaper rate. SQ flush is further split into pre/post
functions to adhere to the HW spec of 96XX C0.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/common/octeontx2/otx2_dev.h |   9 +
 drivers/net/octeontx2/otx2_ethdev.c |   3 +-
 drivers/net/octeontx2/otx2_ethdev.h |   1 +
 drivers/net/octeontx2/otx2_tm.c     | 538 +++++++++++++++++++++++++++---------
 drivers/net/octeontx2/otx2_tm.h     |   7 +-
 5 files changed, 417 insertions(+), 141 deletions(-)
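
The flush timeout in nix_txq_flush_sq_spin() below is sized for the worst
case of draining a full SQ of maximum-size frames at the minimum shaper
rate. A sketch of that arithmetic, mirroring the diff (input values are
hypothetical):

	/* Returns timeout in 10 us ticks, matching the flush loop's
	 * rte_delay_us(10) step.
	 */
	static inline uint64_t
	sq_flush_timeout(uint64_t nb_desc, uint64_t max_frs_bytes,
			 uint64_t rate_min_bps)
	{
		/* bits queued / bits-per-sec = seconds; * 1E5 => 10 us units */
		uint64_t t = (nb_desc * max_frs_bytes * 8 *
			      (uint64_t)1E5) / rate_min_bps;

		return t ? t : 10000;	/* same 100 ms floor as the patch */
	}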

diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h
index 0b0a949..13b75e1 100644
--- a/drivers/common/octeontx2/otx2_dev.h
+++ b/drivers/common/octeontx2/otx2_dev.h
@@ -46,6 +46,15 @@
 	((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) &&	\
 	 (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
 
+#define otx2_dev_is_96xx_Cx(dev)				\
+	((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) &&	\
+	 (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
+
+#define otx2_dev_is_96xx_C0(dev)				\
+	((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) &&	\
+	 (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) &&	\
+	 (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
+
 struct otx2_dev;
 
 /* Link status callback */
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index e60f490..6896797 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -992,7 +992,7 @@ otx2_nix_tx_queue_release(void *_txq)
 	otx2_nix_dbg("Releasing txq %u", txq->sq);
 
 	/* Flush and disable tm */
-	otx2_nix_tm_sw_xoff(txq, eth_dev->data->dev_started);
+	otx2_nix_sq_flush_pre(txq, eth_dev->data->dev_started);
 
 	/* Free sqb's and disable sq */
 	nix_sq_uninit(txq);
@@ -1001,6 +1001,7 @@ otx2_nix_tx_queue_release(void *_txq)
 		rte_mempool_free(txq->sqb_pool);
 		txq->sqb_pool = NULL;
 	}
+	otx2_nix_sq_flush_post(txq);
 	rte_free(txq);
 }
 
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index b7d5386..6679652 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -307,6 +307,7 @@ struct otx2_eth_dev {
 	uint16_t link_cfg_lvl;
 	uint16_t tm_flags;
 	uint16_t tm_leaf_cnt;
+	uint64_t tm_rate_min;
 	struct otx2_nix_tm_node_list node_list;
 	struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
 	struct otx2_rss_info rss_info;
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 057297a..b6da668 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -59,8 +59,16 @@ static bool
 nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
 {
 	bool is_lbk = otx2_dev_is_lbk(dev);
-	return otx2_dev_is_pf(dev) && !otx2_dev_is_Ax(dev) &&
-		!is_lbk && !dev->maxvf;
+	return otx2_dev_is_pf(dev) && !otx2_dev_is_Ax(dev) && !is_lbk;
+}
+
+static bool
+nix_tm_is_leaf(struct otx2_eth_dev *dev, int lvl)
+{
+	if (nix_tm_have_tl1_access(dev))
+		return (lvl == OTX2_TM_LVL_QUEUE);
+
+	return (lvl == OTX2_TM_LVL_SCH4);
 }
 
 static int
@@ -424,6 +432,48 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 	return k;
 }
 
+static uint8_t
+prepare_tm_sw_xoff(struct otx2_nix_tm_node *tm_node, bool enable,
+		   volatile uint64_t *reg, volatile uint64_t *regval)
+{
+	uint32_t hw_lvl = tm_node->hw_lvl;
+	uint32_t schq = tm_node->hw_id;
+	uint8_t k = 0;
+
+	otx2_tm_dbg("sw xoff config node %s(%u) lvl %u id %u, enable %u (%p)",
+		    nix_hwlvl2str(hw_lvl), schq, tm_node->lvl,
+		    tm_node->id, enable, tm_node);
+
+	regval[k] = enable;
+
+	switch (hw_lvl) {
+	case NIX_TXSCH_LVL_MDQ:
+		reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL2:
+		reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL1:
+		reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
+		k++;
+		break;
+	default:
+		break;
+	}
+
+	return k;
+}
+
 static int
 populate_tm_reg(struct otx2_eth_dev *dev,
 		struct otx2_nix_tm_node *tm_node)
@@ -692,12 +742,13 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 			uint16_t lvl, bool user,
 			struct rte_tm_node_params *params)
 {
-	struct otx2_nix_tm_shaper_profile *shaper_profile;
+	struct otx2_nix_tm_shaper_profile *profile;
 	struct otx2_nix_tm_node *tm_node, *parent_node;
-	uint32_t shaper_profile_id;
+	struct shaper_params cir, pir;
+	uint32_t profile_id;
 
-	shaper_profile_id = params->shaper_profile_id;
-	shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
+	profile_id = params->shaper_profile_id;
+	profile = nix_tm_shaper_profile_search(dev, profile_id);
 
 	parent_node = nix_tm_node_search(dev, parent_node_id, user);
 
@@ -709,6 +760,10 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 	tm_node->lvl = lvl;
 	tm_node->hw_lvl = hw_lvl;
 
+	/* Maintain minimum weight */
+	if (!weight)
+		weight = 1;
+
 	tm_node->id = node_id;
 	tm_node->priority = priority;
 	tm_node->weight = weight;
@@ -720,10 +775,22 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 		tm_node->flags = NIX_TM_NODE_USER;
 	rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
 
-	if (shaper_profile)
-		shaper_profile->reference_count++;
+	if (profile)
+		profile->reference_count++;
+
+	memset(&cir, 0, sizeof(cir));
+	memset(&pir, 0, sizeof(pir));
+	shaper_config_to_nix(profile, &cir, &pir);
+
 	tm_node->parent = parent_node;
 	tm_node->parent_hw_id = UINT32_MAX;
+	/* C0 doesn't support STALL when both PIR & CIR are enabled */
+	if (lvl < OTX2_TM_LVL_QUEUE &&
+	    otx2_dev_is_96xx_Cx(dev) &&
+	    pir.rate && cir.rate)
+		tm_node->red_algo = NIX_REDALG_DISCARD;
+	else
+		tm_node->red_algo = NIX_REDALG_STD;
 
 	TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node);
 
@@ -747,24 +814,67 @@ nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
 }
 
 static int
-nix_smq_xoff(struct otx2_eth_dev *dev, uint16_t smq, bool enable)
+nix_clear_path_xoff(struct otx2_eth_dev *dev,
+		    struct otx2_nix_tm_node *tm_node)
+{
+	struct nix_txschq_config *req;
+	struct otx2_nix_tm_node *p;
+	int rc;
+
+	/* Manipulating SW_XOFF not supported on Ax */
+	if (otx2_dev_is_Ax(dev))
+		return 0;
+
+	/* Enable nodes in path for flush to succeed */
+	if (!nix_tm_is_leaf(dev, tm_node->lvl))
+		p = tm_node;
+	else
+		p = tm_node->parent;
+	while (p) {
+		if (!(p->flags & NIX_TM_NODE_ENABLED) &&
+		    (p->flags & NIX_TM_NODE_HWRES)) {
+			req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+			req->lvl = p->hw_lvl;
+			req->num_regs = prepare_tm_sw_xoff(p, false, req->reg,
+							   req->regval);
+			rc = otx2_mbox_process(dev->mbox);
+			if (rc)
+				return rc;
+
+			p->flags |= NIX_TM_NODE_ENABLED;
+		}
+		p = p->parent;
+	}
+
+	return 0;
+}
+
+static int
+nix_smq_xoff(struct otx2_eth_dev *dev,
+	     struct otx2_nix_tm_node *tm_node,
+	     bool enable)
 {
 	struct otx2_mbox *mbox = dev->mbox;
 	struct nix_txschq_config *req;
+	uint16_t smq;
+	int rc;
+
+	smq = tm_node->hw_id;
+	otx2_tm_dbg("Setting SMQ %u XOFF/FLUSH to %s", smq,
+		    enable ? "enable" : "disable");
+
+	rc = nix_clear_path_xoff(dev, tm_node);
+	if (rc)
+		return rc;
 
 	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
 	req->lvl = NIX_TXSCH_LVL_SMQ;
 	req->num_regs = 1;
 
 	req->reg[0] = NIX_AF_SMQX_CFG(smq);
-	/* Unmodified fields */
-	req->regval[0] = ((uint64_t)NIX_MAX_VTAG_INS << 36) |
-				(NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
-
-	if (enable)
-		req->regval[0] |= BIT_ULL(50) | BIT_ULL(49);
-	else
-		req->regval[0] |= 0;
+	req->regval[0] = enable ? (BIT_ULL(50) | BIT_ULL(49)) : 0;
+	req->regval_mask[0] = enable ?
+				~(BIT_ULL(50) | BIT_ULL(49)) : ~BIT_ULL(50);
 
 	return otx2_mbox_process(mbox);
 }
@@ -780,6 +890,9 @@ otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable)
 	uint64_t aura_handle;
 	int rc;
 
+	otx2_tm_dbg("Setting SQ %u SQB aura FC to %s", txq->sq,
+		    enable ? "enable" : "disable");
+
 	lf = otx2_npa_lf_obj_get();
 	if (!lf)
 		return -EFAULT;
@@ -824,22 +937,41 @@ otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable)
 	return 0;
 }
 
-static void
+static int
 nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
 {
 	uint16_t sqb_cnt, head_off, tail_off;
 	struct otx2_eth_dev *dev = txq->dev;
+	uint64_t wdata, val, prev;
 	uint16_t sq = txq->sq;
-	uint64_t reg, val;
 	int64_t *regaddr;
+	uint64_t timeout;/* 10's of usec */
+
+	/* Wait for enough time based on shaper min rate */
+	timeout = (txq->qconf.nb_desc * NIX_MAX_HW_FRS * 8 * 1E5);
+	timeout = timeout / dev->tm_rate_min;
+	if (!timeout)
+		timeout = 10000;
+
+	wdata = ((uint64_t)sq << 32);
+	regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS);
+	val = otx2_atomic64_add_nosync(wdata, regaddr);
+
+	/* Spin multiple iterations as "txq->fc_cache_pkts" can still
+	 * have space to send pkts even though fc_mem is disabled
+	 */
 
 	while (true) {
-		reg = ((uint64_t)sq << 32);
-		regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
-		val = otx2_atomic64_add_nosync(reg, regaddr);
+		prev = val;
+		rte_delay_us(10);
+		val = otx2_atomic64_add_nosync(wdata, regaddr);
+		/* Continue on error */
+		if (val & BIT_ULL(63))
+			continue;
+
+		if (prev != val)
+			continue;
 
-		regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS);
-		val = otx2_atomic64_add_nosync(reg, regaddr);
 		sqb_cnt = val & 0xFFFF;
 		head_off = (val >> 20) & 0x3F;
 		tail_off = (val >> 28) & 0x3F;
@@ -850,68 +982,94 @@ nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
 			break;
 		}
 
-		rte_pause();
+		/* Timeout */
+		if (!timeout)
+			goto exit;
+		timeout--;
 	}
+
+	return 0;
+exit:
+	return -EFAULT;
 }
 
-int
-otx2_nix_tm_sw_xoff(void *__txq, bool dev_started)
+/* Flush and disable tx queue and its parent SMQ */
+int otx2_nix_sq_flush_pre(void *_txq, bool dev_started)
 {
-	struct otx2_eth_txq *txq = __txq;
-	struct otx2_eth_dev *dev = txq->dev;
-	struct otx2_mbox *mbox = dev->mbox;
-	struct nix_aq_enq_req *req;
-	struct nix_aq_enq_rsp *rsp;
-	uint16_t smq;
+	struct otx2_nix_tm_node *tm_node, *sibling;
+	struct otx2_eth_txq *txq;
+	struct otx2_eth_dev *dev;
+	uint16_t sq;
+	bool user;
 	int rc;
 
-	/* Get smq from sq */
-	req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
-	req->qidx = txq->sq;
-	req->ctype = NIX_AQ_CTYPE_SQ;
-	req->op = NIX_AQ_INSTOP_READ;
-	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
-	if (rc) {
-		otx2_err("Failed to get smq, rc=%d", rc);
-		return -EIO;
+	txq = _txq;
+	dev = txq->dev;
+	sq = txq->sq;
+
+	user = !!(dev->tm_flags & NIX_TM_COMMITTED);
+
+	/* Find the node for this SQ */
+	tm_node = nix_tm_node_search(dev, sq, user);
+	if (!tm_node || !(tm_node->flags & NIX_TM_NODE_ENABLED)) {
+		otx2_err("Invalid node/state for sq %u", sq);
+		return -EFAULT;
 	}
 
-	/* Check if sq is enabled */
-	if (!rsp->sq.ena)
-		return 0;
-
-	smq = rsp->sq.smq;
-
 	/* Enable CGX RXTX to drain pkts */
 	if (!dev_started) {
 		rc = otx2_cgx_rxtx_start(dev);
-		if (rc)
+		if (rc) {
+			otx2_err("cgx start failed, rc=%d", rc);
 			return rc;
-	}
-
-	rc = otx2_nix_sq_sqb_aura_fc(txq, false);
-	if (rc < 0) {
-		otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
-		goto cleanup;
+		}
 	}
 
 	/* Disable smq xoff for case it was enabled earlier */
-	rc = nix_smq_xoff(dev, smq, false);
+	rc = nix_smq_xoff(dev, tm_node->parent, false);
 	if (rc) {
-		otx2_err("Failed to enable smq for sq %u, rc=%d", txq->sq, rc);
-		goto cleanup;
-	}
-
-	/* Wait for sq entries to be flushed */
-	nix_txq_flush_sq_spin(txq);
-
-	/* Flush and enable smq xoff */
-	rc = nix_smq_xoff(dev, smq, true);
-	if (rc) {
-		otx2_err("Failed to disable smq for sq %u, rc=%d", txq->sq, rc);
+		otx2_err("Failed to enable smq %u, rc=%d",
+			 tm_node->parent->hw_id, rc);
 		return rc;
 	}
 
+	/* As per HRM, to disable an SQ, all other SQ's
+	 * that feed to same SMQ must be paused before SMQ flush.
+	 */
+	TAILQ_FOREACH(sibling, &dev->node_list, node) {
+		if (sibling->parent != tm_node->parent)
+			continue;
+		if (!(sibling->flags & NIX_TM_NODE_ENABLED))
+			continue;
+
+		sq = sibling->id;
+		txq = dev->eth_dev->data->tx_queues[sq];
+		if (!txq)
+			continue;
+
+		rc = otx2_nix_sq_sqb_aura_fc(txq, false);
+		if (rc) {
+			otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
+			goto cleanup;
+		}
+
+		/* Wait for sq entries to be flushed */
+		rc = nix_txq_flush_sq_spin(txq);
+		if (rc) {
+			otx2_err("Failed to drain sq %u, rc=%d\n", txq->sq, rc);
+			return rc;
+		}
+	}
+
+	tm_node->flags &= ~NIX_TM_NODE_ENABLED;
+
+	/* Disable and flush */
+	rc = nix_smq_xoff(dev, tm_node->parent, true);
+	if (rc) {
+		otx2_err("Failed to disable smq %u, rc=%d",
+			 tm_node->parent->hw_id, rc);
+		goto cleanup;
+	}
 cleanup:
 	/* Restore cgx state */
 	if (!dev_started)
@@ -920,47 +1078,120 @@ otx2_nix_tm_sw_xoff(void *__txq, bool dev_started)
 	return rc;
 }
 
+int otx2_nix_sq_flush_post(void *_txq)
+{
+	struct otx2_nix_tm_node *tm_node, *sibling;
+	struct otx2_eth_txq *txq = _txq;
+	struct otx2_eth_txq *s_txq;
+	struct otx2_eth_dev *dev;
+	bool once = false;
+	uint16_t sq, s_sq;
+	bool user;
+	int rc;
+
+	dev = txq->dev;
+	sq = txq->sq;
+	user = !!(dev->tm_flags & NIX_TM_COMMITTED);
+
+	/* Find the node for this SQ */
+	tm_node = nix_tm_node_search(dev, sq, user);
+	if (!tm_node) {
+		otx2_err("Invalid node for sq %u", sq);
+		return -EFAULT;
+	}
+
+	/* Enable all the siblings back */
+	TAILQ_FOREACH(sibling, &dev->node_list, node) {
+		if (sibling->parent != tm_node->parent)
+			continue;
+
+		if (sibling->id == sq)
+			continue;
+
+		if (!(sibling->flags & NIX_TM_NODE_ENABLED))
+			continue;
+
+		s_sq = sibling->id;
+		s_txq = dev->eth_dev->data->tx_queues[s_sq];
+		if (!s_txq)
+			continue;
+
+		if (!once) {
+			/* Enable back if any SQ is still present */
+			rc = nix_smq_xoff(dev, tm_node->parent, false);
+			if (rc) {
+				otx2_err("Failed to enable smq %u, rc=%d",
+					 tm_node->parent->hw_id, rc);
+				return rc;
+			}
+			once = true;
+		}
+
+		rc = otx2_nix_sq_sqb_aura_fc(s_txq, true);
+		if (rc) {
+			otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
+			return rc;
+		}
+	}
+
+	return 0;
+}
+
 static int
-nix_tm_sw_xon(struct otx2_eth_txq *txq,
-	      uint16_t smq, uint32_t rr_quantum)
+nix_sq_sched_data(struct otx2_eth_dev *dev,
+		  struct otx2_nix_tm_node *tm_node,
+		  bool rr_quantum_only)
 {
-	struct otx2_eth_dev *dev = txq->dev;
+	struct rte_eth_dev *eth_dev = dev->eth_dev;
 	struct otx2_mbox *mbox = dev->mbox;
+	uint16_t sq = tm_node->id, smq;
 	struct nix_aq_enq_req *req;
+	uint64_t rr_quantum;
 	int rc;
 
-	otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum %u",
-		    txq->sq, txq->sq, rr_quantum);
-	/* Set smq from sq */
+	smq = tm_node->parent->hw_id;
+	rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
+
+	if (rr_quantum_only)
+		otx2_tm_dbg("Update sq(%u) rr_quantum 0x%lx", sq, rr_quantum);
+	else
+		otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum 0x%lx",
+			    sq, smq, rr_quantum);
+
+	if (sq > eth_dev->data->nb_tx_queues)
+		return -EFAULT;
+
 	req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
-	req->qidx = txq->sq;
+	req->qidx = sq;
 	req->ctype = NIX_AQ_CTYPE_SQ;
 	req->op = NIX_AQ_INSTOP_WRITE;
-	req->sq.smq = smq;
+
+	/* smq update only when needed */
+	if (!rr_quantum_only) {
+		req->sq.smq = smq;
+		req->sq_mask.smq = ~req->sq_mask.smq;
+	}
 	req->sq.smq_rr_quantum = rr_quantum;
-	req->sq_mask.smq = ~req->sq_mask.smq;
 	req->sq_mask.smq_rr_quantum = ~req->sq_mask.smq_rr_quantum;
 
 	rc = otx2_mbox_process(mbox);
-	if (rc) {
+	if (rc)
 		otx2_err("Failed to set smq, rc=%d", rc);
-		return -EIO;
-	}
+	return rc;
+}
+
+int otx2_nix_sq_enable(void *_txq)
+{
+	struct otx2_eth_txq *txq = _txq;
+	int rc;
 
 	/* Enable sqb_aura fc */
 	rc = otx2_nix_sq_sqb_aura_fc(txq, true);
-	if (rc < 0) {
+	if (rc) {
 		otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
 		return rc;
 	}
 
-	/* Disable smq xoff */
-	rc = nix_smq_xoff(dev, smq, false);
-	if (rc) {
-		otx2_err("Failed to enable smq for sq %u", txq->sq);
-		return rc;
-	}
-
 	return 0;
 }
 
@@ -968,12 +1199,11 @@ static int
 nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
 		      uint32_t flags, bool hw_only)
 {
-	struct otx2_nix_tm_shaper_profile *shaper_profile;
+	struct otx2_nix_tm_shaper_profile *profile;
 	struct otx2_nix_tm_node *tm_node, *next_node;
 	struct otx2_mbox *mbox = dev->mbox;
 	struct nix_txsch_free_req *req;
-	uint32_t shaper_profile_id;
-	bool skip_node = false;
+	uint32_t profile_id;
 	int rc = 0;
 
 	next_node = TAILQ_FIRST(&dev->node_list);
@@ -985,37 +1215,40 @@ nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
 		if ((tm_node->flags & flags_mask) != flags)
 			continue;
 
-		if (nix_tm_have_tl1_access(dev) &&
-		    tm_node->hw_lvl ==  NIX_TXSCH_LVL_TL1)
-			skip_node = true;
-
-		otx2_tm_dbg("Free hwres for node %u, hwlvl %u, hw_id %u (%p)",
-			    tm_node->id,  tm_node->hw_lvl,
-			    tm_node->hw_id, tm_node);
-		/* Free specific HW resource if requested */
-		if (!skip_node && flags_mask &&
+		if (!nix_tm_is_leaf(dev, tm_node->lvl) &&
+		    tm_node->hw_lvl != NIX_TXSCH_LVL_TL1 &&
 		    tm_node->flags & NIX_TM_NODE_HWRES) {
+			/* Free specific HW resource */
+			otx2_tm_dbg("Free hwres %s(%u) lvl %u id %u (%p)",
+				    nix_hwlvl2str(tm_node->hw_lvl),
+				    tm_node->hw_id, tm_node->lvl,
+				    tm_node->id, tm_node);
+
+			rc = nix_clear_path_xoff(dev, tm_node);
+			if (rc)
+				return rc;
+
 			req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
 			req->flags = 0;
 			req->schq_lvl = tm_node->hw_lvl;
 			req->schq = tm_node->hw_id;
 			rc = otx2_mbox_process(mbox);
 			if (rc)
-				break;
-		} else {
-			skip_node = false;
+				return rc;
+			tm_node->flags &= ~NIX_TM_NODE_HWRES;
 		}
-		tm_node->flags &= ~NIX_TM_NODE_HWRES;
 
 		/* Leave software elements if needed */
 		if (hw_only)
 			continue;
 
-		shaper_profile_id = tm_node->params.shaper_profile_id;
-		shaper_profile =
-			nix_tm_shaper_profile_search(dev, shaper_profile_id);
-		if (shaper_profile)
-			shaper_profile->reference_count--;
+		otx2_tm_dbg("Free node lvl %u id %u (%p)",
+			    tm_node->lvl, tm_node->id, tm_node);
+
+		profile_id = tm_node->params.shaper_profile_id;
+		profile = nix_tm_shaper_profile_search(dev, profile_id);
+		if (profile)
+			profile->reference_count--;
 
 		TAILQ_REMOVE(&dev->node_list, tm_node, node);
 		rte_free(tm_node);
@@ -1060,8 +1293,8 @@ nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
 	uint32_t hw_id, schq_con_index, prio_offset;
 	uint32_t l_id, schq_index;
 
-	otx2_tm_dbg("Assign hw id for child node %u, lvl %u, hw_lvl %u (%p)",
-		    child->id, child->lvl, child->hw_lvl, child);
+	otx2_tm_dbg("Assign hw id for child node %s lvl %u id %u (%p)",
+		    nix_hwlvl2str(child->hw_lvl), child->lvl, child->id, child);
 
 	child->flags |= NIX_TM_NODE_HWRES;
 
@@ -1219,8 +1452,8 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
 	struct otx2_nix_tm_node *tm_node;
-	uint16_t sq, smq, rr_quantum;
 	struct otx2_eth_txq *txq;
+	uint16_t sq;
 	int rc;
 
 	nix_tm_update_parent_info(dev);
@@ -1237,42 +1470,68 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 		return rc;
 	}
 
-	/* Enable xmit as all the topology is ready */
-	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
-		if (tm_node->flags & NIX_TM_NODE_ENABLED)
-			continue;
+	/* Trigger MTU recalculation as SMQ needs the MTU config */
+	if (eth_dev->data->dev_started && eth_dev->data->nb_rx_queues) {
+		rc = otx2_nix_recalc_mtu(eth_dev);
+		if (rc) {
+			otx2_err("TM MTU update failed, rc=%d", rc);
+			return rc;
+		}
+	}
 
-		/* Enable xmit on sq */
-		if (tm_node->lvl != OTX2_TM_LVL_QUEUE) {
+	/* Mark all non-leaf's as enabled */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!nix_tm_is_leaf(dev, tm_node->lvl))
 			tm_node->flags |= NIX_TM_NODE_ENABLED;
+	}
+
+	if (!xmit_enable)
+		return 0;
+
+	/* Update SQ Sched Data while SQ is idle */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!nix_tm_is_leaf(dev, tm_node->lvl))
 			continue;
+
+		rc = nix_sq_sched_data(dev, tm_node, false);
+		if (rc) {
+			otx2_err("SQ %u sched update failed, rc=%d",
+				 tm_node->id, rc);
+			return rc;
+		}
+	}
+
+	/* Finally XON all SMQ's */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
+			continue;
+
+		rc = nix_smq_xoff(dev, tm_node, false);
+		if (rc) {
+			otx2_err("Failed to enable smq %u, rc=%d",
+				 tm_node->hw_id, rc);
+			return rc;
 		}
+	}
 
-		/* Don't enable SMQ or mark as enable */
-		if (!xmit_enable)
+	/* Enable xmit as all the topology is ready */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!nix_tm_is_leaf(dev, tm_node->lvl))
 			continue;
 
 		sq = tm_node->id;
-		if (sq > eth_dev->data->nb_tx_queues) {
-			rc = -EFAULT;
-			break;
-		}
-
 		txq = eth_dev->data->tx_queues[sq];
 
-		smq = tm_node->parent->hw_id;
-		rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
-		rc = nix_tm_sw_xon(txq, smq, rr_quantum);
-		if (rc)
-			break;
+		rc = otx2_nix_sq_enable(txq);
+		if (rc) {
+			otx2_err("TM sw xon failed on SQ %u, rc=%d",
+				 tm_node->id, rc);
+			return rc;
+		}
 		tm_node->flags |= NIX_TM_NODE_ENABLED;
 	}
 
-	if (rc)
-		otx2_err("TM failed to enable xmit on sq %u, rc=%d", sq, rc);
-
-	return rc;
+	return 0;
 }
 
 static int
@@ -1282,7 +1541,7 @@ nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 	uint32_t def = eth_dev->data->nb_tx_queues;
 	struct rte_tm_node_params params;
 	uint32_t leaf_parent, i;
-	int rc = 0;
+	int rc = 0, leaf_level;
 
 	/* Default params */
 	memset(&params, 0, sizeof(params));
@@ -1325,6 +1584,7 @@ nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 			goto exit;
 
 		leaf_parent = def + 4;
+		leaf_level = OTX2_TM_LVL_QUEUE;
 	} else {
 		dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
 		rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
@@ -1356,6 +1616,7 @@ nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 			goto exit;
 
 		leaf_parent = def + 3;
+		leaf_level = OTX2_TM_LVL_SCH4;
 	}
 
 	/* Add leaf nodes */
@@ -1363,7 +1624,7 @@ nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 		rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0,
 					     DEFAULT_RR_WEIGHT,
 					     NIX_TXSCH_LVL_CNT,
-					     OTX2_TM_LVL_QUEUE, false, &params);
+					     leaf_level, false, &params);
 		if (rc)
 			break;
 	}
@@ -1378,6 +1639,7 @@ void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
 
 	TAILQ_INIT(&dev->node_list);
 	TAILQ_INIT(&dev->shaper_profile_list);
+	dev->tm_rate_min = 1E9; /* 1Gbps */
 }
 
 int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
@@ -1455,7 +1717,7 @@ otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 		tm_node = nix_tm_node_search(dev, sq, true);
 
 	/* Check if we found a valid leaf node */
-	if (!tm_node || tm_node->lvl != OTX2_TM_LVL_QUEUE ||
+	if (!tm_node || !nix_tm_is_leaf(dev, tm_node->lvl) ||
 	    !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) {
 		return -EIO;
 	}
@@ -1464,7 +1726,7 @@ otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 	*smq = tm_node->parent->hw_id;
 	*rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
 
-	rc = nix_smq_xoff(dev, *smq, false);
+	rc = nix_smq_xoff(dev, tm_node->parent, false);
 	if (rc)
 		return rc;
 	tm_node->flags |= NIX_TM_NODE_ENABLED;
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index ad7727e..413120a 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -10,6 +10,7 @@
 #include <rte_tm_driver.h>
 
 #define NIX_TM_DEFAULT_TREE	BIT_ULL(0)
+#define NIX_TM_COMMITTED	BIT_ULL(1)
 #define NIX_TM_TL1_NO_SP	BIT_ULL(3)
 
 struct otx2_eth_dev;
@@ -19,7 +20,9 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 			      uint32_t *rr_quantum, uint16_t *smq);
-int otx2_nix_tm_sw_xoff(void *_txq, bool dev_started);
+int otx2_nix_sq_flush_pre(void *_txq, bool dev_started);
+int otx2_nix_sq_flush_post(void *_txq);
+int otx2_nix_sq_enable(void *_txq);
 int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable);
 
 struct otx2_nix_tm_node {
@@ -40,6 +43,7 @@ struct otx2_nix_tm_node {
 #define NIX_TM_NODE_USER	BIT_ULL(2)
 	/* Shaper algorithm for RED state @NIX_REDALG_E */
 	uint32_t red_algo:2;
+
 	struct otx2_nix_tm_node *parent;
 	struct rte_tm_node_params params;
 };
@@ -70,7 +74,6 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 		((((__weight) & MAX_SCHED_WEIGHT) *             \
 		  NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT)
 
-
 /* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT  */
 /* = NIX_MAX_HW_MTU */
 #define DEFAULT_RR_WEIGHT 71
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH 04/11] net/octeontx2: add tm node add and delete cb
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                   ` (2 preceding siblings ...)
  2020-03-12 11:18 ` [dpdk-dev] [PATCH 03/11] net/octeontx2: add dynamic topology update support Nithin Dabilpuram
@ 2020-03-12 11:19 ` Nithin Dabilpuram
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 05/11] net/octeontx2: add tm node suspend and resume cb Nithin Dabilpuram
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-03-12 11:19 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas, dev

Add support for the Traffic Management callbacks "node_add"
and "node_delete". These callbacks do not support dynamic
node addition or deletion post hierarchy commit.
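
For reference, below is a hedged sketch of driving these callbacks
through the generic rte_tm API (port/node ids are illustrative; a
full hierarchy on this device needs one node per scheduling level
between root and queue):

	#include <string.h>
	#include <rte_tm.h>

	struct rte_tm_node_params params;
	struct rte_tm_error error;
	uint16_t port_id = 0;	/* hypothetical port */
	uint32_t id = 1000;	/* hypothetical non-leaf node id */

	memset(&params, 0, sizeof(params));
	params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;

	/* Root scheduler node, no parent */
	rte_tm_node_add(port_id, id, RTE_TM_NODE_ID_NULL, 0, 1,
			RTE_TM_NODE_LEVEL_ID_ANY, &params, &error);

	/* ... one node per intermediate level, ids id + 1 .. id + 4 ... */

	/* Leaf node: ids below nb_tx_queues map 1:1 to SQs and
	 * must use priority 0.
	 */
	rte_tm_node_add(port_id, 0, id + 4, 0, 1,
			RTE_TM_NODE_LEVEL_ID_ANY, &params, &error);

	/* Deletion is only permitted before hierarchy commit */
	rte_tm_node_delete(port_id, 0, &error);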

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 271 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.h |   2 +
 2 files changed, 273 insertions(+)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index b6da668..579265c 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1534,6 +1534,277 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 	return 0;
 }
 
+static uint16_t
+nix_tm_lvl2nix(struct otx2_eth_dev *dev, uint32_t lvl)
+{
+	if (nix_tm_have_tl1_access(dev)) {
+		switch (lvl) {
+		case OTX2_TM_LVL_ROOT:
+			return NIX_TXSCH_LVL_TL1;
+		case OTX2_TM_LVL_SCH1:
+			return NIX_TXSCH_LVL_TL2;
+		case OTX2_TM_LVL_SCH2:
+			return NIX_TXSCH_LVL_TL3;
+		case OTX2_TM_LVL_SCH3:
+			return NIX_TXSCH_LVL_TL4;
+		case OTX2_TM_LVL_SCH4:
+			return NIX_TXSCH_LVL_SMQ;
+		default:
+			return NIX_TXSCH_LVL_CNT;
+		}
+	} else {
+		switch (lvl) {
+		case OTX2_TM_LVL_ROOT:
+			return NIX_TXSCH_LVL_TL2;
+		case OTX2_TM_LVL_SCH1:
+			return NIX_TXSCH_LVL_TL3;
+		case OTX2_TM_LVL_SCH2:
+			return NIX_TXSCH_LVL_TL4;
+		case OTX2_TM_LVL_SCH3:
+			return NIX_TXSCH_LVL_SMQ;
+		default:
+			return NIX_TXSCH_LVL_CNT;
+		}
+	}
+}
+
+static uint16_t
+nix_max_prio(struct otx2_eth_dev *dev, uint16_t hw_lvl)
+{
+	if (hw_lvl >= NIX_TXSCH_LVL_CNT)
+		return 0;
+
+	/* MDQ doesn't support SP */
+	if (hw_lvl == NIX_TXSCH_LVL_MDQ)
+		return 0;
+
+	/* PF's TL1 with VF's enabled doesn't support SP */
+	if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
+	    (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
+	     (dev->tm_flags & NIX_TM_TL1_NO_SP)))
+		return 0;
+
+	return TXSCH_TLX_SP_PRIO_MAX - 1;
+}
+
+static int
+validate_prio(struct otx2_eth_dev *dev, uint32_t lvl,
+	      uint32_t parent_id, uint32_t priority,
+	      struct rte_tm_error *error)
+{
+	uint8_t priorities[TXSCH_TLX_SP_PRIO_MAX];
+	struct otx2_nix_tm_node *tm_node;
+	uint32_t rr_num = 0;
+	int i;
+
+	/* Validate priority against max */
+	if (priority > nix_max_prio(dev, nix_tm_lvl2nix(dev, lvl - 1))) {
+		error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
+		error->message = "unsupported priority value";
+		return -EINVAL;
+	}
+
+	if (parent_id == RTE_TM_NODE_ID_NULL)
+		return 0;
+
+	memset(priorities, 0, TXSCH_TLX_SP_PRIO_MAX);
+	priorities[priority] = 1;
+
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!tm_node->parent)
+			continue;
+
+		if (!(tm_node->flags & NIX_TM_NODE_USER))
+			continue;
+
+		if (tm_node->parent->id != parent_id)
+			continue;
+
+		priorities[tm_node->priority]++;
+	}
+
+	for (i = 0; i < TXSCH_TLX_SP_PRIO_MAX; i++)
+		if (priorities[i] > 1)
+			rr_num++;
+
+	/* At most one RR group per parent */
+	if (rr_num > 1) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+		error->message = "multiple DWRR node priority";
+		return -EINVAL;
+	}
+
+	/* Check for previous priority to avoid holes in priorities */
+	if (priority && !priorities[priority - 1]) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+		error->message = "priority not in order";
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
+		uint32_t parent_node_id, uint32_t priority,
+		uint32_t weight, uint32_t lvl,
+		struct rte_tm_node_params *params,
+		struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *parent_node;
+	int rc, clear_on_fail = 0;
+	uint32_t exp_next_lvl;
+	uint16_t hw_lvl;
+
+	/* we don't support dynamic updates */
+	if (dev->tm_flags & NIX_TM_COMMITTED) {
+		error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
+		error->message = "dynamic update not supported";
+		return -EIO;
+	}
+
+	/* Leaf nodes must all have priority 0 */
+	if (nix_tm_is_leaf(dev, lvl) && priority != 0) {
+		error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
+		error->message = "queue shapers must be priority 0";
+		return -EIO;
+	}
+
+	parent_node = nix_tm_node_search(dev, parent_node_id, true);
+
+	/* find the right level */
+	if (lvl == RTE_TM_NODE_LEVEL_ID_ANY) {
+		if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+			lvl = OTX2_TM_LVL_ROOT;
+		} else if (parent_node) {
+			lvl = parent_node->lvl + 1;
+		} else {
+			/* Neither a proper parent nor a proper level id given */
+			error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+			error->message = "invalid parent node id";
+			return -ERANGE;
+		}
+	}
+
+	/* Translate rte_tm level id's to nix hw level id's */
+	hw_lvl = nix_tm_lvl2nix(dev, lvl);
+	if (hw_lvl == NIX_TXSCH_LVL_CNT &&
+	    !nix_tm_is_leaf(dev, lvl)) {
+		error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
+		error->message = "invalid level id";
+		return -ERANGE;
+	}
+
+	if (node_id < dev->tm_leaf_cnt)
+		exp_next_lvl = NIX_TXSCH_LVL_SMQ;
+	else
+		exp_next_lvl = hw_lvl + 1;
+
+	/* Check that a valid parent exists at the expected level */
+	if (hw_lvl != dev->otx2_tm_root_lvl &&
+	    (!parent_node || parent_node->hw_lvl != exp_next_lvl)) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+		error->message = "invalid parent node id";
+		return -EINVAL;
+	}
+
+	/* Check if a node already exists */
+	if (nix_tm_node_search(dev, node_id, true)) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "node already exists";
+		return -EINVAL;
+	}
+
+	/* Check if shaper profile exists for non leaf node */
+	if (!nix_tm_is_leaf(dev, lvl) &&
+	    params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE &&
+	    !nix_tm_shaper_profile_search(dev, params->shaper_profile_id)) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "invalid shaper profile";
+		return -EINVAL;
+	}
+
+	/* Check for a second DWRR group in siblings or holes in priority */
+	if (validate_prio(dev, lvl, parent_node_id, priority, error))
+		return -EINVAL;
+
+	if (weight > MAX_SCHED_WEIGHT) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+		error->message = "max weight exceeded";
+		return -EINVAL;
+	}
+
+	rc = nix_tm_node_add_to_list(dev, node_id, parent_node_id,
+				     priority, weight, hw_lvl,
+				     lvl, true, params);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		/* cleanup user added nodes */
+		if (clear_on_fail)
+			nix_tm_free_resources(dev, NIX_TM_NODE_USER,
+					      NIX_TM_NODE_USER, false);
+		error->message = "failed to add node";
+		return rc;
+	}
+	error->type = RTE_TM_ERROR_TYPE_NONE;
+	return 0;
+}
+
+static int
+nix_tm_node_delete(struct rte_eth_dev *eth_dev, uint32_t node_id,
+		   struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node, *child_node;
+	struct otx2_nix_tm_shaper_profile *profile;
+	uint32_t profile_id;
+
+	/* we don't support dynamic updates yet */
+	if (dev->tm_flags & NIX_TM_COMMITTED) {
+		error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
+		error->message = "hierarchy exists";
+		return -EIO;
+	}
+
+	if (node_id == RTE_TM_NODE_ID_NULL) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "invalid node id";
+		return -EINVAL;
+	}
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	/* Check for any existing children */
+	TAILQ_FOREACH(child_node, &dev->node_list, node) {
+		if (child_node->parent == tm_node) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+			error->message = "children exist";
+			return -EINVAL;
+		}
+	}
+
+	/* Remove shaper profile reference */
+	profile_id = tm_node->params.shaper_profile_id;
+	profile = nix_tm_shaper_profile_search(dev, profile_id);
+	profile->reference_count--;
+
+	TAILQ_REMOVE(&dev->node_list, tm_node, node);
+	rte_free(tm_node);
+	return 0;
+}
+
+const struct rte_tm_ops otx2_tm_ops = {
+	.node_add = nix_tm_node_add,
+	.node_delete = nix_tm_node_delete,
+};
+
 static int
 nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 {
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 413120a..ebb4e90 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -135,6 +135,8 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 #define TXSCH_TL1_DFLT_RR_QTM  ((1 << 24) - 1)
 #define TXSCH_TL1_DFLT_RR_PRIO 1
 
+#define TXSCH_TLX_SP_PRIO_MAX 10
+
 static inline const char *
 nix_hwlvl2str(uint32_t hw_lvl)
 {
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH 05/11] net/octeontx2: add tm node suspend and resume cb
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                   ` (3 preceding siblings ...)
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 04/11] net/octeontx2: add tm node add and delete cb Nithin Dabilpuram
@ 2020-03-12 11:19 ` Nithin Dabilpuram
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 06/11] net/octeontx2: add tm hierarchy commit callback Nithin Dabilpuram
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-03-12 11:19 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas, dev

From: Krzysztof Kanas <kkanas@marvell.com>

Add TM support to suspend and resume nodes post hierarchy
commit.
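
A hedged usage sketch via the generic rte_tm API, valid only after
hierarchy commit (port/node ids are illustrative):

	struct rte_tm_error error;

	/* SW XOFF the scheduler node; traffic through it pauses */
	if (rte_tm_node_suspend(port_id, node_id, &error))
		printf("suspend failed: %s\n", error.message);

	/* Clear SW XOFF so the node resumes scheduling */
	rte_tm_node_resume(port_id, node_id, &error);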

Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 81 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 81 insertions(+)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 579265c..175d1d5 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1534,6 +1534,28 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 	return 0;
 }
 
+static int
+send_tm_reqval(struct otx2_mbox *mbox,
+	       struct nix_txschq_config *req,
+	       struct rte_tm_error *error)
+{
+	int rc;
+
+	if (!req->num_regs ||
+	    req->num_regs > MAX_REGS_PER_MBOX_MSG) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "invalid config";
+		return -EIO;
+	}
+
+	rc = otx2_mbox_process(mbox);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+	}
+	return rc;
+}
+
 static uint16_t
 nix_tm_lvl2nix(struct otx2_eth_dev *dev, uint32_t lvl)
 {
@@ -1800,9 +1822,68 @@ nix_tm_node_delete(struct rte_eth_dev *eth_dev, uint32_t node_id,
 	return 0;
 }
 
+static int
+nix_tm_node_suspend_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			   struct rte_tm_error *error, bool suspend)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct otx2_nix_tm_node *tm_node;
+	struct nix_txschq_config *req;
+	uint16_t flags;
+	int rc;
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "hierarchy doesn't exist";
+		return -EINVAL;
+	}
+
+	flags = tm_node->flags;
+	flags = suspend ? (flags & ~NIX_TM_NODE_ENABLED) :
+		(flags | NIX_TM_NODE_ENABLED);
+
+	if (tm_node->flags == flags)
+		return 0;
+
+	/* send mbox for state change */
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+
+	req->lvl = tm_node->hw_lvl;
+	req->num_regs = prepare_tm_sw_xoff(tm_node, suspend,
+					   req->reg, req->regval);
+	rc = send_tm_reqval(mbox, req, error);
+	if (!rc)
+		tm_node->flags = flags;
+	return rc;
+}
+
+static int
+nix_tm_node_suspend(struct rte_eth_dev *eth_dev, uint32_t node_id,
+		    struct rte_tm_error *error)
+{
+	return nix_tm_node_suspend_resume(eth_dev, node_id, error, true);
+}
+
+static int
+nix_tm_node_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
+		   struct rte_tm_error *error)
+{
+	return nix_tm_node_suspend_resume(eth_dev, node_id, error, false);
+}
+
 const struct rte_tm_ops otx2_tm_ops = {
 	.node_add = nix_tm_node_add,
 	.node_delete = nix_tm_node_delete,
+	.node_suspend = nix_tm_node_suspend,
+	.node_resume = nix_tm_node_resume,
 };
 
 static int
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH 06/11] net/octeontx2: add tm hierarchy commit callback
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                   ` (4 preceding siblings ...)
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 05/11] net/octeontx2: add tm node suspend and resume cb Nithin Dabilpuram
@ 2020-03-12 11:19 ` Nithin Dabilpuram
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 07/11] net/octeontx2: add tm stats and shaper profile cbs Nithin Dabilpuram
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-03-12 11:19 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas, dev

Add TM hierarchy commit callback to support enabling a
newly created topology.
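
A hedged sketch of the application-side call via the generic rte_tm
API; clear_on_fail = 1 asks the driver to free the user-created
nodes if the commit fails:

	struct rte_tm_error error;

	if (rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, &error))
		printf("commit failed: %s\n", error.message);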

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 170 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 170 insertions(+)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 175d1d5..ae779a5 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1668,6 +1668,101 @@ validate_prio(struct otx2_eth_dev *dev, uint32_t lvl,
 }
 
 static int
+nix_xmit_disable(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
+	uint16_t sqb_cnt, head_off, tail_off;
+	struct otx2_nix_tm_node *tm_node;
+	struct otx2_eth_txq *txq;
+	uint64_t wdata, val;
+	int i, rc;
+
+	otx2_tm_dbg("Disabling xmit on %s", eth_dev->data->name);
+
+	/* Enable CGX RXTX to drain pkts */
+	if (!eth_dev->data->dev_started) {
+		rc = otx2_cgx_rxtx_start(dev);
+		if (rc)
+			return rc;
+	}
+
+	/* XON all SMQ's */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
+			continue;
+		if (!(tm_node->flags & NIX_TM_NODE_HWRES))
+			continue;
+
+		rc = nix_smq_xoff(dev, tm_node, false);
+		if (rc) {
+			otx2_err("Failed to enable smq %u, rc=%d",
+				 tm_node->hw_id, rc);
+			goto cleanup;
+		}
+	}
+
+	/* Flush all tx queues */
+	for (i = 0; i < sq_cnt; i++) {
+		txq = eth_dev->data->tx_queues[i];
+
+		rc = otx2_nix_sq_sqb_aura_fc(txq, false);
+		if (rc) {
+			otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
+			goto cleanup;
+		}
+
+		/* Wait for sq entries to be flushed */
+		rc = nix_txq_flush_sq_spin(txq);
+		if (rc) {
+			otx2_err("Failed to drain sq, rc=%d\n", rc);
+			goto cleanup;
+		}
+	}
+
+	/* XOFF & flush all SMQs. The HRM mandates that all
+	 * SQs be empty before an SMQ flush is issued.
+	 */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
+			continue;
+		if (!(tm_node->flags & NIX_TM_NODE_HWRES))
+			continue;
+
+		rc = nix_smq_xoff(dev, tm_node, true);
+		if (rc) {
+			otx2_err("Failed to enable smq %u, rc=%d",
+				 tm_node->hw_id, rc);
+			goto cleanup;
+		}
+	}
+
+	/* Verify sanity of all tx queues */
+	for (i = 0; i < sq_cnt; i++) {
+		txq = eth_dev->data->tx_queues[i];
+
+		wdata = ((uint64_t)txq->sq << 32);
+		val = otx2_atomic64_add_nosync(wdata,
+			       (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS));
+
+		sqb_cnt = val & 0xFFFF;
+		head_off = (val >> 20) & 0x3F;
+		tail_off = (val >> 28) & 0x3F;
+
+		if (sqb_cnt > 1 || head_off != tail_off ||
+		    (*txq->fc_mem != txq->nb_sqb_bufs))
+			otx2_err("Failed to gracefully flush sq %u", txq->sq);
+	}
+
+cleanup:
+	/* restore cgx state */
+	if (!eth_dev->data->dev_started)
+		rc |= otx2_cgx_rxtx_stop(dev);
+
+	return rc;
+}
+
+static int
 nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		uint32_t parent_node_id, uint32_t priority,
 		uint32_t weight, uint32_t lvl,
@@ -1879,11 +1974,86 @@ nix_tm_node_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
 	return nix_tm_node_suspend_resume(eth_dev, node_id, error, false);
 }
 
+static int
+nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
+			int clear_on_fail,
+			struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node;
+	uint32_t leaf_cnt = 0;
+	int rc;
+
+	if (dev->tm_flags & NIX_TM_COMMITTED) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "hierarchy exists";
+		return -EINVAL;
+	}
+
+	/* Check if we have all the leaf nodes */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (tm_node->flags & NIX_TM_NODE_USER &&
+		    tm_node->id < dev->tm_leaf_cnt)
+			leaf_cnt++;
+	}
+
+	if (leaf_cnt != dev->tm_leaf_cnt) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "incomplete hierarchy";
+		return -EINVAL;
+	}
+
+	/*
+	 * Disable xmit; it will be re-enabled once the
+	 * new topology is committed.
+	 */
+	rc = nix_xmit_disable(eth_dev);
+	if (rc) {
+		otx2_err("failed to disable TX, rc=%d", rc);
+		return -EIO;
+	}
+
+	/* Delete default/ratelimit tree */
+	if (dev->tm_flags & (NIX_TM_DEFAULT_TREE)) {
+		rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER, 0, false);
+		if (rc) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			error->message = "failed to free default resources";
+			return rc;
+		}
+		dev->tm_flags &= ~(NIX_TM_DEFAULT_TREE);
+	}
+
+	/* Free up user alloc'ed resources */
+	rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER,
+				   NIX_TM_NODE_USER, true);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "failed to free user resources";
+		return rc;
+	}
+
+	rc = nix_tm_alloc_resources(eth_dev, true);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "alloc resources failed";
+		/* TODO should we restore default config ? */
+		if (clear_on_fail)
+			nix_tm_free_resources(dev, 0, 0, false);
+		return rc;
+	}
+
+	error->type = RTE_TM_ERROR_TYPE_NONE;
+	dev->tm_flags |= NIX_TM_COMMITTED;
+	return 0;
+}
+
 const struct rte_tm_ops otx2_tm_ops = {
 	.node_add = nix_tm_node_add,
 	.node_delete = nix_tm_node_delete,
 	.node_suspend = nix_tm_node_suspend,
 	.node_resume = nix_tm_node_resume,
+	.hierarchy_commit = nix_tm_hierarchy_commit,
 };
 
 static int
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH 07/11] net/octeontx2: add tm stats and shaper profile cbs
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                   ` (5 preceding siblings ...)
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 06/11] net/octeontx2: add tm hierarchy commit callback Nithin Dabilpuram
@ 2020-03-12 11:19 ` Nithin Dabilpuram
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 08/11] net/octeontx2: add tm dynamic topology update cb Nithin Dabilpuram
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-03-12 11:19 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas, dev

Add TM support for stats read and private shaper
profile addition or deletion.
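
A hedged sketch via the generic rte_tm API (ids/values are
illustrative; per rte_tm, shaper rates are given in bytes per
second, which this driver converts to bits per second internally):

	struct rte_tm_shaper_params sp;
	struct rte_tm_node_stats stats;
	struct rte_tm_error error;
	uint64_t mask;

	memset(&sp, 0, sizeof(sp));
	sp.peak.rate = 100 * 1000 * 1000 / 8;	/* 100 Mbps */
	sp.peak.size = 32 * 1024;		/* burst, in bytes */

	rte_tm_shaper_profile_add(port_id, 10 /* profile id */, &sp, &error);

	/* Read and clear the leaf (SQ) packet/byte counters */
	rte_tm_node_stats_read(port_id, 0 /* SQ 0 */, &stats, &mask,
			       1 /* clear */, &error);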

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 271 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.h |   4 +
 2 files changed, 275 insertions(+)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index ae779a5..6cc07fc 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1668,6 +1668,47 @@ validate_prio(struct otx2_eth_dev *dev, uint32_t lvl,
 }
 
 static int
+read_tm_reg(struct otx2_mbox *mbox, uint64_t reg,
+	    uint64_t *regval, uint32_t hw_lvl)
+{
+	volatile struct nix_txschq_config *req;
+	struct nix_txschq_config *rsp;
+	int rc;
+
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->read = 1;
+	req->lvl = hw_lvl;
+	req->reg[0] = reg;
+	req->num_regs = 1;
+
+	rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
+	if (rc)
+		return rc;
+	*regval = rsp->regval[0];
+	return 0;
+}
+
+/* Search for min rate in topology */
+static void
+nix_tm_shaper_profile_update_min(struct otx2_eth_dev *dev)
+{
+	struct otx2_nix_tm_shaper_profile *profile;
+	uint64_t rate_min = 1E9; /* 1 Gbps */
+
+	TAILQ_FOREACH(profile, &dev->shaper_profile_list, shaper) {
+		if (profile->params.peak.rate &&
+		    profile->params.peak.rate < rate_min)
+			rate_min = profile->params.peak.rate;
+
+		if (profile->params.committed.rate &&
+		    profile->params.committed.rate < rate_min)
+			rate_min = profile->params.committed.rate;
+	}
+
+	dev->tm_rate_min = rate_min;
+}
+
+static int
 nix_xmit_disable(struct rte_eth_dev *eth_dev)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
@@ -1763,6 +1804,145 @@ nix_xmit_disable(struct rte_eth_dev *eth_dev)
 }
 
 static int
+nix_tm_node_type_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
+		     int *is_leaf, struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node;
+
+	if (is_leaf == NULL) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		return -EINVAL;
+	}
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (node_id == RTE_TM_NODE_ID_NULL || !tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		return -EINVAL;
+	}
+	if (nix_tm_is_leaf(dev, tm_node->lvl))
+		*is_leaf = true;
+	else
+		*is_leaf = false;
+
+	return 0;
+}
+
+static int
+nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev,
+			  uint32_t profile_id,
+			  struct rte_tm_shaper_params *params,
+			  struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_shaper_profile *profile;
+
+	profile = nix_tm_shaper_profile_search(dev, profile_id);
+	if (profile) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "shaper profile ID exist";
+		return -EINVAL;
+	}
+
+	/* Committed rate and burst size can be enabled/disabled */
+	if (params->committed.size || params->committed.rate) {
+		if (params->committed.size < MIN_SHAPER_BURST ||
+		    params->committed.size > MAX_SHAPER_BURST) {
+			error->type =
+				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+			return -EINVAL;
+		} else if (!shaper_rate_to_nix(params->committed.rate * 8,
+					       NULL, NULL, NULL)) {
+			error->type =
+				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
+			error->message = "shaper committed rate invalid";
+			return -EINVAL;
+		}
+	}
+
+	/* Peak rate and burst size can be enabled/disabled */
+	if (params->peak.size || params->peak.rate) {
+		if (params->peak.size < MIN_SHAPER_BURST ||
+		    params->peak.size > MAX_SHAPER_BURST) {
+			error->type =
+				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+			return -EINVAL;
+		} else if (!shaper_rate_to_nix(params->peak.rate * 8,
+					       NULL, NULL, NULL)) {
+			error->type =
+				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE;
+			error->message = "shaper peak rate invalid";
+			return -EINVAL;
+		}
+	}
+
+	profile = rte_zmalloc("otx2_nix_tm_shaper_profile",
+			      sizeof(struct otx2_nix_tm_shaper_profile), 0);
+	if (!profile)
+		return -ENOMEM;
+
+	profile->shaper_profile_id = profile_id;
+	rte_memcpy(&profile->params, params,
+		   sizeof(struct rte_tm_shaper_params));
+	TAILQ_INSERT_TAIL(&dev->shaper_profile_list, profile, shaper);
+
+	otx2_tm_dbg("Added TM shaper profile %u, "
+		    " pir %" PRIu64 " , pbs %" PRIu64 ", cir %" PRIu64
+		    ", cbs %" PRIu64 " , adj %u",
+		    profile_id,
+		    params->peak.rate * 8,
+		    params->peak.size,
+		    params->committed.rate * 8,
+		    params->committed.size,
+		    params->pkt_length_adjust);
+
+	/* Translate rate as bits per second */
+	profile->params.peak.rate = profile->params.peak.rate * 8;
+	profile->params.committed.rate = profile->params.committed.rate * 8;
+	/* Always use PIR for single rate shaping */
+	if (!params->peak.rate && params->committed.rate) {
+		profile->params.peak = profile->params.committed;
+		memset(&profile->params.committed, 0,
+		       sizeof(profile->params.committed));
+	}
+
+	/* update min rate */
+	nix_tm_shaper_profile_update_min(dev);
+	return 0;
+}
+
+static int
+nix_tm_shaper_profile_delete(struct rte_eth_dev *eth_dev,
+			     uint32_t profile_id,
+			     struct rte_tm_error *error)
+{
+	struct otx2_nix_tm_shaper_profile *profile;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+	profile = nix_tm_shaper_profile_search(dev, profile_id);
+
+	if (!profile) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "shaper profile ID not exist";
+		return -EINVAL;
+	}
+
+	if (profile->reference_count) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+		error->message = "shaper profile in use";
+		return -EINVAL;
+	}
+
+	otx2_tm_dbg("Removing TM shaper profile %u", profile_id);
+	TAILQ_REMOVE(&dev->shaper_profile_list, profile, shaper);
+	rte_free(profile);
+
+	/* update min rate */
+	nix_tm_shaper_profile_update_min(dev);
+	return 0;
+}
+
+static int
 nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		uint32_t parent_node_id, uint32_t priority,
 		uint32_t weight, uint32_t lvl,
@@ -2048,12 +2228,103 @@ nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
 	return 0;
 }
 
+static int
+nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,
+		       struct rte_tm_node_stats *stats, uint64_t *stats_mask,
+		       int clear, struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node;
+	uint64_t reg, val;
+	int64_t *addr;
+	int rc = 0;
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	/* Stats support only for leaf node or TL1 root */
+	if (nix_tm_is_leaf(dev, tm_node->lvl)) {
+		reg = (((uint64_t)tm_node->id) << 32);
+
+		/* Packets */
+		addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
+		val = otx2_atomic64_add_nosync(reg, addr);
+		if (val & OP_ERR)
+			val = 0;
+		stats->n_pkts = val - tm_node->last_pkts;
+
+		/* Bytes */
+		addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
+		val = otx2_atomic64_add_nosync(reg, addr);
+		if (val & OP_ERR)
+			val = 0;
+		stats->n_bytes = val - tm_node->last_bytes;
+
+		if (clear) {
+			tm_node->last_pkts = stats->n_pkts;
+			tm_node->last_bytes = stats->n_bytes;
+		}
+
+		*stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES;
+
+	} else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "stats read error";
+
+		/* RED Drop packets */
+		reg = NIX_AF_TL1X_DROPPED_PACKETS(tm_node->hw_id);
+		rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
+		if (rc)
+			goto exit;
+		stats->leaf.n_pkts_dropped[RTE_COLOR_RED] =
+						val - tm_node->last_pkts;
+
+		/* RED Drop bytes */
+		reg = NIX_AF_TL1X_DROPPED_BYTES(tm_node->hw_id);
+		rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
+		if (rc)
+			goto exit;
+		stats->leaf.n_bytes_dropped[RTE_COLOR_RED] =
+						val - tm_node->last_bytes;
+
+		/* Clear stats */
+		if (clear) {
+			tm_node->last_pkts =
+				stats->leaf.n_pkts_dropped[RTE_COLOR_RED];
+			tm_node->last_bytes =
+				stats->leaf.n_bytes_dropped[RTE_COLOR_RED];
+		}
+
+		*stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
+			RTE_TM_STATS_N_BYTES_RED_DROPPED;
+
+	} else {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "unsupported node";
+		rc = -EINVAL;
+	}
+
+exit:
+	return rc;
+}
+
 const struct rte_tm_ops otx2_tm_ops = {
+	.node_type_get = nix_tm_node_type_get,
+
+	.shaper_profile_add = nix_tm_shaper_profile_add,
+	.shaper_profile_delete = nix_tm_shaper_profile_delete,
+
 	.node_add = nix_tm_node_add,
 	.node_delete = nix_tm_node_delete,
 	.node_suspend = nix_tm_node_suspend,
 	.node_resume = nix_tm_node_resume,
 	.hierarchy_commit = nix_tm_hierarchy_commit,
+
+	.node_stats_read = nix_tm_node_stats_read,
 };
 
 static int
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index ebb4e90..20e2069 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -46,6 +46,10 @@ struct otx2_nix_tm_node {
 
 	struct otx2_nix_tm_node *parent;
 	struct rte_tm_node_params params;
+
+	/* Last stats */
+	uint64_t last_pkts;
+	uint64_t last_bytes;
 };
 
 struct otx2_nix_tm_shaper_profile {
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH 08/11] net/octeontx2: add tm dynamic topology update cb
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                   ` (6 preceding siblings ...)
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 07/11] net/octeontx2: add tm stats and shaper profile cbs Nithin Dabilpuram
@ 2020-03-12 11:19 ` Nithin Dabilpuram
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 09/11] net/octeontx2: add tm debug support Nithin Dabilpuram
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-03-12 11:19 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas, dev

Add dynamic parent and shaper update callbacks that can be
used to change the RR quantum or the PIR/CIR rate dynamically
post hierarchy commit. The dynamic parent update callback only
supports updating the RR quantum of a given child with respect
to its parent; there is no support yet for changing the priority
or the parent itself.
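
A hedged sketch of both updates via the generic rte_tm API
(ids/values are illustrative):

	struct rte_tm_error error;

	/* Swap the node's shaper profile on the fly */
	rte_tm_node_shaper_update(port_id, node_id, new_profile_id,
				  &error);

	/* Only the weight (RR quantum) may change; parent and
	 * priority must match the current values.
	 */
	rte_tm_node_parent_update(port_id, node_id, cur_parent_id,
				  cur_priority, new_weight, &error);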

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 190 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 190 insertions(+)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 6cc07fc..f84d166 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -2229,6 +2229,194 @@ nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
 }
 
 static int
+nix_tm_node_shaper_update(struct rte_eth_dev *eth_dev,
+			  uint32_t node_id,
+			  uint32_t profile_id,
+			  struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_shaper_profile *profile = NULL;
+	struct otx2_mbox *mbox = dev->mbox;
+	struct otx2_nix_tm_node *tm_node;
+	struct nix_txschq_config *req;
+	uint8_t k;
+	int rc;
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node || nix_tm_is_leaf(dev, tm_node->lvl)) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "invalid node";
+		return -EINVAL;
+	}
+
+	if (profile_id == tm_node->params.shaper_profile_id)
+		return 0;
+
+	if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+		profile = nix_tm_shaper_profile_search(dev, profile_id);
+		if (!profile) {
+			error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+			error->message = "shaper profile ID not exist";
+			return -EINVAL;
+		}
+	}
+
+	tm_node->params.shaper_profile_id = profile_id;
+
+	/* Nothing to do if not yet committed */
+	if (!(dev->tm_flags & NIX_TM_COMMITTED))
+		return 0;
+
+	tm_node->flags &= ~NIX_TM_NODE_ENABLED;
+
+	/* Flush the specific node with SW_XOFF */
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = tm_node->hw_lvl;
+	k = prepare_tm_sw_xoff(tm_node, true, req->reg, req->regval);
+	req->num_regs = k;
+
+	rc = send_tm_reqval(mbox, req, error);
+	if (rc)
+		return rc;
+
+	/* Update the PIR/CIR and clear SW XOFF */
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = tm_node->hw_lvl;
+
+	k = prepare_tm_shaper_reg(tm_node, profile, req->reg, req->regval);
+
+	k += prepare_tm_sw_xoff(tm_node, false, &req->reg[k], &req->regval[k]);
+
+	req->num_regs = k;
+	rc = send_tm_reqval(mbox, req, error);
+	if (!rc)
+		tm_node->flags |= NIX_TM_NODE_ENABLED;
+	return rc;
+}
+
+static int
+nix_tm_node_parent_update(struct rte_eth_dev *eth_dev,
+			  uint32_t node_id, uint32_t new_parent_id,
+			  uint32_t priority, uint32_t weight,
+			  struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node, *sibling;
+	struct otx2_nix_tm_node *new_parent;
+	struct nix_txschq_config *req;
+	uint8_t k;
+	int rc;
+
+	if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "hierarchy doesn't exist";
+		return -EINVAL;
+	}
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	/* Parent id is valid only for non-root nodes */
+	if (tm_node->hw_lvl != dev->otx2_tm_root_lvl) {
+		new_parent = nix_tm_node_search(dev, new_parent_id, true);
+		if (!new_parent) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+			error->message = "no such parent node";
+			return -EINVAL;
+		}
+
+		/* Current support is only for dynamic weight update */
+		if (tm_node->parent != new_parent ||
+		    tm_node->priority != priority) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+			error->message = "only weight update supported";
+			return -EINVAL;
+		}
+	}
+
+	/* Skip if no change */
+	if (tm_node->weight == weight)
+		return 0;
+
+	tm_node->weight = weight;
+
+	/* For leaf nodes, SQ CTX needs update */
+	if (nix_tm_is_leaf(dev, tm_node->lvl)) {
+		/* Update SQ quantum data on the fly */
+		rc = nix_sq_sched_data(dev, tm_node, true);
+		if (rc) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			error->message = "sq sched data update failed";
+			return rc;
+		}
+	} else {
+		/* XOFF Parent node */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->parent->hw_lvl;
+		req->num_regs = prepare_tm_sw_xoff(tm_node->parent, true,
+						   req->reg, req->regval);
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+
+		/* XOFF this node and all other siblings */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->hw_lvl;
+
+		k = 0;
+		TAILQ_FOREACH(sibling, &dev->node_list, node) {
+			if (sibling->parent != tm_node->parent)
+				continue;
+			k += prepare_tm_sw_xoff(sibling, true, &req->reg[k],
+						&req->regval[k]);
+		}
+		req->num_regs = k;
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+
+		/* Update new weight for current node */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->hw_lvl;
+		req->num_regs = prepare_tm_sched_reg(dev, tm_node,
+						     req->reg, req->regval);
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+
+		/* XON this node and all other siblings */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->hw_lvl;
+
+		k = 0;
+		TAILQ_FOREACH(sibling, &dev->node_list, node) {
+			if (sibling->parent != tm_node->parent)
+				continue;
+			k += prepare_tm_sw_xoff(sibling, false, &req->reg[k],
+						&req->regval[k]);
+		}
+		req->num_regs = k;
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+
+		/* XON Parent node */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->parent->hw_lvl;
+		req->num_regs = prepare_tm_sw_xoff(tm_node->parent, false,
+						   req->reg, req->regval);
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+	}
+	return 0;
+}
+
+static int
 nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		       struct rte_tm_node_stats *stats, uint64_t *stats_mask,
 		       int clear, struct rte_tm_error *error)
@@ -2324,6 +2512,8 @@ const struct rte_tm_ops otx2_tm_ops = {
 	.node_resume = nix_tm_node_resume,
 	.hierarchy_commit = nix_tm_hierarchy_commit,
 
+	.node_shaper_update = nix_tm_node_shaper_update,
+	.node_parent_update = nix_tm_node_parent_update,
 	.node_stats_read = nix_tm_node_stats_read,
 };
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH 09/11] net/octeontx2: add tm debug support
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                   ` (7 preceding siblings ...)
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 08/11] net/octeontx2: add tm dynamic topology update cb Nithin Dabilpuram
@ 2020-03-12 11:19 ` Nithin Dabilpuram
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 10/11] net/octeontx2: add tx queue ratelimit callback Nithin Dabilpuram
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-03-12 11:19 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas, dev

Add debug support to TM to dump the configured topology
and registers. Also enable the debug dump when an SQ flush fails.
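
The dump is driver-internal; a hedged sketch of invoking it from,
e.g., a debug hook that has access to the PMD private data:

	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);

	/* Prints the TM hierarchy and registers to stderr */
	otx2_nix_tm_dump(dev);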

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_ethdev.h       |   1 +
 drivers/net/octeontx2/otx2_ethdev_debug.c | 274 ++++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.c           |   9 +-
 drivers/net/octeontx2/otx2_tm.h           |   1 +
 4 files changed, 281 insertions(+), 4 deletions(-)

diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 6679652..0ef90ce 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -459,6 +459,7 @@ int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
 			 struct rte_dev_reg_info *regs);
 int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
 void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
+void otx2_nix_tm_dump(struct otx2_eth_dev *dev);
 
 /* Stats */
 int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
index c8b4cd5..13e031e 100644
--- a/drivers/net/octeontx2/otx2_ethdev_debug.c
+++ b/drivers/net/octeontx2/otx2_ethdev_debug.c
@@ -6,6 +6,7 @@
 
 #define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
 #define NIX_REG_INFO(reg) {reg, #reg}
+#define NIX_REG_NAME_SZ 48
 
 struct nix_lf_reg_info {
 	uint32_t offset;
@@ -498,3 +499,276 @@ otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
 	nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
 		 rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
 }
+
+static uint8_t
+prepare_nix_tm_reg_dump(uint16_t hw_lvl, uint16_t schq, uint16_t link,
+			uint64_t *reg, char regstr[][NIX_REG_NAME_SZ])
+{
+	uint8_t k = 0;
+
+	switch (hw_lvl) {
+	case NIX_TXSCH_LVL_SMQ:
+		reg[k] = NIX_AF_SMQX_CFG(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_SMQ[%u]_CFG", schq);
+
+		reg[k] = NIX_AF_MDQX_PARENT(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_PARENT", schq);
+
+		reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_MDQX_PIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_PIR", schq);
+
+		reg[k] = NIX_AF_MDQX_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_MDQX_SHAPE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_SHAPE", schq);
+
+		reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_SW_XOFF", schq);
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		reg[k] = NIX_AF_TL4X_PARENT(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_PARENT", schq);
+
+		reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_TOPOLOGY", schq);
+
+		reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_SDP_LINK_CFG", schq);
+
+		reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_TL4X_PIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_PIR", schq);
+
+		reg[k] = NIX_AF_TL4X_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_TL4X_SHAPE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_SHAPE", schq);
+
+		reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_SW_XOFF", schq);
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		reg[k] = NIX_AF_TL3X_PARENT(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_PARENT", schq);
+
+		reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_TOPOLOGY", schq);
+
+		reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
+
+		reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_TL3X_PIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_PIR", schq);
+
+		reg[k] = NIX_AF_TL3X_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_TL3X_SHAPE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_SHAPE", schq);
+
+		reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_SW_XOFF", schq);
+		break;
+	case NIX_TXSCH_LVL_TL2:
+		reg[k] = NIX_AF_TL2X_PARENT(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_PARENT", schq);
+
+		reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_TOPOLOGY", schq);
+
+		reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
+
+		reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_TL2X_PIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_PIR", schq);
+
+		reg[k] = NIX_AF_TL2X_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_TL2X_SHAPE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_SHAPE", schq);
+
+		reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_SW_XOFF", schq);
+		break;
+	case NIX_TXSCH_LVL_TL1:
+
+		reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_TOPOLOGY", schq);
+
+		reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_TL1X_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_SW_XOFF", schq);
+
+		reg[k] = NIX_AF_TL1X_DROPPED_PACKETS(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_DROPPED_PACKETS", schq);
+		break;
+	default:
+		break;
+	}
+
+	if (k > MAX_REGS_PER_MBOX_MSG) {
+		nix_dump("\t!!!NIX TM Registers request overflow!!!");
+		return 0;
+	}
+	return k;
+}
+
+/* Dump TM hierarchy and registers */
+void
+otx2_nix_tm_dump(struct otx2_eth_dev *dev)
+{
+	char regstr[MAX_REGS_PER_MBOX_MSG * 2][NIX_REG_NAME_SZ];
+	struct otx2_nix_tm_node *tm_node, *root_node, *parent;
+	uint64_t reg[MAX_REGS_PER_MBOX_MSG * 2];
+	struct nix_txschq_config *req;
+	const char *lvlstr, *parent_lvlstr;
+	struct nix_txschq_config *rsp;
+	uint32_t schq, parent_schq;
+	int hw_lvl, j, k, rc;
+
+	nix_dump("===TM hierarchy and registers dump of %s===",
+		 dev->eth_dev->data->name);
+
+	root_node = NULL;
+
+	for (hw_lvl = 0; hw_lvl <= NIX_TXSCH_LVL_CNT; hw_lvl++) {
+		TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+			if (tm_node->hw_lvl != hw_lvl)
+				continue;
+
+			parent = tm_node->parent;
+			if (hw_lvl == NIX_TXSCH_LVL_CNT) {
+				lvlstr = "SQ";
+				schq = tm_node->id;
+			} else {
+				lvlstr = nix_hwlvl2str(tm_node->hw_lvl);
+				schq = tm_node->hw_id;
+			}
+
+			if (parent) {
+				parent_schq = parent->hw_id;
+				parent_lvlstr =
+					nix_hwlvl2str(parent->hw_lvl);
+			} else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
+				parent_schq = otx2_nix_get_link(dev);
+				parent_lvlstr = "LINK";
+			} else {
+				parent_schq = tm_node->parent_hw_id;
+				parent_lvlstr =
+					nix_hwlvl2str(tm_node->hw_lvl + 1);
+			}
+
+			nix_dump("%s_%d->%s_%d", lvlstr, schq,
+				 parent_lvlstr, parent_schq);
+
+			if (!(tm_node->flags & NIX_TM_NODE_HWRES))
+				continue;
+
+			/* Need to dump TL1 when root is TL2 */
+			if (tm_node->hw_lvl == dev->otx2_tm_root_lvl)
+				root_node = tm_node;
+
+			/* Dump registers only when HWRES is present */
+			k = prepare_nix_tm_reg_dump(tm_node->hw_lvl, schq,
+						    otx2_nix_get_link(dev), reg,
+						    regstr);
+			if (!k)
+				continue;
+
+			req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+			req->read = 1;
+			req->lvl = tm_node->hw_lvl;
+			req->num_regs = k;
+			otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
+			rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
+			if (!rc) {
+				for (j = 0; j < k; j++)
+					nix_dump("\t%s=0x%016lx",
+						 regstr[j], rsp->regval[j]);
+			} else {
+				nix_dump("\t!!!Failed to dump registers!!!");
+			}
+		}
+		nix_dump("\n");
+	}
+
+	/* Dump TL1 node data when root level is TL2 */
+	if (root_node && root_node->hw_lvl == NIX_TXSCH_LVL_TL2) {
+		k = prepare_nix_tm_reg_dump(NIX_TXSCH_LVL_TL1,
+					    root_node->parent_hw_id,
+					    otx2_nix_get_link(dev),
+					    reg, regstr);
+		if (!k)
+			return;
+
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->read = 1;
+		req->lvl = NIX_TXSCH_LVL_TL1;
+		req->num_regs = k;
+		otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
+		rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
+		if (!rc) {
+			for (j = 0; j < k; j++)
+				nix_dump("\t%s=0x%016lx",
+					 regstr[j], rsp->regval[j]);
+		} else {
+			nix_dump("\t!!!Failed to dump registers!!!");
+		}
+	}
+}
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index f84d166..29c61de 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -28,8 +28,8 @@ uint64_t shaper2regval(struct shaper_params *shaper)
 		(shaper->mantissa << 1);
 }
 
-static int
-nix_get_link(struct otx2_eth_dev *dev)
+int
+otx2_nix_get_link(struct otx2_eth_dev *dev)
 {
 	int link = 13 /* SDP */;
 	uint16_t lmac_chan;
@@ -574,7 +574,7 @@ populate_tm_reg(struct otx2_eth_dev *dev,
 		if (!otx2_dev_is_sdp(dev) &&
 		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
 			reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
-						nix_get_link(dev));
+						otx2_nix_get_link(dev));
 			regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
 			k++;
 		}
@@ -594,7 +594,7 @@ populate_tm_reg(struct otx2_eth_dev *dev,
 		if (!otx2_dev_is_sdp(dev) &&
 		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
 			reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
-						nix_get_link(dev));
+						otx2_nix_get_link(dev));
 			regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
 			k++;
 		}
@@ -990,6 +990,7 @@ nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
 
 	return 0;
 exit:
+	otx2_nix_tm_dump(dev);
 	return -EFAULT;
 }
 
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 20e2069..d5d58ec 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -23,6 +23,7 @@ int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 int otx2_nix_sq_flush_pre(void *_txq, bool dev_started);
 int otx2_nix_sq_flush_post(void *_txq);
 int otx2_nix_sq_enable(void *_txq);
+int otx2_nix_get_link(struct otx2_eth_dev *dev);
 int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable);
 
 struct otx2_nix_tm_node {
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH 10/11] net/octeontx2: add tx queue ratelimit callback
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                   ` (8 preceding siblings ...)
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 09/11] net/octeontx2: add tm debug support Nithin Dabilpuram
@ 2020-03-12 11:19 ` Nithin Dabilpuram
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 11/11] net/octeontx2: add tm capability callbacks Nithin Dabilpuram
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-03-12 11:19 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas, dev

From: Krzysztof Kanas <kkanas@marvell.com>

Add Tx queue rate limiting support. This support is mutually
exclusive with TM support, i.e., when TM is configured, the Tx queue
rate limit config is no longer valid.
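
A minimal usage sketch from the application side (hypothetical helper,
not part of this patch); the generic ethdev call below dispatches to the
new set_queue_rate_limit ethdev op:

	#include <stdio.h>
	#include <rte_ethdev.h>

	/* Cap a Tx queue to rate_mbps. A rate of 0 applies SW XOFF in
	 * this implementation, i.e. the queue is stopped rather than
	 * left unshaped.
	 */
	static int
	txq_set_rate(uint16_t port, uint16_t queue, uint16_t rate_mbps)
	{
		int rc;

		rc = rte_eth_set_queue_rate_limit(port, queue, rate_mbps);
		if (rc)
			printf("queue %u rate limit failed: %d\n", queue, rc);
		return rc;
	}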

Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
---
 drivers/net/octeontx2/otx2_ethdev.c |   1 +
 drivers/net/octeontx2/otx2_tm.c     | 241 +++++++++++++++++++++++++++++++++++-
 drivers/net/octeontx2/otx2_tm.h     |   3 +
 3 files changed, 243 insertions(+), 2 deletions(-)

diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 6896797..78b7f3a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2071,6 +2071,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
 	.rx_descriptor_status     = otx2_nix_rx_descriptor_status,
 	.tx_descriptor_status     = otx2_nix_tx_descriptor_status,
 	.tx_done_cleanup          = otx2_nix_tx_done_cleanup,
+	.set_queue_rate_limit     = otx2_nix_tm_set_queue_rate_limit,
 	.pool_ops_supported       = otx2_nix_pool_ops_supported,
 	.filter_ctrl              = otx2_nix_dev_filter_ctrl,
 	.get_module_info          = otx2_nix_get_module_info,
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 29c61de..bafb9aa 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -2195,14 +2195,15 @@ nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Delete default/ratelimit tree */
-	if (dev->tm_flags & (NIX_TM_DEFAULT_TREE)) {
+	if (dev->tm_flags & (NIX_TM_DEFAULT_TREE | NIX_TM_RATE_LIMIT_TREE)) {
 		rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER, 0, false);
 		if (rc) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			error->message = "failed to free default resources";
 			return rc;
 		}
-		dev->tm_flags &= ~(NIX_TM_DEFAULT_TREE);
+		dev->tm_flags &= ~(NIX_TM_DEFAULT_TREE |
+				   NIX_TM_RATE_LIMIT_TREE);
 	}
 
 	/* Free up user alloc'ed resources */
@@ -2663,6 +2664,242 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int
+nix_tm_prepare_rate_limited_tree(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint32_t def = eth_dev->data->nb_tx_queues;
+	struct rte_tm_node_params params;
+	uint32_t leaf_parent, i, rc = 0;
+
+	memset(&params, 0, sizeof(params));
+
+	if (nix_tm_have_tl1_access(dev)) {
+		dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
+		rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+					DEFAULT_RR_WEIGHT,
+					NIX_TXSCH_LVL_TL1,
+					OTX2_TM_LVL_ROOT, false, &params);
+		if (rc)
+			goto error;
+		rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+					DEFAULT_RR_WEIGHT,
+					NIX_TXSCH_LVL_TL2,
+					OTX2_TM_LVL_SCH1, false, &params);
+		if (rc)
+			goto error;
+		rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+					DEFAULT_RR_WEIGHT,
+					NIX_TXSCH_LVL_TL3,
+					OTX2_TM_LVL_SCH2, false, &params);
+		if (rc)
+			goto error;
+		rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
+					DEFAULT_RR_WEIGHT,
+					NIX_TXSCH_LVL_TL4,
+					OTX2_TM_LVL_SCH3, false, &params);
+		if (rc)
+			goto error;
+		leaf_parent = def + 3;
+
+		/* Add per queue SMQ nodes */
+		for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+			rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
+						leaf_parent,
+						0, DEFAULT_RR_WEIGHT,
+						NIX_TXSCH_LVL_SMQ,
+						OTX2_TM_LVL_SCH4,
+						false, &params);
+			if (rc)
+				goto error;
+		}
+
+		/* Add leaf nodes */
+		for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+			rc = nix_tm_node_add_to_list(dev, i,
+						     leaf_parent + 1 + i, 0,
+						     DEFAULT_RR_WEIGHT,
+						     NIX_TXSCH_LVL_CNT,
+						     OTX2_TM_LVL_QUEUE,
+						     false, &params);
+			if (rc)
+				goto error;
+		}
+
+		return 0;
+	}
+
+	dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
+	rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+				DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL2,
+				OTX2_TM_LVL_ROOT, false, &params);
+	if (rc)
+		goto error;
+	rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+				DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL3,
+				OTX2_TM_LVL_SCH1, false, &params);
+	if (rc)
+		goto error;
+	rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+				     DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL4,
+				     OTX2_TM_LVL_SCH2, false, &params);
+	if (rc)
+		goto error;
+	leaf_parent = def + 2;
+
+	/* Add per queue SMQ nodes */
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+		rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
+					     leaf_parent,
+					     0, DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_SMQ,
+					     OTX2_TM_LVL_SCH3,
+					     false, &params);
+		if (rc)
+			goto error;
+	}
+
+	/* Add leaf nodes */
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+		rc = nix_tm_node_add_to_list(dev, i, leaf_parent + 1 + i, 0,
+					     DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_CNT,
+					     OTX2_TM_LVL_SCH4,
+					     false, &params);
+		if (rc)
+			break;
+	}
+error:
+	return rc;
+}
+
+static int
+otx2_nix_tm_rate_limit_mdq(struct rte_eth_dev *eth_dev,
+			   struct otx2_nix_tm_node *tm_node,
+			   uint64_t tx_rate)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_shaper_profile profile;
+	struct otx2_mbox *mbox = dev->mbox;
+	volatile uint64_t *reg, *regval;
+	struct nix_txschq_config *req;
+	uint16_t flags;
+	uint8_t k = 0;
+	int rc;
+
+	flags = tm_node->flags;
+
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = NIX_TXSCH_LVL_MDQ;
+	reg = req->reg;
+	regval = req->regval;
+
+	if (tx_rate == 0) {
+		k += prepare_tm_sw_xoff(tm_node, true, &reg[k], &regval[k]);
+		flags &= ~NIX_TM_NODE_ENABLED;
+		goto exit;
+	}
+
+	if (!(flags & NIX_TM_NODE_ENABLED)) {
+		k += prepare_tm_sw_xoff(tm_node, false, &reg[k], &regval[k]);
+		flags |= NIX_TM_NODE_ENABLED;
+	}
+
+	/* Use only PIR for rate limit */
+	memset(&profile, 0, sizeof(profile));
+	profile.params.peak.rate = tx_rate;
+	/* Minimum burst of ~4us Bytes of Tx */
+	profile.params.peak.size = RTE_MAX(NIX_MAX_HW_FRS,
+					   (4ull * tx_rate) / (1E6 * 8));
+	if (!dev->tm_rate_min || dev->tm_rate_min > tx_rate)
+		dev->tm_rate_min = tx_rate;
+
+	k += prepare_tm_shaper_reg(tm_node, &profile, &reg[k], &regval[k]);
+exit:
+	req->num_regs = k;
+	rc = otx2_mbox_process(mbox);
+	if (rc)
+		return rc;
+
+	tm_node->flags = flags;
+	return 0;
+}
+
+int
+otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
+				uint16_t queue_idx, uint16_t tx_rate_mbps)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint64_t tx_rate = tx_rate_mbps * (uint64_t)1E6;
+	struct otx2_nix_tm_node *tm_node;
+	int rc;
+
+	/* Check for supported revisions */
+	if (otx2_dev_is_95xx_Ax(dev) ||
+	    otx2_dev_is_96xx_Ax(dev))
+		return -EINVAL;
+
+	if (queue_idx >= eth_dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	if (!(dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
+	    !(dev->tm_flags & NIX_TM_RATE_LIMIT_TREE))
+		goto error;
+
+	if ((dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
+	    eth_dev->data->nb_tx_queues > 1) {
+		/* For TM topology change ethdev needs to be stopped */
+		if (eth_dev->data->dev_started)
+			return -EBUSY;
+
+		/*
+		 * Disable xmit; it will be re-enabled once the
+		 * new topology is in place.
+		 */
+		rc = nix_xmit_disable(eth_dev);
+		if (rc) {
+			otx2_err("failed to disable TX, rc=%d", rc);
+			return -EIO;
+		}
+
+		rc = nix_tm_free_resources(dev, 0, 0, false);
+		if (rc < 0) {
+			otx2_tm_dbg("failed to free default resources, rc %d",
+				   rc);
+			return -EIO;
+		}
+
+		rc = nix_tm_prepare_rate_limited_tree(eth_dev);
+		if (rc < 0) {
+			otx2_tm_dbg("failed to prepare tm tree, rc=%d", rc);
+			return rc;
+		}
+
+		rc = nix_tm_alloc_resources(eth_dev, true);
+		if (rc != 0) {
+			otx2_tm_dbg("failed to allocate tm tree, rc=%d", rc);
+			return rc;
+		}
+
+		dev->tm_flags &= ~NIX_TM_DEFAULT_TREE;
+		dev->tm_flags |= NIX_TM_RATE_LIMIT_TREE;
+	}
+
+	tm_node = nix_tm_node_search(dev, queue_idx, false);
+
+	/* check if we found a valid leaf node */
+	if (!tm_node ||
+	    !nix_tm_is_leaf(dev, tm_node->lvl) ||
+	    !tm_node->parent ||
+	    tm_node->parent->hw_id == UINT32_MAX)
+		return -EIO;
+
+	return otx2_nix_tm_rate_limit_mdq(eth_dev, tm_node->parent, tx_rate);
+error:
+	otx2_tm_dbg("Unsupported TM tree 0x%0x", dev->tm_flags);
+	return -EINVAL;
+}
+
 int
 otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
 {
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index d5d58ec..7b1672e 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -11,6 +11,7 @@
 
 #define NIX_TM_DEFAULT_TREE	BIT_ULL(0)
 #define NIX_TM_COMMITTED	BIT_ULL(1)
+#define NIX_TM_RATE_LIMIT_TREE	BIT_ULL(2)
 #define NIX_TM_TL1_NO_SP	BIT_ULL(3)
 
 struct otx2_eth_dev;
@@ -20,6 +21,8 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 			      uint32_t *rr_quantum, uint16_t *smq);
+int otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
+				     uint16_t queue_idx, uint16_t tx_rate);
 int otx2_nix_sq_flush_pre(void *_txq, bool dev_started);
 int otx2_nix_sq_flush_post(void *_txq);
 int otx2_nix_sq_enable(void *_txq);
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH 11/11] net/octeontx2: add tm capability callbacks
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                   ` (9 preceding siblings ...)
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 10/11] net/octeontx2: add tx queue ratelimit callback Nithin Dabilpuram
@ 2020-03-12 11:19 ` Nithin Dabilpuram
  2020-03-13 11:08 ` [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Andrzej Ostruszka
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-03-12 11:19 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: Krzysztof Kanas, dev

From: Krzysztof Kanas <kkanas@marvell.com>

Add Traffic Management capability callbacks to provide
global, level and node capabilities. This patch also
adds documentation on Traffic Management support.
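
A minimal sketch of querying these capabilities from an application
(hypothetical helper, not part of this patch):

	#include <stdio.h>
	#include <rte_tm.h>

	/* Print a couple of the global capabilities exposed by
	 * nix_tm_capabilities_get() through the generic rte_tm API.
	 */
	static void
	dump_tm_caps(uint16_t port)
	{
		struct rte_tm_capabilities cap;
		struct rte_tm_error err;

		if (rte_tm_capabilities_get(port, &cap, &err) != 0)
			return;

		printf("nodes max: %u, levels max: %u\n",
		       cap.n_nodes_max, cap.n_levels_max);
	}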

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 doc/guides/nics/features/octeontx2.ini |   1 +
 doc/guides/nics/octeontx2.rst          |  15 +++
 drivers/net/octeontx2/otx2_ethdev.c    |   1 +
 drivers/net/octeontx2/otx2_tm.c        | 232 +++++++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.h        |   1 +
 5 files changed, 250 insertions(+)

diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 473fe56..fb13517 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -31,6 +31,7 @@ Inline protocol      = Y
 VLAN filter          = Y
 Flow control         = Y
 Flow API             = Y
+Rate limitation      = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 VLAN offload         = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 60187ec..6b885d6 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -39,6 +39,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
 - HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
 - Support Rx interrupt
 - Inline IPsec processing support
+- :ref:`Traffic Management API <tmapi>`
 
 Prerequisites
 -------------
@@ -213,6 +214,20 @@ Runtime Config Options
    parameters to all the PCIe devices if application requires to configure on
    all the ethdev ports.
 
+Traffic Management API
+----------------------
+
+The OCTEON TX2 PMD supports the generic DPDK Traffic Management API, which
+allows configuring the following features:
+
+1. Hierarchical scheduling
+2. Single rate - two color, two rate - three color shaping
+
+Both DWRR and Static Priority (SP) hierarchical scheduling are supported.
+Every parent can have at most 10 SP children and an unlimited number of
+DWRR children. Both PF and VF support the traffic management API, with the
+PF supporting 6 levels and the VF 5 levels of topology.
+
 Limitations
 -----------
 
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 78b7f3a..599a14c 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2026,6 +2026,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
 	.link_update              = otx2_nix_link_update,
 	.tx_queue_setup           = otx2_nix_tx_queue_setup,
 	.tx_queue_release         = otx2_nix_tx_queue_release,
+	.tm_ops_get               = otx2_nix_tm_ops_get,
 	.rx_queue_setup           = otx2_nix_rx_queue_setup,
 	.rx_queue_release         = otx2_nix_rx_queue_release,
 	.dev_start                = otx2_nix_dev_start,
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index bafb9aa..1ccb441 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1825,7 +1825,217 @@ nix_tm_node_type_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		*is_leaf = true;
 	else
 		*is_leaf = false;
+	return 0;
+}
 
+static int
+nix_tm_capabilities_get(struct rte_eth_dev *eth_dev,
+			struct rte_tm_capabilities *cap,
+			struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	int rc, max_nr_nodes = 0, i;
+	struct free_rsrcs_rsp *rsp;
+
+	memset(cap, 0, sizeof(*cap));
+
+	otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+		return rc;
+	}
+
+	for (i = 0; i < NIX_TXSCH_LVL_TL1; i++)
+		max_nr_nodes += rsp->schq[i];
+
+	cap->n_nodes_max = max_nr_nodes + dev->tm_leaf_cnt;
+	/* TL1 level is reserved for PF */
+	cap->n_levels_max = nix_tm_have_tl1_access(dev) ?
+				OTX2_TM_LVL_MAX : OTX2_TM_LVL_MAX - 1;
+	cap->non_leaf_nodes_identical = 1;
+	cap->leaf_nodes_identical = 1;
+
+	/* Shaper Capabilities */
+	cap->shaper_private_n_max = max_nr_nodes;
+	cap->shaper_n_max = max_nr_nodes;
+	cap->shaper_private_dual_rate_n_max = max_nr_nodes;
+	cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+	cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+	cap->shaper_pkt_length_adjust_min = 0;
+	cap->shaper_pkt_length_adjust_max = 0;
+
+	/* Schedule Capabilities */
+	cap->sched_n_children_max = rsp->schq[NIX_TXSCH_LVL_MDQ];
+	cap->sched_sp_n_priorities_max = TXSCH_TLX_SP_PRIO_MAX;
+	cap->sched_wfq_n_children_per_group_max = cap->sched_n_children_max;
+	cap->sched_wfq_n_groups_max = 1;
+	cap->sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+
+	cap->dynamic_update_mask =
+		RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL |
+		RTE_TM_UPDATE_NODE_SUSPEND_RESUME;
+	cap->stats_mask =
+		RTE_TM_STATS_N_PKTS |
+		RTE_TM_STATS_N_BYTES |
+		RTE_TM_STATS_N_PKTS_RED_DROPPED |
+		RTE_TM_STATS_N_BYTES_RED_DROPPED;
+
+	for (i = 0; i < RTE_COLORS; i++) {
+		cap->mark_vlan_dei_supported[i] = false;
+		cap->mark_ip_ecn_tcp_supported[i] = false;
+		cap->mark_ip_dscp_supported[i] = false;
+	}
+
+	return 0;
+}
+
+static int
+nix_tm_level_capabilities_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
+			      struct rte_tm_level_capabilities *cap,
+			      struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct free_rsrcs_rsp *rsp;
+	uint16_t hw_lvl;
+	int rc;
+
+	memset(cap, 0, sizeof(*cap));
+
+	otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+		return rc;
+	}
+
+	hw_lvl = nix_tm_lvl2nix(dev, lvl);
+
+	if (nix_tm_is_leaf(dev, lvl)) {
+		/* Leaf */
+		cap->n_nodes_max = dev->tm_leaf_cnt;
+		cap->n_nodes_leaf_max = dev->tm_leaf_cnt;
+		cap->leaf_nodes_identical = 1;
+		cap->leaf.stats_mask =
+			RTE_TM_STATS_N_PKTS |
+			RTE_TM_STATS_N_BYTES;
+
+	} else if (lvl == OTX2_TM_LVL_ROOT) {
+		/* Root node, aka TL2(vf)/TL1(pf) */
+		cap->n_nodes_max = 1;
+		cap->n_nodes_nonleaf_max = 1;
+		cap->non_leaf_nodes_identical = 1;
+
+		cap->nonleaf.shaper_private_supported = true;
+		cap->nonleaf.shaper_private_dual_rate_supported =
+			nix_tm_have_tl1_access(dev) ? false : true;
+		cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+		cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+
+		cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
+		cap->nonleaf.sched_sp_n_priorities_max =
+					nix_max_prio(dev, hw_lvl) + 1;
+		cap->nonleaf.sched_wfq_n_groups_max = 1;
+		cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+
+		if (nix_tm_have_tl1_access(dev))
+			cap->nonleaf.stats_mask =
+				RTE_TM_STATS_N_PKTS_RED_DROPPED |
+				RTE_TM_STATS_N_BYTES_RED_DROPPED;
+	} else if ((lvl < OTX2_TM_LVL_MAX) &&
+		   (hw_lvl < NIX_TXSCH_LVL_CNT)) {
+		/* TL2, TL3, TL4, MDQ */
+		cap->n_nodes_max = rsp->schq[hw_lvl];
+		cap->n_nodes_nonleaf_max = cap->n_nodes_max;
+		cap->non_leaf_nodes_identical = 1;
+
+		cap->nonleaf.shaper_private_supported = true;
+		cap->nonleaf.shaper_private_dual_rate_supported = true;
+		cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+		cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+
+		/* MDQ doesn't support Strict Priority */
+		if (hw_lvl == NIX_TXSCH_LVL_MDQ)
+			cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
+		else
+			cap->nonleaf.sched_n_children_max =
+				rsp->schq[hw_lvl - 1];
+		cap->nonleaf.sched_sp_n_priorities_max =
+			nix_max_prio(dev, hw_lvl) + 1;
+		cap->nonleaf.sched_wfq_n_groups_max = 1;
+		cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+	} else {
+		/* unsupported level */
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int
+nix_tm_node_capabilities_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			     struct rte_tm_node_capabilities *cap,
+			     struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct otx2_nix_tm_node *tm_node;
+	struct free_rsrcs_rsp *rsp;
+	int rc, hw_lvl, lvl;
+
+	memset(cap, 0, sizeof(*cap));
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	hw_lvl = tm_node->hw_lvl;
+	lvl = tm_node->lvl;
+
+	/* Leaf node */
+	if (nix_tm_is_leaf(dev, lvl)) {
+		cap->stats_mask = RTE_TM_STATS_N_PKTS |
+					RTE_TM_STATS_N_BYTES;
+		return 0;
+	}
+
+	otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+		return rc;
+	}
+
+	/* Non Leaf Shaper */
+	cap->shaper_private_supported = true;
+	cap->shaper_private_dual_rate_supported =
+		(hw_lvl == NIX_TXSCH_LVL_TL1) ? false : true;
+	cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+	cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+
+	/* Non Leaf Scheduler */
+	if (hw_lvl == NIX_TXSCH_LVL_MDQ)
+		cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
+	else
+		cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
+
+	cap->nonleaf.sched_sp_n_priorities_max = nix_max_prio(dev, hw_lvl) + 1;
+	cap->nonleaf.sched_wfq_n_children_per_group_max =
+		cap->nonleaf.sched_n_children_max;
+	cap->nonleaf.sched_wfq_n_groups_max = 1;
+	cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+
+	if (hw_lvl == NIX_TXSCH_LVL_TL1)
+		cap->stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
+			RTE_TM_STATS_N_BYTES_RED_DROPPED;
 	return 0;
 }
 
@@ -2505,6 +2715,10 @@ nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,
 const struct rte_tm_ops otx2_tm_ops = {
 	.node_type_get = nix_tm_node_type_get,
 
+	.capabilities_get = nix_tm_capabilities_get,
+	.level_capabilities_get = nix_tm_level_capabilities_get,
+	.node_capabilities_get = nix_tm_node_capabilities_get,
+
 	.shaper_profile_add = nix_tm_shaper_profile_add,
 	.shaper_profile_delete = nix_tm_shaper_profile_delete,
 
@@ -2901,6 +3115,24 @@ otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
 }
 
 int
+otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *arg)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+	if (!arg)
+		return -EINVAL;
+
+	/* Check for supported revisions */
+	if (otx2_dev_is_95xx_Ax(dev) ||
+	    otx2_dev_is_96xx_Ax(dev))
+		return -EINVAL;
+
+	*(const void **)arg = &otx2_tm_ops;
+
+	return 0;
+}
+
+int
 otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 7b1672e..9675182 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -19,6 +19,7 @@ struct otx2_eth_dev;
 void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *ops);
 int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 			      uint32_t *rr_quantum, uint16_t *smq);
 int otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                   ` (10 preceding siblings ...)
  2020-03-12 11:19 ` [dpdk-dev] [PATCH 11/11] net/octeontx2: add tm capability callbacks Nithin Dabilpuram
@ 2020-03-13 11:08 ` Andrzej Ostruszka
  2020-03-13 15:39   ` [dpdk-dev] [EXT] " Nithin Dabilpuram
  2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
  2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
  13 siblings, 1 reply; 41+ messages in thread
From: Andrzej Ostruszka @ 2020-03-13 11:08 UTC (permalink / raw)
  To: Nithin Dabilpuram; +Cc: dev

Nithin

On 3/12/20 12:18 PM, Nithin Dabilpuram wrote:
> Add support to traffic management api in OCTEON TX2 PMD.
> This support applies to CN96xx of C0 silicon version.
> 
> This series depends on http://patchwork.dpdk.org/patch/66344/
> 
> Depends-on:series-66344

If I understood the proposal correctly, that Depends-on should be
"series-8815".  Just a note - it should not matter, since the CI is
probably not yet consuming this tag.

With regards
Andrzej

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [dpdk-dev] [EXT] Re: [PATCH 00/11] net/octeontx2: add traffic manager support
  2020-03-13 11:08 ` [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Andrzej Ostruszka
@ 2020-03-13 15:39   ` Nithin Dabilpuram
  0 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-03-13 15:39 UTC (permalink / raw)
  To: Andrzej Ostruszka; +Cc: dev

On Fri, Mar 13, 2020 at 12:08:05PM +0100, Andrzej Ostruszka wrote:
> External Email
> 
> ----------------------------------------------------------------------
> Nithin
> 
> On 3/12/20 12:18 PM, Nithin Dabilpuram wrote:
> > Add support to traffic management api in OCTEON TX2 PMD.
> > This support applies to CN96xx of C0 silicon version.
> > 
> > This series depends on http://patchwork.dpdk.org/patch/66344/
> > 
> > Depends-on:series-66344
> 
> If I understood the proposal correctly that Depends-on should be
> "series-8815".  Just a note - should not matter since probably CI is not
> yet consuming this tag.
Ok, I mistook the patch id for the series id. Thanks, I will correct it
in the next version.
> 
> With regards
> Andrzej

^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v2 00/11] net/octeontx2: add traffic manager support
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                   ` (11 preceding siblings ...)
  2020-03-13 11:08 ` [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Andrzej Ostruszka
@ 2020-04-02 19:34 ` Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 01/11] net/octeontx2: setup link config based on BP level Nithin Dabilpuram
                     ` (10 more replies)
  2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
  13 siblings, 11 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-02 19:34 UTC (permalink / raw)
  Cc: dev, jerinj, kkanas, Nithin Dabilpuram

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Add support for the traffic management API in the OCTEON TX2 PMD.
This support applies to the C0 silicon version of CN96xx.

This series depends on http://patchwork.dpdk.org/patch/66344/

Depends-on:series-8815

v2:
* Update release notes of 20.05
* Prefix tm function pointers with otx2_ to be in line
  with the existing convention
* Use nix_lf_rx_start|stop instead of cgx_start_stop for
  handling other scenarios with VF. 
* Fix git log errors

Krzysztof Kanas (3):
  net/octeontx2: add tm node suspend and resume cb
  net/octeontx2: add tx queue ratelimit callback
  net/octeontx2: add tm capability callbacks

Nithin Dabilpuram (8):
  net/octeontx2: setup link config based on BP level
  net/octeontx2: restructure tm helper functions
  net/octeontx2: add dynamic topology update support
  net/octeontx2: add tm node add and delete cb
  net/octeontx2: add tm hierarchy commit callback
  net/octeontx2: add tm stats and shaper profile cbs
  net/octeontx2: add tm dynamic topology update cb
  net/octeontx2: add tm debug support

 doc/guides/nics/features/octeontx2.ini    |    1 +
 doc/guides/nics/octeontx2.rst             |   15 +
 doc/guides/rel_notes/release_20_05.rst    |    8 +
 drivers/common/octeontx2/otx2_dev.h       |    9 +
 drivers/net/octeontx2/otx2_ethdev.c       |    5 +-
 drivers/net/octeontx2/otx2_ethdev.h       |    3 +
 drivers/net/octeontx2/otx2_ethdev_debug.c |  311 ++++
 drivers/net/octeontx2/otx2_tm.c           | 2676 ++++++++++++++++++++++++-----
 drivers/net/octeontx2/otx2_tm.h           |  101 +-
 9 files changed, 2646 insertions(+), 483 deletions(-)

-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v2 01/11] net/octeontx2: setup link config based on BP level
  2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
@ 2020-04-02 19:34   ` Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 02/11] net/octeontx2: restructure tm helper functions Nithin Dabilpuram
                     ` (9 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-02 19:34 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Configure NIX_AF_TL3_TL2X_LINKX_CFG using schq at
level based on NIX_AF_PSE_CHANNEL_LEVEL[BP_LEVEL].
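
In condensed form (paraphrasing the hunks below, with hw_lvl standing
for the TL3/TL2 level currently being programmed), the link config
register now follows the BP level the AF reports back:

	dev->link_cfg_lvl = rsp->link_cfg_lvl; /* from txsch alloc rsp */

	/* ... later, when programming a schq at that level: */
	if (!otx2_dev_is_sdp(dev) && dev->link_cfg_lvl == hw_lvl) {
		*reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq, nix_get_link(dev));
		*regval++ = BIT_ULL(12) | nix_get_relchan(dev);
		req->num_regs++;
	}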

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/octeontx2/otx2_ethdev.h |  1 +
 drivers/net/octeontx2/otx2_tm.c     | 16 +++++++++++++++-
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e5684f9..b7d5386 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -304,6 +304,7 @@ struct otx2_eth_dev {
 	/* Contiguous queues */
 	uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
 	uint16_t otx2_tm_root_lvl;
+	uint16_t link_cfg_lvl;
 	uint16_t tm_flags;
 	uint16_t tm_leaf_cnt;
 	struct otx2_nix_tm_node_list node_list;
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index ba615ce..2364e03 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -437,6 +437,16 @@ populate_tm_registers(struct otx2_eth_dev *dev,
 		*reg++ = NIX_AF_TL3X_SCHEDULE(schq);
 		*regval++ = (strict_schedul_prio << 24) | rr_quantum;
 		req->num_regs++;
+
+		/* Link configuration */
+		if (!otx2_dev_is_sdp(dev) &&
+		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
+			*reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
+						nix_get_link(dev));
+			*regval++ = BIT_ULL(12) | nix_get_relchan(dev);
+			req->num_regs++;
+		}
+
 		if (pir.rate && pir.burst) {
 			*reg++ = NIX_AF_TL3X_PIR(schq);
 			*regval++ = shaper2regval(&pir) | 1;
@@ -471,7 +481,10 @@ populate_tm_registers(struct otx2_eth_dev *dev,
 		else
 			*regval++ = (strict_schedul_prio << 24) | rr_quantum;
 		req->num_regs++;
-		if (!otx2_dev_is_sdp(dev)) {
+
+		/* Link configuration */
+		if (!otx2_dev_is_sdp(dev) &&
+		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
 			*reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
 						nix_get_link(dev));
 			*regval++ = BIT_ULL(12) | nix_get_relchan(dev);
@@ -1144,6 +1157,7 @@ nix_tm_send_txsch_alloc_msg(struct otx2_eth_dev *dev)
 		return rc;
 
 	nix_tm_copy_rsp_to_dev(dev, rsp);
+	dev->link_cfg_lvl = rsp->link_cfg_lvl;
 
 	nix_tm_assign_hw_id(dev);
 	return 0;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v2 02/11] net/octeontx2: restructure tm helper functions
  2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 01/11] net/octeontx2: setup link config based on BP level Nithin Dabilpuram
@ 2020-04-02 19:34   ` Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 03/11] net/octeontx2: add dynamic topology update support Nithin Dabilpuram
                     ` (8 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-02 19:34 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Restructure the traffic manager helper functions by splitting them
into multiple sets of register configurations: shaping, scheduling
and topology config.
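
A condensed sketch of the resulting flow in populate_tm_reg()
(paraphrased from the diff below); each helper appends its registers
and returns the count written, so a single mbox message carries the
whole set:

	uint8_t k = 0;

	/* topology/link registers are filled per hw level first, then: */
	k += prepare_tm_sched_reg(dev, tm_node, &reg[k], &regval[k]);
	k += prepare_tm_shaper_reg(tm_node, profile, &reg[k], &regval[k]);

	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
	req->lvl = hw_lvl;
	req->num_regs = k;
	otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
	otx2_mbox_memcpy(req->regval, regval, sizeof(uint64_t) * k);
	rc = otx2_mbox_process(mbox);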

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 689 ++++++++++++++++++++++------------------
 drivers/net/octeontx2/otx2_tm.h |  85 ++---
 2 files changed, 417 insertions(+), 357 deletions(-)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 2364e03..057297a 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -94,52 +94,50 @@ nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
 }
 
 static inline uint64_t
-shaper_rate_to_nix(uint64_t cclk_hz, uint64_t cclk_ticks,
-		   uint64_t value, uint64_t *exponent_p,
+shaper_rate_to_nix(uint64_t value, uint64_t *exponent_p,
 		   uint64_t *mantissa_p, uint64_t *div_exp_p)
 {
 	uint64_t div_exp, exponent, mantissa;
 
 	/* Boundary checks */
-	if (value < MIN_SHAPER_RATE(cclk_hz, cclk_ticks) ||
-	    value > MAX_SHAPER_RATE(cclk_hz, cclk_ticks))
+	if (value < MIN_SHAPER_RATE ||
+	    value > MAX_SHAPER_RATE)
 		return 0;
 
-	if (value <= SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, 0)) {
+	if (value <= SHAPER_RATE(0, 0, 0)) {
 		/* Calculate rate div_exp and mantissa using
 		 * the following formula:
 		 *
-		 * value = (cclk_hz * (256 + mantissa)
-		 *              / ((cclk_ticks << div_exp) * 256)
+		 * value = (2E6 * (256 + mantissa)
+		 *              / ((1 << div_exp) * 256))
 		 */
 		div_exp = 0;
 		exponent = 0;
 		mantissa = MAX_RATE_MANTISSA;
 
-		while (value < (cclk_hz / (cclk_ticks << div_exp)))
+		while (value < (NIX_SHAPER_RATE_CONST / (1 << div_exp)))
 			div_exp += 1;
 
 		while (value <
-		       ((cclk_hz * (256 + mantissa)) /
-			((cclk_ticks << div_exp) * 256)))
+		       ((NIX_SHAPER_RATE_CONST * (256 + mantissa)) /
+			((1 << div_exp) * 256)))
 			mantissa -= 1;
 	} else {
 		/* Calculate rate exponent and mantissa using
 		 * the following formula:
 		 *
-		 * value = (cclk_hz * ((256 + mantissa) << exponent)
-		 *              / (cclk_ticks * 256)
+		 * value = (2E6 * ((256 + mantissa) << exponent)) / 256
 		 *
 		 */
 		div_exp = 0;
 		exponent = MAX_RATE_EXPONENT;
 		mantissa = MAX_RATE_MANTISSA;
 
-		while (value < (cclk_hz * (1 << exponent)) / cclk_ticks)
+		while (value < (NIX_SHAPER_RATE_CONST * (1 << exponent)))
 			exponent -= 1;
 
-		while (value < (cclk_hz * ((256 + mantissa) << exponent)) /
-		       (cclk_ticks * 256))
+		while (value < ((NIX_SHAPER_RATE_CONST *
+				((256 + mantissa) << exponent)) / 256))
 			mantissa -= 1;
 	}
 
@@ -155,20 +153,7 @@ shaper_rate_to_nix(uint64_t cclk_hz, uint64_t cclk_ticks,
 		*mantissa_p = mantissa;
 
 	/* Calculate real rate value */
-	return SHAPER_RATE(cclk_hz, cclk_ticks, exponent, mantissa, div_exp);
-}
-
-static inline uint64_t
-lx_shaper_rate_to_nix(uint64_t cclk_hz, uint32_t hw_lvl,
-		      uint64_t value, uint64_t *exponent,
-		      uint64_t *mantissa, uint64_t *div_exp)
-{
-	if (hw_lvl == NIX_TXSCH_LVL_TL1)
-		return shaper_rate_to_nix(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS,
-					  value, exponent, mantissa, div_exp);
-	else
-		return shaper_rate_to_nix(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS,
-					  value, exponent, mantissa, div_exp);
+	return SHAPER_RATE(exponent, mantissa, div_exp);
 }
 
 static inline uint64_t
@@ -207,329 +192,394 @@ shaper_burst_to_nix(uint64_t value, uint64_t *exponent_p,
 	return SHAPER_BURST(exponent, mantissa);
 }
 
-static int
-configure_shaper_cir_pir_reg(struct otx2_eth_dev *dev,
-			     struct otx2_nix_tm_node *tm_node,
-			     struct shaper_params *cir,
-			     struct shaper_params *pir)
-{
-	uint32_t shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
-	struct otx2_nix_tm_shaper_profile *shaper_profile = NULL;
-	struct rte_tm_shaper_params *param;
-
-	shaper_profile_id = tm_node->params.shaper_profile_id;
-
-	shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
-	if (shaper_profile) {
-		param = &shaper_profile->profile;
-		/* Calculate CIR exponent and mantissa */
-		if (param->committed.rate)
-			cir->rate = lx_shaper_rate_to_nix(CCLK_HZ,
-							  tm_node->hw_lvl_id,
-							  param->committed.rate,
-							  &cir->exponent,
-							  &cir->mantissa,
-							  &cir->div_exp);
-
-		/* Calculate PIR exponent and mantissa */
-		if (param->peak.rate)
-			pir->rate = lx_shaper_rate_to_nix(CCLK_HZ,
-							  tm_node->hw_lvl_id,
-							  param->peak.rate,
-							  &pir->exponent,
-							  &pir->mantissa,
-							  &pir->div_exp);
-
-		/* Calculate CIR burst exponent and mantissa */
-		if (param->committed.size)
-			cir->burst = shaper_burst_to_nix(param->committed.size,
-							 &cir->burst_exponent,
-							 &cir->burst_mantissa);
-
-		/* Calculate PIR burst exponent and mantissa */
-		if (param->peak.size)
-			pir->burst = shaper_burst_to_nix(param->peak.size,
-							 &pir->burst_exponent,
-							 &pir->burst_mantissa);
-	}
-
-	return 0;
-}
-
-static int
-send_tm_reqval(struct otx2_mbox *mbox, struct nix_txschq_config *req)
+static void
+shaper_config_to_nix(struct otx2_nix_tm_shaper_profile *profile,
+		     struct shaper_params *cir,
+		     struct shaper_params *pir)
 {
-	int rc;
-
-	if (req->num_regs > MAX_REGS_PER_MBOX_MSG)
-		return -ERANGE;
-
-	rc = otx2_mbox_process(mbox);
-	if (rc)
-		return rc;
-
-	req->num_regs = 0;
-	return 0;
+	struct rte_tm_shaper_params *param;
+
+	if (!profile)
+		return;
+
+	param = &profile->params;
+
+	/* Calculate CIR exponent and mantissa */
+	if (param->committed.rate)
+		cir->rate = shaper_rate_to_nix(param->committed.rate,
+					       &cir->exponent,
+					       &cir->mantissa,
+					       &cir->div_exp);
+
+	/* Calculate PIR exponent and mantissa */
+	if (param->peak.rate)
+		pir->rate = shaper_rate_to_nix(param->peak.rate,
+					       &pir->exponent,
+					       &pir->mantissa,
+					       &pir->div_exp);
+
+	/* Calculate CIR burst exponent and mantissa */
+	if (param->committed.size)
+		cir->burst = shaper_burst_to_nix(param->committed.size,
+						 &cir->burst_exponent,
+						 &cir->burst_mantissa);
+
+	/* Calculate PIR burst exponent and mantissa */
+	if (param->peak.size)
+		pir->burst = shaper_burst_to_nix(param->peak.size,
+						 &pir->burst_exponent,
+						 &pir->burst_mantissa);
 }
 
 static int
-populate_tm_registers(struct otx2_eth_dev *dev,
-		      struct otx2_nix_tm_node *tm_node)
+populate_tm_tl1_default(struct otx2_eth_dev *dev, uint32_t schq)
 {
-	uint64_t strict_schedul_prio, rr_prio;
 	struct otx2_mbox *mbox = dev->mbox;
-	volatile uint64_t *reg, *regval;
-	uint64_t parent = 0, child = 0;
-	struct shaper_params cir, pir;
 	struct nix_txschq_config *req;
+
+	/*
+	 * Default config for TL1.
+	 * For VF this is always ignored.
+	 */
+
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = NIX_TXSCH_LVL_TL1;
+
+	/* Set DWRR quantum */
+	req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq);
+	req->regval[0] = TXSCH_TL1_DFLT_RR_QTM;
+	req->num_regs++;
+
+	req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq);
+	req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1);
+	req->num_regs++;
+
+	req->reg[2] = NIX_AF_TL1X_CIR(schq);
+	req->regval[2] = 0;
+	req->num_regs++;
+
+	return otx2_mbox_process(mbox);
+}
+
+static uint8_t
+prepare_tm_sched_reg(struct otx2_eth_dev *dev,
+		     struct otx2_nix_tm_node *tm_node,
+		     volatile uint64_t *reg, volatile uint64_t *regval)
+{
+	uint64_t strict_prio = tm_node->priority;
+	uint32_t hw_lvl = tm_node->hw_lvl;
+	uint32_t schq = tm_node->hw_id;
 	uint64_t rr_quantum;
-	uint32_t hw_lvl;
-	uint32_t schq;
-	int rc;
+	uint8_t k = 0;
+
+	rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
+
+	/* For children to root, strict prio is default if either
+	 * device root is TL2 or TL1 Static Priority is disabled.
+	 */
+	if (hw_lvl == NIX_TXSCH_LVL_TL2 &&
+	    (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
+	     dev->tm_flags & NIX_TM_TL1_NO_SP))
+		strict_prio = TXSCH_TL1_DFLT_RR_PRIO;
+
+	otx2_tm_dbg("Schedule config node %s(%u) lvl %u id %u, "
+		     "prio 0x%" PRIx64 ", rr_quantum 0x%" PRIx64 " (%p)",
+		     nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
+		     tm_node->id, strict_prio, rr_quantum, tm_node);
+
+	switch (hw_lvl) {
+	case NIX_TXSCH_LVL_SMQ:
+		reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
+		regval[k] = (strict_prio << 24) | rr_quantum;
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
+		regval[k] = (strict_prio << 24) | rr_quantum;
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
+		regval[k] = (strict_prio << 24) | rr_quantum;
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL2:
+		reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
+		regval[k] = (strict_prio << 24) | rr_quantum;
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL1:
+		reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
+		regval[k] = rr_quantum;
+		k++;
+
+		break;
+	}
+
+	return k;
+}
+
+static uint8_t
+prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
+		      struct otx2_nix_tm_shaper_profile *profile,
+		      volatile uint64_t *reg, volatile uint64_t *regval)
+{
+	struct shaper_params cir, pir;
+	uint32_t schq = tm_node->hw_id;
+	uint8_t k = 0;
 
 	memset(&cir, 0, sizeof(cir));
 	memset(&pir, 0, sizeof(pir));
+	shaper_config_to_nix(profile, &cir, &pir);
 
-	/* Skip leaf nodes */
-	if (tm_node->hw_lvl_id == NIX_TXSCH_LVL_CNT)
-		return 0;
+	otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, "
+		    "pir %" PRIu64 "(%" PRIu64 "B),"
+		     " cir %" PRIu64 "(%" PRIu64 "B) (%p)",
+		     nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
+		     tm_node->id, pir.rate, pir.burst,
+		     cir.rate, cir.burst, tm_node);
+
+	switch (tm_node->hw_lvl) {
+	case NIX_TXSCH_LVL_SMQ:
+		/* Configure PIR, CIR */
+		reg[k] = NIX_AF_MDQX_PIR(schq);
+		regval[k] = (pir.rate && pir.burst) ?
+				(shaper2regval(&pir) | 1) : 0;
+		k++;
+
+		reg[k] = NIX_AF_MDQX_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+
+		/* Configure RED ALG */
+		reg[k] = NIX_AF_MDQX_SHAPE(schq);
+		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		/* Configure PIR, CIR */
+		reg[k] = NIX_AF_TL4X_PIR(schq);
+		regval[k] = (pir.rate && pir.burst) ?
+				(shaper2regval(&pir) | 1) : 0;
+		k++;
+
+		reg[k] = NIX_AF_TL4X_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+
+		/* Configure RED algo */
+		reg[k] = NIX_AF_TL4X_SHAPE(schq);
+		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		/* Configure PIR, CIR */
+		reg[k] = NIX_AF_TL3X_PIR(schq);
+		regval[k] = (pir.rate && pir.burst) ?
+				(shaper2regval(&pir) | 1) : 0;
+		k++;
+
+		reg[k] = NIX_AF_TL3X_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+
+		/* Configure RED algo */
+		reg[k] = NIX_AF_TL3X_SHAPE(schq);
+		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL2:
+		/* Configure PIR, CIR */
+		reg[k] = NIX_AF_TL2X_PIR(schq);
+		regval[k] = (pir.rate && pir.burst) ?
+				(shaper2regval(&pir) | 1) : 0;
+		k++;
+
+		reg[k] = NIX_AF_TL2X_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+
+		/* Configure RED algo */
+		reg[k] = NIX_AF_TL2X_SHAPE(schq);
+		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL1:
+		/* Configure CIR */
+		reg[k] = NIX_AF_TL1X_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+		break;
+	}
+
+	return k;
+}
+
+static int
+populate_tm_reg(struct otx2_eth_dev *dev,
+		struct otx2_nix_tm_node *tm_node)
+{
+	struct otx2_nix_tm_shaper_profile *profile;
+	uint64_t regval_mask[MAX_REGS_PER_MBOX_MSG];
+	uint64_t regval[MAX_REGS_PER_MBOX_MSG];
+	uint64_t reg[MAX_REGS_PER_MBOX_MSG];
+	struct otx2_mbox *mbox = dev->mbox;
+	uint64_t parent = 0, child = 0;
+	uint32_t hw_lvl, rr_prio, schq;
+	struct nix_txschq_config *req;
+	int rc = -EFAULT;
+	uint8_t k = 0;
+
+	memset(regval_mask, 0, sizeof(regval_mask));
+	profile = nix_tm_shaper_profile_search(dev,
+					tm_node->params.shaper_profile_id);
+	rr_prio = tm_node->rr_prio;
+	hw_lvl = tm_node->hw_lvl;
+	schq = tm_node->hw_id;
 
 	/* Root node will not have a parent node */
-	if (tm_node->hw_lvl_id == dev->otx2_tm_root_lvl)
+	if (hw_lvl == dev->otx2_tm_root_lvl)
 		parent = tm_node->parent_hw_id;
 	else
 		parent = tm_node->parent->hw_id;
 
 	/* Do we need this trigger to configure TL1 */
 	if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
-	    tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) {
-		schq = parent;
-		/*
-		 * Default config for TL1.
-		 * For VF this is always ignored.
-		 */
-
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = NIX_TXSCH_LVL_TL1;
-
-		/* Set DWRR quantum */
-		req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq);
-		req->regval[0] = TXSCH_TL1_DFLT_RR_QTM;
-		req->num_regs++;
-
-		req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq);
-		req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1);
-		req->num_regs++;
-
-		req->reg[2] = NIX_AF_TL1X_CIR(schq);
-		req->regval[2] = 0;
-		req->num_regs++;
-
-		rc = send_tm_reqval(mbox, req);
+	    hw_lvl == dev->otx2_tm_root_lvl) {
+		rc = populate_tm_tl1_default(dev, parent);
 		if (rc)
 			goto error;
 	}
 
-	if (tm_node->hw_lvl_id != NIX_TXSCH_LVL_SMQ)
+	if (hw_lvl != NIX_TXSCH_LVL_SMQ)
 		child = find_prio_anchor(dev, tm_node->id);
 
-	rr_prio = tm_node->rr_prio;
-	hw_lvl = tm_node->hw_lvl_id;
-	strict_schedul_prio = tm_node->priority;
-	schq = tm_node->hw_id;
-	rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX) /
-		MAX_SCHED_WEIGHT;
-
-	configure_shaper_cir_pir_reg(dev, tm_node, &cir, &pir);
-
-	otx2_tm_dbg("Configure node %p, lvl %u hw_lvl %u, id %u, hw_id %u,"
-		     "parent_hw_id %" PRIx64 ", pir %" PRIx64 ", cir %" PRIx64,
-		     tm_node, tm_node->level_id, hw_lvl,
-		     tm_node->id, schq, parent, pir.rate, cir.rate);
-
-	rc = -EFAULT;
-
+	/* Override default rr_prio when TL1
+	 * Static Priority is disabled
+	 */
+	if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
+	    dev->tm_flags & NIX_TM_TL1_NO_SP) {
+		rr_prio = TXSCH_TL1_DFLT_RR_PRIO;
+		child = 0;
+	}
+
+	otx2_tm_dbg("Topology config node %s(%u)->%s(%lu) lvl %u, id %u"
+		    " prio_anchor %lu rr_prio %u (%p)", nix_hwlvl2str(hw_lvl),
+		    schq, nix_hwlvl2str(hw_lvl + 1), parent, tm_node->lvl,
+		    tm_node->id, child, rr_prio, tm_node);
+
+	/* Prepare Topology and Link config */
 	switch (hw_lvl) {
 	case NIX_TXSCH_LVL_SMQ:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		reg = req->reg;
-		regval = req->regval;
-		req->num_regs = 0;
 
 		/* Set xoff which will be cleared later */
-		*reg++ = NIX_AF_SMQX_CFG(schq);
-		*regval++ = BIT_ULL(50) | ((uint64_t)NIX_MAX_VTAG_INS << 36) |
-				(NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
-		req->num_regs++;
-		*reg++ = NIX_AF_MDQX_PARENT(schq);
-		*regval++ = parent << 16;
-		req->num_regs++;
-		*reg++ = NIX_AF_MDQX_SCHEDULE(schq);
-		*regval++ = (strict_schedul_prio << 24) | rr_quantum;
-		req->num_regs++;
-		if (pir.rate && pir.burst) {
-			*reg++ = NIX_AF_MDQX_PIR(schq);
-			*regval++ = shaper2regval(&pir) | 1;
-			req->num_regs++;
-		}
+		reg[k] = NIX_AF_SMQX_CFG(schq);
+		regval[k] = BIT_ULL(50);
+		regval_mask[k] = ~BIT_ULL(50);
+		k++;
 
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_MDQX_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
-		}
+		/* Parent and schedule conf */
+		reg[k] = NIX_AF_MDQX_PARENT(schq);
+		regval[k] = parent << 16;
+		k++;
 
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	case NIX_TXSCH_LVL_TL4:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		req->num_regs = 0;
-		reg = req->reg;
-		regval = req->regval;
+		/* Parent and schedule conf */
+		reg[k] = NIX_AF_TL4X_PARENT(schq);
+		regval[k] = parent << 16;
+		k++;
+
+		reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
+		regval[k] = (child << 32) | (rr_prio << 1);
+		k++;
 
-		*reg++ = NIX_AF_TL4X_PARENT(schq);
-		*regval++ = parent << 16;
-		req->num_regs++;
-		*reg++ = NIX_AF_TL4X_TOPOLOGY(schq);
-		*regval++ = (child << 32) | (rr_prio << 1);
-		req->num_regs++;
-		*reg++ = NIX_AF_TL4X_SCHEDULE(schq);
-		*regval++ = (strict_schedul_prio << 24) | rr_quantum;
-		req->num_regs++;
-		if (pir.rate && pir.burst) {
-			*reg++ = NIX_AF_TL4X_PIR(schq);
-			*regval++ = shaper2regval(&pir) | 1;
-			req->num_regs++;
-		}
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_TL4X_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
-		}
 		/* Configure TL4 to send to SDP channel instead of CGX/LBK */
 		if (otx2_dev_is_sdp(dev)) {
-			*reg++ = NIX_AF_TL4X_SDP_LINK_CFG(schq);
-			*regval++ = BIT_ULL(12);
-			req->num_regs++;
+			reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
+			regval[k] = BIT_ULL(12);
+			k++;
 		}
-
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	case NIX_TXSCH_LVL_TL3:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		req->num_regs = 0;
-		reg = req->reg;
-		regval = req->regval;
+		/* Parent and schedule conf */
+		reg[k] = NIX_AF_TL3X_PARENT(schq);
+		regval[k] = parent << 16;
+		k++;
 
-		*reg++ = NIX_AF_TL3X_PARENT(schq);
-		*regval++ = parent << 16;
-		req->num_regs++;
-		*reg++ = NIX_AF_TL3X_TOPOLOGY(schq);
-		*regval++ = (child << 32) | (rr_prio << 1);
-		req->num_regs++;
-		*reg++ = NIX_AF_TL3X_SCHEDULE(schq);
-		*regval++ = (strict_schedul_prio << 24) | rr_quantum;
-		req->num_regs++;
+		reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
+		regval[k] = (child << 32) | (rr_prio << 1);
+		k++;
 
 		/* Link configuration */
 		if (!otx2_dev_is_sdp(dev) &&
 		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
-			*reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
+			reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
 						nix_get_link(dev));
-			*regval++ = BIT_ULL(12) | nix_get_relchan(dev);
-			req->num_regs++;
+			regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
+			k++;
 		}
 
-		if (pir.rate && pir.burst) {
-			*reg++ = NIX_AF_TL3X_PIR(schq);
-			*regval++ = shaper2regval(&pir) | 1;
-			req->num_regs++;
-		}
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_TL3X_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
-		}
-
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	case NIX_TXSCH_LVL_TL2:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		req->num_regs = 0;
-		reg = req->reg;
-		regval = req->regval;
+		/* Parent and schedule conf */
+		reg[k] = NIX_AF_TL2X_PARENT(schq);
+		regval[k] = parent << 16;
+		k++;
 
-		*reg++ = NIX_AF_TL2X_PARENT(schq);
-		*regval++ = parent << 16;
-		req->num_regs++;
-		*reg++ = NIX_AF_TL2X_TOPOLOGY(schq);
-		*regval++ = (child << 32) | (rr_prio << 1);
-		req->num_regs++;
-		*reg++ = NIX_AF_TL2X_SCHEDULE(schq);
-		if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2)
-			*regval++ = (1 << 24) | rr_quantum;
-		else
-			*regval++ = (strict_schedul_prio << 24) | rr_quantum;
-		req->num_regs++;
+		reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
+		regval[k] = (child << 32) | (rr_prio << 1);
+		k++;
 
 		/* Link configuration */
 		if (!otx2_dev_is_sdp(dev) &&
 		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
-			*reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
+			reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
 						nix_get_link(dev));
-			*regval++ = BIT_ULL(12) | nix_get_relchan(dev);
-			req->num_regs++;
-		}
-		if (pir.rate && pir.burst) {
-			*reg++ = NIX_AF_TL2X_PIR(schq);
-			*regval++ = shaper2regval(&pir) | 1;
-			req->num_regs++;
-		}
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_TL2X_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
+			regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
+			k++;
 		}
 
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	case NIX_TXSCH_LVL_TL1:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		req->num_regs = 0;
-		reg = req->reg;
-		regval = req->regval;
+		reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
+		regval[k] = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
+		k++;
 
-		*reg++ = NIX_AF_TL1X_SCHEDULE(schq);
-		*regval++ = rr_quantum;
-		req->num_regs++;
-		*reg++ = NIX_AF_TL1X_TOPOLOGY(schq);
-		*regval++ = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
-		req->num_regs++;
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_TL1X_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
-		}
-
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	}
 
+	/* Prepare schedule config */
+	k += prepare_tm_sched_reg(dev, tm_node, &reg[k], &regval[k]);
+
+	/* Prepare shaping config */
+	k += prepare_tm_shaper_reg(tm_node, profile, &reg[k], &regval[k]);
+
+	if (!k)
+		return 0;
+
+	/* Copy and send config mbox */
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = hw_lvl;
+	req->num_regs = k;
+
+	otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
+	otx2_mbox_memcpy(req->regval, regval, sizeof(uint64_t) * k);
+	otx2_mbox_memcpy(req->regval_mask, regval_mask, sizeof(uint64_t) * k);
+
+	rc = otx2_mbox_process(mbox);
+	if (rc)
+		goto error;
+
 	return 0;
 error:
 	otx2_err("Txschq cfg request failed for node %p, rc=%d", tm_node, rc);
@@ -541,13 +591,14 @@ static int
 nix_tm_txsch_reg_config(struct otx2_eth_dev *dev)
 {
 	struct otx2_nix_tm_node *tm_node;
-	uint32_t lvl;
+	uint32_t hw_lvl;
 	int rc = 0;
 
-	for (lvl = 0; lvl < (uint32_t)dev->otx2_tm_root_lvl + 1; lvl++) {
+	for (hw_lvl = 0; hw_lvl <= dev->otx2_tm_root_lvl; hw_lvl++) {
 		TAILQ_FOREACH(tm_node, &dev->node_list, node) {
-			if (tm_node->hw_lvl_id == lvl) {
-				rc = populate_tm_registers(dev, tm_node);
+			if (tm_node->hw_lvl == hw_lvl &&
+			    tm_node->hw_lvl != NIX_TXSCH_LVL_CNT) {
+				rc = populate_tm_reg(dev, tm_node);
 				if (rc)
 					goto exit;
 			}
@@ -637,8 +688,8 @@ nix_tm_update_parent_info(struct otx2_eth_dev *dev)
 static int
 nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 			uint32_t parent_node_id, uint32_t priority,
-			uint32_t weight, uint16_t hw_lvl_id,
-			uint16_t level_id, bool user,
+			uint32_t weight, uint16_t hw_lvl,
+			uint16_t lvl, bool user,
 			struct rte_tm_node_params *params)
 {
 	struct otx2_nix_tm_shaper_profile *shaper_profile;
@@ -655,8 +706,8 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 	if (!tm_node)
 		return -ENOMEM;
 
-	tm_node->level_id = level_id;
-	tm_node->hw_lvl_id = hw_lvl_id;
+	tm_node->lvl = lvl;
+	tm_node->hw_lvl = hw_lvl;
 
 	tm_node->id = node_id;
 	tm_node->priority = priority;
@@ -935,18 +986,18 @@ nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
 			continue;
 
 		if (nix_tm_have_tl1_access(dev) &&
-		    tm_node->hw_lvl_id ==  NIX_TXSCH_LVL_TL1)
+		    tm_node->hw_lvl ==  NIX_TXSCH_LVL_TL1)
 			skip_node = true;
 
 		otx2_tm_dbg("Free hwres for node %u, hwlvl %u, hw_id %u (%p)",
-			    tm_node->id,  tm_node->hw_lvl_id,
+			    tm_node->id,  tm_node->hw_lvl,
 			    tm_node->hw_id, tm_node);
 		/* Free specific HW resource if requested */
 		if (!skip_node && flags_mask &&
 		    tm_node->flags & NIX_TM_NODE_HWRES) {
 			req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
 			req->flags = 0;
-			req->schq_lvl = tm_node->hw_lvl_id;
+			req->schq_lvl = tm_node->hw_lvl;
 			req->schq = tm_node->hw_id;
 			rc = otx2_mbox_process(mbox);
 			if (rc)
@@ -1010,17 +1061,17 @@ nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
 	uint32_t l_id, schq_index;
 
 	otx2_tm_dbg("Assign hw id for child node %u, lvl %u, hw_lvl %u (%p)",
-		    child->id, child->level_id, child->hw_lvl_id, child);
+		    child->id, child->lvl, child->hw_lvl, child);
 
 	child->flags |= NIX_TM_NODE_HWRES;
 
 	/* Process root nodes */
 	if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
-	    child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) {
+	    child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
 		int idx = 0;
 		uint32_t tschq_con_index;
 
-		l_id = child->hw_lvl_id;
+		l_id = child->hw_lvl;
 		tschq_con_index = dev->txschq_contig_index[l_id];
 		hw_id = dev->txschq_contig_list[l_id][tschq_con_index];
 		child->hw_id = hw_id;
@@ -1032,10 +1083,10 @@ nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
 		return 0;
 	}
 	if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL1 &&
-	    child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) {
+	    child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
 		uint32_t tschq_con_index;
 
-		l_id = child->hw_lvl_id;
+		l_id = child->hw_lvl;
 		tschq_con_index = dev->txschq_index[l_id];
 		hw_id = dev->txschq_list[l_id][tschq_con_index];
 		child->hw_id = hw_id;
@@ -1044,7 +1095,7 @@ nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
 	}
 
 	/* Process children with parents */
-	l_id = child->hw_lvl_id;
+	l_id = child->hw_lvl;
 	schq_index = dev->txschq_index[l_id];
 	schq_con_index = dev->txschq_contig_index[l_id];
 
@@ -1069,8 +1120,8 @@ nix_tm_assign_hw_id(struct otx2_eth_dev *dev)
 
 	for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) {
 		TAILQ_FOREACH(parent, &dev->node_list, node) {
-			child_hw_lvl = parent->hw_lvl_id - 1;
-			if (parent->hw_lvl_id != i)
+			child_hw_lvl = parent->hw_lvl - 1;
+			if (parent->hw_lvl != i)
 				continue;
 			TAILQ_FOREACH(child, &dev->node_list, node) {
 				if (!child->parent)
@@ -1087,7 +1138,7 @@ nix_tm_assign_hw_id(struct otx2_eth_dev *dev)
 			 * Explicitly assign id to parent node if it
 			 * doesn't have a parent
 			 */
-			if (parent->hw_lvl_id == dev->otx2_tm_root_lvl)
+			if (parent->hw_lvl == dev->otx2_tm_root_lvl)
 				nix_tm_assign_id_to_node(dev, parent, NULL);
 		}
 	}
@@ -1102,7 +1153,7 @@ nix_tm_count_req_schq(struct otx2_eth_dev *dev,
 	uint8_t contig_count;
 
 	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
-		if (lvl == tm_node->hw_lvl_id) {
+		if (lvl == tm_node->hw_lvl) {
 			req->schq[lvl - 1] += tm_node->rr_num;
 			if (tm_node->max_prio != UINT32_MAX) {
 				contig_count = tm_node->max_prio + 1;
@@ -1111,7 +1162,7 @@ nix_tm_count_req_schq(struct otx2_eth_dev *dev,
 		}
 		if (lvl == dev->otx2_tm_root_lvl &&
 		    dev->otx2_tm_root_lvl && lvl == NIX_TXSCH_LVL_TL2 &&
-		    tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) {
+		    tm_node->hw_lvl == dev->otx2_tm_root_lvl) {
 			req->schq_contig[dev->otx2_tm_root_lvl]++;
 		}
 	}
@@ -1192,7 +1243,7 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 			continue;
 
 		/* Enable xmit on sq */
-		if (tm_node->level_id != OTX2_TM_LVL_QUEUE) {
+		if (tm_node->lvl != OTX2_TM_LVL_QUEUE) {
 			tm_node->flags |= NIX_TM_NODE_ENABLED;
 			continue;
 		}
@@ -1210,8 +1261,7 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 		txq = eth_dev->data->tx_queues[sq];
 
 		smq = tm_node->parent->hw_id;
-		rr_quantum = (tm_node->weight *
-			      NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT;
+		rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
 
 		rc = nix_tm_sw_xon(txq, smq, rr_quantum);
 		if (rc)
@@ -1332,6 +1382,7 @@ void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
 
 int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
 {
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct otx2_eth_dev  *dev = otx2_eth_pmd_priv(eth_dev);
 	uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
 	int rc;
@@ -1347,6 +1398,13 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
 	nix_tm_clear_shaper_profiles(dev);
 	dev->tm_flags = NIX_TM_DEFAULT_TREE;
 
+	/* Disable TL1 static priority when VFs are enabled,
+	 * as otherwise the VFs' TL2 would need to be reallocated
+	 * at runtime to support a specific PF topology.
+	 */
+	if (pci_dev->max_vfs)
+		dev->tm_flags |= NIX_TM_TL1_NO_SP;
+
 	rc = nix_tm_prepare_default_tree(eth_dev);
 	if (rc != 0)
 		return rc;
@@ -1397,15 +1455,14 @@ otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 		tm_node = nix_tm_node_search(dev, sq, true);
 
 	/* Check if we found a valid leaf node */
-	if (!tm_node || tm_node->level_id != OTX2_TM_LVL_QUEUE ||
+	if (!tm_node || tm_node->lvl != OTX2_TM_LVL_QUEUE ||
 	    !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) {
 		return -EIO;
 	}
 
 	/* Get SMQ Id of leaf node's parent */
 	*smq = tm_node->parent->hw_id;
-	*rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX)
-		/ MAX_SCHED_WEIGHT;
+	*rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
 
 	rc = nix_smq_xoff(dev, *smq, false);
 	if (rc)
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 4712b09..ad7727e 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -10,6 +10,7 @@
 #include <rte_tm_driver.h>
 
 #define NIX_TM_DEFAULT_TREE	BIT_ULL(0)
+#define NIX_TM_TL1_NO_SP	BIT_ULL(3)
 
 struct otx2_eth_dev;
 
@@ -27,16 +28,18 @@ struct otx2_nix_tm_node {
 	uint32_t hw_id;
 	uint32_t priority;
 	uint32_t weight;
-	uint16_t level_id;
-	uint16_t hw_lvl_id;
+	uint16_t lvl;
+	uint16_t hw_lvl;
 	uint32_t rr_prio;
 	uint32_t rr_num;
 	uint32_t max_prio;
 	uint32_t parent_hw_id;
-	uint32_t flags;
+	uint32_t flags:16;
 #define NIX_TM_NODE_HWRES	BIT_ULL(0)
 #define NIX_TM_NODE_ENABLED	BIT_ULL(1)
 #define NIX_TM_NODE_USER	BIT_ULL(2)
+	/* Shaper algorithm for RED state @NIX_REDALG_E */
+	uint32_t red_algo:2;
 	struct otx2_nix_tm_node *parent;
 	struct rte_tm_node_params params;
 };
@@ -45,7 +48,7 @@ struct otx2_nix_tm_shaper_profile {
 	TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper;
 	uint32_t shaper_profile_id;
 	uint32_t reference_count;
-	struct rte_tm_shaper_params profile;
+	struct rte_tm_shaper_params params; /* Rate in bits/sec */
 };
 
 struct shaper_params {
@@ -63,6 +66,10 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 
 #define MAX_SCHED_WEIGHT ((uint8_t)~0)
 #define NIX_TM_RR_QUANTUM_MAX (BIT_ULL(24) - 1)
+#define NIX_TM_WEIGHT_TO_RR_QUANTUM(__weight)			\
+		((((__weight) & MAX_SCHED_WEIGHT) *             \
+		  NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT)
+
 
 /* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT  */
 /* = NIX_MAX_HW_MTU */
@@ -73,52 +80,27 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 #define MAX_RATE_EXPONENT 0xf
 #define MAX_RATE_MANTISSA 0xff
 
-/** NIX rate limiter time-wheel resolution */
-#define L1_TIME_WHEEL_CCLK_TICKS 240
-#define LX_TIME_WHEEL_CCLK_TICKS 860
+#define NIX_SHAPER_RATE_CONST ((uint64_t)2E6)
 
-#define CCLK_HZ 1000000000
-
-/* NIX rate calculation
- *	CCLK = coprocessor-clock frequency in MHz
- *	CCLK_TICKS = rate limiter time-wheel resolution
- *
+/* NIX rate calculation in Bits/Sec
  *	PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA])
  *		<< NIX_*_PIR[RATE_EXPONENT]) / 256
- *	PIR = (CCLK / (CCLK_TICKS << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
- *		* PIR_ADD
+ *	PIR = (2E6 * PIR_ADD / (1 << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
  *
  *	CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA])
  *		<< NIX_*_CIR[RATE_EXPONENT]) / 256
- *	CIR = (CCLK / (CCLK_TICKS << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
- *		* CIR_ADD
+ *	CIR = (2E6 * CIR_ADD / (1 << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
  */
-#define SHAPER_RATE(cclk_hz, cclk_ticks, \
-			exponent, mantissa, div_exp) \
-	(((uint64_t)(cclk_hz) * ((256 + (mantissa)) << (exponent))) \
-		/ (((cclk_ticks) << (div_exp)) * 256))
+#define SHAPER_RATE(exponent, mantissa, div_exp) \
+	((NIX_SHAPER_RATE_CONST * ((256 + (mantissa)) << (exponent)))\
+		/ (((1ull << (div_exp)) * 256)))
 
-#define L1_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \
-	SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS, \
-			exponent, mantissa, div_exp)
+/* 96xx rate limits in Bits/Sec */
+#define MIN_SHAPER_RATE \
+	SHAPER_RATE(0, 0, MAX_RATE_DIV_EXP)
 
-#define LX_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \
-	SHAPER_RATE(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS, \
-			exponent, mantissa, div_exp)
-
-/* Shaper rate limits */
-#define MIN_SHAPER_RATE(cclk_hz, cclk_ticks) \
-	SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, MAX_RATE_DIV_EXP)
-
-#define MAX_SHAPER_RATE(cclk_hz, cclk_ticks) \
-	SHAPER_RATE(cclk_hz, cclk_ticks, MAX_RATE_EXPONENT, \
-			MAX_RATE_MANTISSA, 0)
-
-#define MIN_L1_SHAPER_RATE(cclk_hz) \
-	MIN_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS)
-
-#define MAX_L1_SHAPER_RATE(cclk_hz) \
-	MAX_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS)
+#define MAX_SHAPER_RATE \
+	SHAPER_RATE(MAX_RATE_EXPONENT, MAX_RATE_MANTISSA, 0)
 
 /** TM Shaper - low level operations */
 
@@ -150,4 +132,25 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 #define TXSCH_TL1_DFLT_RR_QTM  ((1 << 24) - 1)
 #define TXSCH_TL1_DFLT_RR_PRIO 1
 
+static inline const char *
+nix_hwlvl2str(uint32_t hw_lvl)
+{
+	switch (hw_lvl) {
+	case NIX_TXSCH_LVL_MDQ:
+		return "SMQ/MDQ";
+	case NIX_TXSCH_LVL_TL4:
+		return "TL4";
+	case NIX_TXSCH_LVL_TL3:
+		return "TL3";
+	case NIX_TXSCH_LVL_TL2:
+		return "TL2";
+	case NIX_TXSCH_LVL_TL1:
+		return "TL1";
+	default:
+		break;
+	}
+
+	return "???";
+}
+
 #endif /* __OTX2_TM_H__ */
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread
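
For intuition on the reworked SHAPER_RATE() and NIX_TM_WEIGHT_TO_RR_QUANTUM()
macros above, here is a minimal standalone sketch (not part of the patch)
that evaluates the same arithmetic. It assumes MAX_RATE_DIV_EXP is 12 (the
value is defined outside this diff); MAX_RATE_EXPONENT 0xf and
MAX_RATE_MANTISSA 0xff are taken from the header diff above.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_SCHED_WEIGHT      255ull
#define NIX_TM_RR_QUANTUM_MAX ((1ull << 24) - 1)

/* Same arithmetic as SHAPER_RATE() above; result in bits/sec */
static uint64_t
shaper_rate(uint64_t exponent, uint64_t mantissa, uint64_t div_exp)
{
	return (2000000ull * ((256 + mantissa) << exponent)) /
	       ((1ull << div_exp) * 256);
}

int
main(void)
{
	/* MIN_SHAPER_RATE: exp=0, mantissa=0, div_exp=12 -> ~488 bps */
	printf("min: %" PRIu64 " bps\n", shaper_rate(0, 0, 12));
	/* MAX_SHAPER_RATE: exp=0xf, mantissa=0xff, div_exp=0 -> ~130.8 Gbps */
	printf("max: %" PRIu64 " bps\n", shaper_rate(15, 255, 0));
	/* Weight->quantum: 255 -> 0xFFFFFF, 71 (DEFAULT_RR_WEIGHT) -> 4671303 */
	printf("rr_quantum(71) = %" PRIu64 "\n",
	       ((71 & MAX_SCHED_WEIGHT) * NIX_TM_RR_QUANTUM_MAX) /
	       MAX_SCHED_WEIGHT);
	return 0;
}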

* [dpdk-dev] [PATCH v2 03/11] net/octeontx2: add dynamic topology update support
  2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 01/11] net/octeontx2: setup link config based on BP level Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 02/11] net/octeontx2: restructure tm helper functions Nithin Dabilpuram
@ 2020-04-02 19:34   ` Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 04/11] net/octeontx2: add tm node add and delete cb Nithin Dabilpuram
                     ` (7 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-02 19:34 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Vamsi Attunuru, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Modify resource allocation and freeing logic to support
dynamic topology commit while traffic is flowing.
This patch also modifies SQ flush to time out based on the
minimum configured shaper rate. SQ flush is further split into
pre/post functions to adhere to the HW spec of 96XX C0.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/common/octeontx2/otx2_dev.h |   9 +
 drivers/net/octeontx2/otx2_ethdev.c |   3 +-
 drivers/net/octeontx2/otx2_ethdev.h |   1 +
 drivers/net/octeontx2/otx2_tm.c     | 550 +++++++++++++++++++++++++++---------
 drivers/net/octeontx2/otx2_tm.h     |   7 +-
 5 files changed, 426 insertions(+), 144 deletions(-)
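
The flush timeout used by nix_txq_flush_sq_spin() below is the worst-case
drain time of a full SQ at the minimum shaper rate, in 10 us polling ticks.
A standalone sketch of that calculation (the queue depth is illustrative and
NIX_MAX_HW_FRS of 9212 bytes is an assumption, as its value is defined
outside this diff):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t nb_desc  = 1024;        /* SQ depth, illustrative */
	uint64_t max_frs  = 9212;        /* NIX_MAX_HW_FRS in bytes, assumed */
	uint64_t rate_min = 1000000000;  /* dev->tm_rate_min default, bits/sec */

	/* bits queued / (bits/sec) = seconds; * 1E5 converts to the
	 * 10 us ticks matching rte_delay_us(10) in the flush loop.
	 */
	uint64_t timeout = (nb_desc * max_frs * 8 * 100000ull) / rate_min;

	printf("flush timeout: %" PRIu64 " x 10us (~%" PRIu64 " ms)\n",
	       timeout, timeout / 100);
	return 0;
}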

diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h
index 0b0a949..13b75e1 100644
--- a/drivers/common/octeontx2/otx2_dev.h
+++ b/drivers/common/octeontx2/otx2_dev.h
@@ -46,6 +46,15 @@
 	((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) &&	\
 	 (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
 
+#define otx2_dev_is_96xx_Cx(dev)				\
+	((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) &&	\
+	 (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
+
+#define otx2_dev_is_96xx_C0(dev)				\
+	((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) &&	\
+	 (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) &&	\
+	 (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
+
 struct otx2_dev;
 
 /* Link status callback */
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index e60f490..6896797 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -992,7 +992,7 @@ otx2_nix_tx_queue_release(void *_txq)
 	otx2_nix_dbg("Releasing txq %u", txq->sq);
 
 	/* Flush and disable tm */
-	otx2_nix_tm_sw_xoff(txq, eth_dev->data->dev_started);
+	otx2_nix_sq_flush_pre(txq, eth_dev->data->dev_started);
 
 	/* Free sqb's and disable sq */
 	nix_sq_uninit(txq);
@@ -1001,6 +1001,7 @@ otx2_nix_tx_queue_release(void *_txq)
 		rte_mempool_free(txq->sqb_pool);
 		txq->sqb_pool = NULL;
 	}
+	otx2_nix_sq_flush_post(txq);
 	rte_free(txq);
 }
 
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index b7d5386..6679652 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -307,6 +307,7 @@ struct otx2_eth_dev {
 	uint16_t link_cfg_lvl;
 	uint16_t tm_flags;
 	uint16_t tm_leaf_cnt;
+	uint64_t tm_rate_min;
 	struct otx2_nix_tm_node_list node_list;
 	struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
 	struct otx2_rss_info rss_info;
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 057297a..01b327b 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -59,8 +59,16 @@ static bool
 nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
 {
 	bool is_lbk = otx2_dev_is_lbk(dev);
-	return otx2_dev_is_pf(dev) && !otx2_dev_is_Ax(dev) &&
-		!is_lbk && !dev->maxvf;
+	return otx2_dev_is_pf(dev) && !otx2_dev_is_Ax(dev) && !is_lbk;
+}
+
+static bool
+nix_tm_is_leaf(struct otx2_eth_dev *dev, int lvl)
+{
+	if (nix_tm_have_tl1_access(dev))
+		return (lvl == OTX2_TM_LVL_QUEUE);
+
+	return (lvl == OTX2_TM_LVL_SCH4);
 }
 
 static int
@@ -424,6 +432,48 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 	return k;
 }
 
+static uint8_t
+prepare_tm_sw_xoff(struct otx2_nix_tm_node *tm_node, bool enable,
+		   volatile uint64_t *reg, volatile uint64_t *regval)
+{
+	uint32_t hw_lvl = tm_node->hw_lvl;
+	uint32_t schq = tm_node->hw_id;
+	uint8_t k = 0;
+
+	otx2_tm_dbg("sw xoff config node %s(%u) lvl %u id %u, enable %u (%p)",
+		    nix_hwlvl2str(hw_lvl), schq, tm_node->lvl,
+		    tm_node->id, enable, tm_node);
+
+	regval[k] = enable;
+
+	switch (hw_lvl) {
+	case NIX_TXSCH_LVL_MDQ:
+		reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL2:
+		reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL1:
+		reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
+		k++;
+		break;
+	default:
+		break;
+	}
+
+	return k;
+}
+
 static int
 populate_tm_reg(struct otx2_eth_dev *dev,
 		struct otx2_nix_tm_node *tm_node)
@@ -692,12 +742,13 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 			uint16_t lvl, bool user,
 			struct rte_tm_node_params *params)
 {
-	struct otx2_nix_tm_shaper_profile *shaper_profile;
+	struct otx2_nix_tm_shaper_profile *profile;
 	struct otx2_nix_tm_node *tm_node, *parent_node;
-	uint32_t shaper_profile_id;
+	struct shaper_params cir, pir;
+	uint32_t profile_id;
 
-	shaper_profile_id = params->shaper_profile_id;
-	shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
+	profile_id = params->shaper_profile_id;
+	profile = nix_tm_shaper_profile_search(dev, profile_id);
 
 	parent_node = nix_tm_node_search(dev, parent_node_id, user);
 
@@ -709,6 +760,10 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 	tm_node->lvl = lvl;
 	tm_node->hw_lvl = hw_lvl;
 
+	/* Maintain minimum weight */
+	if (!weight)
+		weight = 1;
+
 	tm_node->id = node_id;
 	tm_node->priority = priority;
 	tm_node->weight = weight;
@@ -720,10 +775,22 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 		tm_node->flags = NIX_TM_NODE_USER;
 	rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
 
-	if (shaper_profile)
-		shaper_profile->reference_count++;
+	if (profile)
+		profile->reference_count++;
+
+	memset(&cir, 0, sizeof(cir));
+	memset(&pir, 0, sizeof(pir));
+	shaper_config_to_nix(profile, &cir, &pir);
+
 	tm_node->parent = parent_node;
 	tm_node->parent_hw_id = UINT32_MAX;
+	/* C0 doesn't support STALL when both PIR & CIR are enabled */
+	if (lvl < OTX2_TM_LVL_QUEUE &&
+	    otx2_dev_is_96xx_Cx(dev) &&
+	    pir.rate && cir.rate)
+		tm_node->red_algo = NIX_REDALG_DISCARD;
+	else
+		tm_node->red_algo = NIX_REDALG_STD;
 
 	TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node);
 
@@ -747,24 +814,67 @@ nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
 }
 
 static int
-nix_smq_xoff(struct otx2_eth_dev *dev, uint16_t smq, bool enable)
+nix_clear_path_xoff(struct otx2_eth_dev *dev,
+		    struct otx2_nix_tm_node *tm_node)
+{
+	struct nix_txschq_config *req;
+	struct otx2_nix_tm_node *p;
+	int rc;
+
+	/* Manipulating SW_XOFF not supported on Ax */
+	if (otx2_dev_is_Ax(dev))
+		return 0;
+
+	/* Enable nodes in path for flush to succeed */
+	if (!nix_tm_is_leaf(dev, tm_node->lvl))
+		p = tm_node;
+	else
+		p = tm_node->parent;
+	while (p) {
+		if (!(p->flags & NIX_TM_NODE_ENABLED) &&
+		    (p->flags & NIX_TM_NODE_HWRES)) {
+			req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+			req->lvl = p->hw_lvl;
+			req->num_regs = prepare_tm_sw_xoff(p, false, req->reg,
+							   req->regval);
+			rc = otx2_mbox_process(dev->mbox);
+			if (rc)
+				return rc;
+
+			p->flags |= NIX_TM_NODE_ENABLED;
+		}
+		p = p->parent;
+	}
+
+	return 0;
+}
+
+static int
+nix_smq_xoff(struct otx2_eth_dev *dev,
+	     struct otx2_nix_tm_node *tm_node,
+	     bool enable)
 {
 	struct otx2_mbox *mbox = dev->mbox;
 	struct nix_txschq_config *req;
+	uint16_t smq;
+	int rc;
+
+	smq = tm_node->hw_id;
+	otx2_tm_dbg("Setting SMQ %u XOFF/FLUSH to %s", smq,
+		    enable ? "enable" : "disable");
+
+	rc = nix_clear_path_xoff(dev, tm_node);
+	if (rc)
+		return rc;
 
 	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
 	req->lvl = NIX_TXSCH_LVL_SMQ;
 	req->num_regs = 1;
 
 	req->reg[0] = NIX_AF_SMQX_CFG(smq);
-	/* Unmodified fields */
-	req->regval[0] = ((uint64_t)NIX_MAX_VTAG_INS << 36) |
-				(NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
-
-	if (enable)
-		req->regval[0] |= BIT_ULL(50) | BIT_ULL(49);
-	else
-		req->regval[0] |= 0;
+	req->regval[0] = enable ? (BIT_ULL(50) | BIT_ULL(49)) : 0;
+	req->regval_mask[0] = enable ?
+				~(BIT_ULL(50) | BIT_ULL(49)) : ~BIT_ULL(50);
 
 	return otx2_mbox_process(mbox);
 }
@@ -780,6 +890,9 @@ otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable)
 	uint64_t aura_handle;
 	int rc;
 
+	otx2_tm_dbg("Setting SQ %u SQB aura FC to %s", txq->sq,
+		    enable ? "enable" : "disable");
+
 	lf = otx2_npa_lf_obj_get();
 	if (!lf)
 		return -EFAULT;
@@ -824,22 +937,41 @@ otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable)
 	return 0;
 }
 
-static void
+static int
 nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
 {
 	uint16_t sqb_cnt, head_off, tail_off;
 	struct otx2_eth_dev *dev = txq->dev;
+	uint64_t wdata, val, prev;
 	uint16_t sq = txq->sq;
-	uint64_t reg, val;
 	int64_t *regaddr;
+	uint64_t timeout; /* in units of 10 us */
+
+	/* Wait for enough time based on shaper min rate */
+	timeout = (txq->qconf.nb_desc * NIX_MAX_HW_FRS * 8 * 1E5);
+	timeout = timeout / dev->tm_rate_min;
+	if (!timeout)
+		timeout = 10000;
+
+	wdata = ((uint64_t)sq << 32);
+	regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS);
+	val = otx2_atomic64_add_nosync(wdata, regaddr);
+
+	/* Spin multiple iterations as "txq->fc_cache_pkts" can still
+	 * have space to send pkts even though fc_mem is disabled
+	 */
 
 	while (true) {
-		reg = ((uint64_t)sq << 32);
-		regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
-		val = otx2_atomic64_add_nosync(reg, regaddr);
+		prev = val;
+		rte_delay_us(10);
+		val = otx2_atomic64_add_nosync(wdata, regaddr);
+		/* Continue on error */
+		if (val & BIT_ULL(63))
+			continue;
+
+		if (prev != val)
+			continue;
 
-		regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS);
-		val = otx2_atomic64_add_nosync(reg, regaddr);
 		sqb_cnt = val & 0xFFFF;
 		head_off = (val >> 20) & 0x3F;
 		tail_off = (val >> 28) & 0x3F;
@@ -850,117 +982,222 @@ nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
 			break;
 		}
 
-		rte_pause();
+		/* Timeout */
+		if (!timeout)
+			goto exit;
+		timeout--;
 	}
+
+	return 0;
+exit:
+	return -EFAULT;
 }
 
-int
-otx2_nix_tm_sw_xoff(void *__txq, bool dev_started)
+/* Flush and disable tx queue and its parent SMQ */
+int otx2_nix_sq_flush_pre(void *_txq, bool dev_started)
 {
-	struct otx2_eth_txq *txq = __txq;
-	struct otx2_eth_dev *dev = txq->dev;
-	struct otx2_mbox *mbox = dev->mbox;
-	struct nix_aq_enq_req *req;
-	struct nix_aq_enq_rsp *rsp;
-	uint16_t smq;
+	struct otx2_nix_tm_node *tm_node, *sibling;
+	struct otx2_eth_txq *txq;
+	struct otx2_eth_dev *dev;
+	uint16_t sq;
+	bool user;
 	int rc;
 
-	/* Get smq from sq */
-	req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
-	req->qidx = txq->sq;
-	req->ctype = NIX_AQ_CTYPE_SQ;
-	req->op = NIX_AQ_INSTOP_READ;
-	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
-	if (rc) {
-		otx2_err("Failed to get smq, rc=%d", rc);
-		return -EIO;
+	txq = _txq;
+	dev = txq->dev;
+	sq = txq->sq;
+
+	user = !!(dev->tm_flags & NIX_TM_COMMITTED);
+
+	/* Find the node for this SQ */
+	tm_node = nix_tm_node_search(dev, sq, user);
+	if (!tm_node || !(tm_node->flags & NIX_TM_NODE_ENABLED)) {
+		otx2_err("Invalid node/state for sq %u", sq);
+		return -EFAULT;
 	}
 
-	/* Check if sq is enabled */
-	if (!rsp->sq.ena)
-		return 0;
-
-	smq = rsp->sq.smq;
-
 	/* Enable CGX RXTX to drain pkts */
 	if (!dev_started) {
-		rc = otx2_cgx_rxtx_start(dev);
-		if (rc)
+		/* Though this enables both RX MCAM entries and the CGX link,
+		 * we assume all the RX queues were already stopped.
+		 */
+		otx2_mbox_alloc_msg_nix_lf_start_rx(dev->mbox);
+		rc = otx2_mbox_process(dev->mbox);
+		if (rc) {
+			otx2_err("cgx start failed, rc=%d", rc);
 			return rc;
-	}
-
-	rc = otx2_nix_sq_sqb_aura_fc(txq, false);
-	if (rc < 0) {
-		otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
-		goto cleanup;
+		}
 	}
 
 	/* Disable smq xoff for case it was enabled earlier */
-	rc = nix_smq_xoff(dev, smq, false);
+	rc = nix_smq_xoff(dev, tm_node->parent, false);
 	if (rc) {
-		otx2_err("Failed to enable smq for sq %u, rc=%d", txq->sq, rc);
-		goto cleanup;
-	}
-
-	/* Wait for sq entries to be flushed */
-	nix_txq_flush_sq_spin(txq);
-
-	/* Flush and enable smq xoff */
-	rc = nix_smq_xoff(dev, smq, true);
-	if (rc) {
-		otx2_err("Failed to disable smq for sq %u, rc=%d", txq->sq, rc);
+		otx2_err("Failed to enable smq %u, rc=%d",
+			 tm_node->parent->hw_id, rc);
 		return rc;
 	}
 
+	/* As per HRM, to disable an SQ, all other SQs
+	 * that feed the same SMQ must be paused before the SMQ flush.
+	 */
+	TAILQ_FOREACH(sibling, &dev->node_list, node) {
+		if (sibling->parent != tm_node->parent)
+			continue;
+		if (!(sibling->flags & NIX_TM_NODE_ENABLED))
+			continue;
+
+		sq = sibling->id;
+		txq = dev->eth_dev->data->tx_queues[sq];
+		if (!txq)
+			continue;
+
+		rc = otx2_nix_sq_sqb_aura_fc(txq, false);
+		if (rc) {
+			otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
+			goto cleanup;
+		}
+
+		/* Wait for sq entries to be flushed */
+		rc = nix_txq_flush_sq_spin(txq);
+		if (rc) {
+			otx2_err("Failed to drain sq %u, rc=%d\n", txq->sq, rc);
+			return rc;
+		}
+	}
+
+	tm_node->flags &= ~NIX_TM_NODE_ENABLED;
+
+	/* Disable and flush */
+	rc = nix_smq_xoff(dev, tm_node->parent, true);
+	if (rc) {
+		otx2_err("Failed to disable smq %u, rc=%d",
+			 tm_node->parent->hw_id, rc);
+		goto cleanup;
+	}
 cleanup:
 	/* Restore cgx state */
-	if (!dev_started)
-		rc |= otx2_cgx_rxtx_stop(dev);
+	if (!dev_started) {
+		otx2_mbox_alloc_msg_nix_lf_stop_rx(dev->mbox);
+		rc |= otx2_mbox_process(dev->mbox);
+	}
 
 	return rc;
 }
 
+int otx2_nix_sq_flush_post(void *_txq)
+{
+	struct otx2_nix_tm_node *tm_node, *sibling;
+	struct otx2_eth_txq *txq = _txq;
+	struct otx2_eth_txq *s_txq;
+	struct otx2_eth_dev *dev;
+	bool once = false;
+	uint16_t sq, s_sq;
+	bool user;
+	int rc;
+
+	dev = txq->dev;
+	sq = txq->sq;
+	user = !!(dev->tm_flags & NIX_TM_COMMITTED);
+
+	/* Find the node for this SQ */
+	tm_node = nix_tm_node_search(dev, sq, user);
+	if (!tm_node) {
+		otx2_err("Invalid node for sq %u", sq);
+		return -EFAULT;
+	}
+
+	/* Enable all the siblings back */
+	TAILQ_FOREACH(sibling, &dev->node_list, node) {
+		if (sibling->parent != tm_node->parent)
+			continue;
+
+		if (sibling->id == sq)
+			continue;
+
+		if (!(sibling->flags & NIX_TM_NODE_ENABLED))
+			continue;
+
+		s_sq = sibling->id;
+		s_txq = dev->eth_dev->data->tx_queues[s_sq];
+		if (!s_txq)
+			continue;
+
+		if (!once) {
+			/* Enable back if any SQ is still present */
+			rc = nix_smq_xoff(dev, tm_node->parent, false);
+			if (rc) {
+				otx2_err("Failed to enable smq %u, rc=%d",
+					 tm_node->parent->hw_id, rc);
+				return rc;
+			}
+			once = true;
+		}
+
+		rc = otx2_nix_sq_sqb_aura_fc(s_txq, true);
+		if (rc) {
+			otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
+			return rc;
+		}
+	}
+
+	return 0;
+}
+
 static int
-nix_tm_sw_xon(struct otx2_eth_txq *txq,
-	      uint16_t smq, uint32_t rr_quantum)
+nix_sq_sched_data(struct otx2_eth_dev *dev,
+		  struct otx2_nix_tm_node *tm_node,
+		  bool rr_quantum_only)
 {
-	struct otx2_eth_dev *dev = txq->dev;
+	struct rte_eth_dev *eth_dev = dev->eth_dev;
 	struct otx2_mbox *mbox = dev->mbox;
+	uint16_t sq = tm_node->id, smq;
 	struct nix_aq_enq_req *req;
+	uint64_t rr_quantum;
 	int rc;
 
-	otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum %u",
-		    txq->sq, txq->sq, rr_quantum);
-	/* Set smq from sq */
+	smq = tm_node->parent->hw_id;
+	rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
+
+	if (rr_quantum_only)
+		otx2_tm_dbg("Update sq(%u) rr_quantum 0x%lx", sq, rr_quantum);
+	else
+		otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum 0x%lx",
+			    sq, smq, rr_quantum);
+
+	if (sq > eth_dev->data->nb_tx_queues)
+		return -EFAULT;
+
 	req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
-	req->qidx = txq->sq;
+	req->qidx = sq;
 	req->ctype = NIX_AQ_CTYPE_SQ;
 	req->op = NIX_AQ_INSTOP_WRITE;
-	req->sq.smq = smq;
+
+	/* smq update only when needed */
+	if (!rr_quantum_only) {
+		req->sq.smq = smq;
+		req->sq_mask.smq = ~req->sq_mask.smq;
+	}
 	req->sq.smq_rr_quantum = rr_quantum;
-	req->sq_mask.smq = ~req->sq_mask.smq;
 	req->sq_mask.smq_rr_quantum = ~req->sq_mask.smq_rr_quantum;
 
 	rc = otx2_mbox_process(mbox);
-	if (rc) {
+	if (rc)
 		otx2_err("Failed to set smq, rc=%d", rc);
-		return -EIO;
-	}
+	return rc;
+}
+
+int otx2_nix_sq_enable(void *_txq)
+{
+	struct otx2_eth_txq *txq = _txq;
+	int rc;
 
 	/* Enable sqb_aura fc */
 	rc = otx2_nix_sq_sqb_aura_fc(txq, true);
-	if (rc < 0) {
+	if (rc) {
 		otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
 		return rc;
 	}
 
-	/* Disable smq xoff */
-	rc = nix_smq_xoff(dev, smq, false);
-	if (rc) {
-		otx2_err("Failed to enable smq for sq %u", txq->sq);
-		return rc;
-	}
-
 	return 0;
 }
 
@@ -968,12 +1205,11 @@ static int
 nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
 		      uint32_t flags, bool hw_only)
 {
-	struct otx2_nix_tm_shaper_profile *shaper_profile;
+	struct otx2_nix_tm_shaper_profile *profile;
 	struct otx2_nix_tm_node *tm_node, *next_node;
 	struct otx2_mbox *mbox = dev->mbox;
 	struct nix_txsch_free_req *req;
-	uint32_t shaper_profile_id;
-	bool skip_node = false;
+	uint32_t profile_id;
 	int rc = 0;
 
 	next_node = TAILQ_FIRST(&dev->node_list);
@@ -985,37 +1221,40 @@ nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
 		if ((tm_node->flags & flags_mask) != flags)
 			continue;
 
-		if (nix_tm_have_tl1_access(dev) &&
-		    tm_node->hw_lvl ==  NIX_TXSCH_LVL_TL1)
-			skip_node = true;
-
-		otx2_tm_dbg("Free hwres for node %u, hwlvl %u, hw_id %u (%p)",
-			    tm_node->id,  tm_node->hw_lvl,
-			    tm_node->hw_id, tm_node);
-		/* Free specific HW resource if requested */
-		if (!skip_node && flags_mask &&
+		if (!nix_tm_is_leaf(dev, tm_node->lvl) &&
+		    tm_node->hw_lvl != NIX_TXSCH_LVL_TL1 &&
 		    tm_node->flags & NIX_TM_NODE_HWRES) {
+			/* Free specific HW resource */
+			otx2_tm_dbg("Free hwres %s(%u) lvl %u id %u (%p)",
+				    nix_hwlvl2str(tm_node->hw_lvl),
+				    tm_node->hw_id, tm_node->lvl,
+				    tm_node->id, tm_node);
+
+			rc = nix_clear_path_xoff(dev, tm_node);
+			if (rc)
+				return rc;
+
 			req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
 			req->flags = 0;
 			req->schq_lvl = tm_node->hw_lvl;
 			req->schq = tm_node->hw_id;
 			rc = otx2_mbox_process(mbox);
 			if (rc)
-				break;
-		} else {
-			skip_node = false;
+				return rc;
+			tm_node->flags &= ~NIX_TM_NODE_HWRES;
 		}
-		tm_node->flags &= ~NIX_TM_NODE_HWRES;
 
 		/* Leave software elements if needed */
 		if (hw_only)
 			continue;
 
-		shaper_profile_id = tm_node->params.shaper_profile_id;
-		shaper_profile =
-			nix_tm_shaper_profile_search(dev, shaper_profile_id);
-		if (shaper_profile)
-			shaper_profile->reference_count--;
+		otx2_tm_dbg("Free node lvl %u id %u (%p)",
+			    tm_node->lvl, tm_node->id, tm_node);
+
+		profile_id = tm_node->params.shaper_profile_id;
+		profile = nix_tm_shaper_profile_search(dev, profile_id);
+		if (profile)
+			profile->reference_count--;
 
 		TAILQ_REMOVE(&dev->node_list, tm_node, node);
 		rte_free(tm_node);
@@ -1060,8 +1299,8 @@ nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
 	uint32_t hw_id, schq_con_index, prio_offset;
 	uint32_t l_id, schq_index;
 
-	otx2_tm_dbg("Assign hw id for child node %u, lvl %u, hw_lvl %u (%p)",
-		    child->id, child->lvl, child->hw_lvl, child);
+	otx2_tm_dbg("Assign hw id for child node %s lvl %u id %u (%p)",
+		    nix_hwlvl2str(child->hw_lvl), child->lvl, child->id, child);
 
 	child->flags |= NIX_TM_NODE_HWRES;
 
@@ -1219,8 +1458,8 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
 	struct otx2_nix_tm_node *tm_node;
-	uint16_t sq, smq, rr_quantum;
 	struct otx2_eth_txq *txq;
+	uint16_t sq;
 	int rc;
 
 	nix_tm_update_parent_info(dev);
@@ -1237,42 +1476,68 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 		return rc;
 	}
 
-	/* Enable xmit as all the topology is ready */
-	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
-		if (tm_node->flags & NIX_TM_NODE_ENABLED)
-			continue;
+	/* Trigger MTU recalculation as SMQ needs MTU conf */
+	if (eth_dev->data->dev_started && eth_dev->data->nb_rx_queues) {
+		rc = otx2_nix_recalc_mtu(eth_dev);
+		if (rc) {
+			otx2_err("TM MTU update failed, rc=%d", rc);
+			return rc;
+		}
+	}
 
-		/* Enable xmit on sq */
-		if (tm_node->lvl != OTX2_TM_LVL_QUEUE) {
+	/* Mark all non-leaf nodes as enabled */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!nix_tm_is_leaf(dev, tm_node->lvl))
 			tm_node->flags |= NIX_TM_NODE_ENABLED;
+	}
+
+	if (!xmit_enable)
+		return 0;
+
+	/* Update SQ Sched Data while SQ is idle */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!nix_tm_is_leaf(dev, tm_node->lvl))
 			continue;
+
+		rc = nix_sq_sched_data(dev, tm_node, false);
+		if (rc) {
+			otx2_err("SQ %u sched update failed, rc=%d",
+				 tm_node->id, rc);
+			return rc;
+		}
+	}
+
+	/* Finally XON all SMQ's */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
+			continue;
+
+		rc = nix_smq_xoff(dev, tm_node, false);
+		if (rc) {
+			otx2_err("Failed to enable smq %u, rc=%d",
+				 tm_node->hw_id, rc);
+			return rc;
 		}
+	}
 
-		/* Don't enable SMQ or mark as enable */
-		if (!xmit_enable)
+	/* Enable xmit as all the topology is ready */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!nix_tm_is_leaf(dev, tm_node->lvl))
 			continue;
 
 		sq = tm_node->id;
-		if (sq > eth_dev->data->nb_tx_queues) {
-			rc = -EFAULT;
-			break;
-		}
-
 		txq = eth_dev->data->tx_queues[sq];
 
-		smq = tm_node->parent->hw_id;
-		rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
-		rc = nix_tm_sw_xon(txq, smq, rr_quantum);
-		if (rc)
-			break;
+		rc = otx2_nix_sq_enable(txq);
+		if (rc) {
+			otx2_err("TM sw xon failed on SQ %u, rc=%d",
+				 tm_node->id, rc);
+			return rc;
+		}
 		tm_node->flags |= NIX_TM_NODE_ENABLED;
 	}
 
-	if (rc)
-		otx2_err("TM failed to enable xmit on sq %u, rc=%d", sq, rc);
-
-	return rc;
+	return 0;
 }
 
 static int
@@ -1282,7 +1547,7 @@ nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 	uint32_t def = eth_dev->data->nb_tx_queues;
 	struct rte_tm_node_params params;
 	uint32_t leaf_parent, i;
-	int rc = 0;
+	int rc = 0, leaf_level;
 
 	/* Default params */
 	memset(&params, 0, sizeof(params));
@@ -1325,6 +1590,7 @@ nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 			goto exit;
 
 		leaf_parent = def + 4;
+		leaf_level = OTX2_TM_LVL_QUEUE;
 	} else {
 		dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
 		rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
@@ -1356,6 +1622,7 @@ nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 			goto exit;
 
 		leaf_parent = def + 3;
+		leaf_level = OTX2_TM_LVL_SCH4;
 	}
 
 	/* Add leaf nodes */
@@ -1363,7 +1630,7 @@ nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 		rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0,
 					     DEFAULT_RR_WEIGHT,
 					     NIX_TXSCH_LVL_CNT,
-					     OTX2_TM_LVL_QUEUE, false, &params);
+					     leaf_level, false, &params);
 		if (rc)
 			break;
 	}
@@ -1378,6 +1645,7 @@ void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
 
 	TAILQ_INIT(&dev->node_list);
 	TAILQ_INIT(&dev->shaper_profile_list);
+	dev->tm_rate_min = 1E9; /* 1Gbps */
 }
 
 int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
@@ -1455,7 +1723,7 @@ otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 		tm_node = nix_tm_node_search(dev, sq, true);
 
 	/* Check if we found a valid leaf node */
-	if (!tm_node || tm_node->lvl != OTX2_TM_LVL_QUEUE ||
+	if (!tm_node || !nix_tm_is_leaf(dev, tm_node->lvl) ||
 	    !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) {
 		return -EIO;
 	}
@@ -1464,7 +1732,7 @@ otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 	*smq = tm_node->parent->hw_id;
 	*rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
 
-	rc = nix_smq_xoff(dev, *smq, false);
+	rc = nix_smq_xoff(dev, tm_node->parent, false);
 	if (rc)
 		return rc;
 	tm_node->flags |= NIX_TM_NODE_ENABLED;
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index ad7727e..413120a 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -10,6 +10,7 @@
 #include <rte_tm_driver.h>
 
 #define NIX_TM_DEFAULT_TREE	BIT_ULL(0)
+#define NIX_TM_COMMITTED	BIT_ULL(1)
 #define NIX_TM_TL1_NO_SP	BIT_ULL(3)
 
 struct otx2_eth_dev;
@@ -19,7 +20,9 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 			      uint32_t *rr_quantum, uint16_t *smq);
-int otx2_nix_tm_sw_xoff(void *_txq, bool dev_started);
+int otx2_nix_sq_flush_pre(void *_txq, bool dev_started);
+int otx2_nix_sq_flush_post(void *_txq);
+int otx2_nix_sq_enable(void *_txq);
 int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable);
 
 struct otx2_nix_tm_node {
@@ -40,6 +43,7 @@ struct otx2_nix_tm_node {
 #define NIX_TM_NODE_USER	BIT_ULL(2)
 	/* Shaper algorithm for RED state @NIX_REDALG_E */
 	uint32_t red_algo:2;
+
 	struct otx2_nix_tm_node *parent;
 	struct rte_tm_node_params params;
 };
@@ -70,7 +74,6 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 		((((__weight) & MAX_SCHED_WEIGHT) *             \
 		  NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT)
 
-
 /* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT  */
 /* = NIX_MAX_HW_MTU */
 #define DEFAULT_RR_WEIGHT 71
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v2 04/11] net/octeontx2: add tm node add and delete cb
  2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
                     ` (2 preceding siblings ...)
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 03/11] net/octeontx2: add dynamic topology update support Nithin Dabilpuram
@ 2020-04-02 19:34   ` Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 05/11] net/octeontx2: add tm node suspend and resume cb Nithin Dabilpuram
                     ` (6 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-02 19:34 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Add support for the traffic management callbacks "node_add"
and "node_delete". These callbacks don't support dynamic
node addition or deletion post hierarchy commit.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 271 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.h |   2 +
 2 files changed, 273 insertions(+)
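
For context, a minimal sketch (not from the patch; port and node ids are
illustrative) of driving these callbacks through the generic rte_tm API.
It assumes a PF with TL1 access, i.e. five non-leaf levels (ROOT..SCH4)
above the queue leafs, and must run before hierarchy commit:

#include <string.h>
#include <rte_tm.h>

/* Build a ROOT->SCH1..SCH4->queue chain; leaf ids are 0..nb_txq-1 */
static int
build_tree(uint16_t port_id, uint32_t nb_txq)
{
	struct rte_tm_node_params np;
	struct rte_tm_error err;
	uint32_t parent = RTE_TM_NODE_ID_NULL;
	uint32_t i, id, q;
	int rc;

	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;

	for (i = 0; i < 5; i++) {
		id = nb_txq + i;	/* non-leaf ids above the leaf range */
		rc = rte_tm_node_add(port_id, id, parent, 0 /* prio */,
				     1 /* weight */,
				     RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
		if (rc)
			return rc;
		parent = id;
	}
	/* Leaf (SQ) nodes must be priority 0, per the checks above */
	for (q = 0; q < nb_txq; q++) {
		rc = rte_tm_node_add(port_id, q, parent, 0, 1,
				     RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
		if (rc)
			return rc;
	}
	return 0;
}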

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 01b327b..bbd8e19 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1540,6 +1540,277 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 	return 0;
 }
 
+static uint16_t
+nix_tm_lvl2nix(struct otx2_eth_dev *dev, uint32_t lvl)
+{
+	if (nix_tm_have_tl1_access(dev)) {
+		switch (lvl) {
+		case OTX2_TM_LVL_ROOT:
+			return NIX_TXSCH_LVL_TL1;
+		case OTX2_TM_LVL_SCH1:
+			return NIX_TXSCH_LVL_TL2;
+		case OTX2_TM_LVL_SCH2:
+			return NIX_TXSCH_LVL_TL3;
+		case OTX2_TM_LVL_SCH3:
+			return NIX_TXSCH_LVL_TL4;
+		case OTX2_TM_LVL_SCH4:
+			return NIX_TXSCH_LVL_SMQ;
+		default:
+			return NIX_TXSCH_LVL_CNT;
+		}
+	} else {
+		switch (lvl) {
+		case OTX2_TM_LVL_ROOT:
+			return NIX_TXSCH_LVL_TL2;
+		case OTX2_TM_LVL_SCH1:
+			return NIX_TXSCH_LVL_TL3;
+		case OTX2_TM_LVL_SCH2:
+			return NIX_TXSCH_LVL_TL4;
+		case OTX2_TM_LVL_SCH3:
+			return NIX_TXSCH_LVL_SMQ;
+		default:
+			return NIX_TXSCH_LVL_CNT;
+		}
+	}
+}
+
+static uint16_t
+nix_max_prio(struct otx2_eth_dev *dev, uint16_t hw_lvl)
+{
+	if (hw_lvl >= NIX_TXSCH_LVL_CNT)
+		return 0;
+
+	/* MDQ doesn't support SP */
+	if (hw_lvl == NIX_TXSCH_LVL_MDQ)
+		return 0;
+
+	/* PF's TL1 with VF's enabled doesn't support SP */
+	if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
+	    (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
+	     (dev->tm_flags & NIX_TM_TL1_NO_SP)))
+		return 0;
+
+	return TXSCH_TLX_SP_PRIO_MAX - 1;
+}
+
+
+static int
+validate_prio(struct otx2_eth_dev *dev, uint32_t lvl,
+	      uint32_t parent_id, uint32_t priority,
+	      struct rte_tm_error *error)
+{
+	uint8_t priorities[TXSCH_TLX_SP_PRIO_MAX];
+	struct otx2_nix_tm_node *tm_node;
+	uint32_t rr_num = 0;
+	int i;
+
+	/* Validate priority against max */
+	if (priority > nix_max_prio(dev, nix_tm_lvl2nix(dev, lvl - 1))) {
+		error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
+		error->message = "unsupported priority value";
+		return -EINVAL;
+	}
+
+	if (parent_id == RTE_TM_NODE_ID_NULL)
+		return 0;
+
+	memset(priorities, 0, TXSCH_TLX_SP_PRIO_MAX);
+	priorities[priority] = 1;
+
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!tm_node->parent)
+			continue;
+
+		if (!(tm_node->flags & NIX_TM_NODE_USER))
+			continue;
+
+		if (tm_node->parent->id != parent_id)
+			continue;
+
+		priorities[tm_node->priority]++;
+	}
+
+	for (i = 0; i < TXSCH_TLX_SP_PRIO_MAX; i++)
+		if (priorities[i] > 1)
+			rr_num++;
+
+	/* At most one RR group per parent */
+	if (rr_num > 1) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+		error->message = "multiple DWRR node priority";
+		return -EINVAL;
+	}
+
+	/* Check for previous priority to avoid holes in priorities */
+	if (priority && !priorities[priority - 1]) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+		error->message = "priority not in order";
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
+		     uint32_t parent_node_id, uint32_t priority,
+		     uint32_t weight, uint32_t lvl,
+		     struct rte_tm_node_params *params,
+		     struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *parent_node;
+	int rc, clear_on_fail = 0;
+	uint32_t exp_next_lvl;
+	uint16_t hw_lvl;
+
+	/* we don't support dynamic updates */
+	if (dev->tm_flags & NIX_TM_COMMITTED) {
+		error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
+		error->message = "dynamic update not supported";
+		return -EIO;
+	}
+
+	/* Leaf nodes have to be at the same priority */
+	if (nix_tm_is_leaf(dev, lvl) && priority != 0) {
+		error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
+		error->message = "queue shapers must be priority 0";
+		return -EIO;
+	}
+
+	parent_node = nix_tm_node_search(dev, parent_node_id, true);
+
+	/* find the right level */
+	if (lvl == RTE_TM_NODE_LEVEL_ID_ANY) {
+		if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+			lvl = OTX2_TM_LVL_ROOT;
+		} else if (parent_node) {
+			lvl = parent_node->lvl + 1;
+		} else {
+			/* Neither a proper parent nor a proper level id given */
+			error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+			error->message = "invalid parent node id";
+			return -ERANGE;
+		}
+	}
+
+	/* Translate rte_tm level id's to nix hw level id's */
+	hw_lvl = nix_tm_lvl2nix(dev, lvl);
+	if (hw_lvl == NIX_TXSCH_LVL_CNT &&
+	    !nix_tm_is_leaf(dev, lvl)) {
+		error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
+		error->message = "invalid level id";
+		return -ERANGE;
+	}
+
+	if (node_id < dev->tm_leaf_cnt)
+		exp_next_lvl = NIX_TXSCH_LVL_SMQ;
+	else
+		exp_next_lvl = hw_lvl + 1;
+
+	/* Check that a valid parent exists at the expected level */
+	if (hw_lvl != dev->otx2_tm_root_lvl &&
+	    (!parent_node || parent_node->hw_lvl != exp_next_lvl)) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+		error->message = "invalid parent node id";
+		return -EINVAL;
+	}
+
+	/* Check if a node already exists */
+	if (nix_tm_node_search(dev, node_id, true)) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "node already exists";
+		return -EINVAL;
+	}
+
+	/* Check if shaper profile exists for non leaf node */
+	if (!nix_tm_is_leaf(dev, lvl) &&
+	    params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE &&
+	    !nix_tm_shaper_profile_search(dev, params->shaper_profile_id)) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "invalid shaper profile";
+		return -EINVAL;
+	}
+
+	/* Check for a second DWRR among siblings or holes in priorities */
+	if (validate_prio(dev, lvl, parent_node_id, priority, error))
+		return -EINVAL;
+
+	if (weight > MAX_SCHED_WEIGHT) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+		error->message = "max weight exceeded";
+		return -EINVAL;
+	}
+
+	rc = nix_tm_node_add_to_list(dev, node_id, parent_node_id,
+				     priority, weight, hw_lvl,
+				     lvl, true, params);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		/* cleanup user added nodes */
+		if (clear_on_fail)
+			nix_tm_free_resources(dev, NIX_TM_NODE_USER,
+					      NIX_TM_NODE_USER, false);
+		error->message = "failed to add node";
+		return rc;
+	}
+	error->type = RTE_TM_ERROR_TYPE_NONE;
+	return 0;
+}
+
+static int
+otx2_nix_tm_node_delete(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node, *child_node;
+	struct otx2_nix_tm_shaper_profile *profile;
+	uint32_t profile_id;
+
+	/* we don't support dynamic updates yet */
+	if (dev->tm_flags & NIX_TM_COMMITTED) {
+		error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
+		error->message = "hierarchy exists";
+		return -EIO;
+	}
+
+	if (node_id == RTE_TM_NODE_ID_NULL) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "invalid node id";
+		return -EINVAL;
+	}
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	/* Check for any existing children */
+	TAILQ_FOREACH(child_node, &dev->node_list, node) {
+		if (child_node->parent == tm_node) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+			error->message = "children exist";
+			return -EINVAL;
+		}
+	}
+
+	/* Remove shaper profile reference */
+	profile_id = tm_node->params.shaper_profile_id;
+	profile = nix_tm_shaper_profile_search(dev, profile_id);
+	profile->reference_count--;
+
+	TAILQ_REMOVE(&dev->node_list, tm_node, node);
+	rte_free(tm_node);
+	return 0;
+}
+
+const struct rte_tm_ops otx2_tm_ops = {
+	.node_add = otx2_nix_tm_node_add,
+	.node_delete = otx2_nix_tm_node_delete,
+};
+
 static int
 nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 {
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 413120a..ebb4e90 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -135,6 +135,8 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 #define TXSCH_TL1_DFLT_RR_QTM  ((1 << 24) - 1)
 #define TXSCH_TL1_DFLT_RR_PRIO 1
 
+#define TXSCH_TLX_SP_PRIO_MAX 10
+
 static inline const char *
 nix_hwlvl2str(uint32_t hw_lvl)
 {
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread
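
The validate_prio() checks above enforce two sibling invariants: priorities
must be contiguous from 0, and at most one priority level may be shared by
multiple children (the single DWRR group). A generalized standalone sketch
of that invariant (the example priority sets are illustrative):

#include <stdbool.h>
#include <stdio.h>

#define PRIO_MAX 10	/* TXSCH_TLX_SP_PRIO_MAX */

static bool
prio_set_valid(const int *prios, int n)
{
	int hist[PRIO_MAX] = {0};
	int i, rr = 0;

	for (i = 0; i < n; i++)
		hist[prios[i]]++;
	for (i = 0; i < PRIO_MAX; i++)
		if (hist[i] > 1)
			rr++;
	if (rr > 1)
		return false;		/* multiple DWRR groups */
	for (i = 1; i < PRIO_MAX; i++)
		if (hist[i] && !hist[i - 1])
			return false;	/* hole in priorities */
	return true;
}

int
main(void)
{
	int ok[]  = {0, 1, 1, 1, 2};	/* one DWRR group at prio 1 */
	int bad[] = {0, 0, 2, 2};	/* two DWRR groups and a hole */

	printf("%d %d\n", prio_set_valid(ok, 5), prio_set_valid(bad, 4));
	return 0;
}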

* [dpdk-dev] [PATCH v2 05/11] net/octeontx2: add tm node suspend and resume cb
  2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
                     ` (3 preceding siblings ...)
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 04/11] net/octeontx2: add tm node add and delete cb Nithin Dabilpuram
@ 2020-04-02 19:34   ` Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 06/11] net/octeontx2: add tm hierarchy commit callback Nithin Dabilpuram
                     ` (5 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-02 19:34 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Krzysztof Kanas <kkanas@marvell.com>

Add TM support to suspend and resume nodes post hierarchy
commit.

Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 81 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 81 insertions(+)
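
A short usage sketch (port and node ids illustrative; assumes the hierarchy
was already committed, since these callbacks reject an uncommitted tree):

#include <stdio.h>
#include <rte_tm.h>

/* Pause a committed node (sets SW_XOFF), then re-enable it */
static int
pause_and_resume(uint16_t port_id, uint32_t node_id)
{
	struct rte_tm_error err;
	int rc;

	rc = rte_tm_node_suspend(port_id, node_id, &err);
	if (rc) {
		printf("suspend: %s\n", err.message ? err.message : "?");
		return rc;
	}
	/* ... node is XOFF'ed here, traffic through it is held ... */
	return rte_tm_node_resume(port_id, node_id, &err);
}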

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index bbd8e19..36cc0a4 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1540,6 +1540,28 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 	return 0;
 }
 
+static int
+send_tm_reqval(struct otx2_mbox *mbox,
+	       struct nix_txschq_config *req,
+	       struct rte_tm_error *error)
+{
+	int rc;
+
+	if (!req->num_regs ||
+	    req->num_regs > MAX_REGS_PER_MBOX_MSG) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "invalid config";
+		return -EIO;
+	}
+
+	rc = otx2_mbox_process(mbox);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+	}
+	return rc;
+}
+
 static uint16_t
 nix_tm_lvl2nix(struct otx2_eth_dev *dev, uint32_t lvl)
 {
@@ -1806,9 +1828,68 @@ otx2_nix_tm_node_delete(struct rte_eth_dev *eth_dev, uint32_t node_id,
 	return 0;
 }
 
+static int
+nix_tm_node_suspend_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			   struct rte_tm_error *error, bool suspend)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct otx2_nix_tm_node *tm_node;
+	struct nix_txschq_config *req;
+	uint16_t flags;
+	int rc;
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "hierarchy doesn't exist";
+		return -EINVAL;
+	}
+
+	flags = tm_node->flags;
+	flags = suspend ? (flags & ~NIX_TM_NODE_ENABLED) :
+		(flags | NIX_TM_NODE_ENABLED);
+
+	if (tm_node->flags == flags)
+		return 0;
+
+	/* send mbox for state change */
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+
+	req->lvl = tm_node->hw_lvl;
+	req->num_regs =	prepare_tm_sw_xoff(tm_node, suspend,
+					   req->reg, req->regval);
+	rc = send_tm_reqval(mbox, req, error);
+	if (!rc)
+		tm_node->flags = flags;
+	return rc;
+}
+
+static int
+otx2_nix_tm_node_suspend(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			 struct rte_tm_error *error)
+{
+	return nix_tm_node_suspend_resume(eth_dev, node_id, error, true);
+}
+
+static int
+otx2_nix_tm_node_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			struct rte_tm_error *error)
+{
+	return nix_tm_node_suspend_resume(eth_dev, node_id, error, false);
+}
+
 const struct rte_tm_ops otx2_tm_ops = {
 	.node_add = otx2_nix_tm_node_add,
 	.node_delete = otx2_nix_tm_node_delete,
+	.node_suspend = otx2_nix_tm_node_suspend,
+	.node_resume = otx2_nix_tm_node_resume,
 };
 
 static int
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v2 06/11] net/octeontx2: add tm hierarchy commit callback
  2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
                     ` (4 preceding siblings ...)
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 05/11] net/octeontx2: add tm node suspend and resume cb Nithin Dabilpuram
@ 2020-04-02 19:34   ` Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 07/11] net/octeontx2: add tm stats and shaper profile cbs Nithin Dabilpuram
                     ` (4 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-02 19:34 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Add the TM hierarchy commit callback to enable a newly
created topology.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 173 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 173 insertions(+)
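
Usage is a single call once all user nodes are added; a sketch (port id
illustrative), passing clear_on_fail so a failed commit tears down the
user topology:

#include <stdio.h>
#include <rte_tm.h>

static int
commit_topology(uint16_t port_id)
{
	struct rte_tm_error err;
	int rc;

	rc = rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, &err);
	if (rc)
		printf("commit failed: %s\n",
		       err.message ? err.message : "unknown");
	return rc;
}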

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 36cc0a4..ce5081e 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1674,6 +1674,104 @@ validate_prio(struct otx2_eth_dev *dev, uint32_t lvl,
 }
 
 static int
+nix_xmit_disable(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
+	uint16_t sqb_cnt, head_off, tail_off;
+	struct otx2_nix_tm_node *tm_node;
+	struct otx2_eth_txq *txq;
+	uint64_t wdata, val;
+	int i, rc;
+
+	otx2_tm_dbg("Disabling xmit on %s", eth_dev->data->name);
+
+	/* Enable CGX RXTX to drain pkts */
+	if (!eth_dev->data->dev_started) {
+		otx2_mbox_alloc_msg_nix_lf_start_rx(dev->mbox);
+		rc = otx2_mbox_process(dev->mbox);
+		if (rc)
+			return rc;
+	}
+
+	/* XON all SMQ's */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
+			continue;
+		if (!(tm_node->flags & NIX_TM_NODE_HWRES))
+			continue;
+
+		rc = nix_smq_xoff(dev, tm_node, false);
+		if (rc) {
+			otx2_err("Failed to enable smq %u, rc=%d",
+				 tm_node->hw_id, rc);
+			goto cleanup;
+		}
+	}
+
+	/* Flush all tx queues */
+	for (i = 0; i < sq_cnt; i++) {
+		txq = eth_dev->data->tx_queues[i];
+
+		rc = otx2_nix_sq_sqb_aura_fc(txq, false);
+		if (rc) {
+			otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
+			goto cleanup;
+		}
+
+		/* Wait for sq entries to be flushed */
+		rc = nix_txq_flush_sq_spin(txq);
+		if (rc) {
+			otx2_err("Failed to drain sq, rc=%d\n", rc);
+			goto cleanup;
+		}
+	}
+
+	/* XOFF & flush all SMQs. HRM mandates that all
+	 * SQs be empty before an SMQ flush is issued.
+	 */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
+			continue;
+		if (!(tm_node->flags & NIX_TM_NODE_HWRES))
+			continue;
+
+		rc = nix_smq_xoff(dev, tm_node, true);
+		if (rc) {
+			otx2_err("Failed to disable smq %u, rc=%d",
+				 tm_node->hw_id, rc);
+			goto cleanup;
+		}
+	}
+
+	/* Verify sanity of all tx queues */
+	for (i = 0; i < sq_cnt; i++) {
+		txq = eth_dev->data->tx_queues[i];
+
+		wdata = ((uint64_t)txq->sq << 32);
+		val = otx2_atomic64_add_nosync(wdata,
+			       (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS));
+
+		sqb_cnt = val & 0xFFFF;
+		head_off = (val >> 20) & 0x3F;
+		tail_off = (val >> 28) & 0x3F;
+
+		if (sqb_cnt > 1 || head_off != tail_off ||
+		    (*txq->fc_mem != txq->nb_sqb_bufs))
+			otx2_err("Failed to gracefully flush sq %u", txq->sq);
+	}
+
+cleanup:
+	/* restore cgx state */
+	if (!eth_dev->data->dev_started) {
+		otx2_mbox_alloc_msg_nix_lf_stop_rx(dev->mbox);
+		rc |= otx2_mbox_process(dev->mbox);
+	}
+
+	return rc;
+}
+
+static int
 otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		     uint32_t parent_node_id, uint32_t priority,
 		     uint32_t weight, uint32_t lvl,
@@ -1885,11 +1983,86 @@ otx2_nix_tm_node_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
 	return nix_tm_node_suspend_resume(eth_dev, node_id, error, false);
 }
 
+static int
+otx2_nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
+			     int clear_on_fail,
+			     struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node;
+	uint32_t leaf_cnt = 0;
+	int rc;
+
+	if (dev->tm_flags & NIX_TM_COMMITTED) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "hierarchy exists";
+		return -EINVAL;
+	}
+
+	/* Check if we have all the leaf nodes */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (tm_node->flags & NIX_TM_NODE_USER &&
+		    tm_node->id < dev->tm_leaf_cnt)
+			leaf_cnt++;
+	}
+
+	if (leaf_cnt != dev->tm_leaf_cnt) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "incomplete hierarchy";
+		return -EINVAL;
+	}
+
+	/*
+	 * Disable xmit; it will be re-enabled once the
+	 * new topology is in place.
+	 */
+	rc = nix_xmit_disable(eth_dev);
+	if (rc) {
+		otx2_err("failed to disable TX, rc=%d", rc);
+		return -EIO;
+	}
+
+	/* Delete default/ratelimit tree */
+	if (dev->tm_flags & (NIX_TM_DEFAULT_TREE)) {
+		rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER, 0, false);
+		if (rc) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			error->message = "failed to free default resources";
+			return rc;
+		}
+		dev->tm_flags &= ~(NIX_TM_DEFAULT_TREE);
+	}
+
+	/* Free up user alloc'ed resources */
+	rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER,
+				   NIX_TM_NODE_USER, true);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "failed to free user resources";
+		return rc;
+	}
+
+	rc = nix_tm_alloc_resources(eth_dev, true);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "alloc resources failed";
+		/* TODO should we restore default config ? */
+		if (clear_on_fail)
+			nix_tm_free_resources(dev, 0, 0, false);
+		return rc;
+	}
+
+	error->type = RTE_TM_ERROR_TYPE_NONE;
+	dev->tm_flags |= NIX_TM_COMMITTED;
+	return 0;
+}
+
 const struct rte_tm_ops otx2_tm_ops = {
 	.node_add = otx2_nix_tm_node_add,
 	.node_delete = otx2_nix_tm_node_delete,
 	.node_suspend = otx2_nix_tm_node_suspend,
 	.node_resume = otx2_nix_tm_node_resume,
+	.hierarchy_commit = otx2_nix_tm_hierarchy_commit,
 };
 
 static int
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v2 07/11] net/octeontx2: add tm stats and shaper profile cbs
  2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
                     ` (5 preceding siblings ...)
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 06/11] net/octeontx2: add tm hierarchy commit callback Nithin Dabilpuram
@ 2020-04-02 19:34   ` Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 08/11] net/octeontx2: add tm dynamic topology update cb Nithin Dabilpuram
                     ` (3 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-02 19:34 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Add TM support for stats read and private shaper
profile addition or deletion.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 272 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.h |   4 +
 2 files changed, 276 insertions(+)
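
A combined usage sketch (ids and rates illustrative). Note that rte_tm
shaper rates are bytes/sec, which the driver converts to bits/sec
internally (the "* 8" in the diff), and burst sizes must fall within the
driver's MIN/MAX_SHAPER_BURST bounds (defined outside this diff):

#include <inttypes.h>
#include <string.h>
#include <stdio.h>
#include <rte_tm.h>

static int
add_profile_and_read_stats(uint16_t port_id, uint32_t leaf_id)
{
	struct rte_tm_shaper_params sp;
	struct rte_tm_node_stats stats;
	struct rte_tm_error err;
	uint64_t mask;
	int rc;

	memset(&sp, 0, sizeof(sp));
	sp.peak.rate = 1000000000ull / 8;	/* 1 Gbps, in bytes/sec */
	sp.peak.size = 4096;			/* burst bytes, assumed in range */

	rc = rte_tm_shaper_profile_add(port_id, 10 /* profile id */, &sp, &err);
	if (rc)
		return rc;

	/* Leaf (SQ) stats; mask reports which counters are valid */
	rc = rte_tm_node_stats_read(port_id, leaf_id, &stats, &mask,
				    0 /* don't clear */, &err);
	if (!rc)
		printf("pkts %" PRIu64 " bytes %" PRIu64 "\n",
		       stats.n_pkts, stats.n_bytes);
	return rc;
}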

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index ce5081e..8230b5e 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1674,6 +1674,47 @@ validate_prio(struct otx2_eth_dev *dev, uint32_t lvl,
 }
 
 static int
+read_tm_reg(struct otx2_mbox *mbox, uint64_t reg,
+	    uint64_t *regval, uint32_t hw_lvl)
+{
+	volatile struct nix_txschq_config *req;
+	struct nix_txschq_config *rsp;
+	int rc;
+
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->read = 1;
+	req->lvl = hw_lvl;
+	req->reg[0] = reg;
+	req->num_regs = 1;
+
+	rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
+	if (rc)
+		return rc;
+	*regval = rsp->regval[0];
+	return 0;
+}
+
+/* Search for min rate in topology */
+static void
+nix_tm_shaper_profile_update_min(struct otx2_eth_dev *dev)
+{
+	struct otx2_nix_tm_shaper_profile *profile;
+	uint64_t rate_min = 1E9; /* 1 Gbps */
+
+	TAILQ_FOREACH(profile, &dev->shaper_profile_list, shaper) {
+		if (profile->params.peak.rate &&
+		    profile->params.peak.rate < rate_min)
+			rate_min = profile->params.peak.rate;
+
+		if (profile->params.committed.rate &&
+		    profile->params.committed.rate < rate_min)
+			rate_min = profile->params.committed.rate;
+	}
+
+	dev->tm_rate_min = rate_min;
+}
+
+static int
 nix_xmit_disable(struct rte_eth_dev *eth_dev)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
@@ -1772,6 +1813,145 @@ nix_xmit_disable(struct rte_eth_dev *eth_dev)
 }
 
 static int
+otx2_nix_tm_node_type_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			  int *is_leaf, struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node;
+
+	if (is_leaf == NULL) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		return -EINVAL;
+	}
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (node_id == RTE_TM_NODE_ID_NULL || !tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		return -EINVAL;
+	}
+	if (nix_tm_is_leaf(dev, tm_node->lvl))
+		*is_leaf = true;
+	else
+		*is_leaf = false;
+
+	return 0;
+}
+
+static int
+otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev,
+			       uint32_t profile_id,
+			       struct rte_tm_shaper_params *params,
+			       struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_shaper_profile *profile;
+
+	profile = nix_tm_shaper_profile_search(dev, profile_id);
+	if (profile) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "shaper profile ID exists";
+		return -EINVAL;
+	}
+
+	/* Committed rate and burst size can be enabled/disabled */
+	if (params->committed.size || params->committed.rate) {
+		if (params->committed.size < MIN_SHAPER_BURST ||
+		    params->committed.size > MAX_SHAPER_BURST) {
+			error->type =
+				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+			return -EINVAL;
+		} else if (!shaper_rate_to_nix(params->committed.rate * 8,
+					       NULL, NULL, NULL)) {
+			error->type =
+				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
+			error->message = "shaper committed rate invalid";
+			return -EINVAL;
+		}
+	}
+
+	/* Peak rate and burst size can be enabled/disabled */
+	if (params->peak.size || params->peak.rate) {
+		if (params->peak.size < MIN_SHAPER_BURST ||
+		    params->peak.size > MAX_SHAPER_BURST) {
+			error->type =
+				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+			return -EINVAL;
+		} else if (!shaper_rate_to_nix(params->peak.rate * 8,
+					       NULL, NULL, NULL)) {
+			error->type =
+				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE;
+			error->message = "shaper peak rate invalid";
+			return -EINVAL;
+		}
+	}
+
+	profile = rte_zmalloc("otx2_nix_tm_shaper_profile",
+			      sizeof(struct otx2_nix_tm_shaper_profile), 0);
+	if (!profile)
+		return -ENOMEM;
+
+	profile->shaper_profile_id = profile_id;
+	rte_memcpy(&profile->params, params,
+		   sizeof(struct rte_tm_shaper_params));
+	TAILQ_INSERT_TAIL(&dev->shaper_profile_list, profile, shaper);
+
+	otx2_tm_dbg("Added TM shaper profile %u, "
+		    " pir %" PRIu64 " , pbs %" PRIu64 ", cir %" PRIu64
+		    ", cbs %" PRIu64 " , adj %u",
+		    profile_id,
+		    params->peak.rate * 8,
+		    params->peak.size,
+		    params->committed.rate * 8,
+		    params->committed.size,
+		    params->pkt_length_adjust);
+
+	/* Translate rate as bits per second */
+	profile->params.peak.rate = profile->params.peak.rate * 8;
+	profile->params.committed.rate = profile->params.committed.rate * 8;
+	/* Always use PIR for single rate shaping */
+	if (!params->peak.rate && params->committed.rate) {
+		profile->params.peak = profile->params.committed;
+		memset(&profile->params.committed, 0,
+		       sizeof(profile->params.committed));
+	}
+
+	/* update min rate */
+	nix_tm_shaper_profile_update_min(dev);
+	return 0;
+}
+
+static int
+otx2_nix_tm_shaper_profile_delete(struct rte_eth_dev *eth_dev,
+				  uint32_t profile_id,
+				  struct rte_tm_error *error)
+{
+	struct otx2_nix_tm_shaper_profile *profile;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+	profile = nix_tm_shaper_profile_search(dev, profile_id);
+
+	if (!profile) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "shaper profile ID not exist";
+		return -EINVAL;
+	}
+
+	if (profile->reference_count) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+		error->message = "shaper profile in use";
+		return -EINVAL;
+	}
+
+	otx2_tm_dbg("Removing TM shaper profile %u", profile_id);
+	TAILQ_REMOVE(&dev->shaper_profile_list, profile, shaper);
+	rte_free(profile);
+
+	/* update min rate */
+	nix_tm_shaper_profile_update_min(dev);
+	return 0;
+}
+
+static int
 otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		     uint32_t parent_node_id, uint32_t priority,
 		     uint32_t weight, uint32_t lvl,
@@ -2057,12 +2237,104 @@ otx2_nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
 	return 0;
 }
 
+static int
+otx2_nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			    struct rte_tm_node_stats *stats,
+			    uint64_t *stats_mask, int clear,
+			    struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node;
+	uint64_t reg, val;
+	int64_t *addr;
+	int rc = 0;
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	/* Stats support only for leaf node or TL1 root */
+	if (nix_tm_is_leaf(dev, tm_node->lvl)) {
+		reg = (((uint64_t)tm_node->id) << 32);
+
+		/* Packets */
+		addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
+		val = otx2_atomic64_add_nosync(reg, addr);
+		if (val & OP_ERR)
+			val = 0;
+		stats->n_pkts = val - tm_node->last_pkts;
+
+		/* Bytes */
+		addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
+		val = otx2_atomic64_add_nosync(reg, addr);
+		if (val & OP_ERR)
+			val = 0;
+		stats->n_bytes = val - tm_node->last_bytes;
+
+		if (clear) {
+			tm_node->last_pkts = stats->n_pkts;
+			tm_node->last_bytes = stats->n_bytes;
+		}
+
+		*stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES;
+
+	} else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "stats read error";
+
+		/* RED Drop packets */
+		reg = NIX_AF_TL1X_DROPPED_PACKETS(tm_node->hw_id);
+		rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
+		if (rc)
+			goto exit;
+		stats->leaf.n_pkts_dropped[RTE_COLOR_RED] =
+						val - tm_node->last_pkts;
+
+		/* RED Drop bytes */
+		reg = NIX_AF_TL1X_DROPPED_BYTES(tm_node->hw_id);
+		rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
+		if (rc)
+			goto exit;
+		stats->leaf.n_bytes_dropped[RTE_COLOR_RED] =
+						val - tm_node->last_bytes;
+
+		/* Clear stats */
+		if (clear) {
+			tm_node->last_pkts =
+				stats->leaf.n_pkts_dropped[RTE_COLOR_RED];
+			tm_node->last_bytes =
+				stats->leaf.n_bytes_dropped[RTE_COLOR_RED];
+		}
+
+		*stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
+			RTE_TM_STATS_N_BYTES_RED_DROPPED;
+
+	} else {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "unsupported node";
+		rc = -EINVAL;
+	}
+
+exit:
+	return rc;
+}
+
 const struct rte_tm_ops otx2_tm_ops = {
+	.node_type_get = otx2_nix_tm_node_type_get,
+
+	.shaper_profile_add = otx2_nix_tm_shaper_profile_add,
+	.shaper_profile_delete = otx2_nix_tm_shaper_profile_delete,
+
 	.node_add = otx2_nix_tm_node_add,
 	.node_delete = otx2_nix_tm_node_delete,
 	.node_suspend = otx2_nix_tm_node_suspend,
 	.node_resume = otx2_nix_tm_node_resume,
 	.hierarchy_commit = otx2_nix_tm_hierarchy_commit,
+
+	.node_stats_read = otx2_nix_tm_node_stats_read,
 };
 
 static int
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index ebb4e90..20e2069 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -46,6 +46,10 @@ struct otx2_nix_tm_node {
 
 	struct otx2_nix_tm_node *parent;
 	struct rte_tm_node_params params;
+
+	/* Last stats */
+	uint64_t last_pkts;
+	uint64_t last_bytes;
 };
 
 struct otx2_nix_tm_shaper_profile {
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v2 08/11] net/octeontx2: add tm dynamic topology update cb
  2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
                     ` (6 preceding siblings ...)
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 07/11] net/octeontx2: add tm stats and shaper profile cbs Nithin Dabilpuram
@ 2020-04-02 19:34   ` Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 09/11] net/octeontx2: add tm debug support Nithin Dabilpuram
                     ` (2 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-02 19:34 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Add dynamic parent and shaper update callbacks that can be used to
change the RR quantum or PIR/CIR rate dynamically after hierarchy
commit. The dynamic parent update callback only supports updating the
RR quantum of a given child with respect to its parent; there is no
support yet for changing the priority or the parent itself.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 190 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 190 insertions(+)
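
For context, a minimal sketch (not part of this patch) of how these
dynamic updates could be driven through the generic rte_tm API after
rte_tm_hierarchy_commit(); the profile ID is illustrative:

#include <rte_tm.h>

static int
example_dynamic_update(uint16_t port_id, uint32_t node_id,
		       uint32_t parent_id, uint32_t prio, uint32_t weight)
{
	struct rte_tm_error err;
	int rc;

	/* Swap in a different (already added) shaper profile */
	rc = rte_tm_node_shaper_update(port_id, node_id,
				       2 /* profile id */, &err);
	if (rc)
		return rc;

	/* Per the limitation above, only the weight may change:
	 * parent_id and prio must match the committed hierarchy.
	 */
	return rte_tm_node_parent_update(port_id, node_id, parent_id,
					 prio, weight, &err);
}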

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 8230b5e..5a5ba5e 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -2238,6 +2238,194 @@ otx2_nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
 }
 
 static int
+otx2_nix_tm_node_shaper_update(struct rte_eth_dev *eth_dev,
+			       uint32_t node_id,
+			       uint32_t profile_id,
+			       struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_shaper_profile *profile = NULL;
+	struct otx2_mbox *mbox = dev->mbox;
+	struct otx2_nix_tm_node *tm_node;
+	struct nix_txschq_config *req;
+	uint8_t k;
+	int rc;
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node || nix_tm_is_leaf(dev, tm_node->lvl)) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "invalid node";
+		return -EINVAL;
+	}
+
+	if (profile_id == tm_node->params.shaper_profile_id)
+		return 0;
+
+	if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+		profile = nix_tm_shaper_profile_search(dev, profile_id);
+		if (!profile) {
+			error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+			error->message = "shaper profile ID not exist";
+			return -EINVAL;
+		}
+	}
+
+	tm_node->params.shaper_profile_id = profile_id;
+
+	/* Nothing to do if not yet committed */
+	if (!(dev->tm_flags & NIX_TM_COMMITTED))
+		return 0;
+
+	tm_node->flags &= ~NIX_TM_NODE_ENABLED;
+
+	/* Flush the specific node with SW_XOFF */
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = tm_node->hw_lvl;
+	k = prepare_tm_sw_xoff(tm_node, true, req->reg, req->regval);
+	req->num_regs = k;
+
+	rc = send_tm_reqval(mbox, req, error);
+	if (rc)
+		return rc;
+
+	/* Update the PIR/CIR and clear SW XOFF */
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = tm_node->hw_lvl;
+
+	k = prepare_tm_shaper_reg(tm_node, profile, req->reg, req->regval);
+
+	k += prepare_tm_sw_xoff(tm_node, false, &req->reg[k], &req->regval[k]);
+
+	req->num_regs = k;
+	rc = send_tm_reqval(mbox, req, error);
+	if (!rc)
+		tm_node->flags |= NIX_TM_NODE_ENABLED;
+	return rc;
+}
+
+static int
+otx2_nix_tm_node_parent_update(struct rte_eth_dev *eth_dev,
+			       uint32_t node_id, uint32_t new_parent_id,
+			       uint32_t priority, uint32_t weight,
+			       struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node, *sibling;
+	struct otx2_nix_tm_node *new_parent;
+	struct nix_txschq_config *req;
+	uint8_t k;
+	int rc;
+
+	if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "hierarchy doesn't exist";
+		return -EINVAL;
+	}
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	/* Parent id valid only for non root nodes */
+	if (tm_node->hw_lvl != dev->otx2_tm_root_lvl) {
+		new_parent = nix_tm_node_search(dev, new_parent_id, true);
+		if (!new_parent) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+			error->message = "no such parent node";
+			return -EINVAL;
+		}
+
+		/* Current support is only for dynamic weight update */
+		if (tm_node->parent != new_parent ||
+		    tm_node->priority != priority) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+			error->message = "only weight update supported";
+			return -EINVAL;
+		}
+	}
+
+	/* Skip if no change */
+	if (tm_node->weight == weight)
+		return 0;
+
+	tm_node->weight = weight;
+
+	/* For leaf nodes, SQ CTX needs update */
+	if (nix_tm_is_leaf(dev, tm_node->lvl)) {
+		/* Update SQ quantum data on the fly */
+		rc = nix_sq_sched_data(dev, tm_node, true);
+		if (rc) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			error->message = "sq sched data update failed";
+			return rc;
+		}
+	} else {
+		/* XOFF Parent node */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->parent->hw_lvl;
+		req->num_regs = prepare_tm_sw_xoff(tm_node->parent, true,
+						   req->reg, req->regval);
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+
+		/* XOFF this node and all other siblings */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->hw_lvl;
+
+		k = 0;
+		TAILQ_FOREACH(sibling, &dev->node_list, node) {
+			if (sibling->parent != tm_node->parent)
+				continue;
+			k += prepare_tm_sw_xoff(sibling, true, &req->reg[k],
+						&req->regval[k]);
+		}
+		req->num_regs = k;
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+
+		/* Update new weight for current node */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->hw_lvl;
+		req->num_regs = prepare_tm_sched_reg(dev, tm_node,
+						     req->reg, req->regval);
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+
+		/* XON this node and all other siblings */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->hw_lvl;
+
+		k = 0;
+		TAILQ_FOREACH(sibling, &dev->node_list, node) {
+			if (sibling->parent != tm_node->parent)
+				continue;
+			k += prepare_tm_sw_xoff(sibling, false, &req->reg[k],
+						&req->regval[k]);
+		}
+		req->num_regs = k;
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+
+		/* XON Parent node */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->parent->hw_lvl;
+		req->num_regs = prepare_tm_sw_xoff(tm_node->parent, false,
+						   req->reg, req->regval);
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+	}
+	return 0;
+}
+
+static int
 otx2_nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,
 			    struct rte_tm_node_stats *stats,
 			    uint64_t *stats_mask, int clear,
@@ -2334,6 +2522,8 @@ const struct rte_tm_ops otx2_tm_ops = {
 	.node_resume = otx2_nix_tm_node_resume,
 	.hierarchy_commit = otx2_nix_tm_hierarchy_commit,
 
+	.node_shaper_update = otx2_nix_tm_node_shaper_update,
+	.node_parent_update = otx2_nix_tm_node_parent_update,
 	.node_stats_read = otx2_nix_tm_node_stats_read,
 };
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v2 09/11] net/octeontx2: add tm debug support
  2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
                     ` (7 preceding siblings ...)
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 08/11] net/octeontx2: add tm dynamic topology update cb Nithin Dabilpuram
@ 2020-04-02 19:34   ` Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 10/11] net/octeontx2: add Tx queue ratelimit callback Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 11/11] net/octeontx2: add tm capability callbacks Nithin Dabilpuram
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-02 19:34 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Add debug support to TM to dump the configured topology and
registers, and enable the debug dump when an SQ flush fails.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_ethdev.h       |   1 +
 drivers/net/octeontx2/otx2_ethdev_debug.c | 311 ++++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.c           |   9 +-
 drivers/net/octeontx2/otx2_tm.h           |   1 +
 4 files changed, 318 insertions(+), 4 deletions(-)

diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 6679652..0ef90ce 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -459,6 +459,7 @@ int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
 			 struct rte_dev_reg_info *regs);
 int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
 void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
+void otx2_nix_tm_dump(struct otx2_eth_dev *dev);
 
 /* Stats */
 int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
index c8b4cd5..498c377 100644
--- a/drivers/net/octeontx2/otx2_ethdev_debug.c
+++ b/drivers/net/octeontx2/otx2_ethdev_debug.c
@@ -6,6 +6,7 @@
 
 #define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
 #define NIX_REG_INFO(reg) {reg, #reg}
+#define NIX_REG_NAME_SZ 48
 
 struct nix_lf_reg_info {
 	uint32_t offset;
@@ -390,9 +391,14 @@ otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev)
 	int rc, q, rq = eth_dev->data->nb_rx_queues;
 	int sq = eth_dev->data->nb_tx_queues;
 	struct otx2_mbox *mbox = dev->mbox;
+	struct npa_aq_enq_rsp *npa_rsp;
+	struct npa_aq_enq_req *npa_aq;
+	struct otx2_npa_lf *npa_lf;
 	struct nix_aq_enq_rsp *rsp;
 	struct nix_aq_enq_req *aq;
 
+	npa_lf = otx2_npa_lf_obj_get();
+
 	for (q = 0; q < rq; q++) {
 		aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
 		aq->qidx = q;
@@ -438,6 +444,36 @@ otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev)
 		nix_dump("============== port=%d sq=%d ===============",
 			 eth_dev->data->port_id, q);
 		nix_lf_sq_dump(&rsp->sq);
+
+		if (!npa_lf) {
+			otx2_err("NPA LF doesn't exist");
+			continue;
+		}
+
+		/* Dump SQB Aura minimal info */
+		npa_aq = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+		npa_aq->aura_id = rsp->sq.sqb_aura;
+		npa_aq->ctype = NPA_AQ_CTYPE_AURA;
+		npa_aq->op = NPA_AQ_INSTOP_READ;
+
+		rc = otx2_mbox_process_msg(npa_lf->mbox, (void *)&npa_rsp);
+		if (rc) {
+			otx2_err("Failed to get sq's sqb_aura context");
+			continue;
+		}
+
+		nix_dump("\nSQB Aura W0: Pool addr\t\t0x%"PRIx64"",
+			 npa_rsp->aura.pool_addr);
+		nix_dump("SQB Aura W1: ena\t\t\t%d",
+			 npa_rsp->aura.ena);
+		nix_dump("SQB Aura W2: count\t\t%"PRIx64"",
+			 (uint64_t)npa_rsp->aura.count);
+		nix_dump("SQB Aura W3: limit\t\t%"PRIx64"",
+			 (uint64_t)npa_rsp->aura.limit);
+		nix_dump("SQB Aura W3: fc_ena\t\t%d",
+			 npa_rsp->aura.fc_ena);
+		nix_dump("SQB Aura W4: fc_addr\t\t0x%"PRIx64"\n",
+			 npa_rsp->aura.fc_addr);
 	}
 
 fail:
@@ -498,3 +534,278 @@ otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
 	nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
 		 rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
 }
+
+static uint8_t
+prepare_nix_tm_reg_dump(uint16_t hw_lvl, uint16_t schq, uint16_t link,
+			uint64_t *reg, char regstr[][NIX_REG_NAME_SZ])
+{
+	uint8_t k = 0;
+
+	switch (hw_lvl) {
+	case NIX_TXSCH_LVL_SMQ:
+		reg[k] = NIX_AF_SMQX_CFG(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_SMQ[%u]_CFG", schq);
+
+		reg[k] = NIX_AF_MDQX_PARENT(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_PARENT", schq);
+
+		reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_MDQX_PIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_PIR", schq);
+
+		reg[k] = NIX_AF_MDQX_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_MDQX_SHAPE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_SHAPE", schq);
+
+		reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_SW_XOFF", schq);
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		reg[k] = NIX_AF_TL4X_PARENT(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_PARENT", schq);
+
+		reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_TOPOLOGY", schq);
+
+		reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_SDP_LINK_CFG", schq);
+
+		reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_TL4X_PIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_PIR", schq);
+
+		reg[k] = NIX_AF_TL4X_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_TL4X_SHAPE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_SHAPE", schq);
+
+		reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_SW_XOFF", schq);
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		reg[k] = NIX_AF_TL3X_PARENT(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_PARENT", schq);
+
+		reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_TOPOLOGY", schq);
+
+		reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
+
+		reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_TL3X_PIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_PIR", schq);
+
+		reg[k] = NIX_AF_TL3X_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_TL3X_SHAPE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_SHAPE", schq);
+
+		reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_SW_XOFF", schq);
+		break;
+	case NIX_TXSCH_LVL_TL2:
+		reg[k] = NIX_AF_TL2X_PARENT(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_PARENT", schq);
+
+		reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_TOPOLOGY", schq);
+
+		reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
+
+		reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_TL2X_PIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_PIR", schq);
+
+		reg[k] = NIX_AF_TL2X_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_TL2X_SHAPE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_SHAPE", schq);
+
+		reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_SW_XOFF", schq);
+		break;
+	case NIX_TXSCH_LVL_TL1:
+
+		reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_TOPOLOGY", schq);
+
+		reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_TL1X_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_SW_XOFF", schq);
+
+		reg[k] = NIX_AF_TL1X_DROPPED_PACKETS(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_DROPPED_PACKETS", schq);
+		break;
+	default:
+		break;
+	}
+
+	if (k > MAX_REGS_PER_MBOX_MSG) {
+		nix_dump("\t!!!NIX TM Registers request overflow!!!");
+		return 0;
+	}
+	return k;
+}
+
+/* Dump TM hierarchy and registers */
+void
+otx2_nix_tm_dump(struct otx2_eth_dev *dev)
+{
+	char regstr[MAX_REGS_PER_MBOX_MSG * 2][NIX_REG_NAME_SZ];
+	struct otx2_nix_tm_node *tm_node, *root_node, *parent;
+	uint64_t reg[MAX_REGS_PER_MBOX_MSG * 2];
+	struct nix_txschq_config *req;
+	const char *lvlstr, *parent_lvlstr;
+	struct nix_txschq_config *rsp;
+	uint32_t schq, parent_schq;
+	int hw_lvl, j, k, rc;
+
+	nix_dump("===TM hierarchy and registers dump of %s===",
+		 dev->eth_dev->data->name);
+
+	root_node = NULL;
+
+	for (hw_lvl = 0; hw_lvl <= NIX_TXSCH_LVL_CNT; hw_lvl++) {
+		TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+			if (tm_node->hw_lvl != hw_lvl)
+				continue;
+
+			parent = tm_node->parent;
+			if (hw_lvl == NIX_TXSCH_LVL_CNT) {
+				lvlstr = "SQ";
+				schq = tm_node->id;
+			} else {
+				lvlstr = nix_hwlvl2str(tm_node->hw_lvl);
+				schq = tm_node->hw_id;
+			}
+
+			if (parent) {
+				parent_schq = parent->hw_id;
+				parent_lvlstr =
+					nix_hwlvl2str(parent->hw_lvl);
+			} else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
+				parent_schq = otx2_nix_get_link(dev);
+				parent_lvlstr = "LINK";
+			} else {
+				parent_schq = tm_node->parent_hw_id;
+				parent_lvlstr =
+					nix_hwlvl2str(tm_node->hw_lvl + 1);
+			}
+
+			nix_dump("%s_%d->%s_%d", lvlstr, schq,
+				 parent_lvlstr, parent_schq);
+
+			if (!(tm_node->flags & NIX_TM_NODE_HWRES))
+				continue;
+
+			/* Need to dump TL1 when root is TL2 */
+			if (tm_node->hw_lvl == dev->otx2_tm_root_lvl)
+				root_node = tm_node;
+
+			/* Dump registers only when HWRES is present */
+			k = prepare_nix_tm_reg_dump(tm_node->hw_lvl, schq,
+						    otx2_nix_get_link(dev), reg,
+						    regstr);
+			if (!k)
+				continue;
+
+			req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+			req->read = 1;
+			req->lvl = tm_node->hw_lvl;
+			req->num_regs = k;
+			otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
+			rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
+			if (!rc) {
+				for (j = 0; j < k; j++)
+					nix_dump("\t%s=0x%016lx",
+						 regstr[j], rsp->regval[j]);
+			} else {
+				nix_dump("\t!!!Failed to dump registers!!!");
+			}
+		}
+		nix_dump("\n");
+	}
+
+	/* Dump TL1 node data when root level is TL2 */
+	if (root_node && root_node->hw_lvl == NIX_TXSCH_LVL_TL2) {
+		k = prepare_nix_tm_reg_dump(NIX_TXSCH_LVL_TL1,
+					    root_node->parent_hw_id,
+					    otx2_nix_get_link(dev),
+					    reg, regstr);
+		if (!k)
+			return;
+
+
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->read = 1;
+		req->lvl = NIX_TXSCH_LVL_TL1;
+		req->num_regs = k;
+		otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
+		rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
+		if (!rc) {
+			for (j = 0; j < k; j++)
+				nix_dump("\t%s=0x%016lx",
+					 regstr[j], rsp->regval[j]);
+		} else {
+			nix_dump("\t!!!Failed to dump registers!!!");
+		}
+	}
+
+	otx2_nix_queues_ctx_dump(dev->eth_dev);
+}
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 5a5ba5e..be0027f 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -28,8 +28,8 @@ uint64_t shaper2regval(struct shaper_params *shaper)
 		(shaper->mantissa << 1);
 }
 
-static int
-nix_get_link(struct otx2_eth_dev *dev)
+int
+otx2_nix_get_link(struct otx2_eth_dev *dev)
 {
 	int link = 13 /* SDP */;
 	uint16_t lmac_chan;
@@ -574,7 +574,7 @@ populate_tm_reg(struct otx2_eth_dev *dev,
 		if (!otx2_dev_is_sdp(dev) &&
 		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
 			reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
-						nix_get_link(dev));
+						otx2_nix_get_link(dev));
 			regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
 			k++;
 		}
@@ -594,7 +594,7 @@ populate_tm_reg(struct otx2_eth_dev *dev,
 		if (!otx2_dev_is_sdp(dev) &&
 		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
 			reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
-						nix_get_link(dev));
+						otx2_nix_get_link(dev));
 			regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
 			k++;
 		}
@@ -990,6 +990,7 @@ nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
 
 	return 0;
 exit:
+	otx2_nix_tm_dump(dev);
 	return -EFAULT;
 }
 
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 20e2069..d5d58ec 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -23,6 +23,7 @@ int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 int otx2_nix_sq_flush_pre(void *_txq, bool dev_started);
 int otx2_nix_sq_flush_post(void *_txq);
 int otx2_nix_sq_enable(void *_txq);
+int otx2_nix_get_link(struct otx2_eth_dev *dev);
 int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable);
 
 struct otx2_nix_tm_node {
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v2 10/11] net/octeontx2: add Tx queue ratelimit callback
  2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
                     ` (8 preceding siblings ...)
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 09/11] net/octeontx2: add tm debug support Nithin Dabilpuram
@ 2020-04-02 19:34   ` Nithin Dabilpuram
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 11/11] net/octeontx2: add tm capability callbacks Nithin Dabilpuram
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-02 19:34 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Krzysztof Kanas <kkanas@marvell.com>

Add Tx queue rate limiting support. This support is mutually
exclusive with TM support, i.e., when TM is configured, the Tx queue
rate limit configuration is no longer valid.

Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/octeontx2/otx2_ethdev.c |   1 +
 drivers/net/octeontx2/otx2_tm.c     | 241 +++++++++++++++++++++++++++++++++++-
 drivers/net/octeontx2/otx2_tm.h     |   3 +
 3 files changed, 243 insertions(+), 2 deletions(-)
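
For context, a minimal sketch (not part of this patch) of how an
application reaches this callback through the generic ethdev API; the
rate argument is in Mbps:

#include <rte_ethdev.h>

static int
example_queue_ratelimit(uint16_t port_id, uint16_t queue_idx)
{
	/* Cap the queue at 100 Mbps. A rate of 0 takes the SW XOFF
	 * path in otx2_nix_tm_rate_limit_mdq() below.
	 */
	return rte_eth_set_queue_rate_limit(port_id, queue_idx, 100);
}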

diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 6896797..78b7f3a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2071,6 +2071,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
 	.rx_descriptor_status     = otx2_nix_rx_descriptor_status,
 	.tx_descriptor_status     = otx2_nix_tx_descriptor_status,
 	.tx_done_cleanup          = otx2_nix_tx_done_cleanup,
+	.set_queue_rate_limit     = otx2_nix_tm_set_queue_rate_limit,
 	.pool_ops_supported       = otx2_nix_pool_ops_supported,
 	.filter_ctrl              = otx2_nix_dev_filter_ctrl,
 	.get_module_info          = otx2_nix_get_module_info,
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index be0027f..de5a59b 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -2204,14 +2204,15 @@ otx2_nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Delete default/ratelimit tree */
-	if (dev->tm_flags & (NIX_TM_DEFAULT_TREE)) {
+	if (dev->tm_flags & (NIX_TM_DEFAULT_TREE | NIX_TM_RATE_LIMIT_TREE)) {
 		rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER, 0, false);
 		if (rc) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			error->message = "failed to free default resources";
 			return rc;
 		}
-		dev->tm_flags &= ~(NIX_TM_DEFAULT_TREE);
+		dev->tm_flags &= ~(NIX_TM_DEFAULT_TREE |
+				   NIX_TM_RATE_LIMIT_TREE);
 	}
 
 	/* Free up user alloc'ed resources */
@@ -2673,6 +2674,242 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int
+nix_tm_prepare_rate_limited_tree(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint32_t def = eth_dev->data->nb_tx_queues;
+	struct rte_tm_node_params params;
+	uint32_t leaf_parent, i, rc = 0;
+
+	memset(&params, 0, sizeof(params));
+
+	if (nix_tm_have_tl1_access(dev)) {
+		dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
+		rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+					DEFAULT_RR_WEIGHT,
+					NIX_TXSCH_LVL_TL1,
+					OTX2_TM_LVL_ROOT, false, &params);
+		if (rc)
+			goto error;
+		rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+					DEFAULT_RR_WEIGHT,
+					NIX_TXSCH_LVL_TL2,
+					OTX2_TM_LVL_SCH1, false, &params);
+		if (rc)
+			goto error;
+		rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+					DEFAULT_RR_WEIGHT,
+					NIX_TXSCH_LVL_TL3,
+					OTX2_TM_LVL_SCH2, false, &params);
+		if (rc)
+			goto error;
+		rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
+					DEFAULT_RR_WEIGHT,
+					NIX_TXSCH_LVL_TL4,
+					OTX2_TM_LVL_SCH3, false, &params);
+		if (rc)
+			goto error;
+		leaf_parent = def + 3;
+
+		/* Add per queue SMQ nodes */
+		for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+			rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
+						leaf_parent,
+						0, DEFAULT_RR_WEIGHT,
+						NIX_TXSCH_LVL_SMQ,
+						OTX2_TM_LVL_SCH4,
+						false, &params);
+			if (rc)
+				goto error;
+		}
+
+		/* Add leaf nodes */
+		for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+			rc = nix_tm_node_add_to_list(dev, i,
+						     leaf_parent + 1 + i, 0,
+						     DEFAULT_RR_WEIGHT,
+						     NIX_TXSCH_LVL_CNT,
+						     OTX2_TM_LVL_QUEUE,
+						     false, &params);
+			if (rc)
+				goto error;
+		}
+
+		return 0;
+	}
+
+	dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
+	rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+				DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL2,
+				OTX2_TM_LVL_ROOT, false, &params);
+	if (rc)
+		goto error;
+	rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+				DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL3,
+				OTX2_TM_LVL_SCH1, false, &params);
+	if (rc)
+		goto error;
+	rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+				     DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL4,
+				     OTX2_TM_LVL_SCH2, false, &params);
+	if (rc)
+		goto error;
+	leaf_parent = def + 2;
+
+	/* Add per queue SMQ nodes */
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+		rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
+					     leaf_parent,
+					     0, DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_SMQ,
+					     OTX2_TM_LVL_SCH3,
+					     false, &params);
+		if (rc)
+			goto error;
+	}
+
+	/* Add leaf nodes */
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+		rc = nix_tm_node_add_to_list(dev, i, leaf_parent + 1 + i, 0,
+					     DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_CNT,
+					     OTX2_TM_LVL_SCH4,
+					     false, &params);
+		if (rc)
+			break;
+	}
+error:
+	return rc;
+}
+
+static int
+otx2_nix_tm_rate_limit_mdq(struct rte_eth_dev *eth_dev,
+			   struct otx2_nix_tm_node *tm_node,
+			   uint64_t tx_rate)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_shaper_profile profile;
+	struct otx2_mbox *mbox = dev->mbox;
+	volatile uint64_t *reg, *regval;
+	struct nix_txschq_config *req;
+	uint16_t flags;
+	uint8_t k = 0;
+	int rc;
+
+	flags = tm_node->flags;
+
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = NIX_TXSCH_LVL_MDQ;
+	reg = req->reg;
+	regval = req->regval;
+
+	if (tx_rate == 0) {
+		k += prepare_tm_sw_xoff(tm_node, true, &reg[k], &regval[k]);
+		flags &= ~NIX_TM_NODE_ENABLED;
+		goto exit;
+	}
+
+	if (!(flags & NIX_TM_NODE_ENABLED)) {
+		k += prepare_tm_sw_xoff(tm_node, false, &reg[k], &regval[k]);
+		flags |= NIX_TM_NODE_ENABLED;
+	}
+
+	/* Use only PIR for rate limit */
+	memset(&profile, 0, sizeof(profile));
+	profile.params.peak.rate = tx_rate;
+	/* Minimum burst of ~4us worth of Tx bytes */
+	profile.params.peak.size = RTE_MAX(NIX_MAX_HW_FRS,
+					   (4ull * tx_rate) / (1E6 * 8));
+	if (!dev->tm_rate_min || dev->tm_rate_min > tx_rate)
+		dev->tm_rate_min = tx_rate;
+
+	k += prepare_tm_shaper_reg(tm_node, &profile, &reg[k], &regval[k]);
+exit:
+	req->num_regs = k;
+	rc = otx2_mbox_process(mbox);
+	if (rc)
+		return rc;
+
+	tm_node->flags = flags;
+	return 0;
+}
+
+int
+otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
+				uint16_t queue_idx, uint16_t tx_rate_mbps)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint64_t tx_rate = tx_rate_mbps * (uint64_t)1E6;
+	struct otx2_nix_tm_node *tm_node;
+	int rc;
+
+	/* Check for supported revisions */
+	if (otx2_dev_is_95xx_Ax(dev) ||
+	    otx2_dev_is_96xx_Ax(dev))
+		return -EINVAL;
+
+	if (queue_idx >= eth_dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	if (!(dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
+	    !(dev->tm_flags & NIX_TM_RATE_LIMIT_TREE))
+		goto error;
+
+	if ((dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
+	    eth_dev->data->nb_tx_queues > 1) {
+		/* For TM topology change ethdev needs to be stopped */
+		if (eth_dev->data->dev_started)
+			return -EBUSY;
+
+		/*
+		 * Disable xmit; it will be re-enabled once the
+		 * new topology is in place.
+		 */
+		rc = nix_xmit_disable(eth_dev);
+		if (rc) {
+			otx2_err("failed to disable TX, rc=%d", rc);
+			return -EIO;
+		}
+
+		rc = nix_tm_free_resources(dev, 0, 0, false);
+		if (rc < 0) {
+			otx2_tm_dbg("failed to free default resources, rc %d",
+				   rc);
+			return -EIO;
+		}
+
+		rc = nix_tm_prepare_rate_limited_tree(eth_dev);
+		if (rc < 0) {
+			otx2_tm_dbg("failed to prepare tm tree, rc=%d", rc);
+			return rc;
+		}
+
+		rc = nix_tm_alloc_resources(eth_dev, true);
+		if (rc != 0) {
+			otx2_tm_dbg("failed to allocate tm tree, rc=%d", rc);
+			return rc;
+		}
+
+		dev->tm_flags &= ~NIX_TM_DEFAULT_TREE;
+		dev->tm_flags |= NIX_TM_RATE_LIMIT_TREE;
+	}
+
+	tm_node = nix_tm_node_search(dev, queue_idx, false);
+
+	/* check if we found a valid leaf node */
+	if (!tm_node ||
+	    !nix_tm_is_leaf(dev, tm_node->lvl) ||
+	    !tm_node->parent ||
+	    tm_node->parent->hw_id == UINT32_MAX)
+		return -EIO;
+
+	return otx2_nix_tm_rate_limit_mdq(eth_dev, tm_node->parent, tx_rate);
+error:
+	otx2_tm_dbg("Unsupported TM tree 0x%0x", dev->tm_flags);
+	return -EINVAL;
+}
+
 int
 otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
 {
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index d5d58ec..7b1672e 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -11,6 +11,7 @@
 
 #define NIX_TM_DEFAULT_TREE	BIT_ULL(0)
 #define NIX_TM_COMMITTED	BIT_ULL(1)
+#define NIX_TM_RATE_LIMIT_TREE	BIT_ULL(2)
 #define NIX_TM_TL1_NO_SP	BIT_ULL(3)
 
 struct otx2_eth_dev;
@@ -20,6 +21,8 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 			      uint32_t *rr_quantum, uint16_t *smq);
+int otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
+				     uint16_t queue_idx, uint16_t tx_rate);
 int otx2_nix_sq_flush_pre(void *_txq, bool dev_started);
 int otx2_nix_sq_flush_post(void *_txq);
 int otx2_nix_sq_enable(void *_txq);
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v2 11/11] net/octeontx2: add tm capability callbacks
  2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
                     ` (9 preceding siblings ...)
  2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 10/11] net/octeontx2: add Tx queue ratelimit callback Nithin Dabilpuram
@ 2020-04-02 19:34   ` Nithin Dabilpuram
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-02 19:34 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, John McNamara,
	Marko Kovacevic
  Cc: dev, kkanas

From: Krzysztof Kanas <kkanas@marvell.com>

Add Traffic Management capability callbacks to provide global, level
and node capabilities. This patch also adds documentation on Traffic
Management support.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 doc/guides/nics/features/octeontx2.ini |   1 +
 doc/guides/nics/octeontx2.rst          |  15 +++
 doc/guides/rel_notes/release_20_05.rst |   8 ++
 drivers/net/octeontx2/otx2_ethdev.c    |   1 +
 drivers/net/octeontx2/otx2_tm.c        | 232 +++++++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.h        |   1 +
 6 files changed, 258 insertions(+)
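
For context, a minimal sketch (not part of this patch) of querying these
capabilities through the generic rte_tm API:

#include <stdio.h>
#include <rte_tm.h>

static int
example_tm_caps(uint16_t port_id)
{
	struct rte_tm_capabilities cap;
	struct rte_tm_level_capabilities lcap;
	struct rte_tm_error err;
	int rc;

	rc = rte_tm_capabilities_get(port_id, &cap, &err);
	if (rc)
		return rc;
	printf("max nodes: %u, max levels: %u\n",
	       cap.n_nodes_max, cap.n_levels_max);

	/* Per-level capabilities; level 0 is the root */
	rc = rte_tm_level_capabilities_get(port_id, 0, &lcap, &err);
	if (rc == 0)
		printf("root: max nodes %u\n", lcap.n_nodes_max);
	return rc;
}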

diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 473fe56..fb13517 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -31,6 +31,7 @@ Inline protocol      = Y
 VLAN filter          = Y
 Flow control         = Y
 Flow API             = Y
+Rate limitation      = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 VLAN offload         = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 60187ec..6b885d6 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -39,6 +39,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
 - HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
 - Support Rx interrupt
 - Inline IPsec processing support
+- :ref:`Traffic Management API <tmapi>`
 
 Prerequisites
 -------------
@@ -213,6 +214,20 @@ Runtime Config Options
    parameters to all the PCIe devices if application requires to configure on
    all the ethdev ports.
 
+Traffic Management API
+----------------------
+
+OCTEON TX2 PMD supports the generic DPDK Traffic Management API, which allows
+configuring the following features:
+
+1. Hierarchical scheduling
+2. Single rate - two color, Two rate - three color shaping
+
+Both DWRR and Static Priority (SP) hierarchical scheduling are supported.
+Every parent can have at most 10 SP children and an unlimited number of
+DWRR children. Both PF and VF support the traffic management API, with the
+PF supporting 6 levels and the VF supporting 5 levels of topology.
+
 Limitations
 -----------
 
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index 000bbf5..47a9825 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -62,6 +62,14 @@ New Features
 
   * Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
 
+* **Updated Marvell OCTEON TX2 ethdev driver.**
+
+  Updated Marvell OCTEON TX2 ethdev driver with traffic manager support,
+  including the below features.
+
+  * Hierarchical scheduling with DWRR and SP.
+  * Single rate - two color and two rate - three color shaping.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 78b7f3a..599a14c 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2026,6 +2026,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
 	.link_update              = otx2_nix_link_update,
 	.tx_queue_setup           = otx2_nix_tx_queue_setup,
 	.tx_queue_release         = otx2_nix_tx_queue_release,
+	.tm_ops_get               = otx2_nix_tm_ops_get,
 	.rx_queue_setup           = otx2_nix_rx_queue_setup,
 	.rx_queue_release         = otx2_nix_rx_queue_release,
 	.dev_start                = otx2_nix_dev_start,
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index de5a59b..a2369e3 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1834,7 +1834,217 @@ otx2_nix_tm_node_type_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		*is_leaf = true;
 	else
 		*is_leaf = false;
+	return 0;
+}
 
+static int
+otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev,
+		     struct rte_tm_capabilities *cap,
+		     struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	int rc, max_nr_nodes = 0, i;
+	struct free_rsrcs_rsp *rsp;
+
+	memset(cap, 0, sizeof(*cap));
+
+	otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+		return rc;
+	}
+
+	for (i = 0; i < NIX_TXSCH_LVL_TL1; i++)
+		max_nr_nodes += rsp->schq[i];
+
+	cap->n_nodes_max = max_nr_nodes + dev->tm_leaf_cnt;
+	/* TL1 level is reserved for PF */
+	cap->n_levels_max = nix_tm_have_tl1_access(dev) ?
+				OTX2_TM_LVL_MAX : OTX2_TM_LVL_MAX - 1;
+	cap->non_leaf_nodes_identical = 1;
+	cap->leaf_nodes_identical = 1;
+
+	/* Shaper Capabilities */
+	cap->shaper_private_n_max = max_nr_nodes;
+	cap->shaper_n_max = max_nr_nodes;
+	cap->shaper_private_dual_rate_n_max = max_nr_nodes;
+	cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+	cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+	cap->shaper_pkt_length_adjust_min = 0;
+	cap->shaper_pkt_length_adjust_max = 0;
+
+	/* Scheduling capabilities */
+	cap->sched_n_children_max = rsp->schq[NIX_TXSCH_LVL_MDQ];
+	cap->sched_sp_n_priorities_max = TXSCH_TLX_SP_PRIO_MAX;
+	cap->sched_wfq_n_children_per_group_max = cap->sched_n_children_max;
+	cap->sched_wfq_n_groups_max = 1;
+	cap->sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+
+	cap->dynamic_update_mask =
+		RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL |
+		RTE_TM_UPDATE_NODE_SUSPEND_RESUME;
+	cap->stats_mask =
+		RTE_TM_STATS_N_PKTS |
+		RTE_TM_STATS_N_BYTES |
+		RTE_TM_STATS_N_PKTS_RED_DROPPED |
+		RTE_TM_STATS_N_BYTES_RED_DROPPED;
+
+	for (i = 0; i < RTE_COLORS; i++) {
+		cap->mark_vlan_dei_supported[i] = false;
+		cap->mark_ip_ecn_tcp_supported[i] = false;
+		cap->mark_ip_dscp_supported[i] = false;
+	}
+
+	return 0;
+}
+
+static int
+otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
+				   struct rte_tm_level_capabilities *cap,
+				   struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct free_rsrcs_rsp *rsp;
+	uint16_t hw_lvl;
+	int rc;
+
+	memset(cap, 0, sizeof(*cap));
+
+	otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+		return rc;
+	}
+
+	hw_lvl = nix_tm_lvl2nix(dev, lvl);
+
+	if (nix_tm_is_leaf(dev, lvl)) {
+		/* Leaf */
+		cap->n_nodes_max = dev->tm_leaf_cnt;
+		cap->n_nodes_leaf_max = dev->tm_leaf_cnt;
+		cap->leaf_nodes_identical = 1;
+		cap->leaf.stats_mask =
+			RTE_TM_STATS_N_PKTS |
+			RTE_TM_STATS_N_BYTES;
+
+	} else if (lvl == OTX2_TM_LVL_ROOT) {
+		/* Root node, aka TL2(vf)/TL1(pf) */
+		cap->n_nodes_max = 1;
+		cap->n_nodes_nonleaf_max = 1;
+		cap->non_leaf_nodes_identical = 1;
+
+		cap->nonleaf.shaper_private_supported = true;
+		cap->nonleaf.shaper_private_dual_rate_supported =
+			nix_tm_have_tl1_access(dev) ? false : true;
+		cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+		cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+
+		cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
+		cap->nonleaf.sched_sp_n_priorities_max =
+					nix_max_prio(dev, hw_lvl) + 1;
+		cap->nonleaf.sched_wfq_n_groups_max = 1;
+		cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+
+		if (nix_tm_have_tl1_access(dev))
+			cap->nonleaf.stats_mask =
+				RTE_TM_STATS_N_PKTS_RED_DROPPED |
+				RTE_TM_STATS_N_BYTES_RED_DROPPED;
+	} else if ((lvl < OTX2_TM_LVL_MAX) &&
+		   (hw_lvl < NIX_TXSCH_LVL_CNT)) {
+		/* TL2, TL3, TL4, MDQ */
+		cap->n_nodes_max = rsp->schq[hw_lvl];
+		cap->n_nodes_nonleaf_max = cap->n_nodes_max;
+		cap->non_leaf_nodes_identical = 1;
+
+		cap->nonleaf.shaper_private_supported = true;
+		cap->nonleaf.shaper_private_dual_rate_supported = true;
+		cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+		cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+
+		/* MDQ doesn't support Strict Priority */
+		if (hw_lvl == NIX_TXSCH_LVL_MDQ)
+			cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
+		else
+			cap->nonleaf.sched_n_children_max =
+				rsp->schq[hw_lvl - 1];
+		cap->nonleaf.sched_sp_n_priorities_max =
+			nix_max_prio(dev, hw_lvl) + 1;
+		cap->nonleaf.sched_wfq_n_groups_max = 1;
+		cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+	} else {
+		/* unsupported level */
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int
+otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			  struct rte_tm_node_capabilities *cap,
+			  struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct otx2_nix_tm_node *tm_node;
+	struct free_rsrcs_rsp *rsp;
+	int rc, hw_lvl, lvl;
+
+	memset(cap, 0, sizeof(*cap));
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	hw_lvl = tm_node->hw_lvl;
+	lvl = tm_node->lvl;
+
+	/* Leaf node */
+	if (nix_tm_is_leaf(dev, lvl)) {
+		cap->stats_mask = RTE_TM_STATS_N_PKTS |
+					RTE_TM_STATS_N_BYTES;
+		return 0;
+	}
+
+	otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+		return rc;
+	}
+
+	/* Non Leaf Shaper */
+	cap->shaper_private_supported = true;
+	cap->shaper_private_dual_rate_supported =
+		(hw_lvl == NIX_TXSCH_LVL_TL1) ? false : true;
+	cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+	cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+
+	/* Non Leaf Scheduler */
+	if (hw_lvl == NIX_TXSCH_LVL_MDQ)
+		cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
+	else
+		cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
+
+	cap->nonleaf.sched_sp_n_priorities_max = nix_max_prio(dev, hw_lvl) + 1;
+	cap->nonleaf.sched_wfq_n_children_per_group_max =
+		cap->nonleaf.sched_n_children_max;
+	cap->nonleaf.sched_wfq_n_groups_max = 1;
+	cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+
+	if (hw_lvl == NIX_TXSCH_LVL_TL1)
+		cap->stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
+			RTE_TM_STATS_N_BYTES_RED_DROPPED;
 	return 0;
 }
 
@@ -2515,6 +2725,10 @@ otx2_nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,
 const struct rte_tm_ops otx2_tm_ops = {
 	.node_type_get = otx2_nix_tm_node_type_get,
 
+	.capabilities_get = otx2_nix_tm_capa_get,
+	.level_capabilities_get = otx2_nix_tm_level_capa_get,
+	.node_capabilities_get = otx2_nix_tm_node_capa_get,
+
 	.shaper_profile_add = otx2_nix_tm_shaper_profile_add,
 	.shaper_profile_delete = otx2_nix_tm_shaper_profile_delete,
 
@@ -2911,6 +3125,24 @@ otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
 }
 
 int
+otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *arg)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+	if (!arg)
+		return -EINVAL;
+
+	/* Check for supported revisions */
+	if (otx2_dev_is_95xx_Ax(dev) ||
+	    otx2_dev_is_96xx_Ax(dev))
+		return -EINVAL;
+
+	*(const void **)arg = &otx2_tm_ops;
+
+	return 0;
+}
+
+int
 otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 7b1672e..9675182 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -19,6 +19,7 @@ struct otx2_eth_dev;
 void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *ops);
 int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 			      uint32_t *rr_quantum, uint16_t *smq);
 int otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support
  2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                   ` (12 preceding siblings ...)
  2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
@ 2020-04-03  8:52 ` Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 01/11] net/octeontx2: setup link config based on BP level Nithin Dabilpuram
                     ` (10 more replies)
  13 siblings, 11 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-03  8:52 UTC (permalink / raw)
  Cc: dev, jerinj, kkanas, Nithin Dabilpuram

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Add support to traffic management api in OCTEON TX2 PMD.
This support applies to CN96xx of C0 silicon version.

This series depends on http://patchwork.dpdk.org/patch/66344/

Depends-on:series-8815

v3:
* Fix x86_32 print string issue

v2:
* Update release notes of 20.05
* Prefix tm function pointers to start with otx2_ to be in line
  with the existing convention
* Use nix_lf_rx_start|stop instead of cgx_start_stop for
  handling other scenarios with VF. 
* Fix git log errors

Krzysztof Kanas (3):
  net/octeontx2: add tm node suspend and resume cb
  net/octeontx2: add Tx queue ratelimit callback
  net/octeontx2: add tm capability callbacks

Nithin Dabilpuram (8):
  net/octeontx2: setup link config based on BP level
  net/octeontx2: restructure tm helper functions
  net/octeontx2: add dynamic topology update support
  net/octeontx2: add tm node add and delete cb
  net/octeontx2: add tm hierarchy commit callback
  net/octeontx2: add tm stats and shaper profile cbs
  net/octeontx2: add tm dynamic topology update cb
  net/octeontx2: add tm debug support

 doc/guides/nics/features/octeontx2.ini    |    1 +
 doc/guides/nics/octeontx2.rst             |   15 +
 doc/guides/rel_notes/release_20_05.rst    |    8 +
 drivers/common/octeontx2/otx2_dev.h       |    9 +
 drivers/net/octeontx2/otx2_ethdev.c       |    5 +-
 drivers/net/octeontx2/otx2_ethdev.h       |    3 +
 drivers/net/octeontx2/otx2_ethdev_debug.c |  311 ++++
 drivers/net/octeontx2/otx2_tm.c           | 2676 ++++++++++++++++++++++++-----
 drivers/net/octeontx2/otx2_tm.h           |  101 +-
 9 files changed, 2646 insertions(+), 483 deletions(-)

-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v3 01/11] net/octeontx2: setup link config based on BP level
  2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
@ 2020-04-03  8:52   ` Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 02/11] net/octeontx2: restructure tm helper functions Nithin Dabilpuram
                     ` (9 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-03  8:52 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Configure NIX_AF_TL3_TL2X_LINKX_CFG using schq at
level based on NIX_AF_PSE_CHANNEL_LEVEL[BP_LEVEL].

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/octeontx2/otx2_ethdev.h |  1 +
 drivers/net/octeontx2/otx2_tm.c     | 16 +++++++++++++++-
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e5684f9..b7d5386 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -304,6 +304,7 @@ struct otx2_eth_dev {
 	/* Contiguous queues */
 	uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
 	uint16_t otx2_tm_root_lvl;
+	uint16_t link_cfg_lvl;
 	uint16_t tm_flags;
 	uint16_t tm_leaf_cnt;
 	struct otx2_nix_tm_node_list node_list;
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index ba615ce..2364e03 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -437,6 +437,16 @@ populate_tm_registers(struct otx2_eth_dev *dev,
 		*reg++ = NIX_AF_TL3X_SCHEDULE(schq);
 		*regval++ = (strict_schedul_prio << 24) | rr_quantum;
 		req->num_regs++;
+
+		/* Link configuration */
+		if (!otx2_dev_is_sdp(dev) &&
+		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
+			*reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
+						nix_get_link(dev));
+			*regval++ = BIT_ULL(12) | nix_get_relchan(dev);
+			req->num_regs++;
+		}
+
 		if (pir.rate && pir.burst) {
 			*reg++ = NIX_AF_TL3X_PIR(schq);
 			*regval++ = shaper2regval(&pir) | 1;
@@ -471,7 +481,10 @@ populate_tm_registers(struct otx2_eth_dev *dev,
 		else
 			*regval++ = (strict_schedul_prio << 24) | rr_quantum;
 		req->num_regs++;
-		if (!otx2_dev_is_sdp(dev)) {
+
+		/* Link configuration */
+		if (!otx2_dev_is_sdp(dev) &&
+		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
 			*reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
 						nix_get_link(dev));
 			*regval++ = BIT_ULL(12) | nix_get_relchan(dev);
@@ -1144,6 +1157,7 @@ nix_tm_send_txsch_alloc_msg(struct otx2_eth_dev *dev)
 		return rc;
 
 	nix_tm_copy_rsp_to_dev(dev, rsp);
+	dev->link_cfg_lvl = rsp->link_cfg_lvl;
 
 	nix_tm_assign_hw_id(dev);
 	return 0;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v3 02/11] net/octeontx2: restructure tm helper functions
  2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 01/11] net/octeontx2: setup link config based on BP level Nithin Dabilpuram
@ 2020-04-03  8:52   ` Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 03/11] net/octeontx2: add dynamic topology update support Nithin Dabilpuram
                     ` (8 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-03  8:52 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Restructure the traffic manager helper functions by splitting them
into multiple sets of register configurations, such as shaping,
scheduling and topology config.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 689 ++++++++++++++++++++++------------------
 drivers/net/octeontx2/otx2_tm.h |  85 ++---
 2 files changed, 417 insertions(+), 357 deletions(-)
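
To make the reworked rate conversion concrete, a small standalone sketch
(not part of this patch) of the NIX shaper rate formula used in the diff
below, with NIX_SHAPER_RATE_CONST taken as 2E6 per the comments; the
sample field values are illustrative:

#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>

#define NIX_SHAPER_RATE_CONST 2000000ull

/* rate = (2E6 * ((256 + mantissa) << exponent)) / ((1 << div_exp) * 256) */
static uint64_t
nix_shaper_rate(uint64_t exponent, uint64_t mantissa, uint64_t div_exp)
{
	return (NIX_SHAPER_RATE_CONST * ((256 + mantissa) << exponent)) /
	       ((1ull << div_exp) * 256);
}

int main(void)
{
	/* All-zero fields give the 2E6 base rate */
	printf("%" PRIu64 "\n", nix_shaper_rate(0, 0, 0));
	/* Max mantissa (255) nearly doubles the rate at a given exponent */
	printf("%" PRIu64 "\n", nix_shaper_rate(0, 255, 0));
	/* Each div_exp step halves the rate below the base rate */
	printf("%" PRIu64 "\n", nix_shaper_rate(0, 0, 1));
	return 0;
}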

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 2364e03..108f44c 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -94,52 +94,50 @@ nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
 }
 
 static inline uint64_t
-shaper_rate_to_nix(uint64_t cclk_hz, uint64_t cclk_ticks,
-		   uint64_t value, uint64_t *exponent_p,
+shaper_rate_to_nix(uint64_t value, uint64_t *exponent_p,
 		   uint64_t *mantissa_p, uint64_t *div_exp_p)
 {
 	uint64_t div_exp, exponent, mantissa;
 
 	/* Boundary checks */
-	if (value < MIN_SHAPER_RATE(cclk_hz, cclk_ticks) ||
-	    value > MAX_SHAPER_RATE(cclk_hz, cclk_ticks))
+	if (value < MIN_SHAPER_RATE ||
+	    value > MAX_SHAPER_RATE)
 		return 0;
 
-	if (value <= SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, 0)) {
+	if (value <= SHAPER_RATE(0, 0, 0)) {
 		/* Calculate rate div_exp and mantissa using
 		 * the following formula:
 		 *
-		 * value = (cclk_hz * (256 + mantissa)
-		 *              / ((cclk_ticks << div_exp) * 256)
+		 * value = (2E6 * (256 + mantissa)
+		 *              / ((1 << div_exp) * 256))
 		 */
 		div_exp = 0;
 		exponent = 0;
 		mantissa = MAX_RATE_MANTISSA;
 
-		while (value < (cclk_hz / (cclk_ticks << div_exp)))
+		while (value < (NIX_SHAPER_RATE_CONST / (1 << div_exp)))
 			div_exp += 1;
 
 		while (value <
-		       ((cclk_hz * (256 + mantissa)) /
-			((cclk_ticks << div_exp) * 256)))
+		       ((NIX_SHAPER_RATE_CONST * (256 + mantissa)) /
+			((1 << div_exp) * 256)))
 			mantissa -= 1;
 	} else {
 		/* Calculate rate exponent and mantissa using
 		 * the following formula:
 		 *
-		 * value = (cclk_hz * ((256 + mantissa) << exponent)
-		 *              / (cclk_ticks * 256)
+		 * value = (2E6 * ((256 + mantissa) << exponent)) / 256
 		 *
 		 */
 		div_exp = 0;
 		exponent = MAX_RATE_EXPONENT;
 		mantissa = MAX_RATE_MANTISSA;
 
-		while (value < (cclk_hz * (1 << exponent)) / cclk_ticks)
+		while (value < (NIX_SHAPER_RATE_CONST * (1 << exponent)))
 			exponent -= 1;
 
-		while (value < (cclk_hz * ((256 + mantissa) << exponent)) /
-		       (cclk_ticks * 256))
+		while (value < ((NIX_SHAPER_RATE_CONST *
+				((256 + mantissa) << exponent)) / 256))
 			mantissa -= 1;
 	}
 
@@ -155,20 +153,7 @@ shaper_rate_to_nix(uint64_t cclk_hz, uint64_t cclk_ticks,
 		*mantissa_p = mantissa;
 
 	/* Calculate real rate value */
-	return SHAPER_RATE(cclk_hz, cclk_ticks, exponent, mantissa, div_exp);
-}
-
-static inline uint64_t
-lx_shaper_rate_to_nix(uint64_t cclk_hz, uint32_t hw_lvl,
-		      uint64_t value, uint64_t *exponent,
-		      uint64_t *mantissa, uint64_t *div_exp)
-{
-	if (hw_lvl == NIX_TXSCH_LVL_TL1)
-		return shaper_rate_to_nix(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS,
-					  value, exponent, mantissa, div_exp);
-	else
-		return shaper_rate_to_nix(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS,
-					  value, exponent, mantissa, div_exp);
+	return SHAPER_RATE(exponent, mantissa, div_exp);
 }
 
 static inline uint64_t
@@ -207,329 +192,394 @@ shaper_burst_to_nix(uint64_t value, uint64_t *exponent_p,
 	return SHAPER_BURST(exponent, mantissa);
 }
 
-static int
-configure_shaper_cir_pir_reg(struct otx2_eth_dev *dev,
-			     struct otx2_nix_tm_node *tm_node,
-			     struct shaper_params *cir,
-			     struct shaper_params *pir)
-{
-	uint32_t shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
-	struct otx2_nix_tm_shaper_profile *shaper_profile = NULL;
-	struct rte_tm_shaper_params *param;
-
-	shaper_profile_id = tm_node->params.shaper_profile_id;
-
-	shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
-	if (shaper_profile) {
-		param = &shaper_profile->profile;
-		/* Calculate CIR exponent and mantissa */
-		if (param->committed.rate)
-			cir->rate = lx_shaper_rate_to_nix(CCLK_HZ,
-							  tm_node->hw_lvl_id,
-							  param->committed.rate,
-							  &cir->exponent,
-							  &cir->mantissa,
-							  &cir->div_exp);
-
-		/* Calculate PIR exponent and mantissa */
-		if (param->peak.rate)
-			pir->rate = lx_shaper_rate_to_nix(CCLK_HZ,
-							  tm_node->hw_lvl_id,
-							  param->peak.rate,
-							  &pir->exponent,
-							  &pir->mantissa,
-							  &pir->div_exp);
-
-		/* Calculate CIR burst exponent and mantissa */
-		if (param->committed.size)
-			cir->burst = shaper_burst_to_nix(param->committed.size,
-							 &cir->burst_exponent,
-							 &cir->burst_mantissa);
-
-		/* Calculate PIR burst exponent and mantissa */
-		if (param->peak.size)
-			pir->burst = shaper_burst_to_nix(param->peak.size,
-							 &pir->burst_exponent,
-							 &pir->burst_mantissa);
-	}
-
-	return 0;
-}
-
-static int
-send_tm_reqval(struct otx2_mbox *mbox, struct nix_txschq_config *req)
+static void
+shaper_config_to_nix(struct otx2_nix_tm_shaper_profile *profile,
+		     struct shaper_params *cir,
+		     struct shaper_params *pir)
 {
-	int rc;
-
-	if (req->num_regs > MAX_REGS_PER_MBOX_MSG)
-		return -ERANGE;
-
-	rc = otx2_mbox_process(mbox);
-	if (rc)
-		return rc;
-
-	req->num_regs = 0;
-	return 0;
+	struct rte_tm_shaper_params *param;
+
+	if (!profile)
+		return;
+	param = &profile->params;
+	/* Calculate CIR exponent and mantissa */
+	if (param->committed.rate)
+		cir->rate = shaper_rate_to_nix(param->committed.rate,
+					       &cir->exponent,
+					       &cir->mantissa,
+					       &cir->div_exp);
+
+	/* Calculate PIR exponent and mantissa */
+	if (param->peak.rate)
+		pir->rate = shaper_rate_to_nix(param->peak.rate,
+					       &pir->exponent,
+					       &pir->mantissa,
+					       &pir->div_exp);
+
+	/* Calculate CIR burst exponent and mantissa */
+	if (param->committed.size)
+		cir->burst = shaper_burst_to_nix(param->committed.size,
+						 &cir->burst_exponent,
+						 &cir->burst_mantissa);
+
+	/* Calculate PIR burst exponent and mantissa */
+	if (param->peak.size)
+		pir->burst = shaper_burst_to_nix(param->peak.size,
+						 &pir->burst_exponent,
+						 &pir->burst_mantissa);
 }
 
 static int
-populate_tm_registers(struct otx2_eth_dev *dev,
-		      struct otx2_nix_tm_node *tm_node)
+populate_tm_tl1_default(struct otx2_eth_dev *dev, uint32_t schq)
 {
-	uint64_t strict_schedul_prio, rr_prio;
 	struct otx2_mbox *mbox = dev->mbox;
-	volatile uint64_t *reg, *regval;
-	uint64_t parent = 0, child = 0;
-	struct shaper_params cir, pir;
 	struct nix_txschq_config *req;
+
+	/*
+	 * Default config for TL1.
+	 * For VF this is always ignored.
+	 */
+
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = NIX_TXSCH_LVL_TL1;
+
+	/* Set DWRR quantum */
+	req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq);
+	req->regval[0] = TXSCH_TL1_DFLT_RR_QTM;
+	req->num_regs++;
+
+	req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq);
+	req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1);
+	req->num_regs++;
+
+	req->reg[2] = NIX_AF_TL1X_CIR(schq);
+	req->regval[2] = 0;
+	req->num_regs++;
+
+	return otx2_mbox_process(mbox);
+}
+
+static uint8_t
+prepare_tm_sched_reg(struct otx2_eth_dev *dev,
+		     struct otx2_nix_tm_node *tm_node,
+		     volatile uint64_t *reg, volatile uint64_t *regval)
+{
+	uint64_t strict_prio = tm_node->priority;
+	uint32_t hw_lvl = tm_node->hw_lvl;
+	uint32_t schq = tm_node->hw_id;
 	uint64_t rr_quantum;
-	uint32_t hw_lvl;
-	uint32_t schq;
-	int rc;
+	uint8_t k = 0;
+
+	rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
+
+	/* For children to root, strict prio is default if either
+	 * device root is TL2 or TL1 Static Priority is disabled.
+	 */
+	if (hw_lvl == NIX_TXSCH_LVL_TL2 &&
+	    (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
+	     dev->tm_flags & NIX_TM_TL1_NO_SP))
+		strict_prio = TXSCH_TL1_DFLT_RR_PRIO;
+
+	otx2_tm_dbg("Schedule config node %s(%u) lvl %u id %u, "
+		     "prio 0x%" PRIx64 ", rr_quantum 0x%" PRIx64 " (%p)",
+		     nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
+		     tm_node->id, strict_prio, rr_quantum, tm_node);
+
+	switch (hw_lvl) {
+	case NIX_TXSCH_LVL_SMQ:
+		reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
+		regval[k] = (strict_prio << 24) | rr_quantum;
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
+		regval[k] = (strict_prio << 24) | rr_quantum;
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
+		regval[k] = (strict_prio << 24) | rr_quantum;
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL2:
+		reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
+		regval[k] = (strict_prio << 24) | rr_quantum;
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL1:
+		reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
+		regval[k] = rr_quantum;
+		k++;
+
+		break;
+	}
+
+	return k;
+}
+
+static uint8_t
+prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
+		      struct otx2_nix_tm_shaper_profile *profile,
+		      volatile uint64_t *reg, volatile uint64_t *regval)
+{
+	struct shaper_params cir, pir;
+	uint32_t schq = tm_node->hw_id;
+	uint8_t k = 0;
 
 	memset(&cir, 0, sizeof(cir));
 	memset(&pir, 0, sizeof(pir));
+	shaper_config_to_nix(profile, &cir, &pir);
 
-	/* Skip leaf nodes */
-	if (tm_node->hw_lvl_id == NIX_TXSCH_LVL_CNT)
-		return 0;
+	otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, "
+		    "pir %" PRIu64 "(%" PRIu64 "B),"
+		     " cir %" PRIu64 "(%" PRIu64 "B) (%p)",
+		     nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
+		     tm_node->id, pir.rate, pir.burst,
+		     cir.rate, cir.burst, tm_node);
+
+	switch (tm_node->hw_lvl) {
+	case NIX_TXSCH_LVL_SMQ:
+		/* Configure PIR, CIR */
+		reg[k] = NIX_AF_MDQX_PIR(schq);
+		regval[k] = (pir.rate && pir.burst) ?
+				(shaper2regval(&pir) | 1) : 0;
+		k++;
+
+		reg[k] = NIX_AF_MDQX_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+
+		/* Configure RED ALG */
+		reg[k] = NIX_AF_MDQX_SHAPE(schq);
+		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		/* Configure PIR, CIR */
+		reg[k] = NIX_AF_TL4X_PIR(schq);
+		regval[k] = (pir.rate && pir.burst) ?
+				(shaper2regval(&pir) | 1) : 0;
+		k++;
+
+		reg[k] = NIX_AF_TL4X_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+
+		/* Configure RED algo */
+		reg[k] = NIX_AF_TL4X_SHAPE(schq);
+		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		/* Configure PIR, CIR */
+		reg[k] = NIX_AF_TL3X_PIR(schq);
+		regval[k] = (pir.rate && pir.burst) ?
+				(shaper2regval(&pir) | 1) : 0;
+		k++;
+
+		reg[k] = NIX_AF_TL3X_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+
+		/* Configure RED algo */
+		reg[k] = NIX_AF_TL3X_SHAPE(schq);
+		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL2:
+		/* Configure PIR, CIR */
+		reg[k] = NIX_AF_TL2X_PIR(schq);
+		regval[k] = (pir.rate && pir.burst) ?
+				(shaper2regval(&pir) | 1) : 0;
+		k++;
+
+		reg[k] = NIX_AF_TL2X_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+
+		/* Configure RED algo */
+		reg[k] = NIX_AF_TL2X_SHAPE(schq);
+		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		k++;
+
+		break;
+	case NIX_TXSCH_LVL_TL1:
+		/* Configure CIR */
+		reg[k] = NIX_AF_TL1X_CIR(schq);
+		regval[k] = (cir.rate && cir.burst) ?
+				(shaper2regval(&cir) | 1) : 0;
+		k++;
+		break;
+	}
+
+	return k;
+}
+
+static int
+populate_tm_reg(struct otx2_eth_dev *dev,
+		struct otx2_nix_tm_node *tm_node)
+{
+	struct otx2_nix_tm_shaper_profile *profile;
+	uint64_t regval_mask[MAX_REGS_PER_MBOX_MSG];
+	uint64_t regval[MAX_REGS_PER_MBOX_MSG];
+	uint64_t reg[MAX_REGS_PER_MBOX_MSG];
+	struct otx2_mbox *mbox = dev->mbox;
+	uint64_t parent = 0, child = 0;
+	uint32_t hw_lvl, rr_prio, schq;
+	struct nix_txschq_config *req;
+	int rc = -EFAULT;
+	uint8_t k = 0;
+
+	memset(regval_mask, 0, sizeof(regval_mask));
+	profile = nix_tm_shaper_profile_search(dev,
+					tm_node->params.shaper_profile_id);
+	rr_prio = tm_node->rr_prio;
+	hw_lvl = tm_node->hw_lvl;
+	schq = tm_node->hw_id;
 
 	/* Root node will not have a parent node */
-	if (tm_node->hw_lvl_id == dev->otx2_tm_root_lvl)
+	if (hw_lvl == dev->otx2_tm_root_lvl)
 		parent = tm_node->parent_hw_id;
 	else
 		parent = tm_node->parent->hw_id;
 
 	/* Do we need this trigger to configure TL1 */
 	if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
-	    tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) {
-		schq = parent;
-		/*
-		 * Default config for TL1.
-		 * For VF this is always ignored.
-		 */
-
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = NIX_TXSCH_LVL_TL1;
-
-		/* Set DWRR quantum */
-		req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq);
-		req->regval[0] = TXSCH_TL1_DFLT_RR_QTM;
-		req->num_regs++;
-
-		req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq);
-		req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1);
-		req->num_regs++;
-
-		req->reg[2] = NIX_AF_TL1X_CIR(schq);
-		req->regval[2] = 0;
-		req->num_regs++;
-
-		rc = send_tm_reqval(mbox, req);
+	    hw_lvl == dev->otx2_tm_root_lvl) {
+		rc = populate_tm_tl1_default(dev, parent);
 		if (rc)
 			goto error;
 	}
 
-	if (tm_node->hw_lvl_id != NIX_TXSCH_LVL_SMQ)
+	if (hw_lvl != NIX_TXSCH_LVL_SMQ)
 		child = find_prio_anchor(dev, tm_node->id);
 
-	rr_prio = tm_node->rr_prio;
-	hw_lvl = tm_node->hw_lvl_id;
-	strict_schedul_prio = tm_node->priority;
-	schq = tm_node->hw_id;
-	rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX) /
-		MAX_SCHED_WEIGHT;
-
-	configure_shaper_cir_pir_reg(dev, tm_node, &cir, &pir);
-
-	otx2_tm_dbg("Configure node %p, lvl %u hw_lvl %u, id %u, hw_id %u,"
-		     "parent_hw_id %" PRIx64 ", pir %" PRIx64 ", cir %" PRIx64,
-		     tm_node, tm_node->level_id, hw_lvl,
-		     tm_node->id, schq, parent, pir.rate, cir.rate);
-
-	rc = -EFAULT;
-
+	/* Override default rr_prio when TL1
+	 * Static Priority is disabled
+	 */
+	if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
+	    dev->tm_flags & NIX_TM_TL1_NO_SP) {
+		rr_prio = TXSCH_TL1_DFLT_RR_PRIO;
+		child = 0;
+	}
+
+	otx2_tm_dbg("Topology config node %s(%u)->%s(%"PRIu64") lvl %u, id %u"
+		    " prio_anchor %"PRIu64" rr_prio %u (%p)",
+		    nix_hwlvl2str(hw_lvl), schq, nix_hwlvl2str(hw_lvl + 1),
+		    parent, tm_node->lvl, tm_node->id, child, rr_prio, tm_node);
+
+	/* Prepare Topology and Link config */
 	switch (hw_lvl) {
 	case NIX_TXSCH_LVL_SMQ:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		reg = req->reg;
-		regval = req->regval;
-		req->num_regs = 0;
 
 		/* Set xoff which will be cleared later */
-		*reg++ = NIX_AF_SMQX_CFG(schq);
-		*regval++ = BIT_ULL(50) | ((uint64_t)NIX_MAX_VTAG_INS << 36) |
-				(NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
-		req->num_regs++;
-		*reg++ = NIX_AF_MDQX_PARENT(schq);
-		*regval++ = parent << 16;
-		req->num_regs++;
-		*reg++ = NIX_AF_MDQX_SCHEDULE(schq);
-		*regval++ = (strict_schedul_prio << 24) | rr_quantum;
-		req->num_regs++;
-		if (pir.rate && pir.burst) {
-			*reg++ = NIX_AF_MDQX_PIR(schq);
-			*regval++ = shaper2regval(&pir) | 1;
-			req->num_regs++;
-		}
+		reg[k] = NIX_AF_SMQX_CFG(schq);
+		regval[k] = BIT_ULL(50);
+		regval_mask[k] = ~BIT_ULL(50);
+		k++;
 
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_MDQX_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
-		}
+		/* Parent and schedule conf */
+		reg[k] = NIX_AF_MDQX_PARENT(schq);
+		regval[k] = parent << 16;
+		k++;
 
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	case NIX_TXSCH_LVL_TL4:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		req->num_regs = 0;
-		reg = req->reg;
-		regval = req->regval;
+		/* Parent and schedule conf */
+		reg[k] = NIX_AF_TL4X_PARENT(schq);
+		regval[k] = parent << 16;
+		k++;
+
+		reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
+		regval[k] = (child << 32) | (rr_prio << 1);
+		k++;
 
-		*reg++ = NIX_AF_TL4X_PARENT(schq);
-		*regval++ = parent << 16;
-		req->num_regs++;
-		*reg++ = NIX_AF_TL4X_TOPOLOGY(schq);
-		*regval++ = (child << 32) | (rr_prio << 1);
-		req->num_regs++;
-		*reg++ = NIX_AF_TL4X_SCHEDULE(schq);
-		*regval++ = (strict_schedul_prio << 24) | rr_quantum;
-		req->num_regs++;
-		if (pir.rate && pir.burst) {
-			*reg++ = NIX_AF_TL4X_PIR(schq);
-			*regval++ = shaper2regval(&pir) | 1;
-			req->num_regs++;
-		}
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_TL4X_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
-		}
 		/* Configure TL4 to send to SDP channel instead of CGX/LBK */
 		if (otx2_dev_is_sdp(dev)) {
-			*reg++ = NIX_AF_TL4X_SDP_LINK_CFG(schq);
-			*regval++ = BIT_ULL(12);
-			req->num_regs++;
+			reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
+			regval[k] = BIT_ULL(12);
+			k++;
 		}
-
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	case NIX_TXSCH_LVL_TL3:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		req->num_regs = 0;
-		reg = req->reg;
-		regval = req->regval;
+		/* Parent and schedule conf */
+		reg[k] = NIX_AF_TL3X_PARENT(schq);
+		regval[k] = parent << 16;
+		k++;
 
-		*reg++ = NIX_AF_TL3X_PARENT(schq);
-		*regval++ = parent << 16;
-		req->num_regs++;
-		*reg++ = NIX_AF_TL3X_TOPOLOGY(schq);
-		*regval++ = (child << 32) | (rr_prio << 1);
-		req->num_regs++;
-		*reg++ = NIX_AF_TL3X_SCHEDULE(schq);
-		*regval++ = (strict_schedul_prio << 24) | rr_quantum;
-		req->num_regs++;
+		reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
+		regval[k] = (child << 32) | (rr_prio << 1);
+		k++;
 
 		/* Link configuration */
 		if (!otx2_dev_is_sdp(dev) &&
 		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
-			*reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
+			reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
 						nix_get_link(dev));
-			*regval++ = BIT_ULL(12) | nix_get_relchan(dev);
-			req->num_regs++;
+			regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
+			k++;
 		}
 
-		if (pir.rate && pir.burst) {
-			*reg++ = NIX_AF_TL3X_PIR(schq);
-			*regval++ = shaper2regval(&pir) | 1;
-			req->num_regs++;
-		}
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_TL3X_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
-		}
-
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	case NIX_TXSCH_LVL_TL2:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		req->num_regs = 0;
-		reg = req->reg;
-		regval = req->regval;
+		/* Parent and schedule conf */
+		reg[k] = NIX_AF_TL2X_PARENT(schq);
+		regval[k] = parent << 16;
+		k++;
 
-		*reg++ = NIX_AF_TL2X_PARENT(schq);
-		*regval++ = parent << 16;
-		req->num_regs++;
-		*reg++ = NIX_AF_TL2X_TOPOLOGY(schq);
-		*regval++ = (child << 32) | (rr_prio << 1);
-		req->num_regs++;
-		*reg++ = NIX_AF_TL2X_SCHEDULE(schq);
-		if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2)
-			*regval++ = (1 << 24) | rr_quantum;
-		else
-			*regval++ = (strict_schedul_prio << 24) | rr_quantum;
-		req->num_regs++;
+		reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
+		regval[k] = (child << 32) | (rr_prio << 1);
+		k++;
 
 		/* Link configuration */
 		if (!otx2_dev_is_sdp(dev) &&
 		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
-			*reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
+			reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
 						nix_get_link(dev));
-			*regval++ = BIT_ULL(12) | nix_get_relchan(dev);
-			req->num_regs++;
-		}
-		if (pir.rate && pir.burst) {
-			*reg++ = NIX_AF_TL2X_PIR(schq);
-			*regval++ = shaper2regval(&pir) | 1;
-			req->num_regs++;
-		}
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_TL2X_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
+			regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
+			k++;
 		}
 
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	case NIX_TXSCH_LVL_TL1:
-		req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-		req->lvl = hw_lvl;
-		req->num_regs = 0;
-		reg = req->reg;
-		regval = req->regval;
+		reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
+		regval[k] = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
+		k++;
 
-		*reg++ = NIX_AF_TL1X_SCHEDULE(schq);
-		*regval++ = rr_quantum;
-		req->num_regs++;
-		*reg++ = NIX_AF_TL1X_TOPOLOGY(schq);
-		*regval++ = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
-		req->num_regs++;
-		if (cir.rate && cir.burst) {
-			*reg++ = NIX_AF_TL1X_CIR(schq);
-			*regval++ = shaper2regval(&cir) | 1;
-			req->num_regs++;
-		}
-
-		rc = send_tm_reqval(mbox, req);
-		if (rc)
-			goto error;
 		break;
 	}
 
+	/* Prepare schedule config */
+	k += prepare_tm_sched_reg(dev, tm_node, &reg[k], &regval[k]);
+
+	/* Prepare shaping config */
+	k += prepare_tm_shaper_reg(tm_node, profile, &reg[k], &regval[k]);
+
+	if (!k)
+		return 0;
+
+	/* Copy and send config mbox */
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = hw_lvl;
+	req->num_regs = k;
+
+	otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
+	otx2_mbox_memcpy(req->regval, regval, sizeof(uint64_t) * k);
+	otx2_mbox_memcpy(req->regval_mask, regval_mask, sizeof(uint64_t) * k);
+
+	rc = otx2_mbox_process(mbox);
+	if (rc)
+		goto error;
+
 	return 0;
 error:
 	otx2_err("Txschq cfg request failed for node %p, rc=%d", tm_node, rc);
@@ -541,13 +591,14 @@ static int
 nix_tm_txsch_reg_config(struct otx2_eth_dev *dev)
 {
 	struct otx2_nix_tm_node *tm_node;
-	uint32_t lvl;
+	uint32_t hw_lvl;
 	int rc = 0;
 
-	for (lvl = 0; lvl < (uint32_t)dev->otx2_tm_root_lvl + 1; lvl++) {
+	for (hw_lvl = 0; hw_lvl <= dev->otx2_tm_root_lvl; hw_lvl++) {
 		TAILQ_FOREACH(tm_node, &dev->node_list, node) {
-			if (tm_node->hw_lvl_id == lvl) {
-				rc = populate_tm_registers(dev, tm_node);
+			if (tm_node->hw_lvl == hw_lvl &&
+			    tm_node->hw_lvl != NIX_TXSCH_LVL_CNT) {
+				rc = populate_tm_reg(dev, tm_node);
 				if (rc)
 					goto exit;
 			}
@@ -637,8 +688,8 @@ nix_tm_update_parent_info(struct otx2_eth_dev *dev)
 static int
 nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 			uint32_t parent_node_id, uint32_t priority,
-			uint32_t weight, uint16_t hw_lvl_id,
-			uint16_t level_id, bool user,
+			uint32_t weight, uint16_t hw_lvl,
+			uint16_t lvl, bool user,
 			struct rte_tm_node_params *params)
 {
 	struct otx2_nix_tm_shaper_profile *shaper_profile;
@@ -655,8 +706,8 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 	if (!tm_node)
 		return -ENOMEM;
 
-	tm_node->level_id = level_id;
-	tm_node->hw_lvl_id = hw_lvl_id;
+	tm_node->lvl = lvl;
+	tm_node->hw_lvl = hw_lvl;
 
 	tm_node->id = node_id;
 	tm_node->priority = priority;
@@ -935,18 +986,18 @@ nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
 			continue;
 
 		if (nix_tm_have_tl1_access(dev) &&
-		    tm_node->hw_lvl_id ==  NIX_TXSCH_LVL_TL1)
+		    tm_node->hw_lvl ==  NIX_TXSCH_LVL_TL1)
 			skip_node = true;
 
 		otx2_tm_dbg("Free hwres for node %u, hwlvl %u, hw_id %u (%p)",
-			    tm_node->id,  tm_node->hw_lvl_id,
+			    tm_node->id,  tm_node->hw_lvl,
 			    tm_node->hw_id, tm_node);
 		/* Free specific HW resource if requested */
 		if (!skip_node && flags_mask &&
 		    tm_node->flags & NIX_TM_NODE_HWRES) {
 			req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
 			req->flags = 0;
-			req->schq_lvl = tm_node->hw_lvl_id;
+			req->schq_lvl = tm_node->hw_lvl;
 			req->schq = tm_node->hw_id;
 			rc = otx2_mbox_process(mbox);
 			if (rc)
@@ -1010,17 +1061,17 @@ nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
 	uint32_t l_id, schq_index;
 
 	otx2_tm_dbg("Assign hw id for child node %u, lvl %u, hw_lvl %u (%p)",
-		    child->id, child->level_id, child->hw_lvl_id, child);
+		    child->id, child->lvl, child->hw_lvl, child);
 
 	child->flags |= NIX_TM_NODE_HWRES;
 
 	/* Process root nodes */
 	if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
-	    child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) {
+	    child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
 		int idx = 0;
 		uint32_t tschq_con_index;
 
-		l_id = child->hw_lvl_id;
+		l_id = child->hw_lvl;
 		tschq_con_index = dev->txschq_contig_index[l_id];
 		hw_id = dev->txschq_contig_list[l_id][tschq_con_index];
 		child->hw_id = hw_id;
@@ -1032,10 +1083,10 @@ nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
 		return 0;
 	}
 	if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL1 &&
-	    child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) {
+	    child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
 		uint32_t tschq_con_index;
 
-		l_id = child->hw_lvl_id;
+		l_id = child->hw_lvl;
 		tschq_con_index = dev->txschq_index[l_id];
 		hw_id = dev->txschq_list[l_id][tschq_con_index];
 		child->hw_id = hw_id;
@@ -1044,7 +1095,7 @@ nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
 	}
 
 	/* Process children with parents */
-	l_id = child->hw_lvl_id;
+	l_id = child->hw_lvl;
 	schq_index = dev->txschq_index[l_id];
 	schq_con_index = dev->txschq_contig_index[l_id];
 
@@ -1069,8 +1120,8 @@ nix_tm_assign_hw_id(struct otx2_eth_dev *dev)
 
 	for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) {
 		TAILQ_FOREACH(parent, &dev->node_list, node) {
-			child_hw_lvl = parent->hw_lvl_id - 1;
-			if (parent->hw_lvl_id != i)
+			child_hw_lvl = parent->hw_lvl - 1;
+			if (parent->hw_lvl != i)
 				continue;
 			TAILQ_FOREACH(child, &dev->node_list, node) {
 				if (!child->parent)
@@ -1087,7 +1138,7 @@ nix_tm_assign_hw_id(struct otx2_eth_dev *dev)
 			 * Explicitly assign id to parent node if it
 			 * doesn't have a parent
 			 */
-			if (parent->hw_lvl_id == dev->otx2_tm_root_lvl)
+			if (parent->hw_lvl == dev->otx2_tm_root_lvl)
 				nix_tm_assign_id_to_node(dev, parent, NULL);
 		}
 	}
@@ -1102,7 +1153,7 @@ nix_tm_count_req_schq(struct otx2_eth_dev *dev,
 	uint8_t contig_count;
 
 	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
-		if (lvl == tm_node->hw_lvl_id) {
+		if (lvl == tm_node->hw_lvl) {
 			req->schq[lvl - 1] += tm_node->rr_num;
 			if (tm_node->max_prio != UINT32_MAX) {
 				contig_count = tm_node->max_prio + 1;
@@ -1111,7 +1162,7 @@ nix_tm_count_req_schq(struct otx2_eth_dev *dev,
 		}
 		if (lvl == dev->otx2_tm_root_lvl &&
 		    dev->otx2_tm_root_lvl && lvl == NIX_TXSCH_LVL_TL2 &&
-		    tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) {
+		    tm_node->hw_lvl == dev->otx2_tm_root_lvl) {
 			req->schq_contig[dev->otx2_tm_root_lvl]++;
 		}
 	}
@@ -1192,7 +1243,7 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 			continue;
 
 		/* Enable xmit on sq */
-		if (tm_node->level_id != OTX2_TM_LVL_QUEUE) {
+		if (tm_node->lvl != OTX2_TM_LVL_QUEUE) {
 			tm_node->flags |= NIX_TM_NODE_ENABLED;
 			continue;
 		}
@@ -1210,8 +1261,7 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 		txq = eth_dev->data->tx_queues[sq];
 
 		smq = tm_node->parent->hw_id;
-		rr_quantum = (tm_node->weight *
-			      NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT;
+		rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
 
 		rc = nix_tm_sw_xon(txq, smq, rr_quantum);
 		if (rc)
@@ -1332,6 +1382,7 @@ void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
 
 int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
 {
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct otx2_eth_dev  *dev = otx2_eth_pmd_priv(eth_dev);
 	uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
 	int rc;
@@ -1347,6 +1398,13 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
 	nix_tm_clear_shaper_profiles(dev);
 	dev->tm_flags = NIX_TM_DEFAULT_TREE;
 
+	/* Disable TL1 Static Priority when VF's are enabled
+	 * as otherwise VF's TL2 reallocation will be needed
+	 * runtime to support a specific topology of PF.
+	 */
+	if (pci_dev->max_vfs)
+		dev->tm_flags |= NIX_TM_TL1_NO_SP;
+
 	rc = nix_tm_prepare_default_tree(eth_dev);
 	if (rc != 0)
 		return rc;
@@ -1397,15 +1455,14 @@ otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 		tm_node = nix_tm_node_search(dev, sq, true);
 
 	/* Check if we found a valid leaf node */
-	if (!tm_node || tm_node->level_id != OTX2_TM_LVL_QUEUE ||
+	if (!tm_node || tm_node->lvl != OTX2_TM_LVL_QUEUE ||
 	    !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) {
 		return -EIO;
 	}
 
 	/* Get SMQ Id of leaf node's parent */
 	*smq = tm_node->parent->hw_id;
-	*rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX)
-		/ MAX_SCHED_WEIGHT;
+	*rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
 
 	rc = nix_smq_xoff(dev, *smq, false);
 	if (rc)
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 4712b09..ad7727e 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -10,6 +10,7 @@
 #include <rte_tm_driver.h>
 
 #define NIX_TM_DEFAULT_TREE	BIT_ULL(0)
+#define NIX_TM_TL1_NO_SP	BIT_ULL(3)
 
 struct otx2_eth_dev;
 
@@ -27,16 +28,18 @@ struct otx2_nix_tm_node {
 	uint32_t hw_id;
 	uint32_t priority;
 	uint32_t weight;
-	uint16_t level_id;
-	uint16_t hw_lvl_id;
+	uint16_t lvl;
+	uint16_t hw_lvl;
 	uint32_t rr_prio;
 	uint32_t rr_num;
 	uint32_t max_prio;
 	uint32_t parent_hw_id;
-	uint32_t flags;
+	uint32_t flags:16;
 #define NIX_TM_NODE_HWRES	BIT_ULL(0)
 #define NIX_TM_NODE_ENABLED	BIT_ULL(1)
 #define NIX_TM_NODE_USER	BIT_ULL(2)
+	/* Shaper algorithm for RED state @NIX_REDALG_E */
+	uint32_t red_algo:2;
 	struct otx2_nix_tm_node *parent;
 	struct rte_tm_node_params params;
 };
@@ -45,7 +48,7 @@ struct otx2_nix_tm_shaper_profile {
 	TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper;
 	uint32_t shaper_profile_id;
 	uint32_t reference_count;
-	struct rte_tm_shaper_params profile;
+	struct rte_tm_shaper_params params; /* Rate in bits/sec */
 };
 
 struct shaper_params {
@@ -63,6 +66,10 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 
 #define MAX_SCHED_WEIGHT ((uint8_t)~0)
 #define NIX_TM_RR_QUANTUM_MAX (BIT_ULL(24) - 1)
+#define NIX_TM_WEIGHT_TO_RR_QUANTUM(__weight)			\
+		((((__weight) & MAX_SCHED_WEIGHT) *             \
+		  NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT)
+
 
 /* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT  */
 /* = NIX_MAX_HW_MTU */
@@ -73,52 +80,27 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 #define MAX_RATE_EXPONENT 0xf
 #define MAX_RATE_MANTISSA 0xff
 
-/** NIX rate limiter time-wheel resolution */
-#define L1_TIME_WHEEL_CCLK_TICKS 240
-#define LX_TIME_WHEEL_CCLK_TICKS 860
+#define NIX_SHAPER_RATE_CONST ((uint64_t)2E6)
 
-#define CCLK_HZ 1000000000
-
-/* NIX rate calculation
- *	CCLK = coprocessor-clock frequency in MHz
- *	CCLK_TICKS = rate limiter time-wheel resolution
- *
+/* NIX rate calculation in Bits/Sec
  *	PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA])
  *		<< NIX_*_PIR[RATE_EXPONENT]) / 256
- *	PIR = (CCLK / (CCLK_TICKS << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
- *		* PIR_ADD
+ *	PIR = (2E6 * PIR_ADD / (1 << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
  *
  *	CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA])
  *		<< NIX_*_CIR[RATE_EXPONENT]) / 256
- *	CIR = (CCLK / (CCLK_TICKS << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
- *		* CIR_ADD
+ *	CIR = (2E6 * CIR_ADD / (1 << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
  */
-#define SHAPER_RATE(cclk_hz, cclk_ticks, \
-			exponent, mantissa, div_exp) \
-	(((uint64_t)(cclk_hz) * ((256 + (mantissa)) << (exponent))) \
-		/ (((cclk_ticks) << (div_exp)) * 256))
+#define SHAPER_RATE(exponent, mantissa, div_exp) \
+	((NIX_SHAPER_RATE_CONST * ((256 + (mantissa)) << (exponent)))\
+		/ (((1ull << (div_exp)) * 256)))
 
-#define L1_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \
-	SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS, \
-			exponent, mantissa, div_exp)
+/* 96xx rate limits in Bits/Sec */
+#define MIN_SHAPER_RATE \
+	SHAPER_RATE(0, 0, MAX_RATE_DIV_EXP)
 
-#define LX_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \
-	SHAPER_RATE(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS, \
-			exponent, mantissa, div_exp)
-
-/* Shaper rate limits */
-#define MIN_SHAPER_RATE(cclk_hz, cclk_ticks) \
-	SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, MAX_RATE_DIV_EXP)
-
-#define MAX_SHAPER_RATE(cclk_hz, cclk_ticks) \
-	SHAPER_RATE(cclk_hz, cclk_ticks, MAX_RATE_EXPONENT, \
-			MAX_RATE_MANTISSA, 0)
-
-#define MIN_L1_SHAPER_RATE(cclk_hz) \
-	MIN_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS)
-
-#define MAX_L1_SHAPER_RATE(cclk_hz) \
-	MAX_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS)
+#define MAX_SHAPER_RATE \
+	SHAPER_RATE(MAX_RATE_EXPONENT, MAX_RATE_MANTISSA, 0)
 
 /** TM Shaper - low level operations */
 
@@ -150,4 +132,25 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 #define TXSCH_TL1_DFLT_RR_QTM  ((1 << 24) - 1)
 #define TXSCH_TL1_DFLT_RR_PRIO 1
 
+static inline const char *
+nix_hwlvl2str(uint32_t hw_lvl)
+{
+	switch (hw_lvl) {
+	case NIX_TXSCH_LVL_MDQ:
+		return "SMQ/MDQ";
+	case NIX_TXSCH_LVL_TL4:
+		return "TL4";
+	case NIX_TXSCH_LVL_TL3:
+		return "TL3";
+	case NIX_TXSCH_LVL_TL2:
+		return "TL2";
+	case NIX_TXSCH_LVL_TL1:
+		return "TL1";
+	default:
+		break;
+	}
+
+	return "???";
+}
+
 #endif /* __OTX2_TM_H__ */
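
As a quick sanity check of the new fixed-constant formula above, the
rate bounds work out as below (a standalone sketch; it assumes
MAX_RATE_DIV_EXP is 0xf, which is defined elsewhere in this driver):

	#include <stdio.h>

	/* same formula as the SHAPER_RATE() macro above */
	#define RATE(exp, mant, div) \
		((2000000ull * ((256ull + (mant)) << (exp))) / \
		 ((1ull << (div)) * 256))

	int main(void)
	{
		/* max: exponent 0xf, mantissa 0xff, div_exp 0 */
		printf("max %llu bps\n", RATE(0xf, 0xff, 0)); /* ~130.8G */
		/* min: div_exp at its assumed max of 0xf */
		printf("min %llu bps\n", RATE(0, 0, 0xf));    /* 61 bps */
		return 0;
	}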
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v3 03/11] net/octeontx2: add dynamic topology update support
  2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 01/11] net/octeontx2: setup link config based on BP level Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 02/11] net/octeontx2: restructure tm helper functions Nithin Dabilpuram
@ 2020-04-03  8:52   ` Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 04/11] net/octeontx2: add tm node add and delete cb Nithin Dabilpuram
                     ` (7 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-03  8:52 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Vamsi Attunuru, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Modify the resource allocation and freeing logic to support
dynamic topology commits while traffic is flowing.
This patch also modifies the SQ flush to time out based on the
minimum configured shaper rate. SQ flush is further split into
pre/post functions to adhere to the 96XX C0 HW spec.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
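To make the flush bound concrete, here is a sketch of the timeout
derivation with illustrative values (the poll loop in the diff below
ticks every 10us; dev->tm_rate_min defaults to 1 Gbps):

	uint64_t nb_desc = 1024;           /* SQ depth, example value */
	uint64_t frs = 9212;               /* NIX_MAX_HW_FRS in bytes */
	uint64_t rate_min = 1000000000ull; /* dev->tm_rate_min */

	/* worst case: nb_desc max-sized frames drain at rate_min;
	 * the 1E5 factor converts seconds into 10us poll iterations
	 */
	uint64_t timeout = (nb_desc * frs * 8 * 100000ull) / rate_min;
	/* = 7546 iterations of 10us, i.e. roughly 75 ms here */
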
 drivers/common/octeontx2/otx2_dev.h |   9 +
 drivers/net/octeontx2/otx2_ethdev.c |   3 +-
 drivers/net/octeontx2/otx2_ethdev.h |   1 +
 drivers/net/octeontx2/otx2_tm.c     | 550 +++++++++++++++++++++++++++---------
 drivers/net/octeontx2/otx2_tm.h     |   7 +-
 5 files changed, 426 insertions(+), 144 deletions(-)

diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h
index 0b0a949..13b75e1 100644
--- a/drivers/common/octeontx2/otx2_dev.h
+++ b/drivers/common/octeontx2/otx2_dev.h
@@ -46,6 +46,15 @@
 	((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) &&	\
 	 (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
 
+#define otx2_dev_is_96xx_Cx(dev)				\
+	((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) &&	\
+	 (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
+
+#define otx2_dev_is_96xx_C0(dev)				\
+	((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) &&	\
+	 (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) &&	\
+	 (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
+
 struct otx2_dev;
 
 /* Link status callback */
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index e60f490..6896797 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -992,7 +992,7 @@ otx2_nix_tx_queue_release(void *_txq)
 	otx2_nix_dbg("Releasing txq %u", txq->sq);
 
 	/* Flush and disable tm */
-	otx2_nix_tm_sw_xoff(txq, eth_dev->data->dev_started);
+	otx2_nix_sq_flush_pre(txq, eth_dev->data->dev_started);
 
 	/* Free sqb's and disable sq */
 	nix_sq_uninit(txq);
@@ -1001,6 +1001,7 @@ otx2_nix_tx_queue_release(void *_txq)
 		rte_mempool_free(txq->sqb_pool);
 		txq->sqb_pool = NULL;
 	}
+	otx2_nix_sq_flush_post(txq);
 	rte_free(txq);
 }
 
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index b7d5386..6679652 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -307,6 +307,7 @@ struct otx2_eth_dev {
 	uint16_t link_cfg_lvl;
 	uint16_t tm_flags;
 	uint16_t tm_leaf_cnt;
+	uint64_t tm_rate_min;
 	struct otx2_nix_tm_node_list node_list;
 	struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
 	struct otx2_rss_info rss_info;
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 108f44c..d1a4529 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -59,8 +59,16 @@ static bool
 nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
 {
 	bool is_lbk = otx2_dev_is_lbk(dev);
-	return otx2_dev_is_pf(dev) && !otx2_dev_is_Ax(dev) &&
-		!is_lbk && !dev->maxvf;
+	return otx2_dev_is_pf(dev) && !otx2_dev_is_Ax(dev) && !is_lbk;
+}
+
+static bool
+nix_tm_is_leaf(struct otx2_eth_dev *dev, int lvl)
+{
+	if (nix_tm_have_tl1_access(dev))
+		return (lvl == OTX2_TM_LVL_QUEUE);
+
+	return (lvl == OTX2_TM_LVL_SCH4);
 }
 
 static int
@@ -424,6 +432,48 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 	return k;
 }
 
+static uint8_t
+prepare_tm_sw_xoff(struct otx2_nix_tm_node *tm_node, bool enable,
+		   volatile uint64_t *reg, volatile uint64_t *regval)
+{
+	uint32_t hw_lvl = tm_node->hw_lvl;
+	uint32_t schq = tm_node->hw_id;
+	uint8_t k = 0;
+
+	otx2_tm_dbg("sw xoff config node %s(%u) lvl %u id %u, enable %u (%p)",
+		    nix_hwlvl2str(hw_lvl), schq, tm_node->lvl,
+		    tm_node->id, enable, tm_node);
+
+	regval[k] = enable;
+
+	switch (hw_lvl) {
+	case NIX_TXSCH_LVL_MDQ:
+		reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL2:
+		reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
+		k++;
+		break;
+	case NIX_TXSCH_LVL_TL1:
+		reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
+		k++;
+		break;
+	default:
+		break;
+	}
+
+	return k;
+}
+
 static int
 populate_tm_reg(struct otx2_eth_dev *dev,
 		struct otx2_nix_tm_node *tm_node)
@@ -692,12 +742,13 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 			uint16_t lvl, bool user,
 			struct rte_tm_node_params *params)
 {
-	struct otx2_nix_tm_shaper_profile *shaper_profile;
+	struct otx2_nix_tm_shaper_profile *profile;
 	struct otx2_nix_tm_node *tm_node, *parent_node;
-	uint32_t shaper_profile_id;
+	struct shaper_params cir, pir;
+	uint32_t profile_id;
 
-	shaper_profile_id = params->shaper_profile_id;
-	shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
+	profile_id = params->shaper_profile_id;
+	profile = nix_tm_shaper_profile_search(dev, profile_id);
 
 	parent_node = nix_tm_node_search(dev, parent_node_id, user);
 
@@ -709,6 +760,10 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 	tm_node->lvl = lvl;
 	tm_node->hw_lvl = hw_lvl;
 
+	/* Maintain minimum weight */
+	if (!weight)
+		weight = 1;
+
 	tm_node->id = node_id;
 	tm_node->priority = priority;
 	tm_node->weight = weight;
@@ -720,10 +775,22 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 		tm_node->flags = NIX_TM_NODE_USER;
 	rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
 
-	if (shaper_profile)
-		shaper_profile->reference_count++;
+	if (profile)
+		profile->reference_count++;
+
+	memset(&cir, 0, sizeof(cir));
+	memset(&pir, 0, sizeof(pir));
+	shaper_config_to_nix(profile, &cir, &pir);
+
 	tm_node->parent = parent_node;
 	tm_node->parent_hw_id = UINT32_MAX;
+	/* C0 doesn't support STALL when both PIR & CIR are enabled */
+	if (lvl < OTX2_TM_LVL_QUEUE &&
+	    otx2_dev_is_96xx_Cx(dev) &&
+	    pir.rate && cir.rate)
+		tm_node->red_algo = NIX_REDALG_DISCARD;
+	else
+		tm_node->red_algo = NIX_REDALG_STD;
 
 	TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node);
 
@@ -747,24 +814,67 @@ nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
 }
 
 static int
-nix_smq_xoff(struct otx2_eth_dev *dev, uint16_t smq, bool enable)
+nix_clear_path_xoff(struct otx2_eth_dev *dev,
+		    struct otx2_nix_tm_node *tm_node)
+{
+	struct nix_txschq_config *req;
+	struct otx2_nix_tm_node *p;
+	int rc;
+
+	/* Manipulating SW_XOFF not supported on Ax */
+	if (otx2_dev_is_Ax(dev))
+		return 0;
+
+	/* Enable nodes in path for flush to succeed */
+	if (!nix_tm_is_leaf(dev, tm_node->lvl))
+		p = tm_node;
+	else
+		p = tm_node->parent;
+	while (p) {
+		if (!(p->flags & NIX_TM_NODE_ENABLED) &&
+		    (p->flags & NIX_TM_NODE_HWRES)) {
+			req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+			req->lvl = p->hw_lvl;
+			req->num_regs = prepare_tm_sw_xoff(p, false, req->reg,
+							   req->regval);
+			rc = otx2_mbox_process(dev->mbox);
+			if (rc)
+				return rc;
+
+			p->flags |= NIX_TM_NODE_ENABLED;
+		}
+		p = p->parent;
+	}
+
+	return 0;
+}
+
+static int
+nix_smq_xoff(struct otx2_eth_dev *dev,
+	     struct otx2_nix_tm_node *tm_node,
+	     bool enable)
 {
 	struct otx2_mbox *mbox = dev->mbox;
 	struct nix_txschq_config *req;
+	uint16_t smq;
+	int rc;
+
+	smq = tm_node->hw_id;
+	otx2_tm_dbg("Setting SMQ %u XOFF/FLUSH to %s", smq,
+		    enable ? "enable" : "disable");
+
+	rc = nix_clear_path_xoff(dev, tm_node);
+	if (rc)
+		return rc;
 
 	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
 	req->lvl = NIX_TXSCH_LVL_SMQ;
 	req->num_regs = 1;
 
 	req->reg[0] = NIX_AF_SMQX_CFG(smq);
-	/* Unmodified fields */
-	req->regval[0] = ((uint64_t)NIX_MAX_VTAG_INS << 36) |
-				(NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS;
-
-	if (enable)
-		req->regval[0] |= BIT_ULL(50) | BIT_ULL(49);
-	else
-		req->regval[0] |= 0;
+	req->regval[0] = enable ? (BIT_ULL(50) | BIT_ULL(49)) : 0;
+	req->regval_mask[0] = enable ?
+				~(BIT_ULL(50) | BIT_ULL(49)) : ~BIT_ULL(50);
 
 	return otx2_mbox_process(mbox);
 }
@@ -780,6 +890,9 @@ otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable)
 	uint64_t aura_handle;
 	int rc;
 
+	otx2_tm_dbg("Setting SQ %u SQB aura FC to %s", txq->sq,
+		    enable ? "enable" : "disable");
+
 	lf = otx2_npa_lf_obj_get();
 	if (!lf)
 		return -EFAULT;
@@ -824,22 +937,41 @@ otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable)
 	return 0;
 }
 
-static void
+static int
 nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
 {
 	uint16_t sqb_cnt, head_off, tail_off;
 	struct otx2_eth_dev *dev = txq->dev;
+	uint64_t wdata, val, prev;
 	uint16_t sq = txq->sq;
-	uint64_t reg, val;
 	int64_t *regaddr;
+	uint64_t timeout;/* 10's of usec */
+
+	/* Wait for enough time based on shaper min rate */
+	timeout = (txq->qconf.nb_desc * NIX_MAX_HW_FRS * 8 * 1E5);
+	timeout = timeout / dev->tm_rate_min;
+	if (!timeout)
+		timeout = 10000;
+
+	wdata = ((uint64_t)sq << 32);
+	regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS);
+	val = otx2_atomic64_add_nosync(wdata, regaddr);
+
+	/* Spin for multiple iterations since "txq->fc_cache_pkts" may
+	 * still allow packets to be sent even though fc_mem is disabled.
+	 */
 
 	while (true) {
-		reg = ((uint64_t)sq << 32);
-		regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
-		val = otx2_atomic64_add_nosync(reg, regaddr);
+		prev = val;
+		rte_delay_us(10);
+		val = otx2_atomic64_add_nosync(wdata, regaddr);
+		/* Continue on error */
+		if (val & BIT_ULL(63))
+			continue;
+
+		if (prev != val)
+			continue;
 
-		regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS);
-		val = otx2_atomic64_add_nosync(reg, regaddr);
 		sqb_cnt = val & 0xFFFF;
 		head_off = (val >> 20) & 0x3F;
 		tail_off = (val >> 28) & 0x3F;
@@ -850,117 +982,222 @@ nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
 			break;
 		}
 
-		rte_pause();
+		/* Timeout */
+		if (!timeout)
+			goto exit;
+		timeout--;
 	}
+
+	return 0;
+exit:
+	return -EFAULT;
 }
 
-int
-otx2_nix_tm_sw_xoff(void *__txq, bool dev_started)
+/* Flush and disable tx queue and its parent SMQ */
+int otx2_nix_sq_flush_pre(void *_txq, bool dev_started)
 {
-	struct otx2_eth_txq *txq = __txq;
-	struct otx2_eth_dev *dev = txq->dev;
-	struct otx2_mbox *mbox = dev->mbox;
-	struct nix_aq_enq_req *req;
-	struct nix_aq_enq_rsp *rsp;
-	uint16_t smq;
+	struct otx2_nix_tm_node *tm_node, *sibling;
+	struct otx2_eth_txq *txq;
+	struct otx2_eth_dev *dev;
+	uint16_t sq;
+	bool user;
 	int rc;
 
-	/* Get smq from sq */
-	req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
-	req->qidx = txq->sq;
-	req->ctype = NIX_AQ_CTYPE_SQ;
-	req->op = NIX_AQ_INSTOP_READ;
-	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
-	if (rc) {
-		otx2_err("Failed to get smq, rc=%d", rc);
-		return -EIO;
+	txq = _txq;
+	dev = txq->dev;
+	sq = txq->sq;
+
+	user = !!(dev->tm_flags & NIX_TM_COMMITTED);
+
+	/* Find the node for this SQ */
+	tm_node = nix_tm_node_search(dev, sq, user);
+	if (!tm_node || !(tm_node->flags & NIX_TM_NODE_ENABLED)) {
+		otx2_err("Invalid node/state for sq %u", sq);
+		return -EFAULT;
 	}
 
-	/* Check if sq is enabled */
-	if (!rsp->sq.ena)
-		return 0;
-
-	smq = rsp->sq.smq;
-
 	/* Enable CGX RXTX to drain pkts */
 	if (!dev_started) {
-		rc = otx2_cgx_rxtx_start(dev);
-		if (rc)
+		/* Though this enables both RX MCAM entries and the CGX
+		 * link, we assume all the Rx queues were stopped earlier.
+		 */
+		otx2_mbox_alloc_msg_nix_lf_start_rx(dev->mbox);
+		rc = otx2_mbox_process(dev->mbox);
+		if (rc) {
+			otx2_err("cgx start failed, rc=%d", rc);
 			return rc;
-	}
-
-	rc = otx2_nix_sq_sqb_aura_fc(txq, false);
-	if (rc < 0) {
-		otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
-		goto cleanup;
+		}
 	}
 
 	/* Disable smq xoff for case it was enabled earlier */
-	rc = nix_smq_xoff(dev, smq, false);
+	rc = nix_smq_xoff(dev, tm_node->parent, false);
 	if (rc) {
-		otx2_err("Failed to enable smq for sq %u, rc=%d", txq->sq, rc);
-		goto cleanup;
-	}
-
-	/* Wait for sq entries to be flushed */
-	nix_txq_flush_sq_spin(txq);
-
-	/* Flush and enable smq xoff */
-	rc = nix_smq_xoff(dev, smq, true);
-	if (rc) {
-		otx2_err("Failed to disable smq for sq %u, rc=%d", txq->sq, rc);
+		otx2_err("Failed to enable smq %u, rc=%d",
+			 tm_node->parent->hw_id, rc);
 		return rc;
 	}
 
+	/* As per the HRM, to disable an SQ, all other SQs
+	 * that feed the same SMQ must be paused before the SMQ flush.
+	 */
+	TAILQ_FOREACH(sibling, &dev->node_list, node) {
+		if (sibling->parent != tm_node->parent)
+			continue;
+		if (!(sibling->flags & NIX_TM_NODE_ENABLED))
+			continue;
+
+		sq = sibling->id;
+		txq = dev->eth_dev->data->tx_queues[sq];
+		if (!txq)
+			continue;
+
+		rc = otx2_nix_sq_sqb_aura_fc(txq, false);
+		if (rc) {
+			otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
+			goto cleanup;
+		}
+
+		/* Wait for sq entries to be flushed */
+		rc = nix_txq_flush_sq_spin(txq);
+		if (rc) {
+			otx2_err("Failed to drain sq %u, rc=%d\n", txq->sq, rc);
+			return rc;
+		}
+	}
+
+	tm_node->flags &= ~NIX_TM_NODE_ENABLED;
+
+	/* Disable and flush */
+	rc = nix_smq_xoff(dev, tm_node->parent, true);
+	if (rc) {
+		otx2_err("Failed to disable smq %u, rc=%d",
+			 tm_node->parent->hw_id, rc);
+		goto cleanup;
+	}
 cleanup:
 	/* Restore cgx state */
-	if (!dev_started)
-		rc |= otx2_cgx_rxtx_stop(dev);
+	if (!dev_started) {
+		otx2_mbox_alloc_msg_nix_lf_stop_rx(dev->mbox);
+		rc |= otx2_mbox_process(dev->mbox);
+	}
 
 	return rc;
 }
 
+int otx2_nix_sq_flush_post(void *_txq)
+{
+	struct otx2_nix_tm_node *tm_node, *sibling;
+	struct otx2_eth_txq *txq = _txq;
+	struct otx2_eth_txq *s_txq;
+	struct otx2_eth_dev *dev;
+	bool once = false;
+	uint16_t sq, s_sq;
+	bool user;
+	int rc;
+
+	dev = txq->dev;
+	sq = txq->sq;
+	user = !!(dev->tm_flags & NIX_TM_COMMITTED);
+
+	/* Find the node for this SQ */
+	tm_node = nix_tm_node_search(dev, sq, user);
+	if (!tm_node) {
+		otx2_err("Invalid node for sq %u", sq);
+		return -EFAULT;
+	}
+
+	/* Enable all the siblings back */
+	TAILQ_FOREACH(sibling, &dev->node_list, node) {
+		if (sibling->parent != tm_node->parent)
+			continue;
+
+		if (sibling->id == sq)
+			continue;
+
+		if (!(sibling->flags & NIX_TM_NODE_ENABLED))
+			continue;
+
+		s_sq = sibling->id;
+		s_txq = dev->eth_dev->data->tx_queues[s_sq];
+		if (!s_txq)
+			continue;
+
+		if (!once) {
+			/* Enable back if any SQ is still present */
+			rc = nix_smq_xoff(dev, tm_node->parent, false);
+			if (rc) {
+				otx2_err("Failed to enable smq %u, rc=%d",
+					 tm_node->parent->hw_id, rc);
+				return rc;
+			}
+			once = true;
+		}
+
+		rc = otx2_nix_sq_sqb_aura_fc(s_txq, true);
+		if (rc) {
+			otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
+			return rc;
+		}
+	}
+
+	return 0;
+}
+
 static int
-nix_tm_sw_xon(struct otx2_eth_txq *txq,
-	      uint16_t smq, uint32_t rr_quantum)
+nix_sq_sched_data(struct otx2_eth_dev *dev,
+		  struct otx2_nix_tm_node *tm_node,
+		  bool rr_quantum_only)
 {
-	struct otx2_eth_dev *dev = txq->dev;
+	struct rte_eth_dev *eth_dev = dev->eth_dev;
 	struct otx2_mbox *mbox = dev->mbox;
+	uint16_t sq = tm_node->id, smq;
 	struct nix_aq_enq_req *req;
+	uint64_t rr_quantum;
 	int rc;
 
-	otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum %u",
-		    txq->sq, txq->sq, rr_quantum);
-	/* Set smq from sq */
+	smq = tm_node->parent->hw_id;
+	rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
+
+	if (rr_quantum_only)
+		otx2_tm_dbg("Update sq(%u) rr_quantum 0x%"PRIx64, sq, rr_quantum);
+	else
+		otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum 0x%"PRIx64,
+			    sq, smq, rr_quantum);
+
+	if (sq > eth_dev->data->nb_tx_queues)
+		return -EFAULT;
+
 	req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
-	req->qidx = txq->sq;
+	req->qidx = sq;
 	req->ctype = NIX_AQ_CTYPE_SQ;
 	req->op = NIX_AQ_INSTOP_WRITE;
-	req->sq.smq = smq;
+
+	/* smq update only when needed */
+	if (!rr_quantum_only) {
+		req->sq.smq = smq;
+		req->sq_mask.smq = ~req->sq_mask.smq;
+	}
 	req->sq.smq_rr_quantum = rr_quantum;
-	req->sq_mask.smq = ~req->sq_mask.smq;
 	req->sq_mask.smq_rr_quantum = ~req->sq_mask.smq_rr_quantum;
 
 	rc = otx2_mbox_process(mbox);
-	if (rc) {
+	if (rc)
 		otx2_err("Failed to set smq, rc=%d", rc);
-		return -EIO;
-	}
+	return rc;
+}
+
+int otx2_nix_sq_enable(void *_txq)
+{
+	struct otx2_eth_txq *txq = _txq;
+	int rc;
 
 	/* Enable sqb_aura fc */
 	rc = otx2_nix_sq_sqb_aura_fc(txq, true);
-	if (rc < 0) {
+	if (rc) {
 		otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
 		return rc;
 	}
 
-	/* Disable smq xoff */
-	rc = nix_smq_xoff(dev, smq, false);
-	if (rc) {
-		otx2_err("Failed to enable smq for sq %u", txq->sq);
-		return rc;
-	}
-
 	return 0;
 }
 
@@ -968,12 +1205,11 @@ static int
 nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
 		      uint32_t flags, bool hw_only)
 {
-	struct otx2_nix_tm_shaper_profile *shaper_profile;
+	struct otx2_nix_tm_shaper_profile *profile;
 	struct otx2_nix_tm_node *tm_node, *next_node;
 	struct otx2_mbox *mbox = dev->mbox;
 	struct nix_txsch_free_req *req;
-	uint32_t shaper_profile_id;
-	bool skip_node = false;
+	uint32_t profile_id;
 	int rc = 0;
 
 	next_node = TAILQ_FIRST(&dev->node_list);
@@ -985,37 +1221,40 @@ nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
 		if ((tm_node->flags & flags_mask) != flags)
 			continue;
 
-		if (nix_tm_have_tl1_access(dev) &&
-		    tm_node->hw_lvl ==  NIX_TXSCH_LVL_TL1)
-			skip_node = true;
-
-		otx2_tm_dbg("Free hwres for node %u, hwlvl %u, hw_id %u (%p)",
-			    tm_node->id,  tm_node->hw_lvl,
-			    tm_node->hw_id, tm_node);
-		/* Free specific HW resource if requested */
-		if (!skip_node && flags_mask &&
+		if (!nix_tm_is_leaf(dev, tm_node->lvl) &&
+		    tm_node->hw_lvl != NIX_TXSCH_LVL_TL1 &&
 		    tm_node->flags & NIX_TM_NODE_HWRES) {
+			/* Free specific HW resource */
+			otx2_tm_dbg("Free hwres %s(%u) lvl %u id %u (%p)",
+				    nix_hwlvl2str(tm_node->hw_lvl),
+				    tm_node->hw_id, tm_node->lvl,
+				    tm_node->id, tm_node);
+
+			rc = nix_clear_path_xoff(dev, tm_node);
+			if (rc)
+				return rc;
+
 			req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
 			req->flags = 0;
 			req->schq_lvl = tm_node->hw_lvl;
 			req->schq = tm_node->hw_id;
 			rc = otx2_mbox_process(mbox);
 			if (rc)
-				break;
-		} else {
-			skip_node = false;
+				return rc;
+			tm_node->flags &= ~NIX_TM_NODE_HWRES;
 		}
-		tm_node->flags &= ~NIX_TM_NODE_HWRES;
 
 		/* Leave software elements if needed */
 		if (hw_only)
 			continue;
 
-		shaper_profile_id = tm_node->params.shaper_profile_id;
-		shaper_profile =
-			nix_tm_shaper_profile_search(dev, shaper_profile_id);
-		if (shaper_profile)
-			shaper_profile->reference_count--;
+		otx2_tm_dbg("Free node lvl %u id %u (%p)",
+			    tm_node->lvl, tm_node->id, tm_node);
+
+		profile_id = tm_node->params.shaper_profile_id;
+		profile = nix_tm_shaper_profile_search(dev, profile_id);
+		if (profile)
+			profile->reference_count--;
 
 		TAILQ_REMOVE(&dev->node_list, tm_node, node);
 		rte_free(tm_node);
@@ -1060,8 +1299,8 @@ nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
 	uint32_t hw_id, schq_con_index, prio_offset;
 	uint32_t l_id, schq_index;
 
-	otx2_tm_dbg("Assign hw id for child node %u, lvl %u, hw_lvl %u (%p)",
-		    child->id, child->lvl, child->hw_lvl, child);
+	otx2_tm_dbg("Assign hw id for child node %s lvl %u id %u (%p)",
+		    nix_hwlvl2str(child->hw_lvl), child->lvl, child->id, child);
 
 	child->flags |= NIX_TM_NODE_HWRES;
 
@@ -1219,8 +1458,8 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
 	struct otx2_nix_tm_node *tm_node;
-	uint16_t sq, smq, rr_quantum;
 	struct otx2_eth_txq *txq;
+	uint16_t sq;
 	int rc;
 
 	nix_tm_update_parent_info(dev);
@@ -1237,42 +1476,68 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 		return rc;
 	}
 
-	/* Enable xmit as all the topology is ready */
-	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
-		if (tm_node->flags & NIX_TM_NODE_ENABLED)
-			continue;
+	/* Trigger MTU recalculation as SMQ needs MTU conf */
+	if (eth_dev->data->dev_started && eth_dev->data->nb_rx_queues) {
+		rc = otx2_nix_recalc_mtu(eth_dev);
+		if (rc) {
+			otx2_err("TM MTU update failed, rc=%d", rc);
+			return rc;
+		}
+	}
 
-		/* Enable xmit on sq */
-		if (tm_node->lvl != OTX2_TM_LVL_QUEUE) {
+	/* Mark all non-leaf's as enabled */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!nix_tm_is_leaf(dev, tm_node->lvl))
 			tm_node->flags |= NIX_TM_NODE_ENABLED;
+	}
+
+	if (!xmit_enable)
+		return 0;
+
+	/* Update SQ Sched Data while SQ is idle */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!nix_tm_is_leaf(dev, tm_node->lvl))
 			continue;
+
+		rc = nix_sq_sched_data(dev, tm_node, false);
+		if (rc) {
+			otx2_err("SQ %u sched update failed, rc=%d",
+				 tm_node->id, rc);
+			return rc;
+		}
+	}
+
+	/* Finally XON all SMQ's */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
+			continue;
+
+		rc = nix_smq_xoff(dev, tm_node, false);
+		if (rc) {
+			otx2_err("Failed to enable smq %u, rc=%d",
+				 tm_node->hw_id, rc);
+			return rc;
 		}
+	}
 
-		/* Don't enable SMQ or mark as enable */
-		if (!xmit_enable)
+	/* Enable xmit as all the topology is ready */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!nix_tm_is_leaf(dev, tm_node->lvl))
 			continue;
 
 		sq = tm_node->id;
-		if (sq > eth_dev->data->nb_tx_queues) {
-			rc = -EFAULT;
-			break;
-		}
-
 		txq = eth_dev->data->tx_queues[sq];
 
-		smq = tm_node->parent->hw_id;
-		rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
-		rc = nix_tm_sw_xon(txq, smq, rr_quantum);
-		if (rc)
-			break;
+		rc = otx2_nix_sq_enable(txq);
+		if (rc) {
+			otx2_err("TM sw xon failed on SQ %u, rc=%d",
+				 tm_node->id, rc);
+			return rc;
+		}
 		tm_node->flags |= NIX_TM_NODE_ENABLED;
 	}
 
-	if (rc)
-		otx2_err("TM failed to enable xmit on sq %u, rc=%d", sq, rc);
-
-	return rc;
+	return 0;
 }
 
 static int
@@ -1282,7 +1547,7 @@ nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 	uint32_t def = eth_dev->data->nb_tx_queues;
 	struct rte_tm_node_params params;
 	uint32_t leaf_parent, i;
-	int rc = 0;
+	int rc = 0, leaf_level;
 
 	/* Default params */
 	memset(&params, 0, sizeof(params));
@@ -1325,6 +1590,7 @@ nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 			goto exit;
 
 		leaf_parent = def + 4;
+		leaf_level = OTX2_TM_LVL_QUEUE;
 	} else {
 		dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
 		rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
@@ -1356,6 +1622,7 @@ nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 			goto exit;
 
 		leaf_parent = def + 3;
+		leaf_level = OTX2_TM_LVL_SCH4;
 	}
 
 	/* Add leaf nodes */
@@ -1363,7 +1630,7 @@ nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 		rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0,
 					     DEFAULT_RR_WEIGHT,
 					     NIX_TXSCH_LVL_CNT,
-					     OTX2_TM_LVL_QUEUE, false, &params);
+					     leaf_level, false, &params);
 		if (rc)
 			break;
 	}
@@ -1378,6 +1645,7 @@ void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
 
 	TAILQ_INIT(&dev->node_list);
 	TAILQ_INIT(&dev->shaper_profile_list);
+	dev->tm_rate_min = 1E9; /* 1Gbps */
 }
 
 int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
@@ -1455,7 +1723,7 @@ otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 		tm_node = nix_tm_node_search(dev, sq, true);
 
 	/* Check if we found a valid leaf node */
-	if (!tm_node || tm_node->lvl != OTX2_TM_LVL_QUEUE ||
+	if (!tm_node || !nix_tm_is_leaf(dev, tm_node->lvl) ||
 	    !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) {
 		return -EIO;
 	}
@@ -1464,7 +1732,7 @@ otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 	*smq = tm_node->parent->hw_id;
 	*rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
 
-	rc = nix_smq_xoff(dev, *smq, false);
+	rc = nix_smq_xoff(dev, tm_node->parent, false);
 	if (rc)
 		return rc;
 	tm_node->flags |= NIX_TM_NODE_ENABLED;
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index ad7727e..413120a 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -10,6 +10,7 @@
 #include <rte_tm_driver.h>
 
 #define NIX_TM_DEFAULT_TREE	BIT_ULL(0)
+#define NIX_TM_COMMITTED	BIT_ULL(1)
 #define NIX_TM_TL1_NO_SP	BIT_ULL(3)
 
 struct otx2_eth_dev;
@@ -19,7 +20,9 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 			      uint32_t *rr_quantum, uint16_t *smq);
-int otx2_nix_tm_sw_xoff(void *_txq, bool dev_started);
+int otx2_nix_sq_flush_pre(void *_txq, bool dev_started);
+int otx2_nix_sq_flush_post(void *_txq);
+int otx2_nix_sq_enable(void *_txq);
 int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable);
 
 struct otx2_nix_tm_node {
@@ -40,6 +43,7 @@ struct otx2_nix_tm_node {
 #define NIX_TM_NODE_USER	BIT_ULL(2)
 	/* Shaper algorithm for RED state @NIX_REDALG_E */
 	uint32_t red_algo:2;
+
 	struct otx2_nix_tm_node *parent;
 	struct rte_tm_node_params params;
 };
@@ -70,7 +74,6 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 		((((__weight) & MAX_SCHED_WEIGHT) *             \
 		  NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT)
 
-
 /* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT  */
 /* = NIX_MAX_HW_MTU */
 #define DEFAULT_RR_WEIGHT 71
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v3 04/11] net/octeontx2: add tm node add and delete cb
  2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                     ` (2 preceding siblings ...)
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 03/11] net/octeontx2: add dynamic topology update support Nithin Dabilpuram
@ 2020-04-03  8:52   ` Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 05/11] net/octeontx2: add tm node suspend and resume cb Nithin Dabilpuram
                     ` (6 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-03  8:52 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Add support for the Traffic Management callbacks "node_add"
and "node_delete". These callbacks don't support
dynamic node addition or deletion post hierarchy commit.
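
A minimal usage sketch of these callbacks through the generic rte_tm
API follows (not part of the patch). The node ids above the Tx queue
count and the four-deep chain of non-leaf levels are illustrative
assumptions; the depth the driver actually expects depends on TL1
access (see nix_tm_lvl2nix() in the diff below).

#include <string.h>
#include <rte_tm.h>

static int
setup_tm_hierarchy(uint16_t port_id, uint16_t nb_txq)
{
	struct rte_tm_node_params params;
	struct rte_tm_error error;
	uint32_t parent = RTE_TM_NODE_ID_NULL;
	uint32_t id = nb_txq;	/* first non-leaf node id (assumption) */
	int i, rc;

	memset(&params, 0, sizeof(params));
	params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;

	/* Root plus three intermediate levels (the non-TL1 case) */
	for (i = 0; i < 4; i++) {
		rc = rte_tm_node_add(port_id, id, parent, 0, 1,
				     RTE_TM_NODE_LEVEL_ID_ANY,
				     &params, &error);
		if (rc)
			return rc;
		parent = id++;
	}

	/* Leaf nodes map 1:1 to Tx queues and must be priority 0 */
	for (i = 0; i < nb_txq; i++) {
		rc = rte_tm_node_add(port_id, i, parent, 0, 1,
				     RTE_TM_NODE_LEVEL_ID_ANY,
				     &params, &error);
		if (rc)
			return rc;
	}
	return 0;
}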

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 271 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.h |   2 +
 2 files changed, 273 insertions(+)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index d1a4529..3e2ace6 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1540,6 +1540,277 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 	return 0;
 }
 
+static uint16_t
+nix_tm_lvl2nix(struct otx2_eth_dev *dev, uint32_t lvl)
+{
+	if (nix_tm_have_tl1_access(dev)) {
+		switch (lvl) {
+		case OTX2_TM_LVL_ROOT:
+			return NIX_TXSCH_LVL_TL1;
+		case OTX2_TM_LVL_SCH1:
+			return NIX_TXSCH_LVL_TL2;
+		case OTX2_TM_LVL_SCH2:
+			return NIX_TXSCH_LVL_TL3;
+		case OTX2_TM_LVL_SCH3:
+			return NIX_TXSCH_LVL_TL4;
+		case OTX2_TM_LVL_SCH4:
+			return NIX_TXSCH_LVL_SMQ;
+		default:
+			return NIX_TXSCH_LVL_CNT;
+		}
+	} else {
+		switch (lvl) {
+		case OTX2_TM_LVL_ROOT:
+			return NIX_TXSCH_LVL_TL2;
+		case OTX2_TM_LVL_SCH1:
+			return NIX_TXSCH_LVL_TL3;
+		case OTX2_TM_LVL_SCH2:
+			return NIX_TXSCH_LVL_TL4;
+		case OTX2_TM_LVL_SCH3:
+			return NIX_TXSCH_LVL_SMQ;
+		default:
+			return NIX_TXSCH_LVL_CNT;
+		}
+	}
+}
+
+static uint16_t
+nix_max_prio(struct otx2_eth_dev *dev, uint16_t hw_lvl)
+{
+	if (hw_lvl >= NIX_TXSCH_LVL_CNT)
+		return 0;
+
+	/* MDQ doesn't support SP */
+	if (hw_lvl == NIX_TXSCH_LVL_MDQ)
+		return 0;
+
+	/* PF's TL1 with VF's enabled doesn't support SP */
+	if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
+	    (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
+	     (dev->tm_flags & NIX_TM_TL1_NO_SP)))
+		return 0;
+
+	return TXSCH_TLX_SP_PRIO_MAX - 1;
+}
+
+static int
+validate_prio(struct otx2_eth_dev *dev, uint32_t lvl,
+	      uint32_t parent_id, uint32_t priority,
+	      struct rte_tm_error *error)
+{
+	uint8_t priorities[TXSCH_TLX_SP_PRIO_MAX];
+	struct otx2_nix_tm_node *tm_node;
+	uint32_t rr_num = 0;
+	int i;
+
+	/* Validate priority against max */
+	if (priority > nix_max_prio(dev, nix_tm_lvl2nix(dev, lvl - 1))) {
+		error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
+		error->message = "unsupported priority value";
+		return -EINVAL;
+	}
+
+	if (parent_id == RTE_TM_NODE_ID_NULL)
+		return 0;
+
+	memset(priorities, 0, TXSCH_TLX_SP_PRIO_MAX);
+	priorities[priority] = 1;
+
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!tm_node->parent)
+			continue;
+
+		if (!(tm_node->flags & NIX_TM_NODE_USER))
+			continue;
+
+		if (tm_node->parent->id != parent_id)
+			continue;
+
+		priorities[tm_node->priority]++;
+	}
+
+	for (i = 0; i < TXSCH_TLX_SP_PRIO_MAX; i++)
+		if (priorities[i] > 1)
+			rr_num++;
+
+	/* At most, one RR group per parent */
+	if (rr_num > 1) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+		error->message = "multiple DWRR node priority";
+		return -EINVAL;
+	}
+
+	/* Check for previous priority to avoid holes in priorities */
+	if (priority && !priorities[priority - 1]) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+		error->message = "priority not in order";
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
+		     uint32_t parent_node_id, uint32_t priority,
+		     uint32_t weight, uint32_t lvl,
+		     struct rte_tm_node_params *params,
+		     struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *parent_node;
+	int rc, clear_on_fail = 0;
+	uint32_t exp_next_lvl;
+	uint16_t hw_lvl;
+
+	/* we don't support dynamic updates */
+	if (dev->tm_flags & NIX_TM_COMMITTED) {
+		error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
+		error->message = "dynamic update not supported";
+		return -EIO;
+	}
+
+	/* Leaf nodes must all have the same priority */
+	if (nix_tm_is_leaf(dev, lvl) && priority != 0) {
+		error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
+		error->message = "queue shapers must be priority 0";
+		return -EIO;
+	}
+
+	parent_node = nix_tm_node_search(dev, parent_node_id, true);
+
+	/* find the right level */
+	if (lvl == RTE_TM_NODE_LEVEL_ID_ANY) {
+		if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+			lvl = OTX2_TM_LVL_ROOT;
+		} else if (parent_node) {
+			lvl = parent_node->lvl + 1;
+		} else {
+			/* Neither proper parent nor proper level id given */
+			error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+			error->message = "invalid parent node id";
+			return -ERANGE;
+		}
+	}
+
+	/* Translate rte_tm level id's to nix hw level id's */
+	hw_lvl = nix_tm_lvl2nix(dev, lvl);
+	if (hw_lvl == NIX_TXSCH_LVL_CNT &&
+	    !nix_tm_is_leaf(dev, lvl)) {
+		error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
+		error->message = "invalid level id";
+		return -ERANGE;
+	}
+
+	if (node_id < dev->tm_leaf_cnt)
+		exp_next_lvl = NIX_TXSCH_LVL_SMQ;
+	else
+		exp_next_lvl = hw_lvl + 1;
+
+	/* Check if there is no parent node yet */
+	if (hw_lvl != dev->otx2_tm_root_lvl &&
+	    (!parent_node || parent_node->hw_lvl != exp_next_lvl)) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+		error->message = "invalid parent node id";
+		return -EINVAL;
+	}
+
+	/* Check if a node already exists */
+	if (nix_tm_node_search(dev, node_id, true)) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "node already exists";
+		return -EINVAL;
+	}
+
+	/* Check if shaper profile exists for non leaf node */
+	if (!nix_tm_is_leaf(dev, lvl) &&
+	    params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE &&
+	    !nix_tm_shaper_profile_search(dev, params->shaper_profile_id)) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "invalid shaper profile";
+		return -EINVAL;
+	}
+
+	/* Check if there is second DWRR already in siblings or holes in prio */
+	if (validate_prio(dev, lvl, parent_node_id, priority, error))
+		return -EINVAL;
+
+	if (weight > MAX_SCHED_WEIGHT) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+		error->message = "max weight exceeded";
+		return -EINVAL;
+	}
+
+	rc = nix_tm_node_add_to_list(dev, node_id, parent_node_id,
+				     priority, weight, hw_lvl,
+				     lvl, true, params);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		/* cleanup user added nodes */
+		if (clear_on_fail)
+			nix_tm_free_resources(dev, NIX_TM_NODE_USER,
+					      NIX_TM_NODE_USER, false);
+		error->message = "failed to add node";
+		return rc;
+	}
+	error->type = RTE_TM_ERROR_TYPE_NONE;
+	return 0;
+}
+
+static int
+otx2_nix_tm_node_delete(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node, *child_node;
+	struct otx2_nix_tm_shaper_profile *profile;
+	uint32_t profile_id;
+
+	/* we don't support dynamic updates yet */
+	if (dev->tm_flags & NIX_TM_COMMITTED) {
+		error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
+		error->message = "hierarchy exists";
+		return -EIO;
+	}
+
+	if (node_id == RTE_TM_NODE_ID_NULL) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "invalid node id";
+		return -EINVAL;
+	}
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	/* Check for any existing children */
+	TAILQ_FOREACH(child_node, &dev->node_list, node) {
+		if (child_node->parent == tm_node) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+			error->message = "children exist";
+			return -EINVAL;
+		}
+	}
+
+	/* Remove shaper profile reference */
+	profile_id = tm_node->params.shaper_profile_id;
+	profile = nix_tm_shaper_profile_search(dev, profile_id);
+	if (profile)
+		profile->reference_count--;
+
+	TAILQ_REMOVE(&dev->node_list, tm_node, node);
+	rte_free(tm_node);
+	return 0;
+}
+
+const struct rte_tm_ops otx2_tm_ops = {
+	.node_add = otx2_nix_tm_node_add,
+	.node_delete = otx2_nix_tm_node_delete,
+};
+
 static int
 nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
 {
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 413120a..ebb4e90 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -135,6 +135,8 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 #define TXSCH_TL1_DFLT_RR_QTM  ((1 << 24) - 1)
 #define TXSCH_TL1_DFLT_RR_PRIO 1
 
+#define TXSCH_TLX_SP_PRIO_MAX 10
+
 static inline const char *
 nix_hwlvl2str(uint32_t hw_lvl)
 {
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v3 05/11] net/octeontx2: add tm node suspend and resume cb
  2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                     ` (3 preceding siblings ...)
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 04/11] net/octeontx2: add tm node add and delete cb Nithin Dabilpuram
@ 2020-04-03  8:52   ` Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 06/11] net/octeontx2: add tm hierarchy commit callback Nithin Dabilpuram
                     ` (5 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-03  8:52 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Krzysztof Kanas <kkanas@marvell.com>

Add TM support to suspend and resume nodes post hierarchy
commit.
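
A minimal sketch of driving these callbacks via the rte_tm API, with
a placeholder node id; the driver rejects both calls until
rte_tm_hierarchy_commit() has set NIX_TM_COMMITTED:

struct rte_tm_error error;
uint32_t node_id = 5;	/* placeholder id of a committed node */

if (rte_tm_node_suspend(port_id, node_id, &error))
	printf("suspend failed: %s\n", error.message);
/* traffic through the node is now stopped via SW_XOFF */
if (rte_tm_node_resume(port_id, node_id, &error))
	printf("resume failed: %s\n", error.message);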

Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 81 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 81 insertions(+)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 3e2ace6..b0e86f0 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1540,6 +1540,28 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
 	return 0;
 }
 
+static int
+send_tm_reqval(struct otx2_mbox *mbox,
+	       struct nix_txschq_config *req,
+	       struct rte_tm_error *error)
+{
+	int rc;
+
+	if (!req->num_regs ||
+	    req->num_regs > MAX_REGS_PER_MBOX_MSG) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "invalid config";
+		return -EIO;
+	}
+
+	rc = otx2_mbox_process(mbox);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+	}
+	return rc;
+}
+
 static uint16_t
 nix_tm_lvl2nix(struct otx2_eth_dev *dev, uint32_t lvl)
 {
@@ -1806,9 +1828,68 @@ otx2_nix_tm_node_delete(struct rte_eth_dev *eth_dev, uint32_t node_id,
 	return 0;
 }
 
+static int
+nix_tm_node_suspend_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			   struct rte_tm_error *error, bool suspend)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct otx2_nix_tm_node *tm_node;
+	struct nix_txschq_config *req;
+	uint16_t flags;
+	int rc;
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "hierarchy doesn't exist";
+		return -EINVAL;
+	}
+
+	flags = tm_node->flags;
+	flags = suspend ? (flags & ~NIX_TM_NODE_ENABLED) :
+		(flags | NIX_TM_NODE_ENABLED);
+
+	if (tm_node->flags == flags)
+		return 0;
+
+	/* send mbox for state change */
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+
+	req->lvl = tm_node->hw_lvl;
+	req->num_regs =	prepare_tm_sw_xoff(tm_node, suspend,
+					   req->reg, req->regval);
+	rc = send_tm_reqval(mbox, req, error);
+	if (!rc)
+		tm_node->flags = flags;
+	return rc;
+}
+
+static int
+otx2_nix_tm_node_suspend(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			 struct rte_tm_error *error)
+{
+	return nix_tm_node_suspend_resume(eth_dev, node_id, error, true);
+}
+
+static int
+otx2_nix_tm_node_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			struct rte_tm_error *error)
+{
+	return nix_tm_node_suspend_resume(eth_dev, node_id, error, false);
+}
+
 const struct rte_tm_ops otx2_tm_ops = {
 	.node_add = otx2_nix_tm_node_add,
 	.node_delete = otx2_nix_tm_node_delete,
+	.node_suspend = otx2_nix_tm_node_suspend,
+	.node_resume = otx2_nix_tm_node_resume,
 };
 
 static int
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v3 06/11] net/octeontx2: add tm hierarchy commit callback
  2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                     ` (4 preceding siblings ...)
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 05/11] net/octeontx2: add tm node suspend and resume cb Nithin Dabilpuram
@ 2020-04-03  8:52   ` Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 07/11] net/octeontx2: add tm stats and shaper profile cbs Nithin Dabilpuram
                     ` (4 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-03  8:52 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Add TM hierarchy commit callback to support enabling
the newly created topology.
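
A minimal usage sketch, assuming the application has already added
the full set of leaf nodes (the commit otherwise fails with
"incomplete hierarchy"):

struct rte_tm_error error;
int rc;

rc = rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, &error);
if (rc)
	printf("commit failed (%d): %s\n", rc, error.message);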

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 173 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 173 insertions(+)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index b0e86f0..1f8642a 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1674,6 +1674,104 @@ validate_prio(struct otx2_eth_dev *dev, uint32_t lvl,
 }
 
 static int
+nix_xmit_disable(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
+	uint16_t sqb_cnt, head_off, tail_off;
+	struct otx2_nix_tm_node *tm_node;
+	struct otx2_eth_txq *txq;
+	uint64_t wdata, val;
+	int i, rc;
+
+	otx2_tm_dbg("Disabling xmit on %s", eth_dev->data->name);
+
+	/* Enable CGX RXTX to drain pkts */
+	if (!eth_dev->data->dev_started) {
+		otx2_mbox_alloc_msg_nix_lf_start_rx(dev->mbox);
+		rc = otx2_mbox_process(dev->mbox);
+		if (rc)
+			return rc;
+	}
+
+	/* XON all SMQ's */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
+			continue;
+		if (!(tm_node->flags & NIX_TM_NODE_HWRES))
+			continue;
+
+		rc = nix_smq_xoff(dev, tm_node, false);
+		if (rc) {
+			otx2_err("Failed to enable smq %u, rc=%d",
+				 tm_node->hw_id, rc);
+			goto cleanup;
+		}
+	}
+
+	/* Flush all tx queues */
+	for (i = 0; i < sq_cnt; i++) {
+		txq = eth_dev->data->tx_queues[i];
+
+		rc = otx2_nix_sq_sqb_aura_fc(txq, false);
+		if (rc) {
+			otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
+			goto cleanup;
+		}
+
+		/* Wait for sq entries to be flushed */
+		rc = nix_txq_flush_sq_spin(txq);
+		if (rc) {
+			otx2_err("Failed to drain sq, rc=%d\n", rc);
+			goto cleanup;
+		}
+	}
+
+	/* XOFF & Flush all SMQ's. HRM mandates
+	 * all SQ's empty before SMQ flush is issued.
+	 */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
+			continue;
+		if (!(tm_node->flags & NIX_TM_NODE_HWRES))
+			continue;
+
+		rc = nix_smq_xoff(dev, tm_node, true);
+		if (rc) {
+			otx2_err("Failed to enable smq %u, rc=%d",
+				 tm_node->hw_id, rc);
+			goto cleanup;
+		}
+	}
+
+	/* Verify sanity of all tx queues */
+	for (i = 0; i < sq_cnt; i++) {
+		txq = eth_dev->data->tx_queues[i];
+
+		wdata = ((uint64_t)txq->sq << 32);
+		val = otx2_atomic64_add_nosync(wdata,
+			       (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS));
+
+		sqb_cnt = val & 0xFFFF;
+		head_off = (val >> 20) & 0x3F;
+		tail_off = (val >> 28) & 0x3F;
+
+		if (sqb_cnt > 1 || head_off != tail_off ||
+		    (*txq->fc_mem != txq->nb_sqb_bufs))
+			otx2_err("Failed to gracefully flush sq %u", txq->sq);
+	}
+
+cleanup:
+	/* restore cgx state */
+	if (!eth_dev->data->dev_started) {
+		otx2_mbox_alloc_msg_nix_lf_stop_rx(dev->mbox);
+		rc |= otx2_mbox_process(dev->mbox);
+	}
+
+	return rc;
+}
+
+static int
 otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		     uint32_t parent_node_id, uint32_t priority,
 		     uint32_t weight, uint32_t lvl,
@@ -1885,11 +1983,86 @@ otx2_nix_tm_node_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
 	return nix_tm_node_suspend_resume(eth_dev, node_id, error, false);
 }
 
+static int
+otx2_nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
+			     int clear_on_fail,
+			     struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node;
+	uint32_t leaf_cnt = 0;
+	int rc;
+
+	if (dev->tm_flags & NIX_TM_COMMITTED) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "hierarchy exists";
+		return -EINVAL;
+	}
+
+	/* Check if we have all the leaf nodes */
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (tm_node->flags & NIX_TM_NODE_USER &&
+		    tm_node->id < dev->tm_leaf_cnt)
+			leaf_cnt++;
+	}
+
+	if (leaf_cnt != dev->tm_leaf_cnt) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "incomplete hierarchy";
+		return -EINVAL;
+	}
+
+	/*
+	 * Disable xmit; it will be re-enabled once
+	 * the new topology is in place.
+	 */
+	rc = nix_xmit_disable(eth_dev);
+	if (rc) {
+		otx2_err("failed to disable TX, rc=%d", rc);
+		return -EIO;
+	}
+
+	/* Delete default/ratelimit tree */
+	if (dev->tm_flags & (NIX_TM_DEFAULT_TREE)) {
+		rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER, 0, false);
+		if (rc) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			error->message = "failed to free default resources";
+			return rc;
+		}
+		dev->tm_flags &= ~(NIX_TM_DEFAULT_TREE);
+	}
+
+	/* Free up user alloc'ed resources */
+	rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER,
+				   NIX_TM_NODE_USER, true);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "failed to free user resources";
+		return rc;
+	}
+
+	rc = nix_tm_alloc_resources(eth_dev, true);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "alloc resources failed";
+		/* TODO should we restore default config ? */
+		if (clear_on_fail)
+			nix_tm_free_resources(dev, 0, 0, false);
+		return rc;
+	}
+
+	error->type = RTE_TM_ERROR_TYPE_NONE;
+	dev->tm_flags |= NIX_TM_COMMITTED;
+	return 0;
+}
+
 const struct rte_tm_ops otx2_tm_ops = {
 	.node_add = otx2_nix_tm_node_add,
 	.node_delete = otx2_nix_tm_node_delete,
 	.node_suspend = otx2_nix_tm_node_suspend,
 	.node_resume = otx2_nix_tm_node_resume,
+	.hierarchy_commit = otx2_nix_tm_hierarchy_commit,
 };
 
 static int
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v3 07/11] net/octeontx2: add tm stats and shaper profile cbs
  2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                     ` (5 preceding siblings ...)
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 06/11] net/octeontx2: add tm hierarchy commit callback Nithin Dabilpuram
@ 2020-04-03  8:52   ` Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 08/11] net/octeontx2: add tm dynamic topology update cb Nithin Dabilpuram
                     ` (3 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-03  8:52 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Add TM support for stats read and private shaper
profile addition or deletion.
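
A minimal sketch of both callbacks via the rte_tm API (headers and
error handling trimmed). The profile id and 8192-byte burst are
illustrative values; the burst must fall inside the driver's
MIN/MAX_SHAPER_BURST window, and rte_tm rates are bytes per second,
which the driver converts to bits internally:

struct rte_tm_shaper_params sp;
struct rte_tm_node_stats stats;
struct rte_tm_error error;
uint64_t mask;

memset(&sp, 0, sizeof(sp));
sp.peak.rate = (100 * 1000 * 1000) / 8;	/* 100 Mbps in bytes/sec */
sp.peak.size = 8192;			/* burst size (assumption) */

if (rte_tm_shaper_profile_add(port_id, 1, &sp, &error))
	printf("profile add failed: %s\n", error.message);

/* Leaf (SQ) stats: packet/byte counts, cleared after the read */
if (!rte_tm_node_stats_read(port_id, 0, &stats, &mask, 1, &error))
	printf("pkts=%" PRIu64 " bytes=%" PRIu64 "\n",
	       stats.n_pkts, stats.n_bytes);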

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 272 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.h |   4 +
 2 files changed, 276 insertions(+)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 1f8642a..68771d1 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1674,6 +1674,47 @@ validate_prio(struct otx2_eth_dev *dev, uint32_t lvl,
 }
 
 static int
+read_tm_reg(struct otx2_mbox *mbox, uint64_t reg,
+	    uint64_t *regval, uint32_t hw_lvl)
+{
+	volatile struct nix_txschq_config *req;
+	struct nix_txschq_config *rsp;
+	int rc;
+
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->read = 1;
+	req->lvl = hw_lvl;
+	req->reg[0] = reg;
+	req->num_regs = 1;
+
+	rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
+	if (rc)
+		return rc;
+	*regval = rsp->regval[0];
+	return 0;
+}
+
+/* Search for min rate in topology */
+static void
+nix_tm_shaper_profile_update_min(struct otx2_eth_dev *dev)
+{
+	struct otx2_nix_tm_shaper_profile *profile;
+	uint64_t rate_min = 1E9; /* 1 Gbps */
+
+	TAILQ_FOREACH(profile, &dev->shaper_profile_list, shaper) {
+		if (profile->params.peak.rate &&
+		    profile->params.peak.rate < rate_min)
+			rate_min = profile->params.peak.rate;
+
+		if (profile->params.committed.rate &&
+		    profile->params.committed.rate < rate_min)
+			rate_min = profile->params.committed.rate;
+	}
+
+	dev->tm_rate_min = rate_min;
+}
+
+static int
 nix_xmit_disable(struct rte_eth_dev *eth_dev)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
@@ -1772,6 +1813,145 @@ nix_xmit_disable(struct rte_eth_dev *eth_dev)
 }
 
 static int
+otx2_nix_tm_node_type_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			  int *is_leaf, struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node;
+
+	if (is_leaf == NULL) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		return -EINVAL;
+	}
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (node_id == RTE_TM_NODE_ID_NULL || !tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		return -EINVAL;
+	}
+	if (nix_tm_is_leaf(dev, tm_node->lvl))
+		*is_leaf = true;
+	else
+		*is_leaf = false;
+
+	return 0;
+}
+
+static int
+otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev,
+			       uint32_t profile_id,
+			       struct rte_tm_shaper_params *params,
+			       struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_shaper_profile *profile;
+
+	profile = nix_tm_shaper_profile_search(dev, profile_id);
+	if (profile) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "shaper profile ID exist";
+		return -EINVAL;
+	}
+
+	/* Committed rate and burst size can be enabled/disabled */
+	if (params->committed.size || params->committed.rate) {
+		if (params->committed.size < MIN_SHAPER_BURST ||
+		    params->committed.size > MAX_SHAPER_BURST) {
+			error->type =
+				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+			return -EINVAL;
+		} else if (!shaper_rate_to_nix(params->committed.rate * 8,
+					       NULL, NULL, NULL)) {
+			error->type =
+				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
+			error->message = "shaper committed rate invalid";
+			return -EINVAL;
+		}
+	}
+
+	/* Peak rate and burst size can be enabled/disabled */
+	if (params->peak.size || params->peak.rate) {
+		if (params->peak.size < MIN_SHAPER_BURST ||
+		    params->peak.size > MAX_SHAPER_BURST) {
+			error->type =
+				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+			return -EINVAL;
+		} else if (!shaper_rate_to_nix(params->peak.rate * 8,
+					       NULL, NULL, NULL)) {
+			error->type =
+				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE;
+			error->message = "shaper peak rate invalid";
+			return -EINVAL;
+		}
+	}
+
+	profile = rte_zmalloc("otx2_nix_tm_shaper_profile",
+			      sizeof(struct otx2_nix_tm_shaper_profile), 0);
+	if (!profile)
+		return -ENOMEM;
+
+	profile->shaper_profile_id = profile_id;
+	rte_memcpy(&profile->params, params,
+		   sizeof(struct rte_tm_shaper_params));
+	TAILQ_INSERT_TAIL(&dev->shaper_profile_list, profile, shaper);
+
+	otx2_tm_dbg("Added TM shaper profile %u, "
+		    " pir %" PRIu64 " , pbs %" PRIu64 ", cir %" PRIu64
+		    ", cbs %" PRIu64 " , adj %u",
+		    profile_id,
+		    params->peak.rate * 8,
+		    params->peak.size,
+		    params->committed.rate * 8,
+		    params->committed.size,
+		    params->pkt_length_adjust);
+
+	/* Translate rate as bits per second */
+	profile->params.peak.rate = profile->params.peak.rate * 8;
+	profile->params.committed.rate = profile->params.committed.rate * 8;
+	/* Always use PIR for single rate shaping */
+	if (!params->peak.rate && params->committed.rate) {
+		profile->params.peak = profile->params.committed;
+		memset(&profile->params.committed, 0,
+		       sizeof(profile->params.committed));
+	}
+
+	/* update min rate */
+	nix_tm_shaper_profile_update_min(dev);
+	return 0;
+}
+
+static int
+otx2_nix_tm_shaper_profile_delete(struct rte_eth_dev *eth_dev,
+				  uint32_t profile_id,
+				  struct rte_tm_error *error)
+{
+	struct otx2_nix_tm_shaper_profile *profile;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+	profile = nix_tm_shaper_profile_search(dev, profile_id);
+
+	if (!profile) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "shaper profile ID not exist";
+		return -EINVAL;
+	}
+
+	if (profile->reference_count) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+		error->message = "shaper profile in use";
+		return -EINVAL;
+	}
+
+	otx2_tm_dbg("Removing TM shaper profile %u", profile_id);
+	TAILQ_REMOVE(&dev->shaper_profile_list, profile, shaper);
+	rte_free(profile);
+
+	/* update min rate */
+	nix_tm_shaper_profile_update_min(dev);
+	return 0;
+}
+
+static int
 otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		     uint32_t parent_node_id, uint32_t priority,
 		     uint32_t weight, uint32_t lvl,
@@ -2057,12 +2237,104 @@ otx2_nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
 	return 0;
 }
 
+static int
+otx2_nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			    struct rte_tm_node_stats *stats,
+			    uint64_t *stats_mask, int clear,
+			    struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node;
+	uint64_t reg, val;
+	int64_t *addr;
+	int rc = 0;
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	/* Stats support only for leaf node or TL1 root */
+	if (nix_tm_is_leaf(dev, tm_node->lvl)) {
+		reg = (((uint64_t)tm_node->id) << 32);
+
+		/* Packets */
+		addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
+		val = otx2_atomic64_add_nosync(reg, addr);
+		if (val & OP_ERR)
+			val = 0;
+		stats->n_pkts = val - tm_node->last_pkts;
+
+		/* Bytes */
+		addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
+		val = otx2_atomic64_add_nosync(reg, addr);
+		if (val & OP_ERR)
+			val = 0;
+		stats->n_bytes = val - tm_node->last_bytes;
+
+		if (clear) {
+			tm_node->last_pkts = stats->n_pkts;
+			tm_node->last_bytes = stats->n_bytes;
+		}
+
+		*stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES;
+
+	} else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "stats read error";
+
+		/* RED Drop packets */
+		reg = NIX_AF_TL1X_DROPPED_PACKETS(tm_node->hw_id);
+		rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
+		if (rc)
+			goto exit;
+		stats->leaf.n_pkts_dropped[RTE_COLOR_RED] =
+						val - tm_node->last_pkts;
+
+		/* RED Drop bytes */
+		reg = NIX_AF_TL1X_DROPPED_BYTES(tm_node->hw_id);
+		rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
+		if (rc)
+			goto exit;
+		stats->leaf.n_bytes_dropped[RTE_COLOR_RED] =
+						val - tm_node->last_bytes;
+
+		/* Clear stats */
+		if (clear) {
+			tm_node->last_pkts =
+				stats->leaf.n_pkts_dropped[RTE_COLOR_RED];
+			tm_node->last_bytes =
+				stats->leaf.n_bytes_dropped[RTE_COLOR_RED];
+		}
+
+		*stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
+			RTE_TM_STATS_N_BYTES_RED_DROPPED;
+
+	} else {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "unsupported node";
+		rc = -EINVAL;
+	}
+
+exit:
+	return rc;
+}
+
 const struct rte_tm_ops otx2_tm_ops = {
+	.node_type_get = otx2_nix_tm_node_type_get,
+
+	.shaper_profile_add = otx2_nix_tm_shaper_profile_add,
+	.shaper_profile_delete = otx2_nix_tm_shaper_profile_delete,
+
 	.node_add = otx2_nix_tm_node_add,
 	.node_delete = otx2_nix_tm_node_delete,
 	.node_suspend = otx2_nix_tm_node_suspend,
 	.node_resume = otx2_nix_tm_node_resume,
 	.hierarchy_commit = otx2_nix_tm_hierarchy_commit,
+
+	.node_stats_read = otx2_nix_tm_node_stats_read,
 };
 
 static int
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index ebb4e90..20e2069 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -46,6 +46,10 @@ struct otx2_nix_tm_node {
 
 	struct otx2_nix_tm_node *parent;
 	struct rte_tm_node_params params;
+
+	/* Last stats */
+	uint64_t last_pkts;
+	uint64_t last_bytes;
 };
 
 struct otx2_nix_tm_shaper_profile {
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v3 08/11] net/octeontx2: add tm dynamic topology update cb
  2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                     ` (6 preceding siblings ...)
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 07/11] net/octeontx2: add tm stats and shaper profile cbs Nithin Dabilpuram
@ 2020-04-03  8:52   ` Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 09/11] net/octeontx2: add tm debug support Nithin Dabilpuram
                     ` (2 subsequent siblings)
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-03  8:52 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Add dynamic parent and shaper update callbacks that
can be used to change RR Quantum or PIR/CIR rate dynamically
post hierarchy commit. The dynamic parent update callback only
supports updating the RR quantum of a given child with respect to
its parent; there is no support yet for changing the priority or
the parent itself.
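
A sketch of the two update paths, where node_id, new_profile_id,
cur_parent_id, cur_priority and new_weight are placeholders for
values taken from an already committed hierarchy:

struct rte_tm_error error;

/* Swap the shaper profile (PIR/CIR) on a non-leaf node */
if (rte_tm_node_shaper_update(port_id, node_id, new_profile_id, &error))
	printf("shaper update failed: %s\n", error.message);

/* Parent and priority must match the current values; only the
 * DWRR weight (RR quantum) may change.
 */
if (rte_tm_node_parent_update(port_id, node_id, cur_parent_id,
			      cur_priority, new_weight, &error))
	printf("weight update failed: %s\n", error.message);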

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_tm.c | 190 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 190 insertions(+)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index 68771d1..d8e54ee 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -2238,6 +2238,194 @@ otx2_nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
 }
 
 static int
+otx2_nix_tm_node_shaper_update(struct rte_eth_dev *eth_dev,
+			       uint32_t node_id,
+			       uint32_t profile_id,
+			       struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_shaper_profile *profile = NULL;
+	struct otx2_mbox *mbox = dev->mbox;
+	struct otx2_nix_tm_node *tm_node;
+	struct nix_txschq_config *req;
+	uint8_t k;
+	int rc;
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node || nix_tm_is_leaf(dev, tm_node->lvl)) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "invalid node";
+		return -EINVAL;
+	}
+
+	if (profile_id == tm_node->params.shaper_profile_id)
+		return 0;
+
+	if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+		profile = nix_tm_shaper_profile_search(dev, profile_id);
+		if (!profile) {
+			error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+			error->message = "shaper profile ID not exist";
+			return -EINVAL;
+		}
+	}
+
+	tm_node->params.shaper_profile_id = profile_id;
+
+	/* Nothing to do if not yet committed */
+	if (!(dev->tm_flags & NIX_TM_COMMITTED))
+		return 0;
+
+	tm_node->flags &= ~NIX_TM_NODE_ENABLED;
+
+	/* Flush the specific node with SW_XOFF */
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = tm_node->hw_lvl;
+	k = prepare_tm_sw_xoff(tm_node, true, req->reg, req->regval);
+	req->num_regs = k;
+
+	rc = send_tm_reqval(mbox, req, error);
+	if (rc)
+		return rc;
+
+	/* Update the PIR/CIR and clear SW XOFF */
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = tm_node->hw_lvl;
+
+	k = prepare_tm_shaper_reg(tm_node, profile, req->reg, req->regval);
+
+	k += prepare_tm_sw_xoff(tm_node, false, &req->reg[k], &req->regval[k]);
+
+	req->num_regs = k;
+	rc = send_tm_reqval(mbox, req, error);
+	if (!rc)
+		tm_node->flags |= NIX_TM_NODE_ENABLED;
+	return rc;
+}
+
+static int
+otx2_nix_tm_node_parent_update(struct rte_eth_dev *eth_dev,
+			       uint32_t node_id, uint32_t new_parent_id,
+			       uint32_t priority, uint32_t weight,
+			       struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_node *tm_node, *sibling;
+	struct otx2_nix_tm_node *new_parent;
+	struct nix_txschq_config *req;
+	uint8_t k;
+	int rc;
+
+	if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "hierarchy doesn't exist";
+		return -EINVAL;
+	}
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	/* Parent id valid only for non root nodes */
+	if (tm_node->hw_lvl != dev->otx2_tm_root_lvl) {
+		new_parent = nix_tm_node_search(dev, new_parent_id, true);
+		if (!new_parent) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+			error->message = "no such parent node";
+			return -EINVAL;
+		}
+
+		/* Current support is only for dynamic weight update */
+		if (tm_node->parent != new_parent ||
+		    tm_node->priority != priority) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+			error->message = "only weight update supported";
+			return -EINVAL;
+		}
+	}
+
+	/* Skip if no change */
+	if (tm_node->weight == weight)
+		return 0;
+
+	tm_node->weight = weight;
+
+	/* For leaf nodes, SQ CTX needs update */
+	if (nix_tm_is_leaf(dev, tm_node->lvl)) {
+		/* Update SQ quantum data on the fly */
+		rc = nix_sq_sched_data(dev, tm_node, true);
+		if (rc) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			error->message = "sq sched data update failed";
+			return rc;
+		}
+	} else {
+		/* XOFF Parent node */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->parent->hw_lvl;
+		req->num_regs = prepare_tm_sw_xoff(tm_node->parent, true,
+						   req->reg, req->regval);
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+
+		/* XOFF this node and all other siblings */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->hw_lvl;
+
+		k = 0;
+		TAILQ_FOREACH(sibling, &dev->node_list, node) {
+			if (sibling->parent != tm_node->parent)
+				continue;
+			k += prepare_tm_sw_xoff(sibling, true, &req->reg[k],
+						&req->regval[k]);
+		}
+		req->num_regs = k;
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+
+		/* Update new weight for current node */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->hw_lvl;
+		req->num_regs = prepare_tm_sched_reg(dev, tm_node,
+						     req->reg, req->regval);
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+
+		/* XON this node and all other siblings */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->hw_lvl;
+
+		k = 0;
+		TAILQ_FOREACH(sibling, &dev->node_list, node) {
+			if (sibling->parent != tm_node->parent)
+				continue;
+			k += prepare_tm_sw_xoff(sibling, false, &req->reg[k],
+						&req->regval[k]);
+		}
+		req->num_regs = k;
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+
+		/* XON Parent node */
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->lvl = tm_node->parent->hw_lvl;
+		req->num_regs = prepare_tm_sw_xoff(tm_node->parent, false,
+						   req->reg, req->regval);
+		rc = send_tm_reqval(dev->mbox, req, error);
+		if (rc)
+			return rc;
+	}
+	return 0;
+}
+
+static int
 otx2_nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,
 			    struct rte_tm_node_stats *stats,
 			    uint64_t *stats_mask, int clear,
@@ -2334,6 +2522,8 @@ const struct rte_tm_ops otx2_tm_ops = {
 	.node_resume = otx2_nix_tm_node_resume,
 	.hierarchy_commit = otx2_nix_tm_hierarchy_commit,
 
+	.node_shaper_update = otx2_nix_tm_node_shaper_update,
+	.node_parent_update = otx2_nix_tm_node_parent_update,
 	.node_stats_read = otx2_nix_tm_node_stats_read,
 };
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v3 09/11] net/octeontx2: add tm debug support
  2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                     ` (7 preceding siblings ...)
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 08/11] net/octeontx2: add tm dynamic topology update cb Nithin Dabilpuram
@ 2020-04-03  8:52   ` Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 10/11] net/octeontx2: add Tx queue ratelimit callback Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 11/11] net/octeontx2: add tm capability callbacks Nithin Dabilpuram
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-03  8:52 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Nithin Dabilpuram <ndabilpuram@marvell.com>

Add debug support to TM to dump the configured topology
and registers. Also trigger the debug dump when an SQ flush fails.
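
Besides the automatic dump on flush failure, the helper can be
invoked directly from driver-side debug code; a sketch, assuming an
eth_dev pointer is at hand:

/* Dump the TM hierarchy and registers of a port from within the PMD */
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);

otx2_nix_tm_dump(dev);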

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 drivers/net/octeontx2/otx2_ethdev.h       |   1 +
 drivers/net/octeontx2/otx2_ethdev_debug.c | 311 ++++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.c           |   9 +-
 drivers/net/octeontx2/otx2_tm.h           |   1 +
 4 files changed, 318 insertions(+), 4 deletions(-)

diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 6679652..0ef90ce 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -459,6 +459,7 @@ int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
 			 struct rte_dev_reg_info *regs);
 int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
 void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
+void otx2_nix_tm_dump(struct otx2_eth_dev *dev);
 
 /* Stats */
 int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
index c8b4cd5..6d951bc 100644
--- a/drivers/net/octeontx2/otx2_ethdev_debug.c
+++ b/drivers/net/octeontx2/otx2_ethdev_debug.c
@@ -6,6 +6,7 @@
 
 #define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
 #define NIX_REG_INFO(reg) {reg, #reg}
+#define NIX_REG_NAME_SZ 48
 
 struct nix_lf_reg_info {
 	uint32_t offset;
@@ -390,9 +391,14 @@ otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev)
 	int rc, q, rq = eth_dev->data->nb_rx_queues;
 	int sq = eth_dev->data->nb_tx_queues;
 	struct otx2_mbox *mbox = dev->mbox;
+	struct npa_aq_enq_rsp *npa_rsp;
+	struct npa_aq_enq_req *npa_aq;
+	struct otx2_npa_lf *npa_lf;
 	struct nix_aq_enq_rsp *rsp;
 	struct nix_aq_enq_req *aq;
 
+	npa_lf = otx2_npa_lf_obj_get();
+
 	for (q = 0; q < rq; q++) {
 		aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
 		aq->qidx = q;
@@ -438,6 +444,36 @@ otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev)
 		nix_dump("============== port=%d sq=%d ===============",
 			 eth_dev->data->port_id, q);
 		nix_lf_sq_dump(&rsp->sq);
+
+		if (!npa_lf) {
+			otx2_err("NPA LF doesn't exist");
+			continue;
+		}
+
+		/* Dump SQB Aura minimal info */
+		npa_aq = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+		npa_aq->aura_id = rsp->sq.sqb_aura;
+		npa_aq->ctype = NPA_AQ_CTYPE_AURA;
+		npa_aq->op = NPA_AQ_INSTOP_READ;
+
+		rc = otx2_mbox_process_msg(npa_lf->mbox, (void *)&npa_rsp);
+		if (rc) {
+			otx2_err("Failed to get sq's sqb_aura context");
+			continue;
+		}
+
+		nix_dump("\nSQB Aura W0: Pool addr\t\t0x%"PRIx64"",
+			 npa_rsp->aura.pool_addr);
+		nix_dump("SQB Aura W1: ena\t\t\t%d",
+			 npa_rsp->aura.ena);
+		nix_dump("SQB Aura W2: count\t\t%"PRIx64"",
+			 (uint64_t)npa_rsp->aura.count);
+		nix_dump("SQB Aura W3: limit\t\t%"PRIx64"",
+			 (uint64_t)npa_rsp->aura.limit);
+		nix_dump("SQB Aura W3: fc_ena\t\t%d",
+			 npa_rsp->aura.fc_ena);
+		nix_dump("SQB Aura W4: fc_addr\t\t0x%"PRIx64"\n",
+			 npa_rsp->aura.fc_addr);
 	}
 
 fail:
@@ -498,3 +534,278 @@ otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
 	nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
 		 rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
 }
+
+static uint8_t
+prepare_nix_tm_reg_dump(uint16_t hw_lvl, uint16_t schq, uint16_t link,
+			uint64_t *reg, char regstr[][NIX_REG_NAME_SZ])
+{
+	uint8_t k = 0;
+
+	switch (hw_lvl) {
+	case NIX_TXSCH_LVL_SMQ:
+		reg[k] = NIX_AF_SMQX_CFG(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_SMQ[%u]_CFG", schq);
+
+		reg[k] = NIX_AF_MDQX_PARENT(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_PARENT", schq);
+
+		reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_MDQX_PIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_PIR", schq);
+
+		reg[k] = NIX_AF_MDQX_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_MDQX_SHAPE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_SHAPE", schq);
+
+		reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_MDQ[%u]_SW_XOFF", schq);
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		reg[k] = NIX_AF_TL4X_PARENT(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_PARENT", schq);
+
+		reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_TOPOLOGY", schq);
+
+		reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_SDP_LINK_CFG", schq);
+
+		reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_TL4X_PIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_PIR", schq);
+
+		reg[k] = NIX_AF_TL4X_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_TL4X_SHAPE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_SHAPE", schq);
+
+		reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL4[%u]_SW_XOFF", schq);
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		reg[k] = NIX_AF_TL3X_PARENT(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_PARENT", schq);
+
+		reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_TOPOLOGY", schq);
+
+		reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
+
+		reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_TL3X_PIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_PIR", schq);
+
+		reg[k] = NIX_AF_TL3X_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_TL3X_SHAPE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_SHAPE", schq);
+
+		reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3[%u]_SW_XOFF", schq);
+		break;
+	case NIX_TXSCH_LVL_TL2:
+		reg[k] = NIX_AF_TL2X_PARENT(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_PARENT", schq);
+
+		reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_TOPOLOGY", schq);
+
+		reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
+
+		reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_TL2X_PIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_PIR", schq);
+
+		reg[k] = NIX_AF_TL2X_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_TL2X_SHAPE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_SHAPE", schq);
+
+		reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL2[%u]_SW_XOFF", schq);
+		break;
+	case NIX_TXSCH_LVL_TL1:
+
+		reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_TOPOLOGY", schq);
+
+		reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_SCHEDULE", schq);
+
+		reg[k] = NIX_AF_TL1X_CIR(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_CIR", schq);
+
+		reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_SW_XOFF", schq);
+
+		reg[k] = NIX_AF_TL1X_DROPPED_PACKETS(schq);
+		snprintf(regstr[k++], NIX_REG_NAME_SZ,
+			 "NIX_AF_TL1[%u]_DROPPED_PACKETS", schq);
+		break;
+	default:
+		break;
+	}
+
+	if (k > MAX_REGS_PER_MBOX_MSG) {
+		nix_dump("\t!!!NIX TM Registers request overflow!!!");
+		return 0;
+	}
+	return k;
+}
+
+/* Dump TM hierarchy and registers */
+void
+otx2_nix_tm_dump(struct otx2_eth_dev *dev)
+{
+	char regstr[MAX_REGS_PER_MBOX_MSG * 2][NIX_REG_NAME_SZ];
+	struct otx2_nix_tm_node *tm_node, *root_node, *parent;
+	uint64_t reg[MAX_REGS_PER_MBOX_MSG * 2];
+	struct nix_txschq_config *req;
+	const char *lvlstr, *parent_lvlstr;
+	struct nix_txschq_config *rsp;
+	uint32_t schq, parent_schq;
+	int hw_lvl, j, k, rc;
+
+	nix_dump("===TM hierarchy and registers dump of %s===",
+		 dev->eth_dev->data->name);
+
+	root_node = NULL;
+
+	for (hw_lvl = 0; hw_lvl <= NIX_TXSCH_LVL_CNT; hw_lvl++) {
+		TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+			if (tm_node->hw_lvl != hw_lvl)
+				continue;
+
+			parent = tm_node->parent;
+			if (hw_lvl == NIX_TXSCH_LVL_CNT) {
+				lvlstr = "SQ";
+				schq = tm_node->id;
+			} else {
+				lvlstr = nix_hwlvl2str(tm_node->hw_lvl);
+				schq = tm_node->hw_id;
+			}
+
+			if (parent) {
+				parent_schq = parent->hw_id;
+				parent_lvlstr =
+					nix_hwlvl2str(parent->hw_lvl);
+			} else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
+				parent_schq = otx2_nix_get_link(dev);
+				parent_lvlstr = "LINK";
+			} else {
+				parent_schq = tm_node->parent_hw_id;
+				parent_lvlstr =
+					nix_hwlvl2str(tm_node->hw_lvl + 1);
+			}
+
+			nix_dump("%s_%d->%s_%d", lvlstr, schq,
+				 parent_lvlstr, parent_schq);
+
+			if (!(tm_node->flags & NIX_TM_NODE_HWRES))
+				continue;
+
+			/* Need to dump TL1 when root is TL2 */
+			if (tm_node->hw_lvl == dev->otx2_tm_root_lvl)
+				root_node = tm_node;
+
+			/* Dump registers only when HWRES is present */
+			k = prepare_nix_tm_reg_dump(tm_node->hw_lvl, schq,
+						    otx2_nix_get_link(dev), reg,
+						    regstr);
+			if (!k)
+				continue;
+
+			req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+			req->read = 1;
+			req->lvl = tm_node->hw_lvl;
+			req->num_regs = k;
+			otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
+			rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
+			if (!rc) {
+				for (j = 0; j < k; j++)
+					nix_dump("\t%s=0x%016"PRIx64,
+						 regstr[j], rsp->regval[j]);
+			} else {
+				nix_dump("\t!!!Failed to dump registers!!!");
+			}
+		}
+		nix_dump("\n");
+	}
+
+	/* Dump TL1 node data when root level is TL2 */
+	if (root_node && root_node->hw_lvl == NIX_TXSCH_LVL_TL2) {
+		k = prepare_nix_tm_reg_dump(NIX_TXSCH_LVL_TL1,
+					    root_node->parent_hw_id,
+					    otx2_nix_get_link(dev),
+					    reg, regstr);
+		if (!k)
+			return;
+
+		req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
+		req->read = 1;
+		req->lvl = NIX_TXSCH_LVL_TL1;
+		req->num_regs = k;
+		otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
+		rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
+		if (!rc) {
+			for (j = 0; j < k; j++)
+				nix_dump("\t%s=0x%016"PRIx64,
+					 regstr[j], rsp->regval[j]);
+		} else {
+			nix_dump("\t!!!Failed to dump registers!!!");
+		}
+	}
+
+	otx2_nix_queues_ctx_dump(dev->eth_dev);
+}
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index d8e54ee..c235c00 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -28,8 +28,8 @@ uint64_t shaper2regval(struct shaper_params *shaper)
 		(shaper->mantissa << 1);
 }
 
-static int
-nix_get_link(struct otx2_eth_dev *dev)
+int
+otx2_nix_get_link(struct otx2_eth_dev *dev)
 {
 	int link = 13 /* SDP */;
 	uint16_t lmac_chan;
@@ -574,7 +574,7 @@ populate_tm_reg(struct otx2_eth_dev *dev,
 		if (!otx2_dev_is_sdp(dev) &&
 		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
 			reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
-						nix_get_link(dev));
+						otx2_nix_get_link(dev));
 			regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
 			k++;
 		}
@@ -594,7 +594,7 @@ populate_tm_reg(struct otx2_eth_dev *dev,
 		if (!otx2_dev_is_sdp(dev) &&
 		    dev->link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
 			reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
-						nix_get_link(dev));
+						otx2_nix_get_link(dev));
 			regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
 			k++;
 		}
@@ -990,6 +990,7 @@ nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
 
 	return 0;
 exit:
+	otx2_nix_tm_dump(dev);
 	return -EFAULT;
 }
 
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 20e2069..d5d58ec 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -23,6 +23,7 @@ int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 int otx2_nix_sq_flush_pre(void *_txq, bool dev_started);
 int otx2_nix_sq_flush_post(void *_txq);
 int otx2_nix_sq_enable(void *_txq);
+int otx2_nix_get_link(struct otx2_eth_dev *dev);
 int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable);
 
 struct otx2_nix_tm_node {
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v3 10/11] net/octeontx2: add Tx queue ratelimit callback
  2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                     ` (8 preceding siblings ...)
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 09/11] net/octeontx2: add tm debug support Nithin Dabilpuram
@ 2020-04-03  8:52   ` Nithin Dabilpuram
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 11/11] net/octeontx2: add tm capability callbacks Nithin Dabilpuram
  10 siblings, 0 replies; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-03  8:52 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K; +Cc: dev, kkanas

From: Krzysztof Kanas <kkanas@marvell.com>

Add Tx queue ratelimiting support. This support is mutually
exclusive with TM support, i.e., when TM is configured, the Tx
queue ratelimit config is no longer valid.
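
A minimal usage sketch through the generic ethdev API (the rate
argument is in Mbps; a rate of 0 suspends the queue's MDQ via
SW_XOFF, as implemented below):

int rc;

/* Cap Tx queue 0 of port_id at 1 Gbps */
rc = rte_eth_set_queue_rate_limit(port_id, 0, 1000);
if (rc)
	printf("rate limit failed, rc=%d\n", rc);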

Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/octeontx2/otx2_ethdev.c |   1 +
 drivers/net/octeontx2/otx2_tm.c     | 241 +++++++++++++++++++++++++++++++++++-
 drivers/net/octeontx2/otx2_tm.h     |   3 +
 3 files changed, 243 insertions(+), 2 deletions(-)

diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 6896797..78b7f3a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2071,6 +2071,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
 	.rx_descriptor_status     = otx2_nix_rx_descriptor_status,
 	.tx_descriptor_status     = otx2_nix_tx_descriptor_status,
 	.tx_done_cleanup          = otx2_nix_tx_done_cleanup,
+	.set_queue_rate_limit     = otx2_nix_tm_set_queue_rate_limit,
 	.pool_ops_supported       = otx2_nix_pool_ops_supported,
 	.filter_ctrl              = otx2_nix_dev_filter_ctrl,
 	.get_module_info          = otx2_nix_get_module_info,
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index c235c00..c7b1f1f 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -2204,14 +2204,15 @@ otx2_nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Delete default/ratelimit tree */
-	if (dev->tm_flags & (NIX_TM_DEFAULT_TREE)) {
+	if (dev->tm_flags & (NIX_TM_DEFAULT_TREE | NIX_TM_RATE_LIMIT_TREE)) {
 		rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER, 0, false);
 		if (rc) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			error->message = "failed to free default resources";
 			return rc;
 		}
-		dev->tm_flags &= ~(NIX_TM_DEFAULT_TREE);
+		dev->tm_flags &= ~(NIX_TM_DEFAULT_TREE |
+				   NIX_TM_RATE_LIMIT_TREE);
 	}
 
 	/* Free up user alloc'ed resources */
@@ -2673,6 +2674,242 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int
+nix_tm_prepare_rate_limited_tree(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint32_t def = eth_dev->data->nb_tx_queues;
+	struct rte_tm_node_params params;
+	uint32_t leaf_parent, i, rc = 0;
+
+	memset(&params, 0, sizeof(params));
+
+	if (nix_tm_have_tl1_access(dev)) {
+		dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
+		rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+					DEFAULT_RR_WEIGHT,
+					NIX_TXSCH_LVL_TL1,
+					OTX2_TM_LVL_ROOT, false, &params);
+		if (rc)
+			goto error;
+		rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+					DEFAULT_RR_WEIGHT,
+					NIX_TXSCH_LVL_TL2,
+					OTX2_TM_LVL_SCH1, false, &params);
+		if (rc)
+			goto error;
+		rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+					DEFAULT_RR_WEIGHT,
+					NIX_TXSCH_LVL_TL3,
+					OTX2_TM_LVL_SCH2, false, &params);
+		if (rc)
+			goto error;
+		rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
+					DEFAULT_RR_WEIGHT,
+					NIX_TXSCH_LVL_TL4,
+					OTX2_TM_LVL_SCH3, false, &params);
+		if (rc)
+			goto error;
+		leaf_parent = def + 3;
+
+		/* Add per queue SMQ nodes */
+		for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+			rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
+						leaf_parent,
+						0, DEFAULT_RR_WEIGHT,
+						NIX_TXSCH_LVL_SMQ,
+						OTX2_TM_LVL_SCH4,
+						false, &params);
+			if (rc)
+				goto error;
+		}
+
+		/* Add leaf nodes */
+		for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+			rc = nix_tm_node_add_to_list(dev, i,
+						     leaf_parent + 1 + i, 0,
+						     DEFAULT_RR_WEIGHT,
+						     NIX_TXSCH_LVL_CNT,
+						     OTX2_TM_LVL_QUEUE,
+						     false, &params);
+			if (rc)
+				goto error;
+		}
+
+		return 0;
+	}
+
+	dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
+	rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+				DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL2,
+				OTX2_TM_LVL_ROOT, false, &params);
+	if (rc)
+		goto error;
+	rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+				DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL3,
+				OTX2_TM_LVL_SCH1, false, &params);
+	if (rc)
+		goto error;
+	rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+				     DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL4,
+				     OTX2_TM_LVL_SCH2, false, &params);
+	if (rc)
+		goto error;
+	leaf_parent = def + 2;
+
+	/* Add per queue SMQ nodes */
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+		rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
+					     leaf_parent,
+					     0, DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_SMQ,
+					     OTX2_TM_LVL_SCH3,
+					     false, &params);
+		if (rc)
+			goto error;
+	}
+
+	/* Add leaf nodes */
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+		rc = nix_tm_node_add_to_list(dev, i, leaf_parent + 1 + i, 0,
+					     DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_CNT,
+					     OTX2_TM_LVL_SCH4,
+					     false, &params);
+		if (rc)
+			break;
+	}
+error:
+	return rc;
+}
+
+static int
+otx2_nix_tm_rate_limit_mdq(struct rte_eth_dev *eth_dev,
+			   struct otx2_nix_tm_node *tm_node,
+			   uint64_t tx_rate)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_shaper_profile profile;
+	struct otx2_mbox *mbox = dev->mbox;
+	volatile uint64_t *reg, *regval;
+	struct nix_txschq_config *req;
+	uint16_t flags;
+	uint8_t k = 0;
+	int rc;
+
+	flags = tm_node->flags;
+
+	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
+	req->lvl = NIX_TXSCH_LVL_MDQ;
+	reg = req->reg;
+	regval = req->regval;
+
+	if (tx_rate == 0) {
+		k += prepare_tm_sw_xoff(tm_node, true, &reg[k], &regval[k]);
+		flags &= ~NIX_TM_NODE_ENABLED;
+		goto exit;
+	}
+
+	if (!(flags & NIX_TM_NODE_ENABLED)) {
+		k += prepare_tm_sw_xoff(tm_node, false, &reg[k], &regval[k]);
+		flags |= NIX_TM_NODE_ENABLED;
+	}
+
+	/* Use only PIR for rate limit */
+	memset(&profile, 0, sizeof(profile));
+	profile.params.peak.rate = tx_rate;
+	/* Minimum burst: ~4us worth of bytes at the given Tx rate */
+	profile.params.peak.size = RTE_MAX(NIX_MAX_HW_FRS,
+					   (4ull * tx_rate) / (1E6 * 8));
+	if (!dev->tm_rate_min || dev->tm_rate_min > tx_rate)
+		dev->tm_rate_min = tx_rate;
+
+	k += prepare_tm_shaper_reg(tm_node, &profile, &reg[k], &regval[k]);
+exit:
+	req->num_regs = k;
+	rc = otx2_mbox_process(mbox);
+	if (rc)
+		return rc;
+
+	tm_node->flags = flags;
+	return 0;
+}
+
+int
+otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
+				uint16_t queue_idx, uint16_t tx_rate_mbps)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint64_t tx_rate = tx_rate_mbps * (uint64_t)1E6;
+	struct otx2_nix_tm_node *tm_node;
+	int rc;
+
+	/* Check for supported revisions */
+	if (otx2_dev_is_95xx_Ax(dev) ||
+	    otx2_dev_is_96xx_Ax(dev))
+		return -EINVAL;
+
+	if (queue_idx >= eth_dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	if (!(dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
+	    !(dev->tm_flags & NIX_TM_RATE_LIMIT_TREE))
+		goto error;
+
+	if ((dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
+	    eth_dev->data->nb_tx_queues > 1) {
+		/* For TM topology change ethdev needs to be stopped */
+		if (eth_dev->data->dev_started)
+			return -EBUSY;
+
+		/*
+		 * Disable transmit; it will be re-enabled
+		 * once the new topology is in place.
+		 */
+		rc = nix_xmit_disable(eth_dev);
+		if (rc) {
+			otx2_err("failed to disable TX, rc=%d", rc);
+			return -EIO;
+		}
+
+		rc = nix_tm_free_resources(dev, 0, 0, false);
+		if (rc < 0) {
+			otx2_tm_dbg("failed to free default resources, rc %d",
+				   rc);
+			return -EIO;
+		}
+
+		rc = nix_tm_prepare_rate_limited_tree(eth_dev);
+		if (rc < 0) {
+			otx2_tm_dbg("failed to prepare tm tree, rc=%d", rc);
+			return rc;
+		}
+
+		rc = nix_tm_alloc_resources(eth_dev, true);
+		if (rc != 0) {
+			otx2_tm_dbg("failed to allocate tm tree, rc=%d", rc);
+			return rc;
+		}
+
+		dev->tm_flags &= ~NIX_TM_DEFAULT_TREE;
+		dev->tm_flags |= NIX_TM_RATE_LIMIT_TREE;
+	}
+
+	tm_node = nix_tm_node_search(dev, queue_idx, false);
+
+	/* check if we found a valid leaf node */
+	if (!tm_node ||
+	    !nix_tm_is_leaf(dev, tm_node->lvl) ||
+	    !tm_node->parent ||
+	    tm_node->parent->hw_id == UINT32_MAX)
+		return -EIO;
+
+	return otx2_nix_tm_rate_limit_mdq(eth_dev, tm_node->parent, tx_rate);
+error:
+	otx2_tm_dbg("Unsupported TM tree 0x%0x", dev->tm_flags);
+	return -EINVAL;
+}
+
 int
 otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
 {
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index d5d58ec..7b1672e 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -11,6 +11,7 @@
 
 #define NIX_TM_DEFAULT_TREE	BIT_ULL(0)
 #define NIX_TM_COMMITTED	BIT_ULL(1)
+#define NIX_TM_RATE_LIMIT_TREE	BIT_ULL(2)
 #define NIX_TM_TL1_NO_SP	BIT_ULL(3)
 
 struct otx2_eth_dev;
@@ -20,6 +21,8 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 			      uint32_t *rr_quantum, uint16_t *smq);
+int otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
+				     uint16_t queue_idx, uint16_t tx_rate);
 int otx2_nix_sq_flush_pre(void *_txq, bool dev_started);
 int otx2_nix_sq_flush_post(void *_txq);
 int otx2_nix_sq_enable(void *_txq);
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [dpdk-dev] [PATCH v3 11/11] net/octeontx2: add tm capability callbacks
  2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
                     ` (9 preceding siblings ...)
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 10/11] net/octeontx2: add Tx queue ratelimit callback Nithin Dabilpuram
@ 2020-04-03  8:52   ` Nithin Dabilpuram
  2020-04-06  5:48     ` Jerin Jacob
  10 siblings, 1 reply; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-03  8:52 UTC (permalink / raw)
  To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, John McNamara,
	Marko Kovacevic
  Cc: dev, kkanas

From: Krzysztof Kanas <kkanas@marvell.com>

Add Traffic Management capability callbacks to provide
global, level and node capabilities. This patch also
adds documentation on Traffic Management Support.
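
A minimal sketch of querying these capabilities from an application
(example only, not part of this patch; assumes the port's PMD
implements the rte_tm ops):

#include <stdio.h>
#include <string.h>
#include <rte_tm.h>

/* Print a couple of the reported global capabilities for 'port'. */
static void example_print_tm_caps(uint16_t port)
{
	struct rte_tm_capabilities cap;
	struct rte_tm_error tm_err;

	memset(&cap, 0, sizeof(cap));
	if (rte_tm_capabilities_get(port, &cap, &tm_err) == 0)
		printf("max nodes: %u, max levels: %u\n",
		       cap.n_nodes_max, cap.n_levels_max);
}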

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
---
 doc/guides/nics/features/octeontx2.ini |   1 +
 doc/guides/nics/octeontx2.rst          |  15 +++
 doc/guides/rel_notes/release_20_05.rst |   8 ++
 drivers/net/octeontx2/otx2_ethdev.c    |   1 +
 drivers/net/octeontx2/otx2_tm.c        | 232 +++++++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.h        |   1 +
 6 files changed, 258 insertions(+)

diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 473fe56..fb13517 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -31,6 +31,7 @@ Inline protocol      = Y
 VLAN filter          = Y
 Flow control         = Y
 Flow API             = Y
+Rate limitation      = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 VLAN offload         = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 60187ec..6b885d6 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -39,6 +39,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
 - HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
 - Support Rx interrupt
 - Inline IPsec processing support
+- :ref:`Traffic Management API <tmapi>`
 
 Prerequisites
 -------------
@@ -213,6 +214,20 @@ Runtime Config Options
    parameters to all the PCIe devices if application requires to configure on
    all the ethdev ports.
 
+Traffic Management API
+----------------------
+
+OCTEON TX2 PMD supports the generic DPDK Traffic Management API, which allows
+configuring the following features:
+
+1. Hierarchical scheduling
+2. Single rate - two color, Two rate - three color shaping
+
+Both DWRR and Static Priority (SP) hierarchical scheduling are supported.
+Every parent can have at most 10 SP children and unlimited DWRR children.
+Both PF and VF support the traffic management API, with the PF supporting
+6 levels and the VF supporting 5 levels of topology.
+
 Limitations
 -----------
 
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index 000bbf5..47a9825 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -62,6 +62,14 @@ New Features
 
   * Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
 
+* **Updated Marvell OCTEON TX2 ethdev driver.**
+
+ Updated Marvell OCTEON TX2 ethdev driver with traffic manager support,
+ including the below features.
+
+ * Hierarchical scheduling with DWRR and SP.
+ * Single rate - two color, Two rate - three color shaping.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 78b7f3a..599a14c 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2026,6 +2026,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
 	.link_update              = otx2_nix_link_update,
 	.tx_queue_setup           = otx2_nix_tx_queue_setup,
 	.tx_queue_release         = otx2_nix_tx_queue_release,
+	.tm_ops_get               = otx2_nix_tm_ops_get,
 	.rx_queue_setup           = otx2_nix_rx_queue_setup,
 	.rx_queue_release         = otx2_nix_rx_queue_release,
 	.dev_start                = otx2_nix_dev_start,
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index c7b1f1f..e6c0b59 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1834,7 +1834,217 @@ otx2_nix_tm_node_type_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		*is_leaf = true;
 	else
 		*is_leaf = false;
+	return 0;
+}
 
+static int
+otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev,
+		     struct rte_tm_capabilities *cap,
+		     struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	int rc, max_nr_nodes = 0, i;
+	struct free_rsrcs_rsp *rsp;
+
+	memset(cap, 0, sizeof(*cap));
+
+	otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+		return rc;
+	}
+
+	for (i = 0; i < NIX_TXSCH_LVL_TL1; i++)
+		max_nr_nodes += rsp->schq[i];
+
+	cap->n_nodes_max = max_nr_nodes + dev->tm_leaf_cnt;
+	/* TL1 level is reserved for PF */
+	cap->n_levels_max = nix_tm_have_tl1_access(dev) ?
+				OTX2_TM_LVL_MAX : OTX2_TM_LVL_MAX - 1;
+	cap->non_leaf_nodes_identical = 1;
+	cap->leaf_nodes_identical = 1;
+
+	/* Shaper Capabilities */
+	cap->shaper_private_n_max = max_nr_nodes;
+	cap->shaper_n_max = max_nr_nodes;
+	cap->shaper_private_dual_rate_n_max = max_nr_nodes;
+	cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+	cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+	cap->shaper_pkt_length_adjust_min = 0;
+	cap->shaper_pkt_length_adjust_max = 0;
+
+	/* Schedule Capabilities */
+	cap->sched_n_children_max = rsp->schq[NIX_TXSCH_LVL_MDQ];
+	cap->sched_sp_n_priorities_max = TXSCH_TLX_SP_PRIO_MAX;
+	cap->sched_wfq_n_children_per_group_max = cap->sched_n_children_max;
+	cap->sched_wfq_n_groups_max = 1;
+	cap->sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+
+	cap->dynamic_update_mask =
+		RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL |
+		RTE_TM_UPDATE_NODE_SUSPEND_RESUME;
+	cap->stats_mask =
+		RTE_TM_STATS_N_PKTS |
+		RTE_TM_STATS_N_BYTES |
+		RTE_TM_STATS_N_PKTS_RED_DROPPED |
+		RTE_TM_STATS_N_BYTES_RED_DROPPED;
+
+	for (i = 0; i < RTE_COLORS; i++) {
+		cap->mark_vlan_dei_supported[i] = false;
+		cap->mark_ip_ecn_tcp_supported[i] = false;
+		cap->mark_ip_dscp_supported[i] = false;
+	}
+
+	return 0;
+}
+
+static int
+otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
+				   struct rte_tm_level_capabilities *cap,
+				   struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct free_rsrcs_rsp *rsp;
+	uint16_t hw_lvl;
+	int rc;
+
+	memset(cap, 0, sizeof(*cap));
+
+	otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+		return rc;
+	}
+
+	hw_lvl = nix_tm_lvl2nix(dev, lvl);
+
+	if (nix_tm_is_leaf(dev, lvl)) {
+		/* Leaf */
+		cap->n_nodes_max = dev->tm_leaf_cnt;
+		cap->n_nodes_leaf_max = dev->tm_leaf_cnt;
+		cap->leaf_nodes_identical = 1;
+		cap->leaf.stats_mask =
+			RTE_TM_STATS_N_PKTS |
+			RTE_TM_STATS_N_BYTES;
+
+	} else if (lvl == OTX2_TM_LVL_ROOT) {
+		/* Root node, aka TL2(vf)/TL1(pf) */
+		cap->n_nodes_max = 1;
+		cap->n_nodes_nonleaf_max = 1;
+		cap->non_leaf_nodes_identical = 1;
+
+		cap->nonleaf.shaper_private_supported = true;
+		cap->nonleaf.shaper_private_dual_rate_supported =
+			nix_tm_have_tl1_access(dev) ? false : true;
+		cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+		cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+
+		cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
+		cap->nonleaf.sched_sp_n_priorities_max =
+					nix_max_prio(dev, hw_lvl) + 1;
+		cap->nonleaf.sched_wfq_n_groups_max = 1;
+		cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+
+		if (nix_tm_have_tl1_access(dev))
+			cap->nonleaf.stats_mask =
+				RTE_TM_STATS_N_PKTS_RED_DROPPED |
+				RTE_TM_STATS_N_BYTES_RED_DROPPED;
+	} else if ((lvl < OTX2_TM_LVL_MAX) &&
+		   (hw_lvl < NIX_TXSCH_LVL_CNT)) {
+		/* TL2, TL3, TL4, MDQ */
+		cap->n_nodes_max = rsp->schq[hw_lvl];
+		cap->n_nodes_nonleaf_max = cap->n_nodes_max;
+		cap->non_leaf_nodes_identical = 1;
+
+		cap->nonleaf.shaper_private_supported = true;
+		cap->nonleaf.shaper_private_dual_rate_supported = true;
+		cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+		cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+
+		/* MDQ doesn't support Strict Priority */
+		if (hw_lvl == NIX_TXSCH_LVL_MDQ)
+			cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
+		else
+			cap->nonleaf.sched_n_children_max =
+				rsp->schq[hw_lvl - 1];
+		cap->nonleaf.sched_sp_n_priorities_max =
+			nix_max_prio(dev, hw_lvl) + 1;
+		cap->nonleaf.sched_wfq_n_groups_max = 1;
+		cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+	} else {
+		/* unsupported level */
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int
+otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			  struct rte_tm_node_capabilities *cap,
+			  struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct otx2_nix_tm_node *tm_node;
+	struct free_rsrcs_rsp *rsp;
+	int rc, hw_lvl, lvl;
+
+	memset(cap, 0, sizeof(*cap));
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	hw_lvl = tm_node->hw_lvl;
+	lvl = tm_node->lvl;
+
+	/* Leaf node */
+	if (nix_tm_is_leaf(dev, lvl)) {
+		cap->stats_mask = RTE_TM_STATS_N_PKTS |
+					RTE_TM_STATS_N_BYTES;
+		return 0;
+	}
+
+	otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+		return rc;
+	}
+
+	/* Non Leaf Shaper */
+	cap->shaper_private_supported = true;
+	cap->shaper_private_dual_rate_supported =
+		(hw_lvl == NIX_TXSCH_LVL_TL1) ? false : true;
+	cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+	cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+
+	/* Non Leaf Scheduler */
+	if (hw_lvl == NIX_TXSCH_LVL_MDQ)
+		cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
+	else
+		cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
+
+	cap->nonleaf.sched_sp_n_priorities_max = nix_max_prio(dev, hw_lvl) + 1;
+	cap->nonleaf.sched_wfq_n_children_per_group_max =
+		cap->nonleaf.sched_n_children_max;
+	cap->nonleaf.sched_wfq_n_groups_max = 1;
+	cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+
+	if (hw_lvl == NIX_TXSCH_LVL_TL1)
+		cap->stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
+			RTE_TM_STATS_N_BYTES_RED_DROPPED;
 	return 0;
 }
 
@@ -2515,6 +2725,10 @@ otx2_nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,
 const struct rte_tm_ops otx2_tm_ops = {
 	.node_type_get = otx2_nix_tm_node_type_get,
 
+	.capabilities_get = otx2_nix_tm_capa_get,
+	.level_capabilities_get = otx2_nix_tm_level_capa_get,
+	.node_capabilities_get = otx2_nix_tm_node_capa_get,
+
 	.shaper_profile_add = otx2_nix_tm_shaper_profile_add,
 	.shaper_profile_delete = otx2_nix_tm_shaper_profile_delete,
 
@@ -2911,6 +3125,24 @@ otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
 }
 
 int
+otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *arg)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+	if (!arg)
+		return -EINVAL;
+
+	/* Check for supported revisions */
+	if (otx2_dev_is_95xx_Ax(dev) ||
+	    otx2_dev_is_96xx_Ax(dev))
+		return -EINVAL;
+
+	*(const void **)arg = &otx2_tm_ops;
+
+	return 0;
+}
+
+int
 otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 7b1672e..9675182 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -19,6 +19,7 @@ struct otx2_eth_dev;
 void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *ops);
 int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 			      uint32_t *rr_quantum, uint16_t *smq);
 int otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
-- 
2.8.4


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [dpdk-dev] [PATCH v3 11/11] net/octeontx2: add tm capability callbacks
  2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 11/11] net/octeontx2: add tm capability callbacks Nithin Dabilpuram
@ 2020-04-06  5:48     ` Jerin Jacob
  2020-04-06  9:14       ` [dpdk-dev] [EXT] " Nithin Dabilpuram
  0 siblings, 1 reply; 41+ messages in thread
From: Jerin Jacob @ 2020-04-06  5:48 UTC (permalink / raw)
  To: Nithin Dabilpuram
  Cc: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, John McNamara,
	Marko Kovacevic, dpdk-dev, Krzysztof Kanas

On Fri, Apr 3, 2020 at 2:24 PM Nithin Dabilpuram <nithind1988@gmail.com> wrote:
>
> From: Krzysztof Kanas <kkanas@marvell.com>
>
> Add Traffic Management capability callbacks to provide
> global, level and node capabilities. This patch also
> adds documentation on Traffic Management Support.
>
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>

After fixing the below issues (inlined),
the series was applied to dpdk-next-net-mrvl/master. Thanks.


> ---
>
> diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
> index 473fe56..fb13517 100644
> --- a/doc/guides/nics/features/octeontx2.ini
> +++ b/doc/guides/nics/features/octeontx2.ini
> @@ -31,6 +31,7 @@ Inline protocol      = Y
>  VLAN filter          = Y
>  Flow control         = Y
>  Flow API             = Y
> +Rate limitation      = Y

By definition, this "Rate limitation" feature and TM rate limitation are
functionally the same, but the interface is different.
The following is the interface for the above "Rate limitation" feature,
so the above "Y" is not applicable.

Rate limitation
---------------

Supports Tx rate limitation for a queue.

* **[implements] eth_dev_ops**: ``set_queue_rate_limit``.
* **[related]    API**: ``rte_eth_set_queue_rate_limit()``
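
For contrast, roughly the same effect through the TM interface goes via a
shaper profile attached to a node, e.g. (a sketch; the profile id is
arbitrary and the node id must come from the application's committed
hierarchy):

#include <rte_tm.h>

#define EXAMPLE_SHAPER_ID 1 /* arbitrary application-chosen profile id */

/* Shape 'node_id' on 'port' to 100 Mbps; TM rates are in bytes/sec,
 * while the ethdev op above takes Mbps directly.
 */
static int example_tm_rate_limit(uint16_t port, uint32_t node_id)
{
	struct rte_tm_shaper_params sp = {
		.peak = { .rate = 100 * 1000000ULL / 8, .size = 8192 },
	};
	struct rte_tm_error tm_err;
	int rc;

	rc = rte_tm_shaper_profile_add(port, EXAMPLE_SHAPER_ID, &sp, &tm_err);
	if (rc == 0)
		rc = rte_tm_node_shaper_update(port, node_id,
					       EXAMPLE_SHAPER_ID, &tm_err);
	return rc;
}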




>  Jumbo frame          = Y
>  Scattered Rx         = Y
>  VLAN offload         = Y
> diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
> index 60187ec..6b885d6 100644
> --- a/doc/guides/nics/octeontx2.rst
> +++ b/doc/guides/nics/octeontx2.rst
> @@ -39,6 +39,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
>  - HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
>  - Support Rx interrupt
>  - Inline IPsec processing support
> +- :ref:`Traffic Management API <tmapi>`

tmapi ref is pointing to mvpp2 driver index.


>
>  Prerequisites
>  -------------
> @@ -213,6 +214,20 @@ Runtime Config Options
>     parameters to all the PCIe devices if application requires to configure on
>     all the ethdev ports.
>
> +Traffic Management API
> +----------------------
> +
> +OCTEON TX2 PMD supports the generic DPDK Traffic Management API, which allows
> +configuring the following features:
> +
> +1. Hierarchical scheduling
> +2. Single rate - two color, Two rate - three color shaping

Use #. to auto enumerate.
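
For example:

  #. Hierarchical scheduling
  #. Single rate - two color, Two rate - three color shaping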

> +
> +Both DWRR and Static Priority (SP) hierarchical scheduling are supported.
> +Every parent can have at most 10 SP children and unlimited DWRR children.
> +Both PF and VF support the traffic management API, with the PF supporting
> +6 levels and the VF supporting 5 levels of topology.
> +
>  Limitations
>  -----------
>
> diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
> index 000bbf5..47a9825 100644
> --- a/doc/guides/rel_notes/release_20_05.rst
> +++ b/doc/guides/rel_notes/release_20_05.rst
> @@ -62,6 +62,14 @@ New Features
>
>    * Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
>
> +* **Updated Marvell OCTEON TX2 ethdev driver.**
> +
> + Updated Marvell OCTEON TX2 ethdev driver with traffic manager support,
> + including the below features.
> +
> + * Hierarchical scheduling with DWRR and SP.
> + * Single rate - two color, Two rate - three color shaping.

Alignment is not correct with respect to the other items.
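(The other entries in that section are indented with two spaces, while
the new lines use a single space.)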

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [dpdk-dev] [EXT] Re: [PATCH v3 11/11] net/octeontx2: add tm capability callbacks
  2020-04-06  5:48     ` Jerin Jacob
@ 2020-04-06  9:14       ` Nithin Dabilpuram
  2020-04-06  9:31         ` Jerin Jacob
  0 siblings, 1 reply; 41+ messages in thread
From: Nithin Dabilpuram @ 2020-04-06  9:14 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Nithin Dabilpuram, Jerin Jacob, Kiran Kumar K, John McNamara,
	Marko Kovacevic, dpdk-dev, Krzysztof Kanas

On Mon, Apr 06, 2020 at 11:18:36AM +0530, Jerin Jacob wrote:
> External Email
> 
> ----------------------------------------------------------------------
> On Fri, Apr 3, 2020 at 2:24 PM Nithin Dabilpuram <nithind1988@gmail.com> wrote:
> >
> > From: Krzysztof Kanas <kkanas@marvell.com>
> >
> > Add Traffic Management capability callbacks to provide
> > global, level and node capabilities. This patch also
> > adds documentation on Traffic Management Support.
> >
> > Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> > Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
> 
> After fixing the below issues (inlined),
> the series was applied to dpdk-next-net-mrvl/master. Thanks.
> 
> 
> > ---
> >
> > diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
> > index 473fe56..fb13517 100644
> > --- a/doc/guides/nics/features/octeontx2.ini
> > +++ b/doc/guides/nics/features/octeontx2.ini
> > @@ -31,6 +31,7 @@ Inline protocol      = Y
> >  VLAN filter          = Y
> >  Flow control         = Y
> >  Flow API             = Y
> > +Rate limitation      = Y
> 
> By definition, this "Rate limitation" feature and TM rate limitation are
> functionally the same, but the interface is different.
> The following is the interface for the above "Rate limitation" feature,
> so the above "Y" is not applicable.

Actually, support for this feature was added via patch 10/11. My bad,
I added the documentation update in 11/11, which caused the confusion.
Another reason is that rate limit support depends on TM support in OCTEON TX2.
I also missed updating the octeontx2_vec.ini and octeontx2_vf.ini files.
Can the above change be moved back into patch 10/11?

Thanks for other corrections.
> 
> Rate limitation
> ---------------
> 
> Supports Tx rate limitation for a queue.
> 
> * **[implements] eth_dev_ops**: ``set_queue_rate_limit``.
> * **[related]    API**: ``rte_eth_set_queue_rate_limit()``
> 
> 
> 
> 
> >  Jumbo frame          = Y
> >  Scattered Rx         = Y
> >  VLAN offload         = Y
> > diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
> > index 60187ec..6b885d6 100644
> > --- a/doc/guides/nics/octeontx2.rst
> > +++ b/doc/guides/nics/octeontx2.rst
> > @@ -39,6 +39,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
> >  - HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
> >  - Support Rx interrupt
> >  - Inline IPsec processing support
> > +- :ref:`Traffic Management API <tmapi>`
> 
> tmapi ref is pointing to mvpp2 driver index.
> 
> 
> >
> >  Prerequisites
> >  -------------
> > @@ -213,6 +214,20 @@ Runtime Config Options
> >     parameters to all the PCIe devices if application requires to configure on
> >     all the ethdev ports.
> >
> > +Traffic Management API
> > +----------------------
> > +
> > +OCTEON TX2 PMD supports the generic DPDK Traffic Management API, which allows
> > +configuring the following features:
> > +
> > +1. Hierarchical scheduling
> > +2. Single rate - two color, Two rate - three color shaping
> 
> Use #. to auto enumerate.
> 
> > +
> > +Both DWRR and Static Priority (SP) hierarchical scheduling are supported.
> > +Every parent can have at most 10 SP children and unlimited DWRR children.
> > +Both PF and VF support the traffic management API, with the PF supporting
> > +6 levels and the VF supporting 5 levels of topology.
> > +
> >  Limitations
> >  -----------
> >
> > diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
> > index 000bbf5..47a9825 100644
> > --- a/doc/guides/rel_notes/release_20_05.rst
> > +++ b/doc/guides/rel_notes/release_20_05.rst
> > @@ -62,6 +62,14 @@ New Features
> >
> >    * Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
> >
> > +* **Updated Marvell OCTEON TX2 ethdev driver.**
> > +
> > + Updated Marvell OCTEON TX2 ethdev driver with traffic manager support,
> > + including the below features.
> > +
> > + * Hierarchical scheduling with DWRR and SP.
> > + * Single rate - two color, Two rate - three color shaping.
> 
> Alignment is not correct with respect to the other items.

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [dpdk-dev] [EXT] Re: [PATCH v3 11/11] net/octeontx2: add tm capability callbacks
  2020-04-06  9:14       ` [dpdk-dev] [EXT] " Nithin Dabilpuram
@ 2020-04-06  9:31         ` Jerin Jacob
  0 siblings, 0 replies; 41+ messages in thread
From: Jerin Jacob @ 2020-04-06  9:31 UTC (permalink / raw)
  To: Nithin Dabilpuram
  Cc: Nithin Dabilpuram, Jerin Jacob, Kiran Kumar K, John McNamara,
	Marko Kovacevic, dpdk-dev, Krzysztof Kanas

On Mon, Apr 6, 2020 at 2:45 PM Nithin Dabilpuram
<ndabilpuram@marvell.com> wrote:

> > >
> > > diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
> > > index 473fe56..fb13517 100644
> > > --- a/doc/guides/nics/features/octeontx2.ini
> > > +++ b/doc/guides/nics/features/octeontx2.ini
> > > @@ -31,6 +31,7 @@ Inline protocol      = Y
> > >  VLAN filter          = Y
> > >  Flow control         = Y
> > >  Flow API             = Y
> > > +Rate limitation      = Y
> >
> > Definition this "Rate limitation" and TM rate limitation functionally
> > the same. But the interface is different.
> > Following is the interface for the above "Rate limitation" feature.
> > So, Above "Y" is not applicable.
>
> Actually, support for this feature was added via patch 10/11. My bad,
> I added the documentation update in 11/11, which caused the confusion.
> Another reason is that rate limit support depends on TM support in OCTEON TX2.
> I also missed updating the octeontx2_vec.ini and octeontx2_vf.ini files.
> Can the above change be moved back into patch 10/11?

Sure. I moved the update to the 10/11 patch from the 11/11 patch.

>
> Thanks for other corrections.

^ permalink raw reply	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2020-04-06  9:32 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-12 11:18 [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
2020-03-12 11:18 ` [dpdk-dev] [PATCH 01/11] net/octeontx2: setup link config based on BP level Nithin Dabilpuram
2020-03-12 11:18 ` [dpdk-dev] [PATCH 02/11] net/octeontx2: restructure tm helper functions Nithin Dabilpuram
2020-03-12 11:18 ` [dpdk-dev] [PATCH 03/11] net/octeontx2: add dynamic topology update support Nithin Dabilpuram
2020-03-12 11:19 ` [dpdk-dev] [PATCH 04/11] net/octeontx2: add tm node add and delete cb Nithin Dabilpuram
2020-03-12 11:19 ` [dpdk-dev] [PATCH 05/11] net/octeontx2: add tm node suspend and resume cb Nithin Dabilpuram
2020-03-12 11:19 ` [dpdk-dev] [PATCH 06/11] net/octeontx2: add tm hierarchy commit callback Nithin Dabilpuram
2020-03-12 11:19 ` [dpdk-dev] [PATCH 07/11] net/octeontx2: add tm stats and shaper profile cbs Nithin Dabilpuram
2020-03-12 11:19 ` [dpdk-dev] [PATCH 08/11] net/octeontx2: add tm dynamic topology update cb Nithin Dabilpuram
2020-03-12 11:19 ` [dpdk-dev] [PATCH 09/11] net/octeontx2: add tm debug support Nithin Dabilpuram
2020-03-12 11:19 ` [dpdk-dev] [PATCH 10/11] net/octeontx2: add tx queue ratelimit callback Nithin Dabilpuram
2020-03-12 11:19 ` [dpdk-dev] [PATCH 11/11] net/octeontx2: add tm capability callbacks Nithin Dabilpuram
2020-03-13 11:08 ` [dpdk-dev] [PATCH 00/11] net/octeontx2: add traffic manager support Andrzej Ostruszka
2020-03-13 15:39   ` [dpdk-dev] [EXT] " Nithin Dabilpuram
2020-04-02 19:34 ` [dpdk-dev] [PATCH v2 " Nithin Dabilpuram
2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 01/11] net/octeontx2: setup link config based on BP level Nithin Dabilpuram
2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 02/11] net/octeontx2: restructure tm helper functions Nithin Dabilpuram
2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 03/11] net/octeontx2: add dynamic topology update support Nithin Dabilpuram
2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 04/11] net/octeontx2: add tm node add and delete cb Nithin Dabilpuram
2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 05/11] net/octeontx2: add tm node suspend and resume cb Nithin Dabilpuram
2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 06/11] net/octeontx2: add tm hierarchy commit callback Nithin Dabilpuram
2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 07/11] net/octeontx2: add tm stats and shaper profile cbs Nithin Dabilpuram
2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 08/11] net/octeontx2: add tm dynamic topology update cb Nithin Dabilpuram
2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 09/11] net/octeontx2: add tm debug support Nithin Dabilpuram
2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 10/11] net/octeontx2: add Tx queue ratelimit callback Nithin Dabilpuram
2020-04-02 19:34   ` [dpdk-dev] [PATCH v2 11/11] net/octeontx2: add tm capability callbacks Nithin Dabilpuram
2020-04-03  8:52 ` [dpdk-dev] [PATCH v3 00/11] net/octeontx2: add traffic manager support Nithin Dabilpuram
2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 01/11] net/octeontx2: setup link config based on BP level Nithin Dabilpuram
2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 02/11] net/octeontx2: restructure tm helper functions Nithin Dabilpuram
2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 03/11] net/octeontx2: add dynamic topology update support Nithin Dabilpuram
2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 04/11] net/octeontx2: add tm node add and delete cb Nithin Dabilpuram
2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 05/11] net/octeontx2: add tm node suspend and resume cb Nithin Dabilpuram
2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 06/11] net/octeontx2: add tm hierarchy commit callback Nithin Dabilpuram
2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 07/11] net/octeontx2: add tm stats and shaper profile cbs Nithin Dabilpuram
2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 08/11] net/octeontx2: add tm dynamic topology update cb Nithin Dabilpuram
2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 09/11] net/octeontx2: add tm debug support Nithin Dabilpuram
2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 10/11] net/octeontx2: add Tx queue ratelimit callback Nithin Dabilpuram
2020-04-03  8:52   ` [dpdk-dev] [PATCH v3 11/11] net/octeontx2: add tm capability callbacks Nithin Dabilpuram
2020-04-06  5:48     ` Jerin Jacob
2020-04-06  9:14       ` [dpdk-dev] [EXT] " Nithin Dabilpuram
2020-04-06  9:31         ` Jerin Jacob

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).