DPDK patches and discussions
* [dpdk-dev] [PATCH 0/8] enable DCB feature on Intel XL710/X710 NIC
From: Jingjing Wu @ 2015-09-24  6:03 UTC
  To: dev; +Cc: yulong.pei

This patch set enables the DCB feature on Intel XL710/X710 NICs, including:
  Receive queue classification based on traffic class
  Round-robin ETS scheduling (rx and tx)
  Priority flow control
To make testpmd and the ethdev lib more generic with regard to DCB, this
patch set also:
  adds a new API to get DCB-related information from NICs.
  changes the DCB forwarding test in testpmd to be based on traffic class.
  moves device-specific validation from the lib and application to the drivers.
Additionally, this patch set corrects some coding style issues.

This patch set is developed on top of the previous patch set
"[PATCH 00/52] update i40e base driver":
http://www.dpdk.org/ml/archives/dev/2015-September/023283.html


Jingjing Wu (8):
  ethdev: rename dcb_queue to dcb_tc in dcb config struct
  ethdev: move the multi-queue checking to specific drivers
  i40e: enable DCB feature on FVL
  ixgbe: enable DCB+RSS multi-queue mode
  ethdev: new API to get dcb related information
  ixgbe: get_dcb_info ops implement
  i40e: get_dcb_info ops implement
  app/testpmd: set up DCB forwarding based on traffic class

 app/test-pmd/cmdline.c           |  39 ++-
 app/test-pmd/config.c            | 159 +++++------
 app/test-pmd/testpmd.c           | 151 +++++-----
 app/test-pmd/testpmd.h           |  23 +-
 drivers/net/e1000/igb_ethdev.c   |  84 +++++-
 drivers/net/i40e/i40e_ethdev.c   | 574 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/i40e/i40e_ethdev.h   |  14 +
 drivers/net/i40e/i40e_rxtx.c     |  32 ++-
 drivers/net/i40e/i40e_rxtx.h     |   2 +
 drivers/net/ixgbe/ixgbe_ethdev.c | 248 +++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |   3 +
 drivers/net/ixgbe/ixgbe_rxtx.c   |  58 ++--
 examples/vmdq_dcb/main.c         |   4 +-
 lib/librte_ether/rte_ethdev.c    | 217 ++-------------
 lib/librte_ether/rte_ethdev.h    |  64 ++++-
 15 files changed, 1230 insertions(+), 442 deletions(-)

-- 
2.4.0


* [dpdk-dev] [PATCH 1/8] ethdev: rename dcb_queue to dcb_tc in dcb config struct
From: Jingjing Wu @ 2015-09-24  6:03 UTC
  To: dev; +Cc: yulong.pei

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 app/test-pmd/testpmd.c         |  8 ++++----
 drivers/net/ixgbe/ixgbe_rxtx.c | 10 +++++-----
 examples/vmdq_dcb/main.c       |  4 ++--
 lib/librte_ether/rte_ethdev.h  | 14 +++++++-------
 4 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 386bf84..c8ae909 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1866,8 +1866,8 @@ get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
 			vmdq_rx_conf.pool_map[i].pools = 1 << (i % vmdq_rx_conf.nb_queue_pools);
 		}
 		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-			vmdq_rx_conf.dcb_queue[i] = i;
-			vmdq_tx_conf.dcb_queue[i] = i;
+			vmdq_rx_conf.dcb_tc[i] = i;
+			vmdq_tx_conf.dcb_tc[i] = i;
 		}
 
 		/*set DCB mode of RX and TX of multiple queues*/
@@ -1897,8 +1897,8 @@ get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
 		tx_conf.nb_tcs = dcb_conf->num_tcs;
 
 		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
-			rx_conf.dcb_queue[i] = i;
-			tx_conf.dcb_queue[i] = i;
+			rx_conf.dcb_tc[i] = i;
+			tx_conf.dcb_tc[i] = i;
 		}
 		eth_conf->rxmode.mq_mode = ETH_MQ_RX_DCB;
 		eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index a598a72..d331ef5 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2903,7 +2903,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
 		 */
-		queue_mapping |= ((cfg->dcb_queue[i] & 0x07) << (i * 3));
+		queue_mapping |= ((cfg->dcb_tc[i] & 0x07) << (i * 3));
 
 	IXGBE_WRITE_REG(hw, IXGBE_RTRUP2TC, queue_mapping);
 
@@ -3038,7 +3038,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 	/* User Priority to Traffic Class mapping */
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-		j = vmdq_rx_conf->dcb_queue[i];
+		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap =
 						(uint8_t)(1 << j);
@@ -3066,7 +3066,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 
 	/* User Priority to Traffic Class mapping */
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-		j = vmdq_tx_conf->dcb_queue[i];
+		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap =
 						(uint8_t)(1 << j);
@@ -3088,7 +3088,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
 
 	/* User Priority to Traffic Class mapping */
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-		j = rx_conf->dcb_queue[i];
+		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap =
 						(uint8_t)(1 << j);
@@ -3109,7 +3109,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
 
 	/* User Priority to Traffic Class mapping */
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-		j = tx_conf->dcb_queue[i];
+		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap =
 						(uint8_t)(1 << j);
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index c31c2ce..b90ac28 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -107,7 +107,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 			.default_pool = 0,
 			.nb_pool_maps = 0,
 			.pool_map = {{0, 0},},
-			.dcb_queue = {0},
+			.dcb_tc = {0},
 		},
 	},
 };
@@ -144,7 +144,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf, enum rte_eth_nb_pools num_pools)
 		conf.pool_map[i].pools = 1 << (i % num_pools);
 	}
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
-		conf.dcb_queue[i] = (uint8_t)(i % (NUM_QUEUES/num_pools));
+		conf.dcb_tc[i] = (uint8_t)(i % (NUM_QUEUES/num_pools));
 	}
 	(void)(rte_memcpy(eth_conf, &vmdq_dcb_conf_default, sizeof(*eth_conf)));
 	(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_dcb_conf, &conf,
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index fa06554..0aa00a6 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -543,20 +543,20 @@ enum rte_eth_nb_pools {
 /* This structure may be extended in future. */
 struct rte_eth_dcb_rx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */
-	uint8_t dcb_queue[ETH_DCB_NUM_USER_PRIORITIES];
-	/**< Possible DCB queue,4 or 8. */
+	/** Traffic class each UP mapped to. */
+	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_dcb_tx_conf {
 	enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */
-	uint8_t dcb_queue[ETH_DCB_NUM_USER_PRIORITIES];
-	/**< Possible DCB queue,4 or 8. */
+	/** Traffic class each UP mapped to. */
+	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_dcb_tx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */
-	uint8_t dcb_queue[ETH_DCB_NUM_USER_PRIORITIES];
-	/**< Possible DCB queue,4 or 8. */
+	/** Traffic class each UP mapped to. */
+	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_tx_conf {
@@ -583,7 +583,7 @@ struct rte_eth_vmdq_dcb_conf {
 		uint16_t vlan_id; /**< The vlan id of the received frame */
 		uint64_t pools;   /**< Bitmask of pools for packet rx */
 	} pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
-	uint8_t dcb_queue[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
 	/**< Selects a queue in a pool */
 };
 
-- 
2.4.0


* [dpdk-dev] [PATCH 2/8] ethdev: move the multi-queue checking to specific drivers
From: Jingjing Wu @ 2015-09-24  6:03 UTC
  To: dev; +Cc: yulong.pei

Different NICs have their own constraints on multi-queue configuration,
so move the checking from the ethdev lib into the specific drivers.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/e1000/igb_ethdev.c   |  84 ++++++++++++++++-
 drivers/net/ixgbe/ixgbe_ethdev.c | 171 +++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |   3 +
 lib/librte_ether/rte_ethdev.c    | 199 ---------------------------------------
 4 files changed, 257 insertions(+), 200 deletions(-)

diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 848ef6e..d9c13d9 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -866,16 +866,98 @@ rte_igbvf_pmd_init(const char *name __rte_unused, const char *params __rte_unuse
 }
 
 static int
+igb_check_mq_mode(struct rte_eth_dev *dev)
+{
+	enum rte_eth_rx_mq_mode rx_mq_mode = dev->data->dev_conf.rxmode.mq_mode;
+	enum rte_eth_tx_mq_mode tx_mq_mode = dev->data->dev_conf.txmode.mq_mode;
+	uint16_t nb_rx_q = dev->data->nb_rx_queues;
+	uint16_t nb_tx_q = dev->data->nb_tx_queues;
+
+	if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
+	    tx_mq_mode == ETH_MQ_TX_DCB ||
+	    tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
+		return -EINVAL;
+	}
+	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
+		/* Check multi-queue mode.
+		 * To not break software we accept ETH_MQ_RX_NONE as this might
+		 * be used to turn off VLAN filter.
+		 */
+
+		if (rx_mq_mode == ETH_MQ_RX_NONE ||
+		    rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+			RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+		} else {
+			/* Only support one queue on VFs.
+			 * RSS together with SRIOV is not supported.
+			 */
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" wrong mq_mode rx %d.",
+					rx_mq_mode);
+			return -EINVAL;
+		}
+		/* TX mode is not used here, so the mode might be ignored. */
+		if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+			/* SRIOV only works in VMDq enable mode */
+			PMD_INIT_LOG(WARNING, "SRIOV is active,"
+					" TX mode %d is not supported. "
+					" Driver will behave as %d mode.",
+					tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+		}
+
+		/* check valid queue number */
+		if ((nb_rx_q > 1) || (nb_tx_q > 1)) {
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" only support one queue on VFs.");
+			return -EINVAL;
+		}
+	} else {
+		/* To not break software that sets an invalid mode, only display
+		 * a warning if an invalid mode is used.
+		 */
+		if (rx_mq_mode != ETH_MQ_RX_NONE &&
+		    rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
+		    rx_mq_mode != ETH_MQ_RX_RSS) {
+			/* RSS together with VMDq not supported*/
+			PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
+				     rx_mq_mode);
+			return -EINVAL;
+		}
+
+		if (tx_mq_mode != ETH_MQ_TX_NONE &&
+		    tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+			PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
+					" Due to txmode is meaningless in this"
+					" driver, just ignore.",
+					tx_mq_mode);
+		}
+	}
+	return 0;
+}
+
+static int
 eth_igb_configure(struct rte_eth_dev *dev)
 {
 	struct e1000_interrupt *intr =
 		E1000_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
+	int ret;
 
 	PMD_INIT_FUNC_TRACE();
+
+	/* multiple queue mode checking */
+	ret  = igb_check_mq_mode(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "igb_check_mq_mode fails with %d.",
+			    ret);
+		return ret;
+	}
+
 	intr->flags |= E1000_FLAG_NEED_LINK_UPDATE;
 	PMD_INIT_FUNC_TRACE();
 
-	return (0);
+	return 0;
 }
 
 static int
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index ec2918c..a7dca55 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1636,14 +1636,185 @@ ixgbe_vmdq_vlan_hw_filter_enable(struct rte_eth_dev *dev)
 }
 
 static int
+ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
+{
+	switch (nb_rx_q) {
+	case 1:
+	case 2:
+		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		break;
+	case 4:
+		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = nb_rx_q;
+	RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx = dev->pci_dev->max_vfs * nb_rx_q;
+
+	return 0;
+}
+
+static int
+ixgbe_check_mq_mode(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+	uint16_t nb_rx_q = dev->data->nb_rx_queues;
+	uint16_t nb_tx_q = dev->data->nb_tx_queues;
+
+	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
+		/* check multi-queue mode */
+		switch (dev_conf->rxmode.mq_mode) {
+		case ETH_MQ_RX_VMDQ_DCB:
+		case ETH_MQ_RX_VMDQ_DCB_RSS:
+			/* DCB/RSS VMDQ in SRIOV mode, not implemented yet */
+			PMD_INIT_LOG(ERR, "SRIOV active,"
+					" unsupported mq_mode rx %d.",
+					dev_conf->rxmode.mq_mode);
+			return -EINVAL;
+		case ETH_MQ_RX_RSS:
+		case ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
+				if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
+					PMD_INIT_LOG(ERR, "SRIOV is active,"
+						" invalid queue number"
+						" for VMDQ RSS, allowed"
+						" value are 1, 2 or 4.");
+					return -EINVAL;
+				}
+			break;
+		case ETH_MQ_RX_VMDQ_ONLY:
+		case ETH_MQ_RX_NONE:
+			/* if no mq mode is configured, use the default scheme */
+			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+			if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
+				RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+			break;
+		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+			/* SRIOV only works in VMDq enable mode */
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" wrong mq_mode rx %d.",
+					dev_conf->rxmode.mq_mode);
+			return -EINVAL;
+		}
+
+		switch (dev_conf->txmode.mq_mode) {
+		case ETH_MQ_TX_VMDQ_DCB:
+			/* DCB VMDQ in SRIOV mode, not implemented yet */
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" unsupported VMDQ mq_mode tx %d.",
+					dev_conf->txmode.mq_mode);
+			return -EINVAL;
+		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+			break;
+		}
+
+		/* check valid queue number */
+		if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
+		    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" queue number must less equal to %d.",
+					RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
+			return -EINVAL;
+		}
+	} else {
+		/* check configuration for vmdq+dcb mode */
+		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+			const struct rte_eth_vmdq_dcb_conf *conf;
+
+			if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
+				PMD_INIT_LOG(ERR, "VMDQ+DCB, nb_rx_q != %d.",
+						IXGBE_VMDQ_DCB_NB_QUEUES);
+				return -EINVAL;
+			}
+			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
+			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
+			       conf->nb_queue_pools == ETH_32_POOLS)) {
+				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
+						" nb_queue_pools must be %d or %d.",
+						ETH_16_POOLS, ETH_32_POOLS);
+				return -EINVAL;
+			}
+		}
+		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+			const struct rte_eth_vmdq_dcb_tx_conf *conf;
+
+			if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
+				PMD_INIT_LOG(ERR, "VMDQ+DCB, nb_tx_q != %d",
+						 IXGBE_VMDQ_DCB_NB_QUEUES);
+				return -EINVAL;
+			}
+			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
+			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
+			       conf->nb_queue_pools == ETH_32_POOLS)) {
+				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
+						" nb_queue_pools != %d and"
+						" nb_queue_pools != %d.",
+						ETH_16_POOLS, ETH_32_POOLS);
+				return -EINVAL;
+			}
+		}
+
+		/* For DCB mode check our configuration before we go further */
+		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+			const struct rte_eth_dcb_rx_conf *conf;
+
+			if (nb_rx_q != IXGBE_DCB_NB_QUEUES) {
+				PMD_INIT_LOG(ERR, "DCB selected, nb_rx_q != %d.",
+						 IXGBE_DCB_NB_QUEUES);
+				return -EINVAL;
+			}
+			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
+			if (!(conf->nb_tcs == ETH_4_TCS ||
+			       conf->nb_tcs == ETH_8_TCS)) {
+				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
+						" and nb_tcs != %d.",
+						ETH_4_TCS, ETH_8_TCS);
+				return -EINVAL;
+			}
+		}
+
+		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+			const struct rte_eth_dcb_tx_conf *conf;
+
+			if (nb_tx_q != IXGBE_DCB_NB_QUEUES) {
+				PMD_INIT_LOG(ERR, "DCB, nb_tx_q != %d.",
+						 IXGBE_DCB_NB_QUEUES);
+				return -EINVAL;
+			}
+			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
+			if (!(conf->nb_tcs == ETH_4_TCS ||
+			       conf->nb_tcs == ETH_8_TCS)) {
+				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
+						" and nb_tcs != %d.",
+						ETH_4_TCS, ETH_8_TCS);
+				return -EINVAL;
+			}
+		}
+	}
+	return 0;
+}
+
+static int
 ixgbe_dev_configure(struct rte_eth_dev *dev)
 {
 	struct ixgbe_interrupt *intr =
 		IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
 	struct ixgbe_adapter *adapter =
 		(struct ixgbe_adapter *)dev->data->dev_private;
+	int ret;
 
 	PMD_INIT_FUNC_TRACE();
+	/* multiple queue mode checking */
+	ret  = ixgbe_check_mq_mode(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "ixgbe_check_mq_mode fails with %d.",
+			    ret);
+		return ret;
+	}
 
 	/* set flag to update link status after init */
 	intr->flags |= IXGBE_FLAG_NEED_LINK_UPDATE;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index c3d4f4f..240241a 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -57,6 +57,9 @@
 #define IXGBE_VFTA_SIZE 128
 #define IXGBE_VLAN_TAG_SIZE 4
 #define IXGBE_MAX_RX_QUEUE_NUM	128
+#define IXGBE_VMDQ_DCB_NB_QUEUES     IXGBE_MAX_RX_QUEUE_NUM
+#define IXGBE_DCB_NB_QUEUES          IXGBE_MAX_RX_QUEUE_NUM
+
 #ifndef NBBY
 #define NBBY	8	/* number of bits in a byte */
 #endif
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index b309309..f4bbca6 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -880,197 +880,6 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 	return 0;
 }
 
-static int
-rte_eth_dev_check_vf_rss_rxq_num(uint8_t port_id, uint16_t nb_rx_q)
-{
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-
-	switch (nb_rx_q) {
-	case 1:
-	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active =
-			ETH_64_POOLS;
-		break;
-	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active =
-			ETH_32_POOLS;
-		break;
-	default:
-		return -EINVAL;
-	}
-
-	RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = nb_rx_q;
-	RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx =
-		dev->pci_dev->max_vfs * nb_rx_q;
-
-	return 0;
-}
-
-static int
-rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
-			  const struct rte_eth_conf *dev_conf)
-{
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-
-	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
-		/* check multi-queue mode */
-		if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
-		    (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB_RSS) ||
-		    (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
-			/* SRIOV only works in VMDq enable mode */
-			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
-					" SRIOV active, "
-					"wrong VMDQ mq_mode rx %u tx %u\n",
-					port_id,
-					dev_conf->rxmode.mq_mode,
-					dev_conf->txmode.mq_mode);
-			return -EINVAL;
-		}
-
-		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
-			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
-					" SRIOV active, "
-					"unsupported VMDQ mq_mode rx %u\n",
-					port_id, dev_conf->rxmode.mq_mode);
-			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
-					" SRIOV active, "
-					"Rx mq mode is changed from:"
-					"mq_mode %u into VMDQ mq_mode %u\n",
-					port_id,
-					dev_conf->rxmode.mq_mode,
-					dev->data->dev_conf.rxmode.mq_mode);
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
-			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
-				if (rte_eth_dev_check_vf_rss_rxq_num(port_id, nb_rx_q) != 0) {
-					PMD_DEBUG_TRACE("ethdev port_id=%d"
-							" SRIOV active, invalid queue"
-							" number for VMDQ RSS, allowed"
-							" value are 1, 2 or 4\n",
-							port_id);
-					return -EINVAL;
-				}
-			break;
-		default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
-			/* if nothing mq mode configure, use default scheme */
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
-			if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
-				RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
-			break;
-		}
-
-		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			/* DCB VMDQ in SRIOV mode, not implement yet */
-			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
-					" SRIOV active, "
-					"unsupported VMDQ mq_mode tx %u\n",
-					port_id, dev_conf->txmode.mq_mode);
-			return -EINVAL;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
-			/* if nothing mq mode configure, use default scheme */
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
-			break;
-		}
-
-		/* check valid queue number */
-		if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
-		    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
-			PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
-					"queue number must less equal to %d\n",
-					port_id,
-					RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
-			return -EINVAL;
-		}
-	} else {
-		/* For vmdb+dcb mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
-			const struct rte_eth_vmdq_dcb_conf *conf;
-
-			if (nb_rx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB, nb_rx_q "
-						"!= %d\n",
-						port_id, ETH_VMDQ_DCB_NUM_QUEUES);
-				return -EINVAL;
-			}
-			conf = &(dev_conf->rx_adv_conf.vmdq_dcb_conf);
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			      conf->nb_queue_pools == ETH_32_POOLS)) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB selected, "
-						"nb_queue_pools must be %d or %d\n",
-						port_id, ETH_16_POOLS, ETH_32_POOLS);
-				return -EINVAL;
-			}
-		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
-			const struct rte_eth_vmdq_dcb_tx_conf *conf;
-
-			if (nb_tx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB, nb_tx_q "
-						"!= %d\n",
-						port_id, ETH_VMDQ_DCB_NUM_QUEUES);
-				return -EINVAL;
-			}
-			conf = &(dev_conf->tx_adv_conf.vmdq_dcb_tx_conf);
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			      conf->nb_queue_pools == ETH_32_POOLS)) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB selected, "
-						"nb_queue_pools != %d or nb_queue_pools "
-						"!= %d\n",
-						port_id, ETH_16_POOLS, ETH_32_POOLS);
-				return -EINVAL;
-			}
-		}
-
-		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
-			const struct rte_eth_dcb_rx_conf *conf;
-
-			if (nb_rx_q != ETH_DCB_NUM_QUEUES) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_rx_q "
-						"!= %d\n",
-						port_id, ETH_DCB_NUM_QUEUES);
-				return -EINVAL;
-			}
-			conf = &(dev_conf->rx_adv_conf.dcb_rx_conf);
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			      conf->nb_tcs == ETH_8_TCS)) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d DCB selected, "
-						"nb_tcs != %d or nb_tcs "
-						"!= %d\n",
-						port_id, ETH_4_TCS, ETH_8_TCS);
-				return -EINVAL;
-			}
-		}
-
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
-			const struct rte_eth_dcb_tx_conf *conf;
-
-			if (nb_tx_q != ETH_DCB_NUM_QUEUES) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_tx_q "
-						"!= %d\n",
-						port_id, ETH_DCB_NUM_QUEUES);
-				return -EINVAL;
-			}
-			conf = &(dev_conf->tx_adv_conf.dcb_tx_conf);
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			      conf->nb_tcs == ETH_8_TCS)) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d DCB selected, "
-						"nb_tcs != %d or nb_tcs "
-						"!= %d\n",
-						port_id, ETH_4_TCS, ETH_8_TCS);
-				return -EINVAL;
-			}
-		}
-	}
-	return 0;
-}
-
 int
 rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
@@ -1182,14 +991,6 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 							ETHER_MAX_LEN;
 	}
 
-	/* multiple queue mode checking */
-	diag = rte_eth_dev_check_mq_mode(port_id, nb_rx_q, nb_tx_q, dev_conf);
-	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d rte_eth_dev_check_mq_mode = %d\n",
-				port_id, diag);
-		return diag;
-	}
-
 	/*
 	 * Setup new number of RX/TX queues and reconfigure device.
 	 */
-- 
2.4.0


* [dpdk-dev] [PATCH 3/8] i40e: enable DCB feature on FVL
From: Jingjing Wu @ 2015-09-24  6:03 UTC
  To: dev; +Cc: yulong.pei

This patch enables the DCB feature on Intel XL710/X710 NICs. It includes:
  Receive queue classification based on traffic class
  Round-robin ETS scheduling (rx and tx)
  Priority flow control
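
For illustration only (this sketch is not part of the patch; the port id,
queue counts and 4-TC layout are assumptions chosen for the example), an
application would request these features through the standard ethdev
configuration path:

  /* Configure port 0 with 4 TCs in pure DCB mode. The dcb_tc[] field
   * renamed in patch 1/8 carries the user-priority-to-TC mapping.
   * Headers and most error handling are elided.
   */
  struct rte_eth_conf conf;
  struct rte_eth_dcb_rx_conf *rx = &conf.rx_adv_conf.dcb_rx_conf;
  struct rte_eth_dcb_tx_conf *tx = &conf.tx_adv_conf.dcb_tx_conf;
  int i;

  memset(&conf, 0, sizeof(conf));
  conf.rxmode.mq_mode = ETH_MQ_RX_DCB;
  conf.txmode.mq_mode = ETH_MQ_TX_DCB;
  conf.dcb_capability_en = ETH_DCB_PFC_SUPPORT; /* priority flow control */
  rx->nb_tcs = ETH_4_TCS;
  tx->nb_tcs = ETH_4_TCS;
  for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
          rx->dcb_tc[i] = i % ETH_4_TCS; /* map UP i to TC (i % 4) */
          tx->dcb_tc[i] = i % ETH_4_TCS;
  }
  if (rte_eth_dev_configure(0, 4, 4, &conf) != 0)
          rte_exit(EXIT_FAILURE, "cannot configure port 0\n");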

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c | 532 ++++++++++++++++++++++++++++++++++++++++-
 drivers/net/i40e/i40e_ethdev.h |  14 ++
 drivers/net/i40e/i40e_rxtx.c   |  32 ++-
 drivers/net/i40e/i40e_rxtx.h   |   2 +
 4 files changed, 567 insertions(+), 13 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 2dd9fdc..7d252fa 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -56,6 +56,7 @@
 #include "base/i40e_adminq_cmd.h"
 #include "base/i40e_type.h"
 #include "base/i40e_register.h"
+#include "base/i40e_dcb.h"
 #include "i40e_ethdev.h"
 #include "i40e_rxtx.h"
 #include "i40e_pf.h"
@@ -166,6 +167,8 @@ static int i40e_pf_parameter_init(struct rte_eth_dev *dev);
 static int i40e_pf_setup(struct i40e_pf *pf);
 static int i40e_dev_rxtx_init(struct i40e_pf *pf);
 static int i40e_vmdq_setup(struct rte_eth_dev *dev);
+static int i40e_dcb_init_configure(struct rte_eth_dev *dev, bool sw_dcb);
+static int i40e_dcb_setup(struct rte_eth_dev *dev);
 static void i40e_stat_update_32(struct i40e_hw *hw, uint32_t reg,
 		bool offset_loaded, uint64_t *offset, uint64_t *stat);
 static void i40e_stat_update_48(struct i40e_hw *hw,
@@ -469,11 +472,6 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
 		     ((hw->nvm.version >> 4) & 0xff),
 		     (hw->nvm.version & 0xf), hw->nvm.eetrack);
 
-	/* Disable LLDP */
-	ret = i40e_aq_stop_lldp(hw, true, NULL);
-	if (ret != I40E_SUCCESS) /* Its failure can be ignored */
-		PMD_INIT_LOG(INFO, "Failed to stop lldp");
-
 	/* Clear PXE mode */
 	i40e_clear_pxe_mode(hw);
 
@@ -588,6 +586,13 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
 	/* initialize mirror rule list */
 	TAILQ_INIT(&pf->mirror_list);
 
+	/* Init dcb to sw mode by default */
+	ret = i40e_dcb_init_configure(dev, TRUE);
+	if (ret != I40E_SUCCESS) {
+		PMD_INIT_LOG(INFO, "Failed to init dcb.");
+		pf->flags &= ~I40E_FLAG_DCB;
+	}
+
 	return 0;
 
 err_mac_alloc:
@@ -709,6 +714,15 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 		if (ret)
 			goto err;
 	}
+
+	if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+		ret = i40e_dcb_setup(dev);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "failed to configure DCB.");
+			goto err;
+		}
+	}
+
 	return 0;
 err:
 	i40e_fdir_teardown(pf);
@@ -2313,6 +2327,9 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
 		 */
 	}
 
+	if (hw->func_caps.dcb)
+		pf->flags |= I40E_FLAG_DCB;
+
 	if (sum_vsis > pf->max_num_vsi ||
 		sum_queues > hw->func_caps.num_rx_qp) {
 		PMD_INIT_LOG(ERR, "VSI/QUEUE setting can't be satisfied");
@@ -2718,7 +2735,7 @@ i40e_vsi_config_tc_queue_mapping(struct i40e_vsi *vsi,
 				 struct i40e_aqc_vsi_properties_data *info,
 				 uint8_t enabled_tcmap)
 {
-	int ret, total_tc = 0, i;
+	int ret, i, total_tc = 0;
 	uint16_t qpnum_per_tc, bsf, qp_idx;
 
 	ret = validate_tcmap_parameter(vsi, enabled_tcmap);
@@ -5269,11 +5286,6 @@ i40e_pf_config_mq_rx(struct i40e_pf *pf)
 	int ret = 0;
 	enum rte_eth_rx_mq_mode mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
 
-	if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
-		PMD_INIT_LOG(ERR, "i40e doesn't support DCB yet");
-		return -ENOTSUP;
-	}
-
 	/* RSS setup */
 	if (mq_mode & ETH_MQ_RX_RSS_FLAG)
 		ret = i40e_pf_config_rss(pf);
@@ -6298,3 +6310,501 @@ i40e_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
 
 	return  0;
 }
+
+/* return bit map of enabled traffic classes, should be nonzero */
+static inline uint8_t
+nb2bitmap(uint8_t nb_tcs)
+{
+	int i;
+	uint8_t tc_map = 0;
+
+	if (nb_tcs > I40E_MAX_TRAFFIC_CLASS)
+		return UINT8_MAX;
+	if (nb_tcs == 0)
+		return 1; /* tc0 only */
+
+	for (i = 0; i < nb_tcs; i++)
+		tc_map |= (uint8_t)1 << i;
+
+	return tc_map;
+}
+
+/*
+ * i40e_parse_dcb_configure - parse dcb configure from user
+ * @dev: the device being configured
+ * @dcb_cfg: pointer to the parsed DCB configuration result
+ * @tc_map: bit map of enabled traffic classes
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_parse_dcb_configure(struct rte_eth_dev *dev,
+			 struct i40e_dcbx_config *dcb_cfg,
+			 uint8_t *tc_map)
+{
+	struct rte_eth_dcb_rx_conf *dcb_rx_conf;
+	uint8_t i, tc_bw, bw_lf;
+
+	memset(dcb_cfg, 0, sizeof(struct i40e_dcbx_config));
+
+	dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
+	if (dcb_rx_conf->nb_tcs > I40E_MAX_TRAFFIC_CLASS) {
+		PMD_INIT_LOG(ERR, "number of tc exceeds max.");
+		return -EINVAL;
+	}
+
+	/* To write etscfg into the local registers, willing must be set to 0. */
+	dcb_cfg->etscfg.willing = 0;
+	/* always set value 0 when a device supports 8 */
+	dcb_cfg->etscfg.maxtcs = 0;
+
+	/* assume each tc has the same bw */
+	tc_bw = 100 / dcb_rx_conf->nb_tcs;
+	for (i = 0; i < dcb_rx_conf->nb_tcs; i++)
+		dcb_cfg->etscfg.tcbwtable[i] = tc_bw;
+	/* to ensure the sum of tcbw is equal to 100 */
+	bw_lf = 100 - tc_bw * dcb_rx_conf->nb_tcs;
+	for (i = 0; i < bw_lf; i++)
+		dcb_cfg->etscfg.tcbwtable[i] += 1;
+
+	/* assume each tc has the same Transmission Selection Algorithm */
+	for (i = 0; i < dcb_rx_conf->nb_tcs; i++)
+		dcb_cfg->etscfg.tsatable[i] = I40E_IEEE_TSA_ETS;
+
+	for (i = 0; i < I40E_MAX_USER_PRIORITY; i++)
+		dcb_cfg->etscfg.prioritytable[i] =
+				dcb_rx_conf->dcb_tc[i];
+
+	/* FW needs one App to configure HW */
+	dcb_cfg->numapps = 1;
+	dcb_cfg->app[0].selector = I40E_APP_SEL_ETHTYPE;
+	dcb_cfg->app[0].priority = 3;
+	dcb_cfg->app[0].protocolid = I40E_APP_PROTOID_FCOE;
+	*tc_map = nb2bitmap(dcb_rx_conf->nb_tcs);
+
+	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+		dcb_cfg->pfc.willing = 0;
+		dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
+		dcb_cfg->pfc.pfcenable = *tc_map;
+	}
+	return 0;
+}
+
+/*
+ * i40e_vsi_get_bw_info - Query VSI BW Information
+ * @vsi: the VSI being queried
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_vsi_get_bw_info(struct i40e_vsi *vsi)
+{
+	struct i40e_aqc_query_vsi_ets_sla_config_resp bw_ets_config = {0};
+	struct i40e_aqc_query_vsi_bw_config_resp bw_config = {0};
+	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	int i, ret;
+	uint32_t tc_bw_max;
+
+	/* Get the VSI level BW configuration */
+	ret = i40e_aq_query_vsi_bw_config(hw, vsi->seid, &bw_config, NULL);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			 "couldn't get PF vsi bw config, err %s aq_err %s\n",
+			 i40e_stat_str(hw, ret),
+			 i40e_aq_str(hw, hw->aq.asq_last_status));
+		return -EINVAL;
+	}
+
+	/* Get the VSI level BW configuration per TC */
+	ret = i40e_aq_query_vsi_ets_sla_config(hw, vsi->seid, &bw_ets_config,
+						  NULL);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			 "couldn't get PF vsi ets bw config, err %s aq_err %s\n",
+			 i40e_stat_str(hw, ret),
+			 i40e_aq_str(hw, hw->aq.asq_last_status));
+		return -EINVAL;
+	}
+
+	if (bw_config.tc_valid_bits != bw_ets_config.tc_valid_bits) {
+		PMD_INIT_LOG(WARNING,
+			 "Enabled TCs mismatch from querying VSI BW info"
+			 " 0x%08x 0x%08x\n", bw_config.tc_valid_bits,
+			 bw_ets_config.tc_valid_bits);
+		/* Still continuing */
+	}
+
+	vsi->bw_info.bw_limit = rte_le_to_cpu_16(bw_config.port_bw_limit);
+	vsi->bw_info.bw_max_quanta = bw_config.max_bw;
+	tc_bw_max = rte_le_to_cpu_16(bw_ets_config.tc_bw_max[0]) |
+		    (rte_le_to_cpu_16(bw_ets_config.tc_bw_max[1]) << 16);
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		vsi->bw_info.bw_ets_share_credits[i] =
+				bw_ets_config.share_credits[i];
+		vsi->bw_info.bw_ets_limit_credits[i] =
+				rte_le_to_cpu_16(bw_ets_config.credits[i]);
+		/* 3 bits out of 4 for each TC */
+		vsi->bw_info.bw_ets_max_quanta[i] =
+			(uint8_t)((tc_bw_max >> (i * 4)) & 0x7);
+		PMD_INIT_LOG(DEBUG,
+			 "%s: vsi seid = %d, TC = %d, qset = 0x%x\n",
+			 __func__, vsi->seid, i, bw_config.qs_handles[i]);
+	}
+
+	return 0;
+}
+
+static int
+i40e_vsi_update_queue_mapping(struct i40e_vsi *vsi,
+			      struct i40e_aqc_vsi_properties_data *info,
+			      uint8_t enabled_tcmap)
+{
+	int ret, i, total_tc = 0;
+	uint16_t qpnum_per_tc, bsf, qp_idx;
+	struct rte_eth_dev_data *dev_data = I40E_VSI_TO_DEV_DATA(vsi);
+
+	ret = validate_tcmap_parameter(vsi, enabled_tcmap);
+	if (ret != I40E_SUCCESS)
+		return ret;
+
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (enabled_tcmap & (1 << i))
+			total_tc++;
+	}
+	if (total_tc == 0)
+		total_tc = 1;
+	vsi->enabled_tc = enabled_tcmap;
+
+	qpnum_per_tc = dev_data->nb_rx_queues / total_tc;
+	/* Number of queues per enabled TC */
+	if (qpnum_per_tc == 0) {
+		PMD_INIT_LOG(ERR, " number of queues is less that tcs.");
+		return I40E_ERR_INVALID_QP_ID;
+	}
+	qpnum_per_tc = RTE_MIN(i40e_align_floor(qpnum_per_tc),
+				I40E_MAX_Q_PER_TC);
+	bsf = rte_bsf32(qpnum_per_tc);
+
+	/**
+	 * Configure TC and queue mapping parameters, for enabled TC,
+	 * allocate qpnum_per_tc queues to this traffic. For disabled TC,
+	 * default queue will serve it.
+	 */
+	qp_idx = 0;
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (vsi->enabled_tc & (1 << i)) {
+			info->tc_mapping[i] = rte_cpu_to_le_16((qp_idx <<
+					I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT) |
+				(bsf << I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT));
+			qp_idx += qpnum_per_tc;
+		} else
+			info->tc_mapping[i] = 0;
+	}
+
+	/* Associate queue number with VSI, Keep vsi->nb_qps unchanged */
+	if (vsi->type == I40E_VSI_SRIOV) {
+		info->mapping_flags |=
+			rte_cpu_to_le_16(I40E_AQ_VSI_QUE_MAP_NONCONTIG);
+		for (i = 0; i < vsi->nb_qps; i++)
+			info->queue_mapping[i] =
+				rte_cpu_to_le_16(vsi->base_queue + i);
+	} else {
+		info->mapping_flags |=
+			rte_cpu_to_le_16(I40E_AQ_VSI_QUE_MAP_CONTIG);
+		info->queue_mapping[0] = rte_cpu_to_le_16(vsi->base_queue);
+	}
+	info->valid_sections |=
+		rte_cpu_to_le_16(I40E_AQ_VSI_PROP_QUEUE_MAP_VALID);
+
+	return I40E_SUCCESS;
+}
+
+/*
+ * i40e_vsi_config_tc - Configure VSI tc setting for given TC map
+ * @vsi: VSI to be configured
+ * @tc_map: enabled TC bitmap
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_vsi_config_tc(struct i40e_vsi *vsi, u8 tc_map)
+{
+	struct i40e_aqc_configure_vsi_tc_bw_data bw_data;
+	struct i40e_vsi_context ctxt;
+	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	int ret = 0;
+	int i;
+
+	/* Check if enabled_tc is same as existing or new TCs */
+	if (vsi->enabled_tc == tc_map)
+		return ret;
+
+	/* configure tc bandwidth */
+	memset(&bw_data, 0, sizeof(bw_data));
+	bw_data.tc_valid_bits = tc_map;
+	/* Enable ETS TCs with equal BW Share for now across all VSIs */
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (tc_map & BIT_ULL(i))
+			bw_data.tc_bw_credits[i] = 1;
+	}
+	ret = i40e_aq_config_vsi_tc_bw(hw, vsi->seid, &bw_data, NULL);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "AQ command Config VSI BW allocation"
+			" per TC failed = %d",
+			hw->aq.asq_last_status);
+		goto out;
+	}
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
+		vsi->info.qs_handle[i] = bw_data.qs_handles[i];
+
+	/* Update Queue Pairs Mapping for currently enabled UPs */
+	ctxt.seid = vsi->seid;
+	ctxt.pf_num = hw->pf_id;
+	ctxt.vf_num = 0;
+	ctxt.uplink_seid = vsi->uplink_seid;
+	ctxt.info = vsi->info;
+	i40e_get_cap(hw);
+	ret = i40e_vsi_update_queue_mapping(vsi, &ctxt.info, tc_map);
+	if (ret)
+		goto out;
+
+	/* Update the VSI after updating the VSI queue-mapping information */
+	ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure "
+			    "TC queue mapping = %d",
+			    hw->aq.asq_last_status);
+		goto out;
+	}
+	/* update the local VSI info with updated queue map */
+	(void)rte_memcpy(&vsi->info.tc_mapping, &ctxt.info.tc_mapping,
+					sizeof(vsi->info.tc_mapping));
+	(void)rte_memcpy(&vsi->info.queue_mapping,
+			&ctxt.info.queue_mapping,
+		sizeof(vsi->info.queue_mapping));
+	vsi->info.mapping_flags = ctxt.info.mapping_flags;
+	vsi->info.valid_sections = 0;
+
+	/* Update current VSI BW information */
+	ret = i40e_vsi_get_bw_info(vsi);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			 "Failed updating vsi bw info, err %s aq_err %s",
+			 i40e_stat_str(hw, ret),
+			 i40e_aq_str(hw, hw->aq.asq_last_status));
+		goto out;
+	}
+
+	vsi->enabled_tc = tc_map;
+
+out:
+	return ret;
+}
+
+/*
+ * i40e_dcb_hw_configure - program the dcb setting to hw
+ * @pf: pf the configuration is taken on
+ * @new_cfg: new configuration
+ * @tc_map: enabled TC bitmap
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static enum i40e_status_code
+i40e_dcb_hw_configure(struct i40e_pf *pf,
+		      struct i40e_dcbx_config *new_cfg,
+		      uint8_t tc_map)
+{
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+	struct i40e_dcbx_config *old_cfg = &hw->local_dcbx_config;
+	struct i40e_vsi *main_vsi = pf->main_vsi;
+	struct i40e_vsi_list *vsi_list;
+	int i, ret;
+	uint32_t val;
+
+	/* Use the FW LLDP API only if the FW version is >= v4.4 */
+	if (!((hw->aq.fw_maj_ver == 4) && (hw->aq.fw_min_ver >= 4))) {
+		PMD_INIT_LOG(ERR, "FW < v4.4, cannot use FW LLDP API"
+				  " to configure DCB");
+		return I40E_ERR_FIRMWARE_API_VERSION;
+	}
+
+	/* Check if need reconfiguration */
+	if (!memcmp(new_cfg, old_cfg, sizeof(struct i40e_dcbx_config))) {
+		PMD_INIT_LOG(ERR, "No Change in DCB Config required.");
+		return I40E_SUCCESS;
+	}
+
+	/* Copy the new config to the current config */
+	*old_cfg = *new_cfg;
+	old_cfg->etsrec = old_cfg->etscfg;
+	ret = i40e_set_dcb_config(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			 "Set DCB Config failed, err %s aq_err %s\n",
+			 i40e_stat_str(hw, ret),
+			 i40e_aq_str(hw, hw->aq.asq_last_status));
+		return ret;
+	}
+	/* set receive Arbiter to RR mode and ETS scheme by default */
+	for (i = 0; i <= I40E_PRTDCB_RETSTCC_MAX_INDEX; i++) {
+		val = I40E_READ_REG(hw, I40E_PRTDCB_RETSTCC(i));
+		val &= ~(I40E_PRTDCB_RETSTCC_BWSHARE_MASK     |
+			 I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK |
+			 I40E_PRTDCB_RETSTCC_ETSTC_MASK);
+		val |= ((uint32_t)old_cfg->etscfg.tcbwtable[i] <<
+			I40E_PRTDCB_RETSTCC_BWSHARE_SHIFT) &
+			 I40E_PRTDCB_RETSTCC_BWSHARE_MASK;
+		val |= ((uint32_t)1 << I40E_PRTDCB_RETSTCC_UPINTC_MODE_SHIFT) &
+			 I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK;
+		val |= ((uint32_t)1 << I40E_PRTDCB_RETSTCC_ETSTC_SHIFT) &
+			 I40E_PRTDCB_RETSTCC_ETSTC_MASK;
+		I40E_WRITE_REG(hw, I40E_PRTDCB_RETSTCC(i), val);
+	}
+	/* get local mib to check whether it is configured correctly */
+	/* IEEE mode */
+	hw->local_dcbx_config.dcbx_mode = I40E_DCBX_MODE_IEEE;
+	/* Get Local DCB Config */
+	i40e_aq_get_dcb_config(hw, I40E_AQ_LLDP_MIB_LOCAL, 0,
+				     &hw->local_dcbx_config);
+
+	/* Update each VSI */
+	i40e_vsi_config_tc(main_vsi, tc_map);
+	if (main_vsi->veb) {
+		TAILQ_FOREACH(vsi_list, &main_vsi->veb->head, list) {
+			/* Besides the main VSI, only enable the default
+			 * TC for other VSIs
+			 */
+			ret = i40e_vsi_config_tc(vsi_list->vsi,
+						I40E_DEFAULT_TCMAP);
+			if (ret)
+				PMD_INIT_LOG(WARNING,
+					 "Failed configuring TC for VSI seid=%d\n",
+					 vsi_list->vsi->seid);
+			/* continue */
+		}
+	}
+	return I40E_SUCCESS;
+}
+
+/*
+ * i40e_dcb_init_configure - initial dcb config
+ * @dev: device being configured
+ * @sw_dcb: indicate whether dcb is sw configured or hw offload
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_dcb_init_configure(struct rte_eth_dev *dev, bool sw_dcb)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret = 0;
+
+	if ((pf->flags & I40E_FLAG_DCB) == 0) {
+		PMD_INIT_LOG(ERR, "HW doesn't support DCB");
+		return -ENOTSUP;
+	}
+
+	/* DCB initialization:
+	 * Update DCB configuration from the Firmware and configure
+	 * LLDP MIB change event.
+	 * lldp agent is stopped by default, so don't stop it if sw_dcb is true
+	 */
+	if (sw_dcb == TRUE) {
+		ret = i40e_aq_stop_lldp(hw, TRUE, NULL);
+		if (ret != I40E_SUCCESS)
+			PMD_INIT_LOG(DEBUG, "Failed to stop lldp");
+
+		ret = i40e_init_dcb(hw);
+		if (ret != I40E_SUCCESS &&
+		    hw->aq.asq_last_status == I40E_AQ_RC_EPERM) {
+			memset(&hw->local_dcbx_config, 0,
+				sizeof(struct i40e_dcbx_config));
+			/* set dcb default configuration */
+			hw->local_dcbx_config.etscfg.willing = 0;
+			hw->local_dcbx_config.etscfg.maxtcs = 0;
+			hw->local_dcbx_config.etscfg.tcbwtable[0] = 100;
+			hw->local_dcbx_config.etscfg.tsatable[0] =
+						I40E_IEEE_TSA_ETS;
+			hw->local_dcbx_config.etsrec =
+				hw->local_dcbx_config.etscfg;
+			hw->local_dcbx_config.pfc.willing = 0;
+			hw->local_dcbx_config.pfc.pfccap =
+						I40E_MAX_TRAFFIC_CLASS;
+			/* FW needs one App to configure HW */
+			hw->local_dcbx_config.numapps = 1;
+			hw->local_dcbx_config.app[0].selector =
+						I40E_APP_SEL_ETHTYPE;
+			hw->local_dcbx_config.app[0].priority = 3;
+			hw->local_dcbx_config.app[0].protocolid =
+						I40E_APP_PROTOID_FCOE;
+			ret = i40e_set_dcb_config(hw);
+			if (ret) {
+				PMD_INIT_LOG(ERR, "default dcb config fails."
+					" err = %d, aq_err = %d.", ret,
+					  hw->aq.asq_last_status);
+				return -ENOSYS;
+			}
+		} else {
+			PMD_INIT_LOG(ERR, "DCBX configuration failed, err = %d,"
+					  " aq_err = %d.", ret,
+					  hw->aq.asq_last_status);
+			return -ENOTSUP;
+		}
+	} else {
+		ret = i40e_aq_start_lldp(hw, NULL);
+		if (ret != I40E_SUCCESS)
+			PMD_INIT_LOG(DEBUG, "Failed to start lldp");
+
+		ret = i40e_init_dcb(hw);
+		if (!ret) {
+			if (hw->dcbx_status == I40E_DCBX_STATUS_DISABLED) {
+				PMD_INIT_LOG(ERR, "HW doesn't support"
+						  " DCBX offload.");
+				return -ENOTSUP;
+			}
+		} else {
+			PMD_INIT_LOG(ERR, "DCBX configuration failed, err = %d,"
+					  " aq_err = %d.", ret,
+					  hw->aq.asq_last_status);
+			return -ENOTSUP;
+		}
+	}
+	return 0;
+}
+
+/*
+ * i40e_dcb_setup - setup dcb related config
+ * @dev: device being configured
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_dcb_setup(struct rte_eth_dev *dev)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_dcbx_config dcb_cfg;
+	uint8_t tc_map = 0;
+	int ret = 0;
+
+	if ((pf->flags & I40E_FLAG_DCB) == 0) {
+		PMD_INIT_LOG(ERR, "HW doesn't support DCB");
+		return -ENOTSUP;
+	}
+
+	if (pf->vf_num != 0 ||
+	    (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+		PMD_INIT_LOG(DEBUG, " DCB only works on main vsi.");
+
+	ret = i40e_parse_dcb_configure(dev, &dcb_cfg, &tc_map);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "invalid dcb config");
+		return -EINVAL;
+	}
+	ret = i40e_dcb_hw_configure(pf, &dcb_cfg, tc_map);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "dcb sw configure fails");
+		return -ENOSYS;
+	}
+	return 0;
+}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 6185657..87da0a2 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -199,6 +199,19 @@ struct i40e_macvlan_filter {
 	uint16_t vlan_id;
 };
 
+/* Bandwidth limit information */
+struct i40e_bw_info {
+	uint16_t bw_limit;      /* BW Limit (0 = disabled) */
+	uint8_t  bw_max_quanta; /* Max Quanta when BW limit is enabled */
+
+	/* Relative TC credits across VSIs */
+	uint8_t  bw_ets_share_credits[I40E_MAX_TRAFFIC_CLASS];
+	/* TC BW limit credits within VSI */
+	uint8_t  bw_ets_limit_credits[I40E_MAX_TRAFFIC_CLASS];
+	/* TC BW limit max quanta within VSI */
+	uint8_t  bw_ets_max_quanta[I40E_MAX_TRAFFIC_CLASS];
+};
+
 /*
  * Structure that defines a VSI, associated with a adapter.
  */
@@ -244,6 +257,7 @@ struct i40e_vsi {
 	uint16_t vsi_id;
 	uint16_t msix_intr; /* The MSIX interrupt binds to VSI */
 	uint8_t enabled_tc; /* The traffic class enabled */
+	struct i40e_bw_info bw_info; /* VSI bandwidth information */
 };
 
 struct pool_entry {
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index fd656d5..d333f48 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2111,7 +2111,8 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	struct i40e_rx_queue *rxq;
 	const struct rte_memzone *rz;
 	uint32_t ring_size;
-	uint16_t len;
+	uint16_t len, i;
+	uint16_t base, bsf, tc_mapping;
 	int use_def_burst_func = 1;
 
 	if (hw->mac.type == I40E_MAC_VF) {
@@ -2232,6 +2233,19 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			     rxq->port_id, rxq->queue_id);
 	}
 
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (!(vsi->enabled_tc & (1 << i)))
+			continue;
+		tc_mapping = rte_le_to_cpu_16(vsi->info.tc_mapping[i]);
+		base = (tc_mapping & I40E_AQ_VSI_TC_QUE_OFFSET_MASK) >>
+			I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT;
+		bsf = (tc_mapping & I40E_AQ_VSI_TC_QUE_NUMBER_MASK) >>
+			I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT;
+
+		if (queue_idx >= base && queue_idx < (base + BIT(bsf)))
+			rxq->dcb_tc = i;
+	}
+
 	return 0;
 }
 
@@ -2324,6 +2338,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	const struct rte_memzone *tz;
 	uint32_t ring_size;
 	uint16_t tx_rs_thresh, tx_free_thresh;
+	uint16_t i, base, bsf, tc_mapping;
 
 	if (hw->mac.type == I40E_MAC_VF) {
 		struct i40e_vf *vf =
@@ -2500,6 +2515,19 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		dev->tx_pkt_burst = i40e_xmit_pkts;
 	}
 
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (!(vsi->enabled_tc & (1 << i)))
+			continue;
+		tc_mapping = rte_le_to_cpu_16(vsi->info.tc_mapping[i]);
+		base = (tc_mapping & I40E_AQ_VSI_TC_QUE_OFFSET_MASK) >>
+			I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT;
+		bsf = (tc_mapping & I40E_AQ_VSI_TC_QUE_NUMBER_MASK) >>
+			I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT;
+
+		if (queue_idx >= base && queue_idx < (base + BIT(bsf)))
+			txq->dcb_tc = i;
+	}
+
 	return 0;
 }
 
@@ -2703,7 +2731,7 @@ i40e_tx_queue_init(struct i40e_tx_queue *txq)
 #ifdef RTE_LIBRTE_IEEE1588
 	tx_ctx.timesync_ena = 1;
 #endif
-	tx_ctx.rdylist = rte_le_to_cpu_16(vsi->info.qs_handle[0]);
+	tx_ctx.rdylist = rte_le_to_cpu_16(vsi->info.qs_handle[txq->dcb_tc]);
 	if (vsi->type == I40E_VSI_FDIR)
 		tx_ctx.fd_ena = TRUE;
 
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 4385142..5c76e3d 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -113,6 +113,7 @@ struct i40e_rx_queue {
 	uint8_t hs_mode; /* Header Split mode */
 	bool q_set; /**< indicate if rx queue has been configured */
 	bool rx_deferred_start; /**< don't start this queue in dev start */
+	uint8_t dcb_tc;         /**< Traffic class of rx queue */
 };
 
 struct i40e_tx_entry {
@@ -153,6 +154,7 @@ struct i40e_tx_queue {
 	uint16_t tx_next_rs;
 	bool q_set; /**< indicate if tx queue has been configured */
 	bool tx_deferred_start; /**< don't start this queue in dev start */
+	uint8_t dcb_tc;         /**< Traffic class of tx queue */
 };
 
 /** Offload features */
-- 
2.4.0


* [dpdk-dev] [PATCH 4/8] ixgbe: enable DCB+RSS multi-queue mode
From: Jingjing Wu @ 2015-09-24  6:03 UTC
  To: dev; +Cc: yulong.pei

This patch enables the DCB+RSS multi-queue mode and also fixes some
coding style issues.
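
As an illustrative sketch (an assumption for this note, not code taken from
the patch), an application selects the combined mode like this, with RSS
hashing then spreading packets across the queues inside each traffic class:

  struct rte_eth_conf conf;

  memset(&conf, 0, sizeof(conf));
  conf.rxmode.mq_mode = ETH_MQ_RX_DCB_RSS;         /* DCB + RSS combined */
  conf.rx_adv_conf.dcb_rx_conf.nb_tcs = ETH_4_TCS; /* 4 traffic classes */
  conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;   /* RSS hash on IP fields */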

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ixgbe/ixgbe_rxtx.c | 48 +++++++++++++++++++++++++-----------------
 1 file changed, 29 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index d331ef5..1dc05f0 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3144,9 +3144,13 @@ ixgbe_dcb_rx_hw_config(struct ixgbe_hw *hw,
 				reg = (reg & ~IXGBE_MRQC_MRQE_MASK) |
 					IXGBE_MRQC_VMDQRT4TCEN;
 			else {
+				/* Whether the mode is DCB or DCB_RSS, just
+				 * set MRQE to RSSXTCEN; RSS is controlled
+				 * by RSS_FIELD.
+				 */
 				IXGBE_WRITE_REG(hw, IXGBE_VT_CTL, 0);
 				reg = (reg & ~IXGBE_MRQC_MRQE_MASK) |
-					IXGBE_MRQC_RT4TCEN;
+					IXGBE_MRQC_RTRSS4TCEN;
 			}
 		}
 		if (dcb_config->num_tcs.pg_tcs == 8) {
@@ -3156,7 +3160,7 @@ ixgbe_dcb_rx_hw_config(struct ixgbe_hw *hw,
 			else {
 				IXGBE_WRITE_REG(hw, IXGBE_VT_CTL, 0);
 				reg = (reg & ~IXGBE_MRQC_MRQE_MASK) |
-					IXGBE_MRQC_RT8TCEN;
+					IXGBE_MRQC_RTRSS8TCEN;
 			}
 		}
 
@@ -3261,16 +3265,17 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			 *get dcb and VT rx configuration parameters
 			 *from rte_eth_conf
 			 */
-			ixgbe_vmdq_dcb_rx_config(dev,dcb_config);
+			ixgbe_vmdq_dcb_rx_config(dev, dcb_config);
 			/*Configure general VMDQ and DCB RX parameters*/
 			ixgbe_vmdq_dcb_configure(dev);
 		}
 		break;
 	case ETH_MQ_RX_DCB:
+	case ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
-		ixgbe_dcb_rx_config(dev,dcb_config);
+		ixgbe_dcb_rx_config(dev, dcb_config);
 		/*Configure general DCB RX parameters*/
 		ixgbe_dcb_rx_hw_config(hw, dcb_config);
 		break;
@@ -3292,7 +3297,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/*get DCB TX configuration parameters from rte_eth_conf*/
-		ixgbe_dcb_tx_config(dev,dcb_config);
+		ixgbe_dcb_tx_config(dev, dcb_config);
 		/*Configure general DCB TX parameters*/
 		ixgbe_dcb_tx_hw_config(hw, dcb_config);
 		break;
@@ -3433,14 +3438,15 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
 
 	/* check support mq_mode for DCB */
 	if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB))
+	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
+	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
 		return;
 
 	if (dev->data->nb_rx_queues != ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
-	ixgbe_dcb_hw_configure(dev,dcb_cfg);
+	ixgbe_dcb_hw_configure(dev, dcb_cfg);
 
 	return;
 }
@@ -3682,21 +3688,25 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-			case ETH_MQ_RX_RSS:
-				ixgbe_rss_configure(dev);
-				break;
+		case ETH_MQ_RX_RSS:
+		case ETH_MQ_RX_DCB_RSS:
+		case ETH_MQ_RX_VMDQ_RSS:
+			ixgbe_rss_configure(dev);
+			break;
 
-			case ETH_MQ_RX_VMDQ_DCB:
-				ixgbe_vmdq_dcb_configure(dev);
-				break;
+		case ETH_MQ_RX_VMDQ_DCB:
+			ixgbe_vmdq_dcb_configure(dev);
+			break;
 
-			case ETH_MQ_RX_VMDQ_ONLY:
-				ixgbe_vmdq_rx_hw_configure(dev);
-				break;
+		case ETH_MQ_RX_VMDQ_ONLY:
+			ixgbe_vmdq_rx_hw_configure(dev);
+			break;
 
-			case ETH_MQ_RX_NONE:
-				/* if mq_mode is none, disable rss mode.*/
-			default: ixgbe_rss_disable(dev);
+		case ETH_MQ_RX_NONE:
+		default:
+			/* if mq_mode is none, disable rss mode.*/
+			ixgbe_rss_disable(dev);
+			break;
 		}
 	} else {
 		/*
-- 
2.4.0


* [dpdk-dev] [PATCH 5/8] ethdev: new API to get dcb related information
From: Jingjing Wu @ 2015-09-24  6:03 UTC
  To: dev; +Cc: yulong.pei

This patch adds a new API to get DCB-related information:
  rte_eth_dev_get_dcb_info
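
A hypothetical caller sketch (port 0 and the printf output are assumptions
for illustration) that dumps the per-TC rx queue ranges a driver reports:

  struct rte_eth_dcb_info dcb_info;
  uint8_t tc;

  rte_eth_dev_get_dcb_info(0, &dcb_info);
  for (tc = 0; tc < dcb_info.nb_tcs; tc++)
          printf("TC%u: rxq base %u, nb_queue %u\n", tc,
                 dcb_info.tc_queue.tc_rxq[0][tc].base,
                 dcb_info.tc_queue.tc_rxq[0][tc].nb_queue);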

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 lib/librte_ether/rte_ethdev.c | 18 ++++++++++++++++
 lib/librte_ether/rte_ethdev.h | 50 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 68 insertions(+)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index f4bbca6..44a2d55 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -3371,3 +3371,21 @@ rte_eth_dev_set_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info)
 	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_eeprom, -ENOTSUP);
 	return (*dev->dev_ops->set_eeprom)(dev, info);
 }
+
+void
+rte_eth_dev_get_dcb_info(uint8_t port_id,
+			     struct rte_eth_dcb_info *dcb_info)
+{
+	struct rte_eth_dev *dev;
+
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		return;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	memset(dcb_info, 0, sizeof(struct rte_eth_dcb_info));
+
+	FUNC_PTR_OR_RET(*dev->dev_ops->get_dcb_info);
+	(*dev->dev_ops->get_dcb_info)(dev, dcb_info);
+}
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 0aa00a6..e6b7271 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -962,6 +962,38 @@ struct rte_eth_xstats {
 	uint64_t value;
 };
 
+#define ETH_DCB_NUM_TCS    8
+#define ETH_MAX_VMDQ_POOL  64
+
+/**
+ * A structure used to get the information of queue and
+ * TC mapping on both TX and RX paths.
+ */
+struct rte_eth_dcb_tc_queue_mapping {
+	/** rx queues assigned to tc per Pool */
+	struct {
+		uint8_t base;
+		uint8_t nb_queue;
+	} tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+	/** tx queues assigned to tc per Pool */
+	struct {
+		uint8_t base;
+		uint8_t nb_queue;
+	} tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+};
+
+/**
+ * A structure used to retrieve the DCB information.
+ * It includes the UP-to-TC mapping and the queue-to-TC mapping.
+ */
+struct rte_eth_dcb_info {
+	uint8_t nb_tcs;        /**< number of TCs */
+	uint8_t prio_tc[ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
+	uint8_t tc_bws[ETH_DCB_NUM_TCS]; /**< TX BW percentage for each TC */
+	/** rx and tx queues assigned to each traffic class */
+	struct rte_eth_dcb_tc_queue_mapping tc_queue;
+};
+
 struct rte_eth_dev;
 
 struct rte_eth_dev_callback;
@@ -1354,6 +1386,10 @@ typedef int (*eth_filter_ctrl_t)(struct rte_eth_dev *dev,
 				 void *arg);
 /**< @internal Take operations to assigned filter type on an Ethernet device */
 
+typedef void (*eth_get_dcb_info)(struct rte_eth_dev *dev,
+				 struct rte_eth_dcb_info *dcb_info);
+/**< @internal Get dcb information on an Ethernet device */
+
 /**
  * @internal A structure containing the functions exported by an Ethernet driver.
  */
@@ -1476,6 +1512,9 @@ struct eth_dev_ops {
 	eth_timesync_read_rx_timestamp_t timesync_read_rx_timestamp;
 	/** Read the IEEE1588/802.1AS TX timestamp. */
 	eth_timesync_read_tx_timestamp_t timesync_read_tx_timestamp;
+
+	/** Get DCB information */
+	eth_get_dcb_info get_dcb_info;
 };
 
 /**
@@ -3701,6 +3740,17 @@ int rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type,
 			enum rte_filter_op filter_op, void *arg);
 
 /**
+ * Get DCB information on an Ethernet device.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param dcb_info
+ *   A pointer to a structure to be filled with the DCB information
+ *   of the Ethernet device.
+ */
+void rte_eth_dev_get_dcb_info(uint8_t port_id,
+			     struct rte_eth_dcb_info *dcb_info);
+
+/**
  * Add a callback to be called on packet RX on a given port and queue.
  *
  * This API configures a function to be called for each burst of
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [dpdk-dev] [PATCH 6/8] ixgbe: get_dcb_info ops implement
  2015-09-24  6:03 [dpdk-dev] [PATCH 0/8] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                   ` (4 preceding siblings ...)
  2015-09-24  6:03 ` [dpdk-dev] [PATCH 5/8] ethdev: new API to get dcb related information Jingjing Wu
@ 2015-09-24  6:03 ` Jingjing Wu
  2015-09-24  6:03 ` [dpdk-dev] [PATCH 7/8] i40e: " Jingjing Wu
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-09-24  6:03 UTC (permalink / raw)
  To: dev; +Cc: yulong.pei

This patch implements the get_dcb_info ops in the ixgbe driver.
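
For reference, with VT disabled this ops reports the fixed 82599 queue
layout; e.g. in the 4-TC case the hunk below fills in (pool index 0):

  RX: TC0 -> 16 queues from 0,  TC1 -> 16 from 32,
      TC2 -> 16 from 64,        TC3 -> 16 from 96
  TX: TC0 -> 64 queues from 0,  TC1 -> 32 from 64,
      TC2 -> 16 from 96,        TC3 -> 16 from 112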

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 77 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 77 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a7dca55..91944bc 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -304,6 +304,8 @@ static int ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu);
 static int ixgbe_dev_set_mc_addr_list(struct rte_eth_dev *dev,
 				      struct ether_addr *mc_addr_set,
 				      uint32_t nb_mc_addr);
+static void ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
+				   struct rte_eth_dcb_info *dcb_info);
 
 static int ixgbe_get_reg_length(struct rte_eth_dev *dev);
 static int ixgbe_get_regs(struct rte_eth_dev *dev,
@@ -465,6 +467,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.get_eeprom_length    = ixgbe_get_eeprom_length,
 	.get_eeprom           = ixgbe_get_eeprom,
 	.set_eeprom           = ixgbe_set_eeprom,
+	.get_dcb_info         = ixgbe_dev_get_dcb_info,
 };
 
 /*
@@ -5644,6 +5647,80 @@ ixgbe_set_eeprom(struct rte_eth_dev *dev,
 	return eeprom->ops.write_buffer(hw,  first, length, data);
 }
 
+static void
+ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
+			struct rte_eth_dcb_info *dcb_info)
+{
+	struct ixgbe_dcb_config *dcb_config =
+			IXGBE_DEV_PRIVATE_TO_DCB_CFG(dev->data->dev_private);
+	uint8_t i, j;
+
+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
+	else
+		dcb_info->nb_tcs = 1;
+
+	if (dcb_config->vt_mode) { /* VT is enabled */
+		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
+				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
+		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
+		for (i = 0; i < vmdq_rx_conf->nb_queue_pools; i++) {
+			for (j = 0; j < dcb_info->nb_tcs; j++) {
+				dcb_info->tc_queue.tc_rxq[i][j].base =
+						i * dcb_info->nb_tcs + j;
+				dcb_info->tc_queue.tc_rxq[i][j].nb_queue = 1;
+				dcb_info->tc_queue.tc_txq[i][j].base =
+						i * dcb_info->nb_tcs + j;
+				dcb_info->tc_queue.tc_txq[i][j].nb_queue = 1;
+			}
+		}
+	} else { /* VT is disabled */
+		struct rte_eth_dcb_rx_conf *rx_conf =
+				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
+		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
+		if (dcb_info->nb_tcs == ETH_4_TCS) {
+			for (i = 0; i < dcb_info->nb_tcs; i++) {
+				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
+				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
+			}
+			dcb_info->tc_queue.tc_txq[0][0].base = 0;
+			dcb_info->tc_queue.tc_txq[0][1].base = 64;
+			dcb_info->tc_queue.tc_txq[0][2].base = 96;
+			dcb_info->tc_queue.tc_txq[0][3].base = 112;
+			dcb_info->tc_queue.tc_txq[0][0].nb_queue = 64;
+			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
+			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
+			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
+		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+			for (i = 0; i < dcb_info->nb_tcs; i++) {
+				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
+				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
+			}
+			dcb_info->tc_queue.tc_txq[0][0].base = 0;
+			dcb_info->tc_queue.tc_txq[0][1].base = 32;
+			dcb_info->tc_queue.tc_txq[0][2].base = 64;
+			dcb_info->tc_queue.tc_txq[0][3].base = 80;
+			dcb_info->tc_queue.tc_txq[0][4].base = 96;
+			dcb_info->tc_queue.tc_txq[0][5].base = 104;
+			dcb_info->tc_queue.tc_txq[0][6].base = 112;
+			dcb_info->tc_queue.tc_txq[0][7].base = 120;
+			dcb_info->tc_queue.tc_txq[0][0].nb_queue = 32;
+			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
+			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
+			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
+			dcb_info->tc_queue.tc_txq[0][4].nb_queue = 8;
+			dcb_info->tc_queue.tc_txq[0][5].nb_queue = 8;
+			dcb_info->tc_queue.tc_txq[0][6].nb_queue = 8;
+			dcb_info->tc_queue.tc_txq[0][7].nb_queue = 8;
+		}
+	}
+	for (i = 0; i < dcb_info->nb_tcs; i++)
+		dcb_info->tc_bws[i] =
+			dcb_config->bw_percentage[IXGBE_DCB_TX_CONFIG][i];
+}
+
 static struct rte_driver rte_ixgbe_driver = {
 	.type = PMD_PDEV,
 	.init = rte_ixgbe_pmd_init,
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [dpdk-dev] [PATCH 7/8] i40e: get_dcb_info ops implement
  2015-09-24  6:03 [dpdk-dev] [PATCH 0/8] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                   ` (5 preceding siblings ...)
  2015-09-24  6:03 ` [dpdk-dev] [PATCH 6/8] ixgbe: get_dcb_info ops implement Jingjing Wu
@ 2015-09-24  6:03 ` Jingjing Wu
  2015-10-22  7:10   ` Liu, Jijiang
  2015-09-24  6:03 ` [dpdk-dev] [PATCH 8/8] app/testpmd: set up DCB forwarding based on traffic class Jingjing Wu
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
  8 siblings, 1 reply; 40+ messages in thread
From: Jingjing Wu @ 2015-09-24  6:03 UTC (permalink / raw)
  To: dev; +Cc: yulong.pei

This patch implements the get_dcb_info ops in the i40e driver.
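
The decode in the hunk below follows the VSI tc_mapping layout, where
the per-TC queue count is stored as a power-of-two exponent.
Schematically (names abbreviated here from the I40E_AQ_VSI_TC_QUE_*
definitions):

	uint16_t base, bsf, nb_queue;

	base     = (tc_mapping & QUE_OFFSET_MASK) >> QUE_OFFSET_SHIFT;
	bsf      = (tc_mapping & QUE_NUMBER_MASK) >> QUE_NUMBER_SHIFT;
	nb_queue = 1 << bsf;	/* queue count is stored as 2^bsf */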

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7d252fa..76e2353 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -220,6 +220,8 @@ static int i40e_dev_filter_ctrl(struct rte_eth_dev *dev,
 				enum rte_filter_type filter_type,
 				enum rte_filter_op filter_op,
 				void *arg);
+static void i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
+				  struct rte_eth_dcb_info *dcb_info);
 static void i40e_configure_registers(struct i40e_hw *hw);
 static void i40e_hw_init(struct i40e_hw *hw);
 static int i40e_config_qinq(struct i40e_hw *hw, struct i40e_vsi *vsi);
@@ -292,6 +294,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
 	.timesync_disable             = i40e_timesync_disable,
 	.timesync_read_rx_timestamp   = i40e_timesync_read_rx_timestamp,
 	.timesync_read_tx_timestamp   = i40e_timesync_read_tx_timestamp,
+	.get_dcb_info                 = i40e_dev_get_dcb_info,
 };
 
 static struct eth_driver rte_i40e_pmd = {
@@ -6808,3 +6811,42 @@ i40e_dcb_setup(struct rte_eth_dev *dev)
 	}
 	return 0;
 }
+
+static void
+i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
+		      struct rte_eth_dcb_info *dcb_info)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct i40e_vsi *vsi = pf->main_vsi;
+	struct i40e_dcbx_config *dcb_cfg = &hw->local_dcbx_config;
+	uint16_t bsf, tc_mapping;
+	int i;
+
+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+		dcb_info->nb_tcs =
+			dev->data->dev_conf.rx_adv_conf.dcb_rx_conf.nb_tcs;
+	else
+		dcb_info->nb_tcs = 1;
+	for (i = 0; i < I40E_MAX_USER_PRIORITY; i++)
+		dcb_info->prio_tc[i] = dcb_cfg->etscfg.prioritytable[i];
+	for (i = 0; i < dcb_info->nb_tcs; i++)
+		dcb_info->tc_bws[i] = dcb_cfg->etscfg.tcbwtable[i];
+
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (vsi->enabled_tc & (1 << i)) {
+			tc_mapping = rte_le_to_cpu_16(vsi->info.tc_mapping[i]);
+			/* only the main VSI supports multiple TCs */
+			dcb_info->tc_queue.tc_rxq[0][i].base =
+				(tc_mapping & I40E_AQ_VSI_TC_QUE_OFFSET_MASK) >>
+				I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT;
+			dcb_info->tc_queue.tc_txq[0][i].base =
+				dcb_info->tc_queue.tc_rxq[0][i].base;
+			bsf = (tc_mapping & I40E_AQ_VSI_TC_QUE_NUMBER_MASK) >>
+				I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT;
+			dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 1 << bsf;
+			dcb_info->tc_queue.tc_txq[0][i].nb_queue =
+				dcb_info->tc_queue.tc_rxq[0][i].nb_queue;
+		}
+	}
+}
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [dpdk-dev] [PATCH 8/8] app/testpmd: set up DCB forwarding based on traffic class
  2015-09-24  6:03 [dpdk-dev] [PATCH 0/8] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                   ` (6 preceding siblings ...)
  2015-09-24  6:03 ` [dpdk-dev] [PATCH 7/8] i40e: " Jingjing Wu
@ 2015-09-24  6:03 ` Jingjing Wu
  2015-10-28  1:46   ` Liu, Jijiang
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
  8 siblings, 1 reply; 40+ messages in thread
From: Jingjing Wu @ 2015-09-24  6:03 UTC (permalink / raw)
  To: dev; +Cc: yulong.pei

This patch changes the testpmd DCB forwarding streams to be set up
based on traffic class.
It also fixes some coding style issues.
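
With this scheme each forwarding core serves one traffic class: e.g.
with two ports and four TCs, the first four forwarding cores each take
one TC of port 0 (forwarding to the matching TC on port 1), and with
eight cores the reverse direction is covered as well. The path is
exercised through testpmd's existing DCB command, roughly:

	testpmd> port stop 0
	testpmd> port config 0 dcb vt off 4 pfc on
	testpmd> port start 0
	testpmd> start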

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 app/test-pmd/cmdline.c |  39 +++++++-----
 app/test-pmd/config.c  | 159 +++++++++++++++++++++----------------------------
 app/test-pmd/testpmd.c | 151 +++++++++++++++++++++++++---------------------
 app/test-pmd/testpmd.h |  23 +------
 4 files changed, 176 insertions(+), 196 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 0f8f48f..2ec855f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1999,37 +1999,46 @@ cmd_config_dcb_parsed(void *parsed_result,
                         __attribute__((unused)) void *data)
 {
 	struct cmd_config_dcb *res = parsed_result;
-	struct dcb_config dcb_conf;
 	portid_t port_id = res->port_id;
 	struct rte_port *port;
+	uint8_t pfc_en;
+	int ret;
 
 	port = &ports[port_id];
 	/** Check if the port is not started **/
 	if (port->port_status != RTE_PORT_STOPPED) {
-		printf("Please stop port %d first\n",port_id);
+		printf("Please stop port %d first\n", port_id);
 		return;
 	}
 
-	dcb_conf.num_tcs = (enum rte_eth_nb_tcs) res->num_tcs;
-	if ((dcb_conf.num_tcs != ETH_4_TCS) && (dcb_conf.num_tcs != ETH_8_TCS)){
-		printf("The invalid number of traffic class,only 4 or 8 allowed\n");
+	if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+		printf("The invalid number of traffic class,"
+			" only 4 or 8 allowed.\n");
 		return;
 	}
 
-	/* DCB in VT mode */
-	if (!strncmp(res->vt_en, "on",2))
-		dcb_conf.dcb_mode = DCB_VT_ENABLED;
+	if (nb_fwd_lcores < res->num_tcs) {
+		printf("nb_cores shouldn't be less than number of TCs.\n");
+		return;
+	}
+	if (!strncmp(res->pfc_en, "on", 2))
+		pfc_en = 1;
 	else
-		dcb_conf.dcb_mode = DCB_ENABLED;
+		pfc_en = 0;
 
-	if (!strncmp(res->pfc_en, "on",2)) {
-		dcb_conf.pfc_en = 1;
-	}
+	/* DCB in VT mode */
+	if (!strncmp(res->vt_en, "on", 2))
+		ret = init_port_dcb_config(port_id, DCB_VT_ENABLED,
+				(enum rte_eth_nb_tcs)res->num_tcs,
+				pfc_en);
 	else
-		dcb_conf.pfc_en = 0;
+		ret = init_port_dcb_config(port_id, DCB_ENABLED,
+				(enum rte_eth_nb_tcs)res->num_tcs,
+				pfc_en);
+
 
-	if (init_port_dcb_config(port_id,&dcb_conf) != 0) {
-		printf("Cannot initialize network ports\n");
+	if (ret != 0) {
+		printf("Cannot initialize network ports.\n");
 		return;
 	}
 
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cf2aa6e..e10da57 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1128,113 +1128,92 @@ rss_fwd_config_setup(void)
 	}
 }
 
-/*
- * In DCB and VT on,the mapping of 128 receive queues to 128 transmit queues.
- */
-static void
-dcb_rxq_2_txq_mapping(queueid_t rxq, queueid_t *txq)
-{
-	if(dcb_q_mapping == DCB_4_TCS_Q_MAPPING) {
-
-		if (rxq < 32)
-			/* tc0: 0-31 */
-			*txq = rxq;
-		else if (rxq < 64) {
-			/* tc1: 64-95 */
-			*txq =  (uint16_t)(rxq + 32);
-		}
-		else {
-			/* tc2: 96-111;tc3:112-127 */
-			*txq =  (uint16_t)(rxq/2 + 64);
-		}
-	}
-	else {
-		if (rxq < 16)
-			/* tc0 mapping*/
-			*txq = rxq;
-		else if (rxq < 32) {
-			/* tc1 mapping*/
-			 *txq = (uint16_t)(rxq + 16);
-		}
-		else if (rxq < 64) {
-			/*tc2,tc3 mapping */
-			*txq =  (uint16_t)(rxq + 32);
-		}
-		else {
-			/* tc4,tc5,tc6 and tc7 mapping */
-			*txq =  (uint16_t)(rxq/2 + 64);
-		}
-	}
-}
-
 /**
- * For the DCB forwarding test, each core is assigned on every port multi-transmit
- * queue.
+ * For the DCB forwarding test, each core is assigned to one traffic class.
  *
  * Each core is assigned a multi-stream, each stream being composed of
  * a RX queue to poll on a RX port for input messages, associated with
- * a TX queue of a TX port where to send forwarded packets.
- * All packets received on the RX queue of index "RxQj" of the RX port "RxPi"
- * are sent on the TX queue "TxQl" of the TX port "TxPk" according to the two
- * following rules:
- * In VT mode,
- *    - TxPk = (RxPi + 1) if RxPi is even, (RxPi - 1) if RxPi is odd
- *    - TxQl = RxQj
- * In non-VT mode,
- *    - TxPk = (RxPi + 1) if RxPi is even, (RxPi - 1) if RxPi is odd
- *    There is a mapping of RxQj to TxQl to be required,and the mapping was implemented
- *    in dcb_rxq_2_txq_mapping function.
+ * a TX queue of a TX port to which forwarded packets are sent. All RX and
+ * TX queues of a stream map to the same traffic class.
+ * If VMDQ and DCB co-exist, the same traffic class on different pools
+ * shares the same core.
  */
 static void
 dcb_fwd_config_setup(void)
 {
-	portid_t   rxp;
-	portid_t   txp;
-	queueid_t  rxq;
-	queueid_t  nb_q;
+	struct rte_eth_dcb_info rxp_dcb_info, txp_dcb_info;
+	portid_t txp, rxp = 0;
+	queueid_t txq, rxq = 0;
 	lcoreid_t  lc_id;
-	uint16_t sm_id;
-
-	nb_q = nb_rxq;
+	uint16_t nb_rx_queue, nb_tx_queue;
+	uint16_t i, j, k, sm_id = 0;
+	uint8_t tc = 0;
 
 	cur_fwd_config.nb_fwd_lcores = (lcoreid_t) nb_fwd_lcores;
 	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
 	cur_fwd_config.nb_fwd_streams =
-		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
+		(streamid_t) (nb_rxq * cur_fwd_config.nb_fwd_ports);
 
 	/* reinitialize forwarding streams */
 	init_fwd_streams();
+	sm_id = 0;
+	if ((rxp & 0x1) == 0)
+		txp = (portid_t) (rxp + 1);
+	else
+		txp = (portid_t) (rxp - 1);
+	/* get the dcb info on the first RX and TX ports */
+	rte_eth_dev_get_dcb_info(fwd_ports_ids[rxp], &rxp_dcb_info);
+	rte_eth_dev_get_dcb_info(fwd_ports_ids[txp], &txp_dcb_info);
 
-	setup_fwd_config_of_each_lcore(&cur_fwd_config);
-	rxp = 0; rxq = 0;
 	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
-		/* a fwd core can run multi-streams */
-		for (sm_id = 0; sm_id < fwd_lcores[lc_id]->stream_nb; sm_id++)
-		{
-			struct fwd_stream *fs;
-			fs = fwd_streams[fwd_lcores[lc_id]->stream_idx + sm_id];
-			if ((rxp & 0x1) == 0)
-				txp = (portid_t) (rxp + 1);
-			else
-				txp = (portid_t) (rxp - 1);
-			fs->rx_port = fwd_ports_ids[rxp];
-			fs->rx_queue = rxq;
-			fs->tx_port = fwd_ports_ids[txp];
-			if (dcb_q_mapping == DCB_VT_Q_MAPPING)
-				fs->tx_queue = rxq;
-			else
-				dcb_rxq_2_txq_mapping(rxq, &fs->tx_queue);
-			fs->peer_addr = fs->tx_port;
-			rxq = (queueid_t) (rxq + 1);
-			if (rxq < nb_q)
-				continue;
-			rxq = 0;
-			if (numa_support && (nb_fwd_ports <= (nb_ports >> 1)))
-				rxp = (portid_t)
-					(rxp + ((nb_ports >> 1) / nb_fwd_ports));
-			else
-				rxp = (portid_t) (rxp + 1);
+		fwd_lcores[lc_id]->stream_nb = 0;
+		fwd_lcores[lc_id]->stream_idx = sm_id;
+		for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+			/* if nb_queue is zero, this tc is not
+			 * enabled on the pool
+			 */
+			if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0)
+				break;
+			k = fwd_lcores[lc_id]->stream_nb +
+				fwd_lcores[lc_id]->stream_idx;
+			rxq = rxp_dcb_info.tc_queue.tc_rxq[i][tc].base;
+			txq = txp_dcb_info.tc_queue.tc_txq[i][tc].base;
+			nb_rx_queue = txp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue;
+			nb_tx_queue = txp_dcb_info.tc_queue.tc_txq[i][tc].nb_queue;
+			for (j = 0; j < nb_rx_queue; j++) {
+				struct fwd_stream *fs;
+
+				fs = fwd_streams[k + j];
+				fs->rx_port = fwd_ports_ids[rxp];
+				fs->rx_queue = rxq + j;
+				fs->tx_port = fwd_ports_ids[txp];
+				fs->tx_queue = txq + j % nb_tx_queue;
+				fs->peer_addr = fs->tx_port;
+			}
+			fwd_lcores[lc_id]->stream_nb +=
+				rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue;
 		}
+		sm_id = (streamid_t) (sm_id + fwd_lcores[lc_id]->stream_nb);
+
+		tc++;
+		if (tc < rxp_dcb_info.nb_tcs)
+			continue;
+		/* Restart from TC 0 on next RX port */
+		tc = 0;
+		if (numa_support && (nb_fwd_ports <= (nb_ports >> 1)))
+			rxp = (portid_t)
+				(rxp + ((nb_ports >> 1) / nb_fwd_ports));
+		else
+			rxp++;
+		if (rxp >= nb_fwd_ports)
+			return;
+		/* get the dcb information on next RX and TX ports */
+		if ((rxp & 0x1) == 0)
+			txp = (portid_t) (rxp + 1);
+		else
+			txp = (portid_t) (rxp - 1);
+		rte_eth_dev_get_dcb_info(fwd_ports_ids[rxp], &rxp_dcb_info);
+		rte_eth_dev_get_dcb_info(fwd_ports_ids[txp], &txp_dcb_info);
 	}
 }
 
@@ -1354,10 +1333,6 @@ pkt_fwd_config_display(struct fwd_config *cfg)
 void
 fwd_config_display(void)
 {
-	if((dcb_config) && (nb_fwd_lcores == 1)) {
-		printf("In DCB mode,the nb forwarding cores should be larger than 1\n");
-		return;
-	}
 	fwd_config_setup();
 	pkt_fwd_config_display(&cur_fwd_config);
 }
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index c8ae909..25dadbc 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -182,9 +182,6 @@ uint8_t dcb_config = 0;
 /* Whether the dcb is in testing status */
 uint8_t dcb_test = 0;
 
-/* DCB on and VT on mapping is default */
-enum dcb_queue_mapping_mode dcb_q_mapping = DCB_VT_Q_MAPPING;
-
 /*
  * Configurable number of RX/TX queues.
  */
@@ -1840,115 +1837,131 @@ const uint16_t vlan_tags[] = {
 };
 
 static  int
-get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
+get_eth_dcb_conf(struct rte_eth_conf *eth_conf,
+		 enum dcb_mode_enable dcb_mode,
+		 enum rte_eth_nb_tcs num_tcs,
+		 uint8_t pfc_en)
 {
-        uint8_t i;
+	uint8_t i;
 
 	/*
 	 * Builds up the correct configuration for dcb+vt based on the vlan tags array
 	 * given above, and the number of traffic classes available for use.
 	 */
-	if (dcb_conf->dcb_mode == DCB_VT_ENABLED) {
-		struct rte_eth_vmdq_dcb_conf vmdq_rx_conf;
-		struct rte_eth_vmdq_dcb_tx_conf vmdq_tx_conf;
+	if (dcb_mode == DCB_VT_ENABLED) {
+		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
+				&eth_conf->rx_adv_conf.vmdq_dcb_conf;
+		struct rte_eth_vmdq_dcb_tx_conf *vmdq_tx_conf =
+				&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 
 		/* VMDQ+DCB RX and TX configrations */
-		vmdq_rx_conf.enable_default_pool = 0;
-		vmdq_rx_conf.default_pool = 0;
-		vmdq_rx_conf.nb_queue_pools =
-			(dcb_conf->num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
-		vmdq_tx_conf.nb_queue_pools =
-			(dcb_conf->num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
-
-		vmdq_rx_conf.nb_pool_maps = sizeof( vlan_tags )/sizeof( vlan_tags[ 0 ]);
-		for (i = 0; i < vmdq_rx_conf.nb_pool_maps; i++) {
-			vmdq_rx_conf.pool_map[i].vlan_id = vlan_tags[ i ];
-			vmdq_rx_conf.pool_map[i].pools = 1 << (i % vmdq_rx_conf.nb_queue_pools);
+		vmdq_rx_conf->enable_default_pool = 0;
+		vmdq_rx_conf->default_pool = 0;
+		vmdq_rx_conf->nb_queue_pools =
+			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+		vmdq_tx_conf->nb_queue_pools =
+			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+
+		vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
+		for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
+			vmdq_rx_conf->pool_map[i].vlan_id = vlan_tags[i];
+			vmdq_rx_conf->pool_map[i].pools =
+				1 << (i % vmdq_rx_conf->nb_queue_pools);
 		}
 		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-			vmdq_rx_conf.dcb_tc[i] = i;
-			vmdq_tx_conf.dcb_tc[i] = i;
+			vmdq_rx_conf->dcb_tc[i] = i;
+			vmdq_tx_conf->dcb_tc[i] = i;
 		}
 
-		/*set DCB mode of RX and TX of multiple queues*/
+		/* set DCB mode of RX and TX of multiple queues */
 		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB;
 		eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
-		if (dcb_conf->pfc_en)
-			eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT|ETH_DCB_PFC_SUPPORT;
-		else
-			eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
-
-		(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_dcb_conf, &vmdq_rx_conf,
-                                sizeof(struct rte_eth_vmdq_dcb_conf)));
-		(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &vmdq_tx_conf,
-                                sizeof(struct rte_eth_vmdq_dcb_tx_conf)));
-	}
-	else {
-		struct rte_eth_dcb_rx_conf rx_conf;
-		struct rte_eth_dcb_tx_conf tx_conf;
-
-		/* queue mapping configuration of DCB RX and TX */
-		if (dcb_conf->num_tcs == ETH_4_TCS)
-			dcb_q_mapping = DCB_4_TCS_Q_MAPPING;
-		else
-			dcb_q_mapping = DCB_8_TCS_Q_MAPPING;
-
-		rx_conf.nb_tcs = dcb_conf->num_tcs;
-		tx_conf.nb_tcs = dcb_conf->num_tcs;
-
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
-			rx_conf.dcb_tc[i] = i;
-			tx_conf.dcb_tc[i] = i;
+	} else {
+		struct rte_eth_dcb_rx_conf *rx_conf =
+				&eth_conf->rx_adv_conf.dcb_rx_conf;
+		struct rte_eth_dcb_tx_conf *tx_conf =
+				&eth_conf->tx_adv_conf.dcb_tx_conf;
+
+		rx_conf->nb_tcs = num_tcs;
+		tx_conf->nb_tcs = num_tcs;
+
+		for (i = 0; i < num_tcs; i++) {
+			rx_conf->dcb_tc[i] = i;
+			tx_conf->dcb_tc[i] = i;
 		}
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_DCB;
+		eth_conf->rxmode.mq_mode = ETH_MQ_RX_DCB_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = rss_hf;
 		eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
-		if (dcb_conf->pfc_en)
-			eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT|ETH_DCB_PFC_SUPPORT;
-		else
-			eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
-
-		(void)(rte_memcpy(&eth_conf->rx_adv_conf.dcb_rx_conf, &rx_conf,
-                                sizeof(struct rte_eth_dcb_rx_conf)));
-		(void)(rte_memcpy(&eth_conf->tx_adv_conf.dcb_tx_conf, &tx_conf,
-                                sizeof(struct rte_eth_dcb_tx_conf)));
 	}
 
+	if (pfc_en)
+		eth_conf->dcb_capability_en =
+				ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+	else
+		eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+
 	return 0;
 }
 
 int
-init_port_dcb_config(portid_t pid,struct dcb_config *dcb_conf)
+init_port_dcb_config(portid_t pid,
+		     enum dcb_mode_enable dcb_mode,
+		     enum rte_eth_nb_tcs num_tcs,
+		     uint8_t pfc_en)
 {
 	struct rte_eth_conf port_conf;
+	struct rte_eth_dev_info dev_info;
 	struct rte_port *rte_port;
 	int retval;
-	uint16_t nb_vlan;
 	uint16_t i;
 
-	/* rxq and txq configuration in dcb mode */
-	nb_rxq = 128;
-	nb_txq = 128;
+	rte_eth_dev_info_get(pid, &dev_info);
+
+	/* If dev_info.vmdq_pool_base is greater than 0,
+	 * the queue ids of the vmdq pools start after the pf queues.
+	 */
+	if (dcb_mode == DCB_VT_ENABLED && dev_info.vmdq_pool_base > 0) {
+		printf("VMDQ_DCB multi-queue mode is nonsensical"
+			" for port %d.", pid);
+		return -1;
+	}
+
+	/* Assume the ports in testpmd have the same dcb capability
+	 * and have the same number of rxq and txq in dcb mode
+	 */
+	if (dcb_mode == DCB_VT_ENABLED) {
+		nb_rxq = dev_info.max_rx_queues;
+		nb_txq = dev_info.max_tx_queues;
+	} else {
+		/* if VT is disabled, use all pf queues */
+		if (dev_info.vmdq_pool_base == 0) {
+			nb_rxq = dev_info.max_rx_queues;
+			nb_txq = dev_info.max_tx_queues;
+		} else {
+			nb_rxq = (queueid_t)num_tcs;
+			nb_txq = (queueid_t)num_tcs;
+
+		}
+	}
 	rx_free_thresh = 64;
 
-	memset(&port_conf,0,sizeof(struct rte_eth_conf));
+	memset(&port_conf, 0, sizeof(struct rte_eth_conf));
 	/* Enter DCB configuration status */
 	dcb_config = 1;
 
-	nb_vlan = sizeof( vlan_tags )/sizeof( vlan_tags[ 0 ]);
 	/*set configuration of DCB in vt mode and DCB in non-vt mode*/
-	retval = get_eth_dcb_conf(&port_conf, dcb_conf);
+	retval = get_eth_dcb_conf(&port_conf, dcb_mode, num_tcs, pfc_en);
 	if (retval < 0)
 		return retval;
 
 	rte_port = &ports[pid];
-	memcpy(&rte_port->dev_conf, &port_conf,sizeof(struct rte_eth_conf));
+	memcpy(&rte_port->dev_conf, &port_conf, sizeof(struct rte_eth_conf));
 
 	rxtx_port_config(rte_port);
 	/* VLAN filter */
 	rte_port->dev_conf.rxmode.hw_vlan_filter = 1;
-	for (i = 0; i < nb_vlan; i++){
+	for (i = 0; i < RTE_DIM(vlan_tags); i++)
 		rx_vft_set(pid, vlan_tags[i], 1);
-	}
 
 	rte_eth_macaddr_get(pid, &rte_port->eth_addr);
 	map_port_queue_stats_mapping_registers(pid, rte_port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index d287274..5818fdd 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -255,25 +255,6 @@ enum dcb_mode_enable
 	DCB_ENABLED
 };
 
-/*
- * DCB general config info
- */
-struct dcb_config {
-	enum dcb_mode_enable dcb_mode;
-	uint8_t vt_en;
-	enum rte_eth_nb_tcs num_tcs;
-	uint8_t pfc_en;
-};
-
-/*
- * In DCB io FWD mode, 128 RX queue to 128 TX queue mapping
- */
-enum dcb_queue_mapping_mode {
-	DCB_VT_Q_MAPPING = 0,
-	DCB_4_TCS_Q_MAPPING,
-	DCB_8_TCS_Q_MAPPING
-};
-
 #define MAX_TX_QUEUE_STATS_MAPPINGS 1024 /* MAX_PORT of 32 @ 32 tx_queues/port */
 #define MAX_RX_QUEUE_STATS_MAPPINGS 4096 /* MAX_PORT of 32 @ 128 rx_queues/port */
 
@@ -537,7 +518,9 @@ void dev_set_link_down(portid_t pid);
 void init_port_config(void);
 void set_port_slave_flag(portid_t slave_pid);
 void clear_port_slave_flag(portid_t slave_pid);
-int init_port_dcb_config(portid_t pid,struct dcb_config *dcb_conf);
+int init_port_dcb_config(portid_t pid, enum dcb_mode_enable dcb_mode,
+		     enum rte_eth_nb_tcs num_tcs,
+		     uint8_t pfc_en);
 int start_port(portid_t pid);
 void stop_port(portid_t pid);
 void close_port(portid_t pid);
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [dpdk-dev] [PATCH 7/8] i40e: get_dcb_info ops implement
  2015-09-24  6:03 ` [dpdk-dev] [PATCH 7/8] i40e: " Jingjing Wu
@ 2015-10-22  7:10   ` Liu, Jijiang
  2015-10-26  7:38     ` Wu, Jingjing
  0 siblings, 1 reply; 40+ messages in thread
From: Liu, Jijiang @ 2015-10-22  7:10 UTC (permalink / raw)
  To: Wu, Jingjing, dev; +Cc: Pei, Yulong



> -----Original Message-----
> From: Wu, Jingjing
> Sent: Thursday, September 24, 2015 2:03 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing; Liu, Jijiang; Zhang, Helin; Tao, Zhe; Pei, Yulong
> Subject: [PATCH 7/8] i40e: get_dcb_info ops implement
> 
> This patch implements the get_dcb_info ops in the i40e driver.
> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
> [...]

It would be great if there were some command lines in testpmd to get the DCB information.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [dpdk-dev] [PATCH 7/8] i40e: get_dcb_info ops implement
  2015-10-22  7:10   ` Liu, Jijiang
@ 2015-10-26  7:38     ` Wu, Jingjing
  0 siblings, 0 replies; 40+ messages in thread
From: Wu, Jingjing @ 2015-10-26  7:38 UTC (permalink / raw)
  To: Liu, Jijiang, dev; +Cc: Pei, Yulong



> -----Original Message-----
> From: Liu, Jijiang
> Sent: Thursday, October 22, 2015 3:10 PM
> To: Wu, Jingjing; dev@dpdk.org
> Cc: Zhang, Helin; Tao, Zhe; Pei, Yulong
> Subject: RE: [PATCH 7/8] i40e: get_dcb_info ops implement
> 
> > [...]
> 
> It would be great if there were some command lines in testpmd to get the
> DCB information.

Yes. Will think about it in V2 patch.

Thanks.
Jingjing 

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [dpdk-dev] [PATCH 8/8] app/testpmd: set up DCB forwarding based on traffic class
  2015-09-24  6:03 ` [dpdk-dev] [PATCH 8/8] app/testpmd: set up DCB forwarding based on traffic class Jingjing Wu
@ 2015-10-28  1:46   ` Liu, Jijiang
  2015-10-28  2:04     ` Wu, Jingjing
  0 siblings, 1 reply; 40+ messages in thread
From: Liu, Jijiang @ 2015-10-28  1:46 UTC (permalink / raw)
  To: Wu, Jingjing, dev; +Cc: Pei, Yulong



> -----Original Message-----
> From: Wu, Jingjing
> Sent: Thursday, September 24, 2015 2:03 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing; Liu, Jijiang; Zhang, Helin; Tao, Zhe; Pei, Yulong
> Subject: [PATCH 8/8] app/testpmd: set up DCB forwarding based on traffic
> class
> 
> This patch changes the testpmd DCB forwarding streams to be set up based
> on traffic class.
> It also fixes some coding style issues.
> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
> [...]
> 
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index cf2aa6e..e10da57 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1128,113 +1128,92 @@ rss_fwd_config_setup(void)
>  	}
>  }
> 
> -/*
> - * In DCB and VT on,the mapping of 128 receive queues to 128 transmit queues.
> - */
> -static void
> -dcb_rxq_2_txq_mapping(queueid_t rxq, queueid_t *txq)
> -{
> -	if(dcb_q_mapping == DCB_4_TCS_Q_MAPPING) {
> -
> -		if (rxq < 32)
> -			/* tc0: 0-31 */
> -			*txq = rxq;
> -		else if (rxq < 64) {
> -			/* tc1: 64-95 */
> -			*txq =  (uint16_t)(rxq + 32);
> -		}
> -		else {
> -			/* tc2: 96-111;tc3:112-127 */
> -			*txq =  (uint16_t)(rxq/2 + 64);
> -		}
> -	}
> -	else {
> -		if (rxq < 16)
> -			/* tc0 mapping*/
> -			*txq = rxq;
> -		else if (rxq < 32) {
> -			/* tc1 mapping*/
> -			 *txq = (uint16_t)(rxq + 16);
> -		}
> -		else if (rxq < 64) {
> -			/*tc2,tc3 mapping */
> -			*txq =  (uint16_t)(rxq + 32);
> -		}
> -		else {
> -			/* tc4,tc5,tc6 and tc7 mapping */
> -			*txq =  (uint16_t)(rxq/2 + 64);
> -		}
> -	}
> -}
This code is removed, so how do we guarantee that the DCB function of the 82599 NIC still works normally?

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [dpdk-dev] [PATCH 8/8] app/testpmd: set up DCB forwarding based on traffic class
  2015-10-28  1:46   ` Liu, Jijiang
@ 2015-10-28  2:04     ` Wu, Jingjing
  0 siblings, 0 replies; 40+ messages in thread
From: Wu, Jingjing @ 2015-10-28  2:04 UTC (permalink / raw)
  To: Liu, Jijiang, dev; +Cc: Pei, Yulong



> -----Original Message-----
> From: Liu, Jijiang
> Sent: Wednesday, October 28, 2015 9:46 AM
> To: Wu, Jingjing; dev@dpdk.org
> Cc: Zhang, Helin; Tao, Zhe; Pei, Yulong
> Subject: RE: [PATCH 8/8] app/testpmd: set up DCB forwarding based on
> traffic class
> 
> > -}
> This code is removed, so how do we guarantee that the DCB function of the
> 82599 NIC still works normally?

In this patch, the mapping relationship is no longer defined in testpmd.
It is queried via the rte_eth_dev_get_dcb_info API, and the forwarding is
set up based on TC, so the DCB function of the 82599 NIC in testpmd still
works.
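
(For reference: in the new dcb_fwd_config_setup(), each stream picks its
TX queue within the TC as txq base + j % nb_tx_queue, so the RX queues of
a TC fold onto however many TX queues that TC owns on the peer port.)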

Thanks
Jingjing

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC
  2015-09-24  6:03 [dpdk-dev] [PATCH 0/8] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                   ` (7 preceding siblings ...)
  2015-09-24  6:03 ` [dpdk-dev] [PATCH 8/8] app/testpmd: set up DCB forwarding based on traffic class Jingjing Wu
@ 2015-10-29  8:53 ` Jingjing Wu
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 01/10] ethdev: rename dcb_queue to dcb_tc in dcb config struct Jingjing Wu
                     ` (12 more replies)
  8 siblings, 13 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-29  8:53 UTC (permalink / raw)
  To: dev

The patch set enables DCB feature on Intel XL710/X710 NICs, including:
  - Receive queue classification based on traffic class
  - Round Robin ETS schedule (rx and tx).
  - Priority flow control
To make testpmd and the ethdev lib more generic on the DCB feature,
this patch set also
  - adds a new API to get DCB related information on NICs.
  - changes the DCB test forwarding in testpmd to be on traffic class.
  - moves specific validation from the lib and application to drivers.
Additionally, this patch set also corrects some coding style issues.

v2 changes:
 - add a command in testpmd to display dcb info
 - update testpmd guide and release note

Jingjing Wu (10):
  ethdev: rename dcb_queue to dcb_tc in dcb config struct
  ethdev: move the multi-queue checking to specific drivers
  i40e: enable DCB feature on FVL
  ixgbe: enable DCB+RSS multi-queue mode
  ethdev: new API to get dcb related information
  ixgbe: get_dcb_info ops implement
  i40e: get_dcb_info ops implement
  app/testpmd: set up DCB forwarding based on traffic class
  app/testpmd: add command to display DCB info
  doc: update testpmd guide and release note

 app/test-pmd/cmdline.c                      |  54 ++-
 app/test-pmd/config.c                       | 202 +++++-----
 app/test-pmd/testpmd.c                      | 151 ++++----
 app/test-pmd/testpmd.h                      |  24 +-
 doc/guides/rel_notes/release_2_2.rst        |   6 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  12 +-
 drivers/net/e1000/igb_ethdev.c              |  84 +++-
 drivers/net/i40e/i40e_ethdev.c              | 574 +++++++++++++++++++++++++++-
 drivers/net/i40e/i40e_ethdev.h              |  14 +
 drivers/net/i40e/i40e_rxtx.c                |  32 +-
 drivers/net/i40e/i40e_rxtx.h                |   2 +
 drivers/net/ixgbe/ixgbe_ethdev.c            | 251 ++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h            |   3 +
 drivers/net/ixgbe/ixgbe_rxtx.c              |  58 +--
 examples/vmdq_dcb/main.c                    |   4 +-
 lib/librte_ether/rte_ethdev.c               | 217 +----------
 lib/librte_ether/rte_ethdev.h               |  68 +++-
 17 files changed, 1303 insertions(+), 453 deletions(-)

-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [dpdk-dev] [PATCH v2 01/10] ethdev: rename dcb_queue to dcb_tc in dcb config struct
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
@ 2015-10-29  8:53   ` Jingjing Wu
  2015-10-30 10:22     ` Thomas Monjalon
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 02/10] ethdev: move the multi-queue checking to specific drivers Jingjing Wu
                     ` (11 subsequent siblings)
  12 siblings, 1 reply; 40+ messages in thread
From: Jingjing Wu @ 2015-10-29  8:53 UTC (permalink / raw)
  To: dev

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 app/test-pmd/testpmd.c         |  8 ++++----
 drivers/net/ixgbe/ixgbe_rxtx.c | 10 +++++-----
 examples/vmdq_dcb/main.c       |  4 ++--
 lib/librte_ether/rte_ethdev.h  | 14 +++++++-------
 4 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 2578b6b..8b8eb7d 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1875,8 +1875,8 @@ get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
 			vmdq_rx_conf.pool_map[i].pools = 1 << (i % vmdq_rx_conf.nb_queue_pools);
 		}
 		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-			vmdq_rx_conf.dcb_queue[i] = i;
-			vmdq_tx_conf.dcb_queue[i] = i;
+			vmdq_rx_conf.dcb_tc[i] = i;
+			vmdq_tx_conf.dcb_tc[i] = i;
 		}
 
 		/*set DCB mode of RX and TX of multiple queues*/
@@ -1906,8 +1906,8 @@ get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
 		tx_conf.nb_tcs = dcb_conf->num_tcs;
 
 		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
-			rx_conf.dcb_queue[i] = i;
-			tx_conf.dcb_queue[i] = i;
+			rx_conf.dcb_tc[i] = i;
+			tx_conf.dcb_tc[i] = i;
 		}
 		eth_conf->rxmode.mq_mode = ETH_MQ_RX_DCB;
 		eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index a598a72..d331ef5 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2903,7 +2903,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
 		 */
-		queue_mapping |= ((cfg->dcb_queue[i] & 0x07) << (i * 3));
+		queue_mapping |= ((cfg->dcb_tc[i] & 0x07) << (i * 3));
 
 	IXGBE_WRITE_REG(hw, IXGBE_RTRUP2TC, queue_mapping);
 
@@ -3038,7 +3038,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 	/* User Priority to Traffic Class mapping */
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-		j = vmdq_rx_conf->dcb_queue[i];
+		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap =
 						(uint8_t)(1 << j);
@@ -3066,7 +3066,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 
 	/* User Priority to Traffic Class mapping */
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-		j = vmdq_tx_conf->dcb_queue[i];
+		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap =
 						(uint8_t)(1 << j);
@@ -3088,7 +3088,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
 
 	/* User Priority to Traffic Class mapping */
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-		j = rx_conf->dcb_queue[i];
+		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap =
 						(uint8_t)(1 << j);
@@ -3109,7 +3109,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
 
 	/* User Priority to Traffic Class mapping */
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-		j = tx_conf->dcb_queue[i];
+		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap =
 						(uint8_t)(1 << j);
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index c31c2ce..b90ac28 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -107,7 +107,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 			.default_pool = 0,
 			.nb_pool_maps = 0,
 			.pool_map = {{0, 0},},
-			.dcb_queue = {0},
+			.dcb_tc = {0},
 		},
 	},
 };
@@ -144,7 +144,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf, enum rte_eth_nb_pools num_pools)
 		conf.pool_map[i].pools = 1 << (i % num_pools);
 	}
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
-		conf.dcb_queue[i] = (uint8_t)(i % (NUM_QUEUES/num_pools));
+		conf.dcb_tc[i] = (uint8_t)(i % (NUM_QUEUES/num_pools));
 	}
 	(void)(rte_memcpy(eth_conf, &vmdq_dcb_conf_default, sizeof(*eth_conf)));
 	(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_dcb_conf, &conf,
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 8a8c82b..377da6a 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -543,20 +543,20 @@ enum rte_eth_nb_pools {
 /* This structure may be extended in future. */
 struct rte_eth_dcb_rx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */
-	uint8_t dcb_queue[ETH_DCB_NUM_USER_PRIORITIES];
-	/**< Possible DCB queue,4 or 8. */
+	/** Traffic class each UP mapped to. */
+	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_dcb_tx_conf {
 	enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */
-	uint8_t dcb_queue[ETH_DCB_NUM_USER_PRIORITIES];
-	/**< Possible DCB queue,4 or 8. */
+	/** Traffic class each UP mapped to. */
+	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_dcb_tx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */
-	uint8_t dcb_queue[ETH_DCB_NUM_USER_PRIORITIES];
-	/**< Possible DCB queue,4 or 8. */
+	/** Traffic class each UP mapped to. */
+	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_tx_conf {
@@ -583,7 +583,7 @@ struct rte_eth_vmdq_dcb_conf {
 		uint16_t vlan_id; /**< The vlan id of the received frame */
 		uint64_t pools;   /**< Bitmask of pools for packet rx */
 	} pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
-	uint8_t dcb_queue[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
 	/**< Selects a queue in a pool */
 };
 
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [dpdk-dev] [PATCH v2 02/10] ethdev: move the multi-queue checking to specific drivers
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 01/10] ethdev: rename dcb_queue to dcb_tc in dcb config struct Jingjing Wu
@ 2015-10-29  8:53   ` Jingjing Wu
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 03/10] i40e: enable DCB feature on FVL Jingjing Wu
                     ` (10 subsequent siblings)
  12 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-29  8:53 UTC (permalink / raw)
  To: dev

Different NICs have their own specific constraints on the multi-queue
configuration, so move the checking from the ethdev lib to the drivers.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
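As a reference for the pattern being introduced: a minimal sketch of a PMD
wiring such a check into its .dev_configure callback. The mypmd_* names are
hypothetical and the check itself is deliberately trivial.

#include <errno.h>
#include <rte_ethdev.h>

static int
mypmd_check_mq_mode(struct rte_eth_dev *dev)
{
	/* reject multi-queue modes this hypothetical HW cannot handle */
	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
		return -EINVAL;
	return 0;
}

static int
mypmd_dev_configure(struct rte_eth_dev *dev)
{
	int ret = mypmd_check_mq_mode(dev);

	if (ret != 0)
		return ret;
	/* ... remaining device configuration ... */
	return 0;
}
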
 drivers/net/e1000/igb_ethdev.c   |  84 ++++++++++++++++-
 drivers/net/ixgbe/ixgbe_ethdev.c | 171 +++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |   3 +
 lib/librte_ether/rte_ethdev.c    | 199 ---------------------------------------
 4 files changed, 257 insertions(+), 200 deletions(-)

diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 848ef6e..d9c13d9 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -866,16 +866,98 @@ rte_igbvf_pmd_init(const char *name __rte_unused, const char *params __rte_unuse
 }
 
 static int
+igb_check_mq_mode(struct rte_eth_dev *dev)
+{
+	enum rte_eth_rx_mq_mode rx_mq_mode = dev->data->dev_conf.rxmode.mq_mode;
+	enum rte_eth_tx_mq_mode tx_mq_mode = dev->data->dev_conf.txmode.mq_mode;
+	uint16_t nb_rx_q = dev->data->nb_rx_queues;
+	uint16_t nb_tx_q = dev->data->nb_tx_queues;
+
+	if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
+	    tx_mq_mode == ETH_MQ_TX_DCB ||
+	    tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
+		return -EINVAL;
+	}
+	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
+		/* Check multi-queue mode.
+		 * To not break software we accept ETH_MQ_RX_NONE as this might
+		 * be used to turn off VLAN filter.
+		 */
+
+		if (rx_mq_mode == ETH_MQ_RX_NONE ||
+		    rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+			RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+		} else {
+			/* Only support one queue on VFs.
+			 * RSS together with SRIOV is not supported.
+			 */
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" wrong mq_mode rx %d.",
+					rx_mq_mode);
+			return -EINVAL;
+		}
+		/* TX mode is not used here, so the setting is ignored. */
+		if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+			/* SRIOV only works in VMDq enable mode */
+			PMD_INIT_LOG(WARNING, "SRIOV is active,"
+					" TX mode %d is not supported. "
+					" Driver will behave as if mode %d was set.",
+					tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+		}
+
+		/* check valid queue number */
+		if ((nb_rx_q > 1) || (nb_tx_q > 1)) {
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" only support one queue on VFs.");
+			return -EINVAL;
+		}
+	} else {
+		/* To not break software that sets an invalid mode, only display
+		 * a warning if an invalid mode is used.
+		 */
+		if (rx_mq_mode != ETH_MQ_RX_NONE &&
+		    rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
+		    rx_mq_mode != ETH_MQ_RX_RSS) {
+			/* RSS together with VMDq not supported*/
+			PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
+				     rx_mq_mode);
+			return -EINVAL;
+		}
+
+		if (tx_mq_mode != ETH_MQ_TX_NONE &&
+		    tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+			PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
+					" Since txmode is meaningless in this"
+					" driver, it is just ignored.",
+					tx_mq_mode);
+		}
+	}
+	return 0;
+}
+
+static int
 eth_igb_configure(struct rte_eth_dev *dev)
 {
 	struct e1000_interrupt *intr =
 		E1000_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
+	int ret;
 
 	PMD_INIT_FUNC_TRACE();
+
+	/* multiple queue mode checking */
+	ret  = igb_check_mq_mode(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "igb_check_mq_mode fails with %d.",
+			    ret);
+		return ret;
+	}
+
 	intr->flags |= E1000_FLAG_NEED_LINK_UPDATE;
 	PMD_INIT_FUNC_TRACE();
 
-	return (0);
+	return 0;
 }
 
 static int
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index ec2918c..a7dca55 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1636,14 +1636,185 @@ ixgbe_vmdq_vlan_hw_filter_enable(struct rte_eth_dev *dev)
 }
 
 static int
+ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
+{
+	switch (nb_rx_q) {
+	case 1:
+	case 2:
+		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		break;
+	case 4:
+		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = nb_rx_q;
+	RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx = dev->pci_dev->max_vfs * nb_rx_q;
+
+	return 0;
+}
+
+static int
+ixgbe_check_mq_mode(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+	uint16_t nb_rx_q = dev->data->nb_rx_queues;
+	uint16_t nb_tx_q = dev->data->nb_tx_queues;
+
+	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
+		/* check multi-queue mode */
+		switch (dev_conf->rxmode.mq_mode) {
+		case ETH_MQ_RX_VMDQ_DCB:
+		case ETH_MQ_RX_VMDQ_DCB_RSS:
+			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
+			PMD_INIT_LOG(ERR, "SRIOV active,"
+					" unsupported mq_mode rx %d.",
+					dev_conf->rxmode.mq_mode);
+			return -EINVAL;
+		case ETH_MQ_RX_RSS:
+		case ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
+				if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
+					PMD_INIT_LOG(ERR, "SRIOV is active,"
+						" invalid queue number"
+						" for VMDQ RSS, allowed"
+						" value are 1, 2 or 4.");
+					return -EINVAL;
+				}
+			break;
+		case ETH_MQ_RX_VMDQ_ONLY:
+		case ETH_MQ_RX_NONE:
+			/* if nothing mq mode configure, use default scheme */
+			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+			if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
+				RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+			break;
+		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+			/* SRIOV only works in VMDq enable mode */
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" wrong mq_mode rx %d.",
+					dev_conf->rxmode.mq_mode);
+			return -EINVAL;
+		}
+
+		switch (dev_conf->txmode.mq_mode) {
+		case ETH_MQ_TX_VMDQ_DCB:
+			/* DCB VMDQ in SRIOV mode, not implement yet */
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" unsupported VMDQ mq_mode tx %d.",
+					dev_conf->txmode.mq_mode);
+			return -EINVAL;
+		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+			break;
+		}
+
+		/* check valid queue number */
+		if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
+		    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" queue number must be less than or equal to %d.",
+					RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
+			return -EINVAL;
+		}
+	} else {
+		/* check configuration for vmdq+dcb mode */
+		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+			const struct rte_eth_vmdq_dcb_conf *conf;
+
+			if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
+				PMD_INIT_LOG(ERR, "VMDQ+DCB, nb_rx_q != %d.",
+						IXGBE_VMDQ_DCB_NB_QUEUES);
+				return -EINVAL;
+			}
+			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
+			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
+			       conf->nb_queue_pools == ETH_32_POOLS)) {
+				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
+						" nb_queue_pools must be %d or %d.",
+						ETH_16_POOLS, ETH_32_POOLS);
+				return -EINVAL;
+			}
+		}
+		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+			const struct rte_eth_vmdq_dcb_tx_conf *conf;
+
+			if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
+				PMD_INIT_LOG(ERR, "VMDQ+DCB, nb_tx_q != %d",
+						 IXGBE_VMDQ_DCB_NB_QUEUES);
+				return -EINVAL;
+			}
+			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
+			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
+			       conf->nb_queue_pools == ETH_32_POOLS)) {
+				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
+						" nb_queue_pools != %d and"
+						" nb_queue_pools != %d.",
+						ETH_16_POOLS, ETH_32_POOLS);
+				return -EINVAL;
+			}
+		}
+
+		/* For DCB mode check our configuration before we go further */
+		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+			const struct rte_eth_dcb_rx_conf *conf;
+
+			if (nb_rx_q != IXGBE_DCB_NB_QUEUES) {
+				PMD_INIT_LOG(ERR, "DCB selected, nb_rx_q != %d.",
+						 IXGBE_DCB_NB_QUEUES);
+				return -EINVAL;
+			}
+			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
+			if (!(conf->nb_tcs == ETH_4_TCS ||
+			       conf->nb_tcs == ETH_8_TCS)) {
+				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
+						" and nb_tcs != %d.",
+						ETH_4_TCS, ETH_8_TCS);
+				return -EINVAL;
+			}
+		}
+
+		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+			const struct rte_eth_dcb_tx_conf *conf;
+
+			if (nb_tx_q != IXGBE_DCB_NB_QUEUES) {
+				PMD_INIT_LOG(ERR, "DCB, nb_tx_q != %d.",
+						 IXGBE_DCB_NB_QUEUES);
+				return -EINVAL;
+			}
+			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
+			if (!(conf->nb_tcs == ETH_4_TCS ||
+			       conf->nb_tcs == ETH_8_TCS)) {
+				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
+						" and nb_tcs != %d.",
+						ETH_4_TCS, ETH_8_TCS);
+				return -EINVAL;
+			}
+		}
+	}
+	return 0;
+}
+
+static int
 ixgbe_dev_configure(struct rte_eth_dev *dev)
 {
 	struct ixgbe_interrupt *intr =
 		IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
 	struct ixgbe_adapter *adapter =
 		(struct ixgbe_adapter *)dev->data->dev_private;
+	int ret;
 
 	PMD_INIT_FUNC_TRACE();
+	/* multiple queue mode checking */
+	ret  = ixgbe_check_mq_mode(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "ixgbe_check_mq_mode fails with %d.",
+			    ret);
+		return ret;
+	}
 
 	/* set flag to update link status after init */
 	intr->flags |= IXGBE_FLAG_NEED_LINK_UPDATE;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index c3d4f4f..240241a 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -57,6 +57,9 @@
 #define IXGBE_VFTA_SIZE 128
 #define IXGBE_VLAN_TAG_SIZE 4
 #define IXGBE_MAX_RX_QUEUE_NUM	128
+#define IXGBE_VMDQ_DCB_NB_QUEUES     IXGBE_MAX_RX_QUEUE_NUM
+#define IXGBE_DCB_NB_QUEUES          IXGBE_MAX_RX_QUEUE_NUM
+
 #ifndef NBBY
 #define NBBY	8	/* number of bits in a byte */
 #endif
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index f593f6e..c7247c3 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -880,197 +880,6 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 	return 0;
 }
 
-static int
-rte_eth_dev_check_vf_rss_rxq_num(uint8_t port_id, uint16_t nb_rx_q)
-{
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-
-	switch (nb_rx_q) {
-	case 1:
-	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active =
-			ETH_64_POOLS;
-		break;
-	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active =
-			ETH_32_POOLS;
-		break;
-	default:
-		return -EINVAL;
-	}
-
-	RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = nb_rx_q;
-	RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx =
-		dev->pci_dev->max_vfs * nb_rx_q;
-
-	return 0;
-}
-
-static int
-rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
-			  const struct rte_eth_conf *dev_conf)
-{
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-
-	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
-		/* check multi-queue mode */
-		if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
-		    (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB_RSS) ||
-		    (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
-			/* SRIOV only works in VMDq enable mode */
-			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
-					" SRIOV active, "
-					"wrong VMDQ mq_mode rx %u tx %u\n",
-					port_id,
-					dev_conf->rxmode.mq_mode,
-					dev_conf->txmode.mq_mode);
-			return -EINVAL;
-		}
-
-		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
-			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
-					" SRIOV active, "
-					"unsupported VMDQ mq_mode rx %u\n",
-					port_id, dev_conf->rxmode.mq_mode);
-			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
-					" SRIOV active, "
-					"Rx mq mode is changed from:"
-					"mq_mode %u into VMDQ mq_mode %u\n",
-					port_id,
-					dev_conf->rxmode.mq_mode,
-					dev->data->dev_conf.rxmode.mq_mode);
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
-			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
-				if (rte_eth_dev_check_vf_rss_rxq_num(port_id, nb_rx_q) != 0) {
-					PMD_DEBUG_TRACE("ethdev port_id=%d"
-							" SRIOV active, invalid queue"
-							" number for VMDQ RSS, allowed"
-							" value are 1, 2 or 4\n",
-							port_id);
-					return -EINVAL;
-				}
-			break;
-		default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
-			/* if nothing mq mode configure, use default scheme */
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
-			if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
-				RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
-			break;
-		}
-
-		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			/* DCB VMDQ in SRIOV mode, not implement yet */
-			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
-					" SRIOV active, "
-					"unsupported VMDQ mq_mode tx %u\n",
-					port_id, dev_conf->txmode.mq_mode);
-			return -EINVAL;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
-			/* if nothing mq mode configure, use default scheme */
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
-			break;
-		}
-
-		/* check valid queue number */
-		if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
-		    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
-			PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
-					"queue number must less equal to %d\n",
-					port_id,
-					RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
-			return -EINVAL;
-		}
-	} else {
-		/* For vmdb+dcb mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
-			const struct rte_eth_vmdq_dcb_conf *conf;
-
-			if (nb_rx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB, nb_rx_q "
-						"!= %d\n",
-						port_id, ETH_VMDQ_DCB_NUM_QUEUES);
-				return -EINVAL;
-			}
-			conf = &(dev_conf->rx_adv_conf.vmdq_dcb_conf);
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			      conf->nb_queue_pools == ETH_32_POOLS)) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB selected, "
-						"nb_queue_pools must be %d or %d\n",
-						port_id, ETH_16_POOLS, ETH_32_POOLS);
-				return -EINVAL;
-			}
-		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
-			const struct rte_eth_vmdq_dcb_tx_conf *conf;
-
-			if (nb_tx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB, nb_tx_q "
-						"!= %d\n",
-						port_id, ETH_VMDQ_DCB_NUM_QUEUES);
-				return -EINVAL;
-			}
-			conf = &(dev_conf->tx_adv_conf.vmdq_dcb_tx_conf);
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			      conf->nb_queue_pools == ETH_32_POOLS)) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB selected, "
-						"nb_queue_pools != %d or nb_queue_pools "
-						"!= %d\n",
-						port_id, ETH_16_POOLS, ETH_32_POOLS);
-				return -EINVAL;
-			}
-		}
-
-		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
-			const struct rte_eth_dcb_rx_conf *conf;
-
-			if (nb_rx_q != ETH_DCB_NUM_QUEUES) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_rx_q "
-						"!= %d\n",
-						port_id, ETH_DCB_NUM_QUEUES);
-				return -EINVAL;
-			}
-			conf = &(dev_conf->rx_adv_conf.dcb_rx_conf);
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			      conf->nb_tcs == ETH_8_TCS)) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d DCB selected, "
-						"nb_tcs != %d or nb_tcs "
-						"!= %d\n",
-						port_id, ETH_4_TCS, ETH_8_TCS);
-				return -EINVAL;
-			}
-		}
-
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
-			const struct rte_eth_dcb_tx_conf *conf;
-
-			if (nb_tx_q != ETH_DCB_NUM_QUEUES) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_tx_q "
-						"!= %d\n",
-						port_id, ETH_DCB_NUM_QUEUES);
-				return -EINVAL;
-			}
-			conf = &(dev_conf->tx_adv_conf.dcb_tx_conf);
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			      conf->nb_tcs == ETH_8_TCS)) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d DCB selected, "
-						"nb_tcs != %d or nb_tcs "
-						"!= %d\n",
-						port_id, ETH_4_TCS, ETH_8_TCS);
-				return -EINVAL;
-			}
-		}
-	}
-	return 0;
-}
-
 int
 rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
@@ -1182,14 +991,6 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 							ETHER_MAX_LEN;
 	}
 
-	/* multiple queue mode checking */
-	diag = rte_eth_dev_check_mq_mode(port_id, nb_rx_q, nb_tx_q, dev_conf);
-	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d rte_eth_dev_check_mq_mode = %d\n",
-				port_id, diag);
-		return diag;
-	}
-
 	/*
 	 * Setup new number of RX/TX queues and reconfigure device.
 	 */
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [dpdk-dev] [PATCH v2 03/10] i40e: enable DCB feature on FVL
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 01/10] ethdev: rename dcb_queue to dcb_tc in dcb config struct Jingjing Wu
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 02/10] ethdev: move the multi-queue checking to specific drivers Jingjing Wu
@ 2015-10-29  8:53   ` Jingjing Wu
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 04/10] ixgbe: enable DCB+RSS multi-queue mode Jingjing Wu
                     ` (9 subsequent siblings)
  12 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-29  8:53 UTC (permalink / raw)
  To: dev

This patch enables DCB feature on Intel XL710/X710 NICs. It includes:
  Receive queue classification based on traffic class
  Round Robin ETS schedule (rx and tx)
  Priority flow control

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
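For orientation, a sketch of the application-side configuration that
exercises this feature, assuming a port with enough rx/tx queues; the 4-TC
setup and the UP-to-TC mapping are illustrative values, not requirements of
the patch.

#include <rte_ethdev.h>

static const struct rte_eth_conf dcb_port_conf = {
	.rxmode = { .mq_mode = ETH_MQ_RX_DCB },
	.txmode = { .mq_mode = ETH_MQ_TX_DCB },
	/* also request priority flow control */
	.dcb_capability_en = ETH_DCB_PFC_SUPPORT,
	.rx_adv_conf.dcb_rx_conf = {
		.nb_tcs = ETH_4_TCS,
		/* map user priorities 0..7 onto TCs 0..3 */
		.dcb_tc = {0, 1, 2, 3, 0, 1, 2, 3},
	},
	.tx_adv_conf.dcb_tx_conf = {
		.nb_tcs = ETH_4_TCS,
		.dcb_tc = {0, 1, 2, 3, 0, 1, 2, 3},
	},
};

/* ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &dcb_port_conf); */

With such a configuration, rte_eth_dev_configure() reaches i40e_dcb_setup()
below, which parses the mapping into an i40e_dcbx_config and programs it
through the admin queue.
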
 drivers/net/i40e/i40e_ethdev.c | 532 ++++++++++++++++++++++++++++++++++++++++-
 drivers/net/i40e/i40e_ethdev.h |  14 ++
 drivers/net/i40e/i40e_rxtx.c   |  32 ++-
 drivers/net/i40e/i40e_rxtx.h   |   2 +
 4 files changed, 566 insertions(+), 14 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 2dd9fdc..7db1de9 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -56,6 +56,7 @@
 #include "base/i40e_adminq_cmd.h"
 #include "base/i40e_type.h"
 #include "base/i40e_register.h"
+#include "base/i40e_dcb.h"
 #include "i40e_ethdev.h"
 #include "i40e_rxtx.h"
 #include "i40e_pf.h"
@@ -113,6 +114,10 @@
 #define I40E_PRTTSYN_TSYNENA  0x80000000
 #define I40E_PRTTSYN_TSYNTYPE 0x0e000000
 
+#define I40E_MAX_PERCENT            100
+#define I40E_DEFAULT_DCB_APP_NUM    1
+#define I40E_DEFAULT_DCB_APP_PRIO   3
+
 static int eth_i40e_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_i40e_dev_uninit(struct rte_eth_dev *eth_dev);
 static int i40e_dev_configure(struct rte_eth_dev *dev);
@@ -166,6 +171,8 @@ static int i40e_pf_parameter_init(struct rte_eth_dev *dev);
 static int i40e_pf_setup(struct i40e_pf *pf);
 static int i40e_dev_rxtx_init(struct i40e_pf *pf);
 static int i40e_vmdq_setup(struct rte_eth_dev *dev);
+static int i40e_dcb_init_configure(struct rte_eth_dev *dev, bool sw_dcb);
+static int i40e_dcb_setup(struct rte_eth_dev *dev);
 static void i40e_stat_update_32(struct i40e_hw *hw, uint32_t reg,
 		bool offset_loaded, uint64_t *offset, uint64_t *stat);
 static void i40e_stat_update_48(struct i40e_hw *hw,
@@ -469,11 +476,6 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
 		     ((hw->nvm.version >> 4) & 0xff),
 		     (hw->nvm.version & 0xf), hw->nvm.eetrack);
 
-	/* Disable LLDP */
-	ret = i40e_aq_stop_lldp(hw, true, NULL);
-	if (ret != I40E_SUCCESS) /* Its failure can be ignored */
-		PMD_INIT_LOG(INFO, "Failed to stop lldp");
-
 	/* Clear PXE mode */
 	i40e_clear_pxe_mode(hw);
 
@@ -588,6 +590,13 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
 	/* initialize mirror rule list */
 	TAILQ_INIT(&pf->mirror_list);
 
+	/* Init dcb to sw mode by default */
+	ret = i40e_dcb_init_configure(dev, TRUE);
+	if (ret != I40E_SUCCESS) {
+		PMD_INIT_LOG(INFO, "Failed to init dcb.");
+		pf->flags &= ~I40E_FLAG_DCB;
+	}
+
 	return 0;
 
 err_mac_alloc:
@@ -672,7 +681,7 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 {
 	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	enum rte_eth_rx_mq_mode mq_mode = dev->data->dev_conf.rxmode.mq_mode;
-	int ret;
+	int i, ret;
 
 	if (dev->data->dev_conf.fdir_conf.mode == RTE_FDIR_MODE_PERFECT) {
 		ret = i40e_fdir_setup(pf);
@@ -709,8 +718,27 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 		if (ret)
 			goto err;
 	}
+
+	if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+		ret = i40e_dcb_setup(dev);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "failed to configure DCB.");
+			goto err_dcb;
+		}
+	}
+
 	return 0;
+
+err_dcb:
+	/* need to release vmdq resources if they exist */
+	for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
+		i40e_vsi_release(pf->vmdq[i].vsi);
+		pf->vmdq[i].vsi = NULL;
+	}
+	rte_free(pf->vmdq);
+	pf->vmdq = NULL;
 err:
+	/* need to release fdir resources if they exist */
 	i40e_fdir_teardown(pf);
 	return ret;
 }
@@ -2313,6 +2341,9 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
 		 */
 	}
 
+	if (hw->func_caps.dcb)
+		pf->flags |= I40E_FLAG_DCB;
+
 	if (sum_vsis > pf->max_num_vsi ||
 		sum_queues > hw->func_caps.num_rx_qp) {
 		PMD_INIT_LOG(ERR, "VSI/QUEUE setting can't be satisfied");
@@ -2718,7 +2749,7 @@ i40e_vsi_config_tc_queue_mapping(struct i40e_vsi *vsi,
 				 struct i40e_aqc_vsi_properties_data *info,
 				 uint8_t enabled_tcmap)
 {
-	int ret, total_tc = 0, i;
+	int ret, i, total_tc = 0;
 	uint16_t qpnum_per_tc, bsf, qp_idx;
 
 	ret = validate_tcmap_parameter(vsi, enabled_tcmap);
@@ -5269,11 +5300,6 @@ i40e_pf_config_mq_rx(struct i40e_pf *pf)
 	int ret = 0;
 	enum rte_eth_rx_mq_mode mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
 
-	if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
-		PMD_INIT_LOG(ERR, "i40e doesn't support DCB yet");
-		return -ENOTSUP;
-	}
-
 	/* RSS setup */
 	if (mq_mode & ETH_MQ_RX_RSS_FLAG)
 		ret = i40e_pf_config_rss(pf);
@@ -6298,3 +6324,485 @@ i40e_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
 
 	return  0;
 }
+
+/*
+ * i40e_parse_dcb_configure - parse dcb configure from user
+ * @dev: the device being configured
+ * @dcb_cfg: pointer of the result of parse
+ * @tc_map: bit map of enabled traffic classes
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_parse_dcb_configure(struct rte_eth_dev *dev,
+			 struct i40e_dcbx_config *dcb_cfg,
+			 uint8_t *tc_map)
+{
+	struct rte_eth_dcb_rx_conf *dcb_rx_conf;
+	uint8_t i, tc_bw, bw_lf;
+
+	memset(dcb_cfg, 0, sizeof(struct i40e_dcbx_config));
+
+	dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
+	if (dcb_rx_conf->nb_tcs > I40E_MAX_TRAFFIC_CLASS) {
+		PMD_INIT_LOG(ERR, "number of TCs exceeds the maximum.");
+		return -EINVAL;
+	}
+
+	/* assume each tc has the same bw */
+	tc_bw = I40E_MAX_PERCENT / dcb_rx_conf->nb_tcs;
+	for (i = 0; i < dcb_rx_conf->nb_tcs; i++)
+		dcb_cfg->etscfg.tcbwtable[i] = tc_bw;
+	/* to ensure the sum of tcbw is equal to 100 */
+	bw_lf = I40E_MAX_PERCENT % dcb_rx_conf->nb_tcs;
+	for (i = 0; i < bw_lf; i++)
+		dcb_cfg->etscfg.tcbwtable[i]++;
+
+	/* assume each tc has the same Transmission Selection Algorithm */
+	for (i = 0; i < dcb_rx_conf->nb_tcs; i++)
+		dcb_cfg->etscfg.tsatable[i] = I40E_IEEE_TSA_ETS;
+
+	for (i = 0; i < I40E_MAX_USER_PRIORITY; i++)
+		dcb_cfg->etscfg.prioritytable[i] =
+				dcb_rx_conf->dcb_tc[i];
+
+	/* FW needs one App to configure HW */
+	dcb_cfg->numapps = I40E_DEFAULT_DCB_APP_NUM;
+	dcb_cfg->app[0].selector = I40E_APP_SEL_ETHTYPE;
+	dcb_cfg->app[0].priority = I40E_DEFAULT_DCB_APP_PRIO;
+	dcb_cfg->app[0].protocolid = I40E_APP_PROTOID_FCOE;
+
+	if (dcb_rx_conf->nb_tcs == 0)
+		*tc_map = 1; /* tc0 only */
+	else
+		*tc_map = RTE_LEN2MASK(dcb_rx_conf->nb_tcs, uint8_t);
+
+	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+		dcb_cfg->pfc.willing = 0;
+		dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
+		dcb_cfg->pfc.pfcenable = *tc_map;
+	}
+	return 0;
+}
+
+/*
+ * i40e_vsi_get_bw_info - Query VSI BW Information
+ * @vsi: the VSI being queried
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_vsi_get_bw_info(struct i40e_vsi *vsi)
+{
+	struct i40e_aqc_query_vsi_ets_sla_config_resp bw_ets_config = {0};
+	struct i40e_aqc_query_vsi_bw_config_resp bw_config = {0};
+	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	int i, ret;
+	uint32_t tc_bw_max;
+
+	/* Get the VSI level BW configuration */
+	ret = i40e_aq_query_vsi_bw_config(hw, vsi->seid, &bw_config, NULL);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			 "couldn't get PF vsi bw config, err %s aq_err %s\n",
+			 i40e_stat_str(hw, ret),
+			 i40e_aq_str(hw, hw->aq.asq_last_status));
+		return -EINVAL;
+	}
+
+	/* Get the VSI level BW configuration per TC */
+	ret = i40e_aq_query_vsi_ets_sla_config(hw, vsi->seid, &bw_ets_config,
+						  NULL);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			 "couldn't get PF vsi ets bw config, err %s aq_err %s\n",
+			 i40e_stat_str(hw, ret),
+			 i40e_aq_str(hw, hw->aq.asq_last_status));
+		return -EINVAL;
+	}
+
+	if (bw_config.tc_valid_bits != bw_ets_config.tc_valid_bits) {
+		PMD_INIT_LOG(WARNING,
+			 "Enabled TCs mismatch from querying VSI BW info"
+			 " 0x%08x 0x%08x\n", bw_config.tc_valid_bits,
+			 bw_ets_config.tc_valid_bits);
+		/* Still continuing */
+	}
+
+	vsi->bw_info.bw_limit = rte_le_to_cpu_16(bw_config.port_bw_limit);
+	vsi->bw_info.bw_max_quanta = bw_config.max_bw;
+	tc_bw_max = rte_le_to_cpu_16(bw_ets_config.tc_bw_max[0]) |
+		    (rte_le_to_cpu_16(bw_ets_config.tc_bw_max[1]) << 16);
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		vsi->bw_info.bw_ets_share_credits[i] =
+				bw_ets_config.share_credits[i];
+		vsi->bw_info.bw_ets_limit_credits[i] =
+				rte_le_to_cpu_16(bw_ets_config.credits[i]);
+		/* 3 bits out of 4 for each TC */
+		vsi->bw_info.bw_ets_max_quanta[i] =
+			(uint8_t)((tc_bw_max >> (i * 4)) & 0x7);
+		PMD_INIT_LOG(DEBUG,
+			 "%s: vsi seid = %d, TC = %d, qset = 0x%x\n",
+			 __func__, vsi->seid, i, bw_config.qs_handles[i]);
+	}
+
+	return 0;
+}
+
+static int
+i40e_vsi_update_queue_mapping(struct i40e_vsi *vsi,
+			      struct i40e_aqc_vsi_properties_data *info,
+			      uint8_t enabled_tcmap)
+{
+	int ret, i, total_tc = 0;
+	uint16_t qpnum_per_tc, bsf, qp_idx;
+	struct rte_eth_dev_data *dev_data = I40E_VSI_TO_DEV_DATA(vsi);
+
+	ret = validate_tcmap_parameter(vsi, enabled_tcmap);
+	if (ret != I40E_SUCCESS)
+		return ret;
+
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (enabled_tcmap & (1 << i))
+			total_tc++;
+	}
+	if (total_tc == 0)
+		total_tc = 1;
+	vsi->enabled_tc = enabled_tcmap;
+
+	qpnum_per_tc = dev_data->nb_rx_queues / total_tc;
+	/* Number of queues per enabled TC */
+	if (qpnum_per_tc == 0) {
+		PMD_INIT_LOG(ERR, "number of queues is less than the number of TCs.");
+		return I40E_ERR_INVALID_QP_ID;
+	}
+	qpnum_per_tc = RTE_MIN(i40e_align_floor(qpnum_per_tc),
+				I40E_MAX_Q_PER_TC);
+	bsf = rte_bsf32(qpnum_per_tc);
+
+	/**
+	 * Configure TC and queue mapping parameters: for each enabled TC,
+	 * allocate qpnum_per_tc queues to that traffic class. For a disabled
+	 * TC, the default queue will serve it.
+	 */
+	qp_idx = 0;
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (vsi->enabled_tc & (1 << i)) {
+			info->tc_mapping[i] = rte_cpu_to_le_16((qp_idx <<
+					I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT) |
+				(bsf << I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT));
+			qp_idx += qpnum_per_tc;
+		} else
+			info->tc_mapping[i] = 0;
+	}
+
+	/* Associate queue number with VSI, Keep vsi->nb_qps unchanged */
+	if (vsi->type == I40E_VSI_SRIOV) {
+		info->mapping_flags |=
+			rte_cpu_to_le_16(I40E_AQ_VSI_QUE_MAP_NONCONTIG);
+		for (i = 0; i < vsi->nb_qps; i++)
+			info->queue_mapping[i] =
+				rte_cpu_to_le_16(vsi->base_queue + i);
+	} else {
+		info->mapping_flags |=
+			rte_cpu_to_le_16(I40E_AQ_VSI_QUE_MAP_CONTIG);
+		info->queue_mapping[0] = rte_cpu_to_le_16(vsi->base_queue);
+	}
+	info->valid_sections |=
+		rte_cpu_to_le_16(I40E_AQ_VSI_PROP_QUEUE_MAP_VALID);
+
+	return I40E_SUCCESS;
+}
+
+/*
+ * i40e_vsi_config_tc - Configure VSI tc setting for given TC map
+ * @vsi: VSI to be configured
+ * @tc_map: enabled TC bitmap
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_vsi_config_tc(struct i40e_vsi *vsi, u8 tc_map)
+{
+	struct i40e_aqc_configure_vsi_tc_bw_data bw_data;
+	struct i40e_vsi_context ctxt;
+	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	int ret = 0;
+	int i;
+
+	/* Check if enabled_tc is same as existing or new TCs */
+	if (vsi->enabled_tc == tc_map)
+		return ret;
+
+	/* configure tc bandwidth */
+	memset(&bw_data, 0, sizeof(bw_data));
+	bw_data.tc_valid_bits = tc_map;
+	/* Enable ETS TCs with equal BW Share for now across all VSIs */
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (tc_map & BIT_ULL(i))
+			bw_data.tc_bw_credits[i] = 1;
+	}
+	ret = i40e_aq_config_vsi_tc_bw(hw, vsi->seid, &bw_data, NULL);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "AQ command Config VSI BW allocation"
+			" per TC failed = %d",
+			hw->aq.asq_last_status);
+		goto out;
+	}
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
+		vsi->info.qs_handle[i] = bw_data.qs_handles[i];
+
+	/* Update Queue Pairs Mapping for currently enabled UPs */
+	ctxt.seid = vsi->seid;
+	ctxt.pf_num = hw->pf_id;
+	ctxt.vf_num = 0;
+	ctxt.uplink_seid = vsi->uplink_seid;
+	ctxt.info = vsi->info;
+	i40e_get_cap(hw);
+	ret = i40e_vsi_update_queue_mapping(vsi, &ctxt.info, tc_map);
+	if (ret)
+		goto out;
+
+	/* Update the VSI after updating the VSI queue-mapping information */
+	ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure "
+			    "TC queue mapping = %d",
+			    hw->aq.asq_last_status);
+		goto out;
+	}
+	/* update the local VSI info with updated queue map */
+	(void)rte_memcpy(&vsi->info.tc_mapping, &ctxt.info.tc_mapping,
+					sizeof(vsi->info.tc_mapping));
+	(void)rte_memcpy(&vsi->info.queue_mapping,
+			&ctxt.info.queue_mapping,
+		sizeof(vsi->info.queue_mapping));
+	vsi->info.mapping_flags = ctxt.info.mapping_flags;
+	vsi->info.valid_sections = 0;
+
+	/* Update current VSI BW information */
+	ret = i40e_vsi_get_bw_info(vsi);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			 "Failed updating vsi bw info, err %s aq_err %s",
+			 i40e_stat_str(hw, ret),
+			 i40e_aq_str(hw, hw->aq.asq_last_status));
+		goto out;
+	}
+
+	vsi->enabled_tc = tc_map;
+
+out:
+	return ret;
+}
+
+/*
+ * i40e_dcb_hw_configure - program the dcb setting to hw
+ * @pf: pf the configuration is taken on
+ * @new_cfg: new configuration
+ * @tc_map: enabled TC bitmap
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static enum i40e_status_code
+i40e_dcb_hw_configure(struct i40e_pf *pf,
+		      struct i40e_dcbx_config *new_cfg,
+		      uint8_t tc_map)
+{
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+	struct i40e_dcbx_config *old_cfg = &hw->local_dcbx_config;
+	struct i40e_vsi *main_vsi = pf->main_vsi;
+	struct i40e_vsi_list *vsi_list;
+	int i, ret;
+	uint32_t val;
+
+	/* Use the FW API only if FW version >= v4.4 */
+	if (!((hw->aq.fw_maj_ver == 4) && (hw->aq.fw_min_ver >= 4))) {
+		PMD_INIT_LOG(ERR, "FW < v4.4, cannot use FW LLDP API"
+				  " to configure DCB");
+		return I40E_ERR_FIRMWARE_API_VERSION;
+	}
+
+	/* Check if need reconfiguration */
+	if (!memcmp(new_cfg, old_cfg, sizeof(struct i40e_dcbx_config))) {
+		PMD_INIT_LOG(ERR, "No Change in DCB Config required.");
+		return I40E_SUCCESS;
+	}
+
+	/* Copy the new config to the current config */
+	*old_cfg = *new_cfg;
+	old_cfg->etsrec = old_cfg->etscfg;
+	ret = i40e_set_dcb_config(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			 "Set DCB Config failed, err %s aq_err %s\n",
+			 i40e_stat_str(hw, ret),
+			 i40e_aq_str(hw, hw->aq.asq_last_status));
+		return ret;
+	}
+	/* set receive Arbiter to RR mode and ETS scheme by default */
+	for (i = 0; i <= I40E_PRTDCB_RETSTCC_MAX_INDEX; i++) {
+		val = I40E_READ_REG(hw, I40E_PRTDCB_RETSTCC(i));
+		val &= ~(I40E_PRTDCB_RETSTCC_BWSHARE_MASK     |
+			 I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK |
+			 I40E_PRTDCB_RETSTCC_ETSTC_SHIFT);
+		val |= ((uint32_t)old_cfg->etscfg.tcbwtable[i] <<
+			I40E_PRTDCB_RETSTCC_BWSHARE_SHIFT) &
+			 I40E_PRTDCB_RETSTCC_BWSHARE_MASK;
+		val |= ((uint32_t)1 << I40E_PRTDCB_RETSTCC_UPINTC_MODE_SHIFT) &
+			 I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK;
+		val |= ((uint32_t)1 << I40E_PRTDCB_RETSTCC_ETSTC_SHIFT) &
+			 I40E_PRTDCB_RETSTCC_ETSTC_MASK;
+		I40E_WRITE_REG(hw, I40E_PRTDCB_RETSTCC(i), val);
+	}
+	/* get local mib to check whether it is configured correctly */
+	/* IEEE mode */
+	hw->local_dcbx_config.dcbx_mode = I40E_DCBX_MODE_IEEE;
+	/* Get Local DCB Config */
+	i40e_aq_get_dcb_config(hw, I40E_AQ_LLDP_MIB_LOCAL, 0,
+				     &hw->local_dcbx_config);
+
+	/* Update each VSI */
+	i40e_vsi_config_tc(main_vsi, tc_map);
+	if (main_vsi->veb) {
+		TAILQ_FOREACH(vsi_list, &main_vsi->veb->head, list) {
+			/* Besides the main VSI, only enable the default
+			 * TC for the other VSIs
+			 */
+			ret = i40e_vsi_config_tc(vsi_list->vsi,
+						I40E_DEFAULT_TCMAP);
+			if (ret)
+				PMD_INIT_LOG(WARNING,
+					 "Failed configuring TC for VSI seid=%d\n",
+					 vsi_list->vsi->seid);
+			/* continue */
+		}
+	}
+	return I40E_SUCCESS;
+}
+
+/*
+ * i40e_dcb_init_configure - initial dcb config
+ * @dev: device being configured
+ * @sw_dcb: indicate whether dcb is sw configured or hw offload
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_dcb_init_configure(struct rte_eth_dev *dev, bool sw_dcb)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret = 0;
+
+	if ((pf->flags & I40E_FLAG_DCB) == 0) {
+		PMD_INIT_LOG(ERR, "HW doesn't support DCB");
+		return -ENOTSUP;
+	}
+
+	/* DCB initialization:
+	 * Update DCB configuration from the Firmware and configure
+	 * LLDP MIB change event.
+	 */
+	if (sw_dcb == TRUE) {
+		ret = i40e_aq_stop_lldp(hw, TRUE, NULL);
+		if (ret != I40E_SUCCESS)
+			PMD_INIT_LOG(DEBUG, "Failed to stop lldp");
+
+		ret = i40e_init_dcb(hw);
+		/* When sw_dcb is set, the LLDP agent is stopped, so we
+		 * expect the call to i40e_init_dcb to fail with an
+		 * I40E_AQ_RC_EPERM adminq status.
+		 */
+		if (ret != I40E_SUCCESS &&
+		    hw->aq.asq_last_status == I40E_AQ_RC_EPERM) {
+			memset(&hw->local_dcbx_config, 0,
+				sizeof(struct i40e_dcbx_config));
+			/* set dcb default configuration */
+			hw->local_dcbx_config.etscfg.willing = 0;
+			hw->local_dcbx_config.etscfg.maxtcs = 0;
+			hw->local_dcbx_config.etscfg.tcbwtable[0] = 100;
+			hw->local_dcbx_config.etscfg.tsatable[0] =
+						I40E_IEEE_TSA_ETS;
+			hw->local_dcbx_config.etsrec =
+				hw->local_dcbx_config.etscfg;
+			hw->local_dcbx_config.pfc.willing = 0;
+			hw->local_dcbx_config.pfc.pfccap =
+						I40E_MAX_TRAFFIC_CLASS;
+			/* FW needs one App to configure HW */
+			hw->local_dcbx_config.numapps = 1;
+			hw->local_dcbx_config.app[0].selector =
+						I40E_APP_SEL_ETHTYPE;
+			hw->local_dcbx_config.app[0].priority = 3;
+			hw->local_dcbx_config.app[0].protocolid =
+						I40E_APP_PROTOID_FCOE;
+			ret = i40e_set_dcb_config(hw);
+			if (ret) {
+				PMD_INIT_LOG(ERR, "default dcb config fails."
+					" err = %d, aq_err = %d.", ret,
+					  hw->aq.asq_last_status);
+				return -ENOSYS;
+			}
+		} else {
+			PMD_INIT_LOG(ERR, "DCBX configuration failed, err = %d,"
+					  " aq_err = %d.", ret,
+					  hw->aq.asq_last_status);
+			return -ENOTSUP;
+		}
+	} else {
+		ret = i40e_aq_start_lldp(hw, NULL);
+		if (ret != I40E_SUCCESS)
+			PMD_INIT_LOG(DEBUG, "Failed to start lldp");
+
+		ret = i40e_init_dcb(hw);
+		if (!ret) {
+			if (hw->dcbx_status == I40E_DCBX_STATUS_DISABLED) {
+				PMD_INIT_LOG(ERR, "HW doesn't support"
+						  " DCBX offload.");
+				return -ENOTSUP;
+			}
+		} else {
+			PMD_INIT_LOG(ERR, "DCBX configuration failed, err = %d,"
+					  " aq_err = %d.", ret,
+					  hw->aq.asq_last_status);
+			return -ENOTSUP;
+		}
+	}
+	return 0;
+}
+
+/*
+ * i40e_dcb_setup - setup dcb related config
+ * @dev: device being configured
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_dcb_setup(struct rte_eth_dev *dev)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_dcbx_config dcb_cfg;
+	uint8_t tc_map = 0;
+	int ret = 0;
+
+	if ((pf->flags & I40E_FLAG_DCB) == 0) {
+		PMD_INIT_LOG(ERR, "HW doesn't support DCB");
+		return -ENOTSUP;
+	}
+
+	if (pf->vf_num != 0 ||
+	    (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+		PMD_INIT_LOG(DEBUG, "DCB only works on the main VSI.");
+
+	ret = i40e_parse_dcb_configure(dev, &dcb_cfg, &tc_map);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "invalid dcb config");
+		return -EINVAL;
+	}
+	ret = i40e_dcb_hw_configure(pf, &dcb_cfg, tc_map);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "dcb sw configure fails");
+		return -ENOSYS;
+	}
+	return 0;
+}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 6185657..87da0a2 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -199,6 +199,19 @@ struct i40e_macvlan_filter {
 	uint16_t vlan_id;
 };
 
+/* Bandwidth limit information */
+struct i40e_bw_info {
+	uint16_t bw_limit;      /* BW Limit (0 = disabled) */
+	uint8_t  bw_max_quanta; /* Max Quanta when BW limit is enabled */
+
+	/* Relative TC credits across VSIs */
+	uint8_t  bw_ets_share_credits[I40E_MAX_TRAFFIC_CLASS];
+	/* TC BW limit credits within VSI */
+	uint8_t  bw_ets_limit_credits[I40E_MAX_TRAFFIC_CLASS];
+	/* TC BW limit max quanta within VSI */
+	uint8_t  bw_ets_max_quanta[I40E_MAX_TRAFFIC_CLASS];
+};
+
 /*
  * Structure that defines a VSI, associated with a adapter.
  */
@@ -244,6 +257,7 @@ struct i40e_vsi {
 	uint16_t vsi_id;
 	uint16_t msix_intr; /* The MSIX interrupt binds to VSI */
 	uint8_t enabled_tc; /* The traffic class enabled */
+	struct i40e_bw_info bw_info; /* VSI bandwidth information */
 };
 
 struct pool_entry {
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index fd656d5..d333f48 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2111,7 +2111,8 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	struct i40e_rx_queue *rxq;
 	const struct rte_memzone *rz;
 	uint32_t ring_size;
-	uint16_t len;
+	uint16_t len, i;
+	uint16_t base, bsf, tc_mapping;
 	int use_def_burst_func = 1;
 
 	if (hw->mac.type == I40E_MAC_VF) {
@@ -2232,6 +2233,19 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			     rxq->port_id, rxq->queue_id);
 	}
 
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (!(vsi->enabled_tc & (1 << i)))
+			continue;
+		tc_mapping = rte_le_to_cpu_16(vsi->info.tc_mapping[i]);
+		base = (tc_mapping & I40E_AQ_VSI_TC_QUE_OFFSET_MASK) >>
+			I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT;
+		bsf = (tc_mapping & I40E_AQ_VSI_TC_QUE_NUMBER_MASK) >>
+			I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT;
+
+		if (queue_idx >= base && queue_idx < (base + BIT(bsf)))
+			rxq->dcb_tc = i;
+	}
+
 	return 0;
 }
 
@@ -2324,6 +2338,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	const struct rte_memzone *tz;
 	uint32_t ring_size;
 	uint16_t tx_rs_thresh, tx_free_thresh;
+	uint16_t i, base, bsf, tc_mapping;
 
 	if (hw->mac.type == I40E_MAC_VF) {
 		struct i40e_vf *vf =
@@ -2500,6 +2515,19 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		dev->tx_pkt_burst = i40e_xmit_pkts;
 	}
 
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (!(vsi->enabled_tc & (1 << i)))
+			continue;
+		tc_mapping = rte_le_to_cpu_16(vsi->info.tc_mapping[i]);
+		base = (tc_mapping & I40E_AQ_VSI_TC_QUE_OFFSET_MASK) >>
+			I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT;
+		bsf = (tc_mapping & I40E_AQ_VSI_TC_QUE_NUMBER_MASK) >>
+			I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT;
+
+		if (queue_idx >= base && queue_idx < (base + BIT(bsf)))
+			txq->dcb_tc = i;
+	}
+
 	return 0;
 }
 
@@ -2703,7 +2731,7 @@ i40e_tx_queue_init(struct i40e_tx_queue *txq)
 #ifdef RTE_LIBRTE_IEEE1588
 	tx_ctx.timesync_ena = 1;
 #endif
-	tx_ctx.rdylist = rte_le_to_cpu_16(vsi->info.qs_handle[0]);
+	tx_ctx.rdylist = rte_le_to_cpu_16(vsi->info.qs_handle[txq->dcb_tc]);
 	if (vsi->type == I40E_VSI_FDIR)
 		tx_ctx.fd_ena = TRUE;
 
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 4385142..5c76e3d 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -113,6 +113,7 @@ struct i40e_rx_queue {
 	uint8_t hs_mode; /* Header Split mode */
 	bool q_set; /**< indicate if rx queue has been configured */
 	bool rx_deferred_start; /**< don't start this queue in dev start */
+	uint8_t dcb_tc;         /**< Traffic class of rx queue */
 };
 
 struct i40e_tx_entry {
@@ -153,6 +154,7 @@ struct i40e_tx_queue {
 	uint16_t tx_next_rs;
 	bool q_set; /**< indicate if tx queue has been configured */
 	bool tx_deferred_start; /**< don't start this queue in dev start */
+	uint8_t dcb_tc;         /**< Traffic class of tx queue */
 };
 
 /** Offload features */
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [dpdk-dev] [PATCH v2 04/10] ixgbe: enable DCB+RSS multi-queue mode
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                     ` (2 preceding siblings ...)
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 03/10] i40e: enable DCB feature on FVL Jingjing Wu
@ 2015-10-29  8:53   ` Jingjing Wu
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 05/10] ethdev: new API to get dcb related information Jingjing Wu
                     ` (8 subsequent siblings)
  12 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-29  8:53 UTC (permalink / raw)
  To: dev

This patch enables the DCB+RSS multi-queue mode, and also fixes some
coding style issues.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
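For context, a hedged sketch of how an application might request the
combined mode once this patch is applied; the hash fields and the TC
mapping below are illustrative assumptions.

#include <rte_ethdev.h>

static const struct rte_eth_conf dcb_rss_port_conf = {
	.rxmode = { .mq_mode = ETH_MQ_RX_DCB_RSS },
	.rx_adv_conf = {
		/* RSS spreads flows across the queues inside each TC */
		.rss_conf = { .rss_hf = ETH_RSS_IP },
		.dcb_rx_conf = {
			.nb_tcs = ETH_4_TCS,
			.dcb_tc = {0, 1, 2, 3, 0, 1, 2, 3},
		},
	},
};

The UP-to-TC mapping still selects the traffic class, while the RSS hash
picks a queue within it, which matches the RSSXTCEN setting made below.
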
 drivers/net/ixgbe/ixgbe_rxtx.c | 48 +++++++++++++++++++++++++-----------------
 1 file changed, 29 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index d331ef5..1dc05f0 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3144,9 +3144,13 @@ ixgbe_dcb_rx_hw_config(struct ixgbe_hw *hw,
 				reg = (reg & ~IXGBE_MRQC_MRQE_MASK) |
 					IXGBE_MRQC_VMDQRT4TCEN;
 			else {
+				/* no matter whether the mode is DCB or
+				 * DCB_RSS, just set the MRQE to RSSXTCEN;
+				 * RSS is controlled by RSS_FIELD
+				 */
 				IXGBE_WRITE_REG(hw, IXGBE_VT_CTL, 0);
 				reg = (reg & ~IXGBE_MRQC_MRQE_MASK) |
-					IXGBE_MRQC_RT4TCEN;
+					IXGBE_MRQC_RTRSS4TCEN;
 			}
 		}
 		if (dcb_config->num_tcs.pg_tcs == 8) {
@@ -3156,7 +3160,7 @@ ixgbe_dcb_rx_hw_config(struct ixgbe_hw *hw,
 			else {
 				IXGBE_WRITE_REG(hw, IXGBE_VT_CTL, 0);
 				reg = (reg & ~IXGBE_MRQC_MRQE_MASK) |
-					IXGBE_MRQC_RT8TCEN;
+					IXGBE_MRQC_RTRSS8TCEN;
 			}
 		}
 
@@ -3261,16 +3265,17 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			 *get dcb and VT rx configuration parameters
 			 *from rte_eth_conf
 			 */
-			ixgbe_vmdq_dcb_rx_config(dev,dcb_config);
+			ixgbe_vmdq_dcb_rx_config(dev, dcb_config);
 			/*Configure general VMDQ and DCB RX parameters*/
 			ixgbe_vmdq_dcb_configure(dev);
 		}
 		break;
 	case ETH_MQ_RX_DCB:
+	case ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
-		ixgbe_dcb_rx_config(dev,dcb_config);
+		ixgbe_dcb_rx_config(dev, dcb_config);
 		/*Configure general DCB RX parameters*/
 		ixgbe_dcb_rx_hw_config(hw, dcb_config);
 		break;
@@ -3292,7 +3297,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/*get DCB TX configuration parameters from rte_eth_conf*/
-		ixgbe_dcb_tx_config(dev,dcb_config);
+		ixgbe_dcb_tx_config(dev, dcb_config);
 		/*Configure general DCB TX parameters*/
 		ixgbe_dcb_tx_hw_config(hw, dcb_config);
 		break;
@@ -3433,14 +3438,15 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
 
 	/* check support mq_mode for DCB */
 	if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB))
+	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
+	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
 		return;
 
 	if (dev->data->nb_rx_queues != ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
-	ixgbe_dcb_hw_configure(dev,dcb_cfg);
+	ixgbe_dcb_hw_configure(dev, dcb_cfg);
 
 	return;
 }
@@ -3682,21 +3688,25 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-			case ETH_MQ_RX_RSS:
-				ixgbe_rss_configure(dev);
-				break;
+		case ETH_MQ_RX_RSS:
+		case ETH_MQ_RX_DCB_RSS:
+		case ETH_MQ_RX_VMDQ_RSS:
+			ixgbe_rss_configure(dev);
+			break;
 
-			case ETH_MQ_RX_VMDQ_DCB:
-				ixgbe_vmdq_dcb_configure(dev);
-				break;
+		case ETH_MQ_RX_VMDQ_DCB:
+			ixgbe_vmdq_dcb_configure(dev);
+			break;
 
-			case ETH_MQ_RX_VMDQ_ONLY:
-				ixgbe_vmdq_rx_hw_configure(dev);
-				break;
+		case ETH_MQ_RX_VMDQ_ONLY:
+			ixgbe_vmdq_rx_hw_configure(dev);
+			break;
 
-			case ETH_MQ_RX_NONE:
-				/* if mq_mode is none, disable rss mode.*/
-			default: ixgbe_rss_disable(dev);
+		case ETH_MQ_RX_NONE:
+		default:
+			/* if mq_mode is none, disable rss mode.*/
+			ixgbe_rss_disable(dev);
+			break;
 		}
 	} else {
 		/*
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [dpdk-dev] [PATCH v2 05/10] ethdev: new API to get dcb related information
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                     ` (3 preceding siblings ...)
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 04/10] ixgbe: enable DCB+RSS multi-queue mode Jingjing Wu
@ 2015-10-29  8:53   ` Jingjing Wu
  2015-10-30 11:16     ` Thomas Monjalon
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 06/10] ixgbe: get_dcb_info ops implement Jingjing Wu
                     ` (7 subsequent siblings)
  12 siblings, 1 reply; 40+ messages in thread
From: Jingjing Wu @ 2015-10-29  8:53 UTC (permalink / raw)
  To: dev

This patch adds a new API to get DCB-related information:
  rte_eth_dev_get_dcb_info

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
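A minimal usage sketch for the new call, assuming EAL is initialized and
the port has already been configured; the output format is arbitrary.

#include <stdio.h>
#include <rte_ethdev.h>

static void
dump_dcb_info(uint8_t port_id)
{
	struct rte_eth_dcb_info dcb_info;
	int ret, i;

	ret = rte_eth_dev_get_dcb_info(port_id, &dcb_info);
	if (ret != 0) {
		printf("port %u: get_dcb_info failed (%d)\n", port_id, ret);
		return;
	}
	printf("port %u: %u TC(s)\n", port_id, dcb_info.nb_tcs);
	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
		printf("  UP %d -> TC %u\n", i, dcb_info.prio_tc[i]);
	for (i = 0; i < dcb_info.nb_tcs; i++)
		printf("  TC %d: rxq base %u, nb %u, BW %u%%\n", i,
		       dcb_info.tc_queue.tc_rxq[0][i].base,
		       dcb_info.tc_queue.tc_rxq[0][i].nb_queue,
		       dcb_info.tc_bws[i]);
}
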
 lib/librte_ether/rte_ethdev.c | 18 +++++++++++++++
 lib/librte_ether/rte_ethdev.h | 54 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 72 insertions(+)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index c7247c3..721cef6 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -3143,3 +3143,21 @@ rte_eth_dev_set_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info)
 	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_eeprom, -ENOTSUP);
 	return (*dev->dev_ops->set_eeprom)(dev, info);
 }
+
+int
+rte_eth_dev_get_dcb_info(uint8_t port_id,
+			     struct rte_eth_dcb_info *dcb_info)
+{
+	struct rte_eth_dev *dev;
+
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		return -ENODEV;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	memset(dcb_info, 0, sizeof(struct rte_eth_dcb_info));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_dcb_info, -ENOTSUP);
+	return (*dev->dev_ops->get_dcb_info)(dev, dcb_info);
+}
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 377da6a..2e05189 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -854,6 +854,38 @@ struct rte_eth_xstats {
 	uint64_t value;
 };
 
+#define ETH_DCB_NUM_TCS    8
+#define ETH_MAX_VMDQ_POOL  64
+
+/**
+ * A structure used to retrieve the queue and
+ * TC mapping on both TX and RX paths.
+ */
+struct rte_eth_dcb_tc_queue_mapping {
+	/** rx queues assigned to tc per Pool */
+	struct {
+		uint8_t base;
+		uint8_t nb_queue;
+	} tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+	/** rx queues assigned to tc per Pool */
+	struct {
+		uint8_t base;
+		uint8_t nb_queue;
+	} tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+};
+
+/**
+ * A structure used to retrieve DCB information.
+ * It includes the UP-to-TC mapping and the queue-to-TC mapping.
+ */
+struct rte_eth_dcb_info {
+	uint8_t nb_tcs;        /**< number of TCs */
+	uint8_t prio_tc[ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
+	uint8_t tc_bws[ETH_DCB_NUM_TCS]; /**< TX BW percentage for each TC */
+	/** rx and tx queues assigned to each tc */
+	struct rte_eth_dcb_tc_queue_mapping tc_queue;
+};
+
 struct rte_eth_dev;
 
 struct rte_eth_dev_callback;
@@ -1207,6 +1239,10 @@ typedef int (*eth_filter_ctrl_t)(struct rte_eth_dev *dev,
 				 void *arg);
 /**< @internal Take operations to assigned filter type on an Ethernet device */
 
+typedef int (*eth_get_dcb_info)(struct rte_eth_dev *dev,
+				 struct rte_eth_dcb_info *dcb_info);
+/**< @internal Get dcb information on an Ethernet device */
+
 /**
  * @internal A structure containing the functions exported by an Ethernet driver.
  */
@@ -1312,6 +1348,9 @@ struct eth_dev_ops {
 	eth_timesync_read_rx_timestamp_t timesync_read_rx_timestamp;
 	/** Read the IEEE1588/802.1AS TX timestamp. */
 	eth_timesync_read_tx_timestamp_t timesync_read_tx_timestamp;
+
+	/** Get DCB information */
+	eth_get_dcb_info get_dcb_info;
 };
 
 /**
@@ -3321,6 +3360,21 @@ int rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type,
 			enum rte_filter_op filter_op, void *arg);
 
 /**
+ * Get DCB information on an Ethernet device.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param dcb_info
+ *   DCB information to be filled in.
+ * @return
+ *   - (0) if successful.
+ *   - (-ENODEV) if port identifier is invalid.
+ *   - (-ENOTSUP) if the hardware doesn't support DCB.
+ */
+int rte_eth_dev_get_dcb_info(uint8_t port_id,
+			     struct rte_eth_dcb_info *dcb_info);
+
+/**
  * Add a callback to be called on packet RX on a given port and queue.
  *
  * This API configures a function to be called for each burst of
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [dpdk-dev] [PATCH v2 06/10] ixgbe: get_dcb_info ops implement
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                     ` (4 preceding siblings ...)
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 05/10] ethdev: new API to get dcb related information Jingjing Wu
@ 2015-10-29  8:53   ` Jingjing Wu
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 07/10] i40e: " Jingjing Wu
                     ` (6 subsequent siblings)
  12 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-29  8:53 UTC (permalink / raw)
  To: dev

This patch implements the get_dcb_info ops in the ixgbe driver.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
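A note on the layout reported below: with VT (VMDq+DCB) enabled the
implementation exposes exactly one queue per (pool, TC) pair, laid out
pool-major, so a caller could locate that queue as in this hedged helper
(the name is hypothetical):

#include <rte_ethdev.h>

static inline uint16_t
vt_dcb_rxq_of(const struct rte_eth_dcb_info *info, uint8_t pool, uint8_t tc)
{
	/* equals pool * info->nb_tcs + tc in the VT layout below */
	return info->tc_queue.tc_rxq[pool][tc].base;
}
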
 drivers/net/ixgbe/ixgbe_ethdev.c | 80 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a7dca55..fc5da21 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -304,6 +304,8 @@ static int ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu);
 static int ixgbe_dev_set_mc_addr_list(struct rte_eth_dev *dev,
 				      struct ether_addr *mc_addr_set,
 				      uint32_t nb_mc_addr);
+static int ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
+				   struct rte_eth_dcb_info *dcb_info);
 
 static int ixgbe_get_reg_length(struct rte_eth_dev *dev);
 static int ixgbe_get_regs(struct rte_eth_dev *dev,
@@ -465,6 +467,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.get_eeprom_length    = ixgbe_get_eeprom_length,
 	.get_eeprom           = ixgbe_get_eeprom,
 	.set_eeprom           = ixgbe_set_eeprom,
+	.get_dcb_info         = ixgbe_dev_get_dcb_info,
 };
 
 /*
@@ -5644,6 +5647,83 @@ ixgbe_set_eeprom(struct rte_eth_dev *dev,
 	return eeprom->ops.write_buffer(hw,  first, length, data);
 }
 
+static int
+ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
+			struct rte_eth_dcb_info *dcb_info)
+{
+	struct ixgbe_dcb_config *dcb_config =
+			IXGBE_DEV_PRIVATE_TO_DCB_CFG(dev->data->dev_private);
+	struct ixgbe_dcb_tc_config *tc;
+	uint8_t i, j;
+
+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
+	else
+		dcb_info->nb_tcs = 1;
+
+	if (dcb_config->vt_mode) { /* vt is enabled*/
+		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
+				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
+		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
+		for (i = 0; i < vmdq_rx_conf->nb_queue_pools; i++) {
+			for (j = 0; j < dcb_info->nb_tcs; j++) {
+				dcb_info->tc_queue.tc_rxq[i][j].base =
+						i * dcb_info->nb_tcs + j;
+				dcb_info->tc_queue.tc_rxq[i][j].nb_queue = 1;
+				dcb_info->tc_queue.tc_txq[i][j].base =
+						i * dcb_info->nb_tcs + j;
+				dcb_info->tc_queue.tc_txq[i][j].nb_queue = 1;
+			}
+		}
+	} else { /* vt is disabled*/
+		struct rte_eth_dcb_rx_conf *rx_conf =
+				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
+		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
+		if (dcb_info->nb_tcs == ETH_4_TCS) {
+			for (i = 0; i < dcb_info->nb_tcs; i++) {
+				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
+				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
+			}
+			dcb_info->tc_queue.tc_txq[0][0].base = 0;
+			dcb_info->tc_queue.tc_txq[0][1].base = 64;
+			dcb_info->tc_queue.tc_txq[0][2].base = 96;
+			dcb_info->tc_queue.tc_txq[0][3].base = 112;
+			dcb_info->tc_queue.tc_txq[0][0].nb_queue = 64;
+			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
+			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
+			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
+		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+			for (i = 0; i < dcb_info->nb_tcs; i++) {
+				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
+				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
+			}
+			dcb_info->tc_queue.tc_txq[0][0].base = 0;
+			dcb_info->tc_queue.tc_txq[0][1].base = 32;
+			dcb_info->tc_queue.tc_txq[0][2].base = 64;
+			dcb_info->tc_queue.tc_txq[0][3].base = 80;
+			dcb_info->tc_queue.tc_txq[0][4].base = 96;
+			dcb_info->tc_queue.tc_txq[0][5].base = 104;
+			dcb_info->tc_queue.tc_txq[0][6].base = 112;
+			dcb_info->tc_queue.tc_txq[0][7].base = 120;
+			dcb_info->tc_queue.tc_txq[0][0].nb_queue = 32;
+			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
+			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
+			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
+			dcb_info->tc_queue.tc_txq[0][4].nb_queue = 8;
+			dcb_info->tc_queue.tc_txq[0][5].nb_queue = 8;
+			dcb_info->tc_queue.tc_txq[0][6].nb_queue = 8;
+			dcb_info->tc_queue.tc_txq[0][7].nb_queue = 8;
+		}
+	}
+	for (i = 0; i < dcb_info->nb_tcs; i++) {
+		tc = &dcb_config->tc_config[i];
+		dcb_info->tc_bws[i] = tc->path[IXGBE_DCB_TX_CONFIG].bwg_percent;
+	}
+	return 0;
+}
+
 static struct rte_driver rte_ixgbe_driver = {
 	.type = PMD_PDEV,
 	.init = rte_ixgbe_pmd_init,
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread
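
A short sketch of how the per-TC queue layout filled in above can be
consumed by an application. The helper below prints the RX queue range of
each traffic class for pool 0 (the non-VT case); it is a hypothetical
usage example, not part of the patch:

	#include <stdio.h>
	#include <rte_ethdev.h>

	static void
	dump_tc_rx_ranges(uint8_t port_id)	/* hypothetical helper */
	{
		struct rte_eth_dcb_info info;
		int tc, base, nb;

		if (rte_eth_dev_get_dcb_info(port_id, &info) != 0)
			return;
		for (tc = 0; tc < info.nb_tcs; tc++) {
			base = info.tc_queue.tc_rxq[0][tc].base;
			nb = info.tc_queue.tc_rxq[0][tc].nb_queue;
			printf("TC%d: rxq %d-%d\n", tc, base, base + nb - 1);
		}
	}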

* [dpdk-dev] [PATCH v2 07/10] i40e: get_dcb_info ops implement
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                     ` (5 preceding siblings ...)
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 06/10] ixgbe: get_dcb_info ops implement Jingjing Wu
@ 2015-10-29  8:53   ` Jingjing Wu
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 08/10] app/testpmd: set up DCB forwarding based on traffic class Jingjing Wu
                     ` (5 subsequent siblings)
  12 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-29  8:53 UTC (permalink / raw)
  To: dev

This patch implements the get_dcb_info ops in the i40e driver.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7db1de9..28f780b 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -224,6 +224,8 @@ static int i40e_dev_filter_ctrl(struct rte_eth_dev *dev,
 				enum rte_filter_type filter_type,
 				enum rte_filter_op filter_op,
 				void *arg);
+static int i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
+				  struct rte_eth_dcb_info *dcb_info);
 static void i40e_configure_registers(struct i40e_hw *hw);
 static void i40e_hw_init(struct i40e_hw *hw);
 static int i40e_config_qinq(struct i40e_hw *hw, struct i40e_vsi *vsi);
@@ -296,6 +298,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
 	.timesync_disable             = i40e_timesync_disable,
 	.timesync_read_rx_timestamp   = i40e_timesync_read_rx_timestamp,
 	.timesync_read_tx_timestamp   = i40e_timesync_read_tx_timestamp,
+	.get_dcb_info                 = i40e_dev_get_dcb_info,
 };
 
 static struct eth_driver rte_i40e_pmd = {
@@ -6806,3 +6809,42 @@ i40e_dcb_setup(struct rte_eth_dev *dev)
 	}
 	return 0;
 }
+
+static int
+i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
+		      struct rte_eth_dcb_info *dcb_info)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct i40e_vsi *vsi = pf->main_vsi;
+	struct i40e_dcbx_config *dcb_cfg = &hw->local_dcbx_config;
+	uint16_t bsf, tc_mapping;
+	int i;
+
+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+		dcb_info->nb_tcs = rte_bsf32(vsi->enabled_tc + 1);
+	else
+		dcb_info->nb_tcs = 1;
+	for (i = 0; i < I40E_MAX_USER_PRIORITY; i++)
+		dcb_info->prio_tc[i] = dcb_cfg->etscfg.prioritytable[i];
+	for (i = 0; i < dcb_info->nb_tcs; i++)
+		dcb_info->tc_bws[i] = dcb_cfg->etscfg.tcbwtable[i];
+
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (vsi->enabled_tc & (1 << i)) {
+			tc_mapping = rte_le_to_cpu_16(vsi->info.tc_mapping[i]);
+			/* only the main VSI supports multiple TCs */
+			dcb_info->tc_queue.tc_rxq[0][i].base =
+				(tc_mapping & I40E_AQ_VSI_TC_QUE_OFFSET_MASK) >>
+				I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT;
+			dcb_info->tc_queue.tc_txq[0][i].base =
+				dcb_info->tc_queue.tc_rxq[0][i].base;
+			bsf = (tc_mapping & I40E_AQ_VSI_TC_QUE_NUMBER_MASK) >>
+				I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT;
+			dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 1 << bsf;
+			dcb_info->tc_queue.tc_txq[0][i].nb_queue =
+				dcb_info->tc_queue.tc_rxq[0][i].nb_queue;
+		}
+	}
+	return 0;
+}
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread
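
To make the mask/shift arithmetic above concrete: tc_mapping packs, per
TC, a queue offset and a log2 queue count. A worked example with invented
values (the field layout itself comes from the I40E_AQ_VSI_TC_QUE_*
macros used in the patch):

	/* Hypothetical decode: suppose the offset field of tc_mapping
	 * yields 8 and the number field (bsf) yields 2 for some TC. Then:
	 *   base     = 8             (first queue owned by the TC)
	 *   nb_queue = 1 << 2 = 4    (the TC owns queues 8..11)
	 * The hardware stores the queue count as a power-of-2 exponent,
	 * which is why the code shifts 1 by bsf instead of using the raw
	 * field value directly.
	 */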

* [dpdk-dev] [PATCH v2 08/10] app/testpmd: set up DCB forwarding based on traffic class
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                     ` (6 preceding siblings ...)
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 07/10] i40e: " Jingjing Wu
@ 2015-10-29  8:53   ` Jingjing Wu
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 09/10] app/testpmd: add command to display DCB info Jingjing Wu
                     ` (4 subsequent siblings)
  12 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-29  8:53 UTC (permalink / raw)
  To: dev

This patch changes the testpmd DCB forwarding stream to make it
based on traffic class.
It also fixes some coding style issues.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 app/test-pmd/cmdline.c |  39 +++++++-----
 app/test-pmd/config.c  | 159 +++++++++++++++++++++----------------------------
 app/test-pmd/testpmd.c | 151 +++++++++++++++++++++++++---------------------
 app/test-pmd/testpmd.h |  23 +------
 4 files changed, 176 insertions(+), 196 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 0f8f48f..2ec855f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1999,37 +1999,46 @@ cmd_config_dcb_parsed(void *parsed_result,
                         __attribute__((unused)) void *data)
 {
 	struct cmd_config_dcb *res = parsed_result;
-	struct dcb_config dcb_conf;
 	portid_t port_id = res->port_id;
 	struct rte_port *port;
+	uint8_t pfc_en;
+	int ret;
 
 	port = &ports[port_id];
 	/** Check if the port is not started **/
 	if (port->port_status != RTE_PORT_STOPPED) {
-		printf("Please stop port %d first\n",port_id);
+		printf("Please stop port %d first\n", port_id);
 		return;
 	}
 
-	dcb_conf.num_tcs = (enum rte_eth_nb_tcs) res->num_tcs;
-	if ((dcb_conf.num_tcs != ETH_4_TCS) && (dcb_conf.num_tcs != ETH_8_TCS)){
-		printf("The invalid number of traffic class,only 4 or 8 allowed\n");
+	if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+		printf("The invalid number of traffic class,"
+			" only 4 or 8 allowed.\n");
 		return;
 	}
 
-	/* DCB in VT mode */
-	if (!strncmp(res->vt_en, "on",2))
-		dcb_conf.dcb_mode = DCB_VT_ENABLED;
+	if (nb_fwd_lcores < res->num_tcs) {
+		printf("nb_cores shouldn't be less than number of TCs.\n");
+		return;
+	}
+	if (!strncmp(res->pfc_en, "on", 2))
+		pfc_en = 1;
 	else
-		dcb_conf.dcb_mode = DCB_ENABLED;
+		pfc_en = 0;
 
-	if (!strncmp(res->pfc_en, "on",2)) {
-		dcb_conf.pfc_en = 1;
-	}
+	/* DCB in VT mode */
+	if (!strncmp(res->vt_en, "on", 2))
+		ret = init_port_dcb_config(port_id, DCB_VT_ENABLED,
+				(enum rte_eth_nb_tcs)res->num_tcs,
+				pfc_en);
 	else
-		dcb_conf.pfc_en = 0;
+		ret = init_port_dcb_config(port_id, DCB_ENABLED,
+				(enum rte_eth_nb_tcs)res->num_tcs,
+				pfc_en);
+
-	if (init_port_dcb_config(port_id,&dcb_conf) != 0) {
-		printf("Cannot initialize network ports\n");
+	if (ret != 0) {
+		printf("Cannot initialize network ports.\n");
 		return;
 	}
 
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cf2aa6e..11136aa 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1128,113 +1128,92 @@ rss_fwd_config_setup(void)
 	}
 }
 
-/*
- * In DCB and VT on,the mapping of 128 receive queues to 128 transmit queues.
- */
-static void
-dcb_rxq_2_txq_mapping(queueid_t rxq, queueid_t *txq)
-{
-	if(dcb_q_mapping == DCB_4_TCS_Q_MAPPING) {
-
-		if (rxq < 32)
-			/* tc0: 0-31 */
-			*txq = rxq;
-		else if (rxq < 64) {
-			/* tc1: 64-95 */
-			*txq =  (uint16_t)(rxq + 32);
-		}
-		else {
-			/* tc2: 96-111;tc3:112-127 */
-			*txq =  (uint16_t)(rxq/2 + 64);
-		}
-	}
-	else {
-		if (rxq < 16)
-			/* tc0 mapping*/
-			*txq = rxq;
-		else if (rxq < 32) {
-			/* tc1 mapping*/
-			 *txq = (uint16_t)(rxq + 16);
-		}
-		else if (rxq < 64) {
-			/*tc2,tc3 mapping */
-			*txq =  (uint16_t)(rxq + 32);
-		}
-		else {
-			/* tc4,tc5,tc6 and tc7 mapping */
-			*txq =  (uint16_t)(rxq/2 + 64);
-		}
-	}
-}
-
 /**
- * For the DCB forwarding test, each core is assigned on every port multi-transmit
- * queue.
+ * For the DCB forwarding test, each core is assigned to one traffic class.
  *
  * Each core is assigned a multi-stream, each stream being composed of
  * a RX queue to poll on a RX port for input messages, associated with
- * a TX queue of a TX port where to send forwarded packets.
- * All packets received on the RX queue of index "RxQj" of the RX port "RxPi"
- * are sent on the TX queue "TxQl" of the TX port "TxPk" according to the two
- * following rules:
- * In VT mode,
- *    - TxPk = (RxPi + 1) if RxPi is even, (RxPi - 1) if RxPi is odd
- *    - TxQl = RxQj
- * In non-VT mode,
- *    - TxPk = (RxPi + 1) if RxPi is even, (RxPi - 1) if RxPi is odd
- *    There is a mapping of RxQj to TxQl to be required,and the mapping was implemented
- *    in dcb_rxq_2_txq_mapping function.
+ * a TX queue of a TX port to which forwarded packets are sent. All RX
+ * and TX queues of a stream are mapped to the same traffic class.
+ * If VMDQ and DCB co-exist, each traffic class on different pools
+ * shares the same core.
  */
 static void
 dcb_fwd_config_setup(void)
 {
-	portid_t   rxp;
-	portid_t   txp;
-	queueid_t  rxq;
-	queueid_t  nb_q;
+	struct rte_eth_dcb_info rxp_dcb_info, txp_dcb_info;
+	portid_t txp, rxp = 0;
+	queueid_t txq, rxq = 0;
 	lcoreid_t  lc_id;
-	uint16_t sm_id;
-
-	nb_q = nb_rxq;
+	uint16_t nb_rx_queue, nb_tx_queue;
+	uint16_t i, j, k, sm_id = 0;
+	uint8_t tc = 0;
 
 	cur_fwd_config.nb_fwd_lcores = (lcoreid_t) nb_fwd_lcores;
 	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
 	cur_fwd_config.nb_fwd_streams =
-		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
+		(streamid_t) (nb_rxq * cur_fwd_config.nb_fwd_ports);
 
 	/* reinitialize forwarding streams */
 	init_fwd_streams();
+	sm_id = 0;
+	if ((rxp & 0x1) == 0)
+		txp = (portid_t) (rxp + 1);
+	else
+		txp = (portid_t) (rxp - 1);
+	/* get the dcb info on the first RX and TX ports */
+	(void)rte_eth_dev_get_dcb_info(fwd_ports_ids[rxp], &rxp_dcb_info);
+	(void)rte_eth_dev_get_dcb_info(fwd_ports_ids[txp], &txp_dcb_info);
 
-	setup_fwd_config_of_each_lcore(&cur_fwd_config);
-	rxp = 0; rxq = 0;
 	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
-		/* a fwd core can run multi-streams */
-		for (sm_id = 0; sm_id < fwd_lcores[lc_id]->stream_nb; sm_id++)
-		{
-			struct fwd_stream *fs;
-			fs = fwd_streams[fwd_lcores[lc_id]->stream_idx + sm_id];
-			if ((rxp & 0x1) == 0)
-				txp = (portid_t) (rxp + 1);
-			else
-				txp = (portid_t) (rxp - 1);
-			fs->rx_port = fwd_ports_ids[rxp];
-			fs->rx_queue = rxq;
-			fs->tx_port = fwd_ports_ids[txp];
-			if (dcb_q_mapping == DCB_VT_Q_MAPPING)
-				fs->tx_queue = rxq;
-			else
-				dcb_rxq_2_txq_mapping(rxq, &fs->tx_queue);
-			fs->peer_addr = fs->tx_port;
-			rxq = (queueid_t) (rxq + 1);
-			if (rxq < nb_q)
-				continue;
-			rxq = 0;
-			if (numa_support && (nb_fwd_ports <= (nb_ports >> 1)))
-				rxp = (portid_t)
-					(rxp + ((nb_ports >> 1) / nb_fwd_ports));
-			else
-				rxp = (portid_t) (rxp + 1);
+		fwd_lcores[lc_id]->stream_nb = 0;
+		fwd_lcores[lc_id]->stream_idx = sm_id;
+		for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+			/* if nb_queue is zero, this TC is not
+			 * enabled on the pool
+			 */
+			if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0)
+				break;
+			k = fwd_lcores[lc_id]->stream_nb +
+				fwd_lcores[lc_id]->stream_idx;
+			rxq = rxp_dcb_info.tc_queue.tc_rxq[i][tc].base;
+			txq = txp_dcb_info.tc_queue.tc_txq[i][tc].base;
+			nb_rx_queue = rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue;
+			nb_tx_queue = txp_dcb_info.tc_queue.tc_txq[i][tc].nb_queue;
+			for (j = 0; j < nb_rx_queue; j++) {
+				struct fwd_stream *fs;
+
+				fs = fwd_streams[k + j];
+				fs->rx_port = fwd_ports_ids[rxp];
+				fs->rx_queue = rxq + j;
+				fs->tx_port = fwd_ports_ids[txp];
+				fs->tx_queue = txq + j % nb_tx_queue;
+				fs->peer_addr = fs->tx_port;
+			}
+			fwd_lcores[lc_id]->stream_nb +=
+				rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue;
 		}
+		sm_id = (streamid_t) (sm_id + fwd_lcores[lc_id]->stream_nb);
+
+		tc++;
+		if (tc < rxp_dcb_info.nb_tcs)
+			continue;
+		/* Restart from TC 0 on next RX port */
+		tc = 0;
+		if (numa_support && (nb_fwd_ports <= (nb_ports >> 1)))
+			rxp = (portid_t)
+				(rxp + ((nb_ports >> 1) / nb_fwd_ports));
+		else
+			rxp++;
+		if (rxp >= nb_fwd_ports)
+			return;
+		/* get the DCB information on the next RX and TX ports */
+		if ((rxp & 0x1) == 0)
+			txp = (portid_t) (rxp + 1);
+		else
+			txp = (portid_t) (rxp - 1);
+		rte_eth_dev_get_dcb_info(fwd_ports_ids[rxp], &rxp_dcb_info);
+		rte_eth_dev_get_dcb_info(fwd_ports_ids[txp], &txp_dcb_info);
 	}
 }
 
@@ -1354,10 +1333,6 @@ pkt_fwd_config_display(struct fwd_config *cfg)
 void
 fwd_config_display(void)
 {
-	if((dcb_config) && (nb_fwd_lcores == 1)) {
-		printf("In DCB mode,the nb forwarding cores should be larger than 1\n");
-		return;
-	}
 	fwd_config_setup();
 	pkt_fwd_config_display(&cur_fwd_config);
 }
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 8b8eb7d..6805297 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -182,9 +182,6 @@ uint8_t dcb_config = 0;
 /* Whether the dcb is in testing status */
 uint8_t dcb_test = 0;
 
-/* DCB on and VT on mapping is default */
-enum dcb_queue_mapping_mode dcb_q_mapping = DCB_VT_Q_MAPPING;
-
 /*
  * Configurable number of RX/TX queues.
  */
@@ -1849,115 +1846,131 @@ const uint16_t vlan_tags[] = {
 };
 
 static  int
-get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
+get_eth_dcb_conf(struct rte_eth_conf *eth_conf,
+		 enum dcb_mode_enable dcb_mode,
+		 enum rte_eth_nb_tcs num_tcs,
+		 uint8_t pfc_en)
 {
-        uint8_t i;
+	uint8_t i;
 
 	/*
 	 * Builds up the correct configuration for dcb+vt based on the vlan tags array
 	 * given above, and the number of traffic classes available for use.
 	 */
-	if (dcb_conf->dcb_mode == DCB_VT_ENABLED) {
-		struct rte_eth_vmdq_dcb_conf vmdq_rx_conf;
-		struct rte_eth_vmdq_dcb_tx_conf vmdq_tx_conf;
+	if (dcb_mode == DCB_VT_ENABLED) {
+		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
+				&eth_conf->rx_adv_conf.vmdq_dcb_conf;
+		struct rte_eth_vmdq_dcb_tx_conf *vmdq_tx_conf =
+				&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 
 		/* VMDQ+DCB RX and TX configurations */
-		vmdq_rx_conf.enable_default_pool = 0;
-		vmdq_rx_conf.default_pool = 0;
-		vmdq_rx_conf.nb_queue_pools =
-			(dcb_conf->num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
-		vmdq_tx_conf.nb_queue_pools =
-			(dcb_conf->num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
-
-		vmdq_rx_conf.nb_pool_maps = sizeof( vlan_tags )/sizeof( vlan_tags[ 0 ]);
-		for (i = 0; i < vmdq_rx_conf.nb_pool_maps; i++) {
-			vmdq_rx_conf.pool_map[i].vlan_id = vlan_tags[ i ];
-			vmdq_rx_conf.pool_map[i].pools = 1 << (i % vmdq_rx_conf.nb_queue_pools);
+		vmdq_rx_conf->enable_default_pool = 0;
+		vmdq_rx_conf->default_pool = 0;
+		vmdq_rx_conf->nb_queue_pools =
+			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+		vmdq_tx_conf->nb_queue_pools =
+			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+
+		vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
+		for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
+			vmdq_rx_conf->pool_map[i].vlan_id = vlan_tags[i];
+			vmdq_rx_conf->pool_map[i].pools =
+				1 << (i % vmdq_rx_conf->nb_queue_pools);
 		}
 		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-			vmdq_rx_conf.dcb_tc[i] = i;
-			vmdq_tx_conf.dcb_tc[i] = i;
+			vmdq_rx_conf->dcb_tc[i] = i;
+			vmdq_tx_conf->dcb_tc[i] = i;
 		}
 
-		/*set DCB mode of RX and TX of multiple queues*/
+		/* set DCB mode of RX and TX of multiple queues */
 		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB;
 		eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
-		if (dcb_conf->pfc_en)
-			eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT|ETH_DCB_PFC_SUPPORT;
-		else
-			eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
-
-		(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_dcb_conf, &vmdq_rx_conf,
-                                sizeof(struct rte_eth_vmdq_dcb_conf)));
-		(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &vmdq_tx_conf,
-                                sizeof(struct rte_eth_vmdq_dcb_tx_conf)));
-	}
-	else {
-		struct rte_eth_dcb_rx_conf rx_conf;
-		struct rte_eth_dcb_tx_conf tx_conf;
-
-		/* queue mapping configuration of DCB RX and TX */
-		if (dcb_conf->num_tcs == ETH_4_TCS)
-			dcb_q_mapping = DCB_4_TCS_Q_MAPPING;
-		else
-			dcb_q_mapping = DCB_8_TCS_Q_MAPPING;
-
-		rx_conf.nb_tcs = dcb_conf->num_tcs;
-		tx_conf.nb_tcs = dcb_conf->num_tcs;
-
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
-			rx_conf.dcb_tc[i] = i;
-			tx_conf.dcb_tc[i] = i;
+	} else {
+		struct rte_eth_dcb_rx_conf *rx_conf =
+				&eth_conf->rx_adv_conf.dcb_rx_conf;
+		struct rte_eth_dcb_tx_conf *tx_conf =
+				&eth_conf->tx_adv_conf.dcb_tx_conf;
+
+		rx_conf->nb_tcs = num_tcs;
+		tx_conf->nb_tcs = num_tcs;
+
+		for (i = 0; i < num_tcs; i++) {
+			rx_conf->dcb_tc[i] = i;
+			tx_conf->dcb_tc[i] = i;
 		}
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_DCB;
+		eth_conf->rxmode.mq_mode = ETH_MQ_RX_DCB_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = rss_hf;
 		eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
-		if (dcb_conf->pfc_en)
-			eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT|ETH_DCB_PFC_SUPPORT;
-		else
-			eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
-
-		(void)(rte_memcpy(&eth_conf->rx_adv_conf.dcb_rx_conf, &rx_conf,
-                                sizeof(struct rte_eth_dcb_rx_conf)));
-		(void)(rte_memcpy(&eth_conf->tx_adv_conf.dcb_tx_conf, &tx_conf,
-                                sizeof(struct rte_eth_dcb_tx_conf)));
 	}
 
+	if (pfc_en)
+		eth_conf->dcb_capability_en =
+				ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+	else
+		eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+
 	return 0;
 }
 
 int
-init_port_dcb_config(portid_t pid,struct dcb_config *dcb_conf)
+init_port_dcb_config(portid_t pid,
+		     enum dcb_mode_enable dcb_mode,
+		     enum rte_eth_nb_tcs num_tcs,
+		     uint8_t pfc_en)
 {
 	struct rte_eth_conf port_conf;
+	struct rte_eth_dev_info dev_info;
 	struct rte_port *rte_port;
 	int retval;
-	uint16_t nb_vlan;
 	uint16_t i;
 
-	/* rxq and txq configuration in dcb mode */
-	nb_rxq = 128;
-	nb_txq = 128;
+	rte_eth_dev_info_get(pid, &dev_info);
+
+	/* If dev_info.vmdq_pool_base is greater than 0,
+	 * the queue IDs of the VMDq pools start after the PF queues.
+	 */
+	if (dcb_mode == DCB_VT_ENABLED && dev_info.vmdq_pool_base > 0) {
+		printf("VMDQ_DCB multi-queue mode is nonsensical"
+			" for port %d.", pid);
+		return -1;
+	}
+
+	/* Assume the ports in testpmd have the same DCB capability
+	 * and the same number of rxq and txq in DCB mode
+	 */
+	if (dcb_mode == DCB_VT_ENABLED) {
+		nb_rxq = dev_info.max_rx_queues;
+		nb_txq = dev_info.max_tx_queues;
+	} else {
+		/* if VT is disabled, use all PF queues */
+		if (dev_info.vmdq_pool_base == 0) {
+			nb_rxq = dev_info.max_rx_queues;
+			nb_txq = dev_info.max_tx_queues;
+		} else {
+			nb_rxq = (queueid_t)num_tcs;
+			nb_txq = (queueid_t)num_tcs;
+		}
+	}
 	rx_free_thresh = 64;
 
-	memset(&port_conf,0,sizeof(struct rte_eth_conf));
+	memset(&port_conf, 0, sizeof(struct rte_eth_conf));
 	/* Enter DCB configuration status */
 	dcb_config = 1;
 
-	nb_vlan = sizeof( vlan_tags )/sizeof( vlan_tags[ 0 ]);
 	/* set configuration of DCB in VT mode and DCB in non-VT mode */
-	retval = get_eth_dcb_conf(&port_conf, dcb_conf);
+	retval = get_eth_dcb_conf(&port_conf, dcb_mode, num_tcs, pfc_en);
 	if (retval < 0)
 		return retval;
 
 	rte_port = &ports[pid];
-	memcpy(&rte_port->dev_conf, &port_conf,sizeof(struct rte_eth_conf));
+	memcpy(&rte_port->dev_conf, &port_conf, sizeof(struct rte_eth_conf));
 
 	rxtx_port_config(rte_port);
 	/* VLAN filter */
 	rte_port->dev_conf.rxmode.hw_vlan_filter = 1;
-	for (i = 0; i < nb_vlan; i++){
+	for (i = 0; i < RTE_DIM(vlan_tags); i++)
 		rx_vft_set(pid, vlan_tags[i], 1);
-	}
 
 	rte_eth_macaddr_get(pid, &rte_port->eth_addr);
 	map_port_queue_stats_mapping_registers(pid, rte_port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f925df7..3661755 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -255,25 +255,6 @@ enum dcb_mode_enable
 	DCB_ENABLED
 };
 
-/*
- * DCB general config info
- */
-struct dcb_config {
-	enum dcb_mode_enable dcb_mode;
-	uint8_t vt_en;
-	enum rte_eth_nb_tcs num_tcs;
-	uint8_t pfc_en;
-};
-
-/*
- * In DCB io FWD mode, 128 RX queue to 128 TX queue mapping
- */
-enum dcb_queue_mapping_mode {
-	DCB_VT_Q_MAPPING = 0,
-	DCB_4_TCS_Q_MAPPING,
-	DCB_8_TCS_Q_MAPPING
-};
-
 #define MAX_TX_QUEUE_STATS_MAPPINGS 1024 /* MAX_PORT of 32 @ 32 tx_queues/port */
 #define MAX_RX_QUEUE_STATS_MAPPINGS 4096 /* MAX_PORT of 32 @ 128 rx_queues/port */
 
@@ -536,7 +517,9 @@ void dev_set_link_down(portid_t pid);
 void init_port_config(void);
 void set_port_slave_flag(portid_t slave_pid);
 void clear_port_slave_flag(portid_t slave_pid);
-int init_port_dcb_config(portid_t pid,struct dcb_config *dcb_conf);
+int init_port_dcb_config(portid_t pid, enum dcb_mode_enable dcb_mode,
+		     enum rte_eth_nb_tcs num_tcs,
+		     uint8_t pfc_en);
 int start_port(portid_t pid);
 void stop_port(portid_t pid);
 void close_port(portid_t pid);
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread
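
A worked example of the stream mapping set up in dcb_fwd_config_setup()
above, with invented queue counts: if TC0 on the RX port owns rxq 0-3
(base 0, nb_queue 4) and TC0 on the TX port owns txq 0-1 (base 0,
nb_queue 2), the expression fs->tx_queue = txq + j % nb_tx_queue
distributes the four RX queues over the two TX queues of the same
traffic class:

	/* j : fs->rx_queue -> fs->tx_queue   (txq = 0, nb_tx_queue = 2)
	 * 0 :     rxq 0    ->    txq 0
	 * 1 :     rxq 1    ->    txq 1
	 * 2 :     rxq 2    ->    txq 0
	 * 3 :     rxq 3    ->    txq 1
	 */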

* [dpdk-dev] [PATCH v2 09/10] app/testpmd: add command to display DCB info
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                     ` (7 preceding siblings ...)
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 08/10] app/testpmd: set up DCB forwarding based on traffic class Jingjing Wu
@ 2015-10-29  8:53   ` Jingjing Wu
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 10/10] doc: update testpmd guide and release note Jingjing Wu
                     ` (3 subsequent siblings)
  12 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-29  8:53 UTC (permalink / raw)
  To: dev

This patch adds a command to display DCB info on ports.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 app/test-pmd/cmdline.c | 15 ++++++++++-----
 app/test-pmd/config.c  | 43 +++++++++++++++++++++++++++++++++++++++++++
 app/test-pmd/testpmd.h |  1 +
 3 files changed, 54 insertions(+), 5 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 2ec855f..bf6b3e4 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -182,7 +182,7 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"Display:\n"
 			"--------\n\n"
 
-			"show port (info|stats|xstats|fdir|stat_qmap) (port_id|all)\n"
+			"show port (info|stats|xstats|fdir|stat_qmap|dcb_tc) (port_id|all)\n"
 			"    Display information for port_id, or all.\n\n"
 
 			"show port X rss reta (size) (mask0,mask1,...)\n"
@@ -5235,6 +5235,9 @@ static void cmd_showportall_parsed(void *parsed_result,
 	else if (!strcmp(res->what, "stat_qmap"))
 		FOREACH_PORT(i, ports)
 			nic_stats_mapping_display(i);
+	else if (!strcmp(res->what, "dcb_tc"))
+		FOREACH_PORT(i, ports)
+			port_dcb_info_display(i);
 }
 
 cmdline_parse_token_string_t cmd_showportall_show =
@@ -5244,13 +5247,13 @@ cmdline_parse_token_string_t cmd_showportall_port =
 	TOKEN_STRING_INITIALIZER(struct cmd_showportall_result, port, "port");
 cmdline_parse_token_string_t cmd_showportall_what =
 	TOKEN_STRING_INITIALIZER(struct cmd_showportall_result, what,
-				 "info#stats#xstats#fdir#stat_qmap");
+				 "info#stats#xstats#fdir#stat_qmap#dcb_tc");
 cmdline_parse_token_string_t cmd_showportall_all =
 	TOKEN_STRING_INITIALIZER(struct cmd_showportall_result, all, "all");
 cmdline_parse_inst_t cmd_showportall = {
 	.f = cmd_showportall_parsed,
 	.data = NULL,
-	.help_str = "show|clear port info|stats|xstats|fdir|stat_qmap all",
+	.help_str = "show|clear port info|stats|xstats|fdir|stat_qmap|dcb_tc all",
 	.tokens = {
 		(void *)&cmd_showportall_show,
 		(void *)&cmd_showportall_port,
@@ -5288,6 +5291,8 @@ static void cmd_showport_parsed(void *parsed_result,
 		 fdir_get_infos(res->portnum);
 	else if (!strcmp(res->what, "stat_qmap"))
 		nic_stats_mapping_display(res->portnum);
+	else if (!strcmp(res->what, "dcb_tc"))
+		port_dcb_info_display(res->portnum);
 }
 
 cmdline_parse_token_string_t cmd_showport_show =
@@ -5297,14 +5302,14 @@ cmdline_parse_token_string_t cmd_showport_port =
 	TOKEN_STRING_INITIALIZER(struct cmd_showport_result, port, "port");
 cmdline_parse_token_string_t cmd_showport_what =
 	TOKEN_STRING_INITIALIZER(struct cmd_showport_result, what,
-				 "info#stats#xstats#fdir#stat_qmap");
+				 "info#stats#xstats#fdir#stat_qmap#dcb_tc");
 cmdline_parse_token_num_t cmd_showport_portnum =
 	TOKEN_NUM_INITIALIZER(struct cmd_showport_result, portnum, UINT8);
 
 cmdline_parse_inst_t cmd_showport = {
 	.f = cmd_showport_parsed,
 	.data = NULL,
-	.help_str = "show|clear port info|stats|xstats|fdir|stat_qmap X (X = port number)",
+	.help_str = "show|clear port info|stats|xstats|fdir|stat_qmap|dcb_tc X (X = port number)",
 	.tokens = {
 		(void *)&cmd_showport_show,
 		(void *)&cmd_showport_port,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 11136aa..6d5820a 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2268,3 +2268,46 @@ mcast_addr_remove(uint8_t port_id, struct ether_addr *mc_addr)
 	mcast_addr_pool_remove(port, i);
 	eth_port_multicast_addr_list_set(port_id);
 }
+
+void
+port_dcb_info_display(uint8_t port_id)
+{
+	struct rte_eth_dcb_info dcb_info;
+	uint16_t i;
+	int ret;
+	static const char *border = "================";
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN))
+		return;
+
+	ret = rte_eth_dev_get_dcb_info(port_id, &dcb_info);
+	if (ret) {
+		printf("\n Failed to get dcb infos on port %-2d\n",
+			port_id);
+		return;
+	}
+	printf("\n  %s DCB infos for port %-2d  %s\n", border, port_id, border);
+	printf("  TC NUMBER: %d\n", dcb_info.nb_tcs);
+	printf("\n  TC :        ");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d", i);
+	printf("\n  Priority :  ");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d", dcb_info.prio_tc[i]);
+	printf("\n  BW percent :");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d%%", dcb_info.tc_bws[i]);
+	printf("\n  RXQ base :  ");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d", dcb_info.tc_queue.tc_rxq[0][i].base);
+	printf("\n  RXQ number :");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d", dcb_info.tc_queue.tc_rxq[0][i].nb_queue);
+	printf("\n  TXQ base :  ");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d", dcb_info.tc_queue.tc_txq[0][i].base);
+	printf("\n  TXQ number :");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d", dcb_info.tc_queue.tc_txq[0][i].nb_queue);
+	printf("\n");
+}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 3661755..ecb411d 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -557,6 +557,7 @@ int tx_queue_id_is_invalid(queueid_t txq_id);
 /* Functions to manage the set of filtered Multicast MAC addresses */
 void mcast_addr_add(uint8_t port_id, struct ether_addr *mc_addr);
 void mcast_addr_remove(uint8_t port_id, struct ether_addr *mc_addr);
+void port_dcb_info_display(uint8_t port_id);
 
 enum print_warning {
 	ENABLED_WARN = 0,
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread
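
For context, the new sub-command is invoked from the testpmd prompt like
the other "show port" categories. A sketch of a session follows; the
layout matches port_dcb_info_display() above, and the values mirror the
ixgbe 4-TC layout from patch 06/10, but the output as a whole is invented
for illustration:

	testpmd> show port dcb_tc 0

	  ================ DCB infos for port 0  ================
	  TC NUMBER: 4
	  TC :               0       1       2       3
	  Priority :         0       1       2       3
	  BW percent :      25%     25%     25%     25%
	  RXQ base :         0      32      64      96
	  RXQ number :      16      16      16      16
	  TXQ base :         0      64      96     112
	  TXQ number :      64      32      16      16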

* [dpdk-dev] [PATCH v2 10/10] doc: update testpmd guide and release note
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                     ` (8 preceding siblings ...)
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 09/10] app/testpmd: add command to display DCB info Jingjing Wu
@ 2015-10-29  8:53   ` Jingjing Wu
  2015-10-30 10:26     ` Thomas Monjalon
  2015-10-30  1:29   ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Liu, Jijiang
                     ` (2 subsequent siblings)
  12 siblings, 1 reply; 40+ messages in thread
From: Jingjing Wu @ 2015-10-29  8:53 UTC (permalink / raw)
  To: dev

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/rel_notes/release_2_2.rst        |  6 ++++++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 12 +++++++-----
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index de6916e..7c0737a 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -11,6 +11,12 @@ New Features
 
 * **Added vhost-user multiple queue support.**
 
+* **Added i40e DCB support.**
+
+  *  Added support to the i40e driver for DCB on PF.
+  *  Provided a new API, rte_eth_dev_get_dcb_info, to query DCB information.
+  *  Changed the testpmd DCB forwarding stream to be based on TC.
+
 
 Resolved Issues
 ---------------
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 71d831b..b7659d0 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -50,10 +50,10 @@ If you type a partial command and hit ``<TAB>`` you get a list of the available
 
    testpmd> show port <TAB>
 
-       info [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap X
-       info [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap all
-       stats [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap X
-       stats [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap all
+       info [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap|dcb_tc X
+       info [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap|dcb_tc all
+       stats [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap|dcb_tc X
+       stats [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap|dcb_tc all
        ...
 
 
@@ -128,7 +128,7 @@ show port
 
 Display information for a given port or all ports::
 
-   testpmd> show port (info|stats|fdir|stat_qmap) (port_id|all)
+   testpmd> show port (info|stats|fdir|stat_qmap|dcb_tc) (port_id|all)
 
 The available information categories are:
 
@@ -140,6 +140,8 @@ The available information categories are:
 
 * ``stat_qmap``: Queue statistics mapping.
 
+* ``dcb_tc``: DCB information such as TC mapping.
+
 For example:
 
 .. code-block:: console
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                     ` (9 preceding siblings ...)
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 10/10] doc: update testpmd guide and release note Jingjing Wu
@ 2015-10-30  1:29   ` Liu, Jijiang
  2015-10-30  2:21   ` Zhang, Helin
  2015-10-31 15:57   ` [dpdk-dev] [PATCH v3 0/9] " Jingjing Wu
  12 siblings, 0 replies; 40+ messages in thread
From: Liu, Jijiang @ 2015-10-30  1:29 UTC (permalink / raw)
  To: Wu, Jingjing, dev



> -----Original Message-----
> From: Wu, Jingjing
> Sent: Thursday, October 29, 2015 4:54 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing; Zhang, Helin; Pei, Yulong; Liu, Jijiang
> Subject: [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC
> 
> The patch set enables DCB feature on Intel XL710/X710 NICs, including:
>   - Receive queue classification based on traffic class
>   - Round Robin ETS schedule (rx and tx).
>   - Priority flow control
> To make the testpmd and ethdev lib more generic on DCB feature, this patch
> set also
>   - adds a new API to get DCB related information on NICs.
>   - changes the DCB test forwarding in testpmd to be on traffic class.
>   - move specific validation from lib and application to drivers.
> Additionally, this patch set also corrects some coding style issues.
> 
> v2 changes:
>  - add a command in testpmd to display dcb info
>  - update testpmd guide and release note
> 
> Jingjing Wu (10):
>   ethdev: rename dcb_queue to dcb_tc in dcb config struct
>   ethdev: move the multi-queue checking to specific drivers
>   i40e: enable DCB feature on FVL
>   ixgbe: enable DCB+RSS multi-queue mode
>   ethdev: new API to get dcb related information
>   ixgbe: get_dcb_info ops implement
>   i40e: get_dcb_info ops implement
>   app/testpmd: set up DCB forwarding based on traffic class
>   app/testpmd: add command to display DCB info
>   doc: update testpmd guide and release note
> 
>  app/test-pmd/cmdline.c                      |  54 ++-
>  app/test-pmd/config.c                       | 202 +++++-----
>  app/test-pmd/testpmd.c                      | 151 ++++----
>  app/test-pmd/testpmd.h                      |  24 +-
>  doc/guides/rel_notes/release_2_2.rst        |   6 +
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  12 +-
>  drivers/net/e1000/igb_ethdev.c              |  84 +++-
>  drivers/net/i40e/i40e_ethdev.c              | 574
> +++++++++++++++++++++++++++-
>  drivers/net/i40e/i40e_ethdev.h              |  14 +
>  drivers/net/i40e/i40e_rxtx.c                |  32 +-
>  drivers/net/i40e/i40e_rxtx.h                |   2 +
>  drivers/net/ixgbe/ixgbe_ethdev.c            | 251 ++++++++++++
>  drivers/net/ixgbe/ixgbe_ethdev.h            |   3 +
>  drivers/net/ixgbe/ixgbe_rxtx.c              |  58 +--
>  examples/vmdq_dcb/main.c                    |   4 +-
>  lib/librte_ether/rte_ethdev.c               | 217 +----------
>  lib/librte_ether/rte_ethdev.h               |  68 +++-
>  17 files changed, 1303 insertions(+), 453 deletions(-)
> 
> --
> 2.4.0

Acked-by: Jijiang Liu <Jijiang.liu@intel.com>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                     ` (10 preceding siblings ...)
  2015-10-30  1:29   ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Liu, Jijiang
@ 2015-10-30  2:21   ` Zhang, Helin
  2015-10-31 15:57   ` [dpdk-dev] [PATCH v3 0/9] " Jingjing Wu
  12 siblings, 0 replies; 40+ messages in thread
From: Zhang, Helin @ 2015-10-30  2:21 UTC (permalink / raw)
  To: Wu, Jingjing, dev

> -----Original Message-----
> From: Wu, Jingjing
> Sent: Thursday, October 29, 2015 4:54 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing; Zhang, Helin; Pei, Yulong; Liu, Jijiang
> Subject: [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC
> 
> The patch set enables DCB feature on Intel XL710/X710 NICs, including:
>   - Receive queue classification based on traffic class
>   - Round Robin ETS schedule (rx and tx).
>   - Priority flow control
> To make the testpmd and ethdev lib more generic on DCB feature, this patch set
> also
>   - adds a new API to get DCB related information on NICs.
>   - changes the DCB test forwarding in testpmd to be on traffic class.
>   - move specific validation from lib and application to drivers.
> Additionally, this patch set also corrects some coding style issues.
> 
> v2 changes:
>  - add a command in testpmd to display dcb info
>  - update testpmd guide and release note
> 
> Jingjing Wu (10):
>   ethdev: rename dcb_queue to dcb_tc in dcb config struct
>   ethdev: move the multi-queue checking to specific drivers
>   i40e: enable DCB feature on FVL
>   ixgbe: enable DCB+RSS multi-queue mode
>   ethdev: new API to get dcb related information
>   ixgbe: get_dcb_info ops implement
>   i40e: get_dcb_info ops implement
>   app/testpmd: set up DCB forwarding based on traffic class
>   app/testpmd: add command to display DCB info
>   doc: update testpmd guide and release note
> 
>  app/test-pmd/cmdline.c                      |  54 ++-
>  app/test-pmd/config.c                       | 202 +++++-----
>  app/test-pmd/testpmd.c                      | 151 ++++----
>  app/test-pmd/testpmd.h                      |  24 +-
>  doc/guides/rel_notes/release_2_2.rst        |   6 +
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  12 +-
>  drivers/net/e1000/igb_ethdev.c              |  84 +++-
>  drivers/net/i40e/i40e_ethdev.c              | 574
> +++++++++++++++++++++++++++-
>  drivers/net/i40e/i40e_ethdev.h              |  14 +
>  drivers/net/i40e/i40e_rxtx.c                |  32 +-
>  drivers/net/i40e/i40e_rxtx.h                |   2 +
>  drivers/net/ixgbe/ixgbe_ethdev.c            | 251 ++++++++++++
>  drivers/net/ixgbe/ixgbe_ethdev.h            |   3 +
>  drivers/net/ixgbe/ixgbe_rxtx.c              |  58 +--
>  examples/vmdq_dcb/main.c                    |   4 +-
>  lib/librte_ether/rte_ethdev.c               | 217 +----------
>  lib/librte_ether/rte_ethdev.h               |  68 +++-
>  17 files changed, 1303 insertions(+), 453 deletions(-)
> 
> --
> 2.4.0

Acked-by: Helin Zhang <helin.zhang@intel.com>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/10] ethdev: rename dcb_queue to dcb_tc in dcb config struct
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 01/10] ethdev: rename dcb_queue to dcb_tc in dcb config struct Jingjing Wu
@ 2015-10-30 10:22     ` Thomas Monjalon
  0 siblings, 0 replies; 40+ messages in thread
From: Thomas Monjalon @ 2015-10-30 10:22 UTC (permalink / raw)
  To: Jingjing Wu; +Cc: dev

This is an API change and should be noted in the release notes.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [dpdk-dev] [PATCH v2 10/10] doc: update testpmd guide and release note
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 10/10] doc: update testpmd guide and release note Jingjing Wu
@ 2015-10-30 10:26     ` Thomas Monjalon
  0 siblings, 0 replies; 40+ messages in thread
From: Thomas Monjalon @ 2015-10-30 10:26 UTC (permalink / raw)
  To: Jingjing Wu; +Cc: dev

2015-10-29 16:53, Jingjing Wu:
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  doc/guides/rel_notes/release_2_2.rst        |  6 ++++++
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst | 12 +++++++-----

These changes would be easier to understand if they were in the context
of the code changes. I suggest updating the docs in the same patch as the
code changes.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [dpdk-dev] [PATCH v2 05/10] ethdev: new API to get dcb related information
  2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 05/10] ethdev: new API to get dcb related information Jingjing Wu
@ 2015-10-30 11:16     ` Thomas Monjalon
  0 siblings, 0 replies; 40+ messages in thread
From: Thomas Monjalon @ 2015-10-30 11:16 UTC (permalink / raw)
  To: Jingjing Wu; +Cc: dev

2015-10-29 16:53, Jingjing Wu:
> This patch adds one new API to get dcb related info.
>   rte_eth_dev_get_dcb_info
> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  lib/librte_ether/rte_ethdev.c | 18 +++++++++++++++
>  lib/librte_ether/rte_ethdev.h | 54 +++++++++++++++++++++++++++++++++++++++++++

You forgot to update the .map file.

So there are some build errors later:
	undefined reference to `rte_eth_dev_get_dcb_info'

Jijiang, Helin, and other reviewers generally,
please try to catch this kind of error (.map, doc), as such errors are
quite common and slow down the integration process when I catch them
only at the end.
The better the patches, the more will be integrated.
Thanks

^ permalink raw reply	[flat|nested] 40+ messages in thread
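
For readers unfamiliar with the build error described here: every symbol
exported by a DPDK shared library must be listed in that library's linker
version script, otherwise consumers of the shared build fail to link. A
sketch of the kind of entry the fix needs in rte_ether_version.map (the
version node names are assumptions based on the 2.2 release cycle):

	DPDK_2.2 {
		global:

		rte_eth_dev_get_dcb_info;

	} DPDK_2.1;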

* [dpdk-dev] [PATCH v3 0/9] enable DCB feature on Intel XL710/X710 NIC
  2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
                     ` (11 preceding siblings ...)
  2015-10-30  2:21   ` Zhang, Helin
@ 2015-10-31 15:57   ` Jingjing Wu
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 1/9] ethdev: rename dcb_queue to dcb_tc in dcb config struct Jingjing Wu
                       ` (9 more replies)
  12 siblings, 10 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-31 15:57 UTC (permalink / raw)
  To: dev

The patch set enables DCB feature on Intel XL710/X710 NICs, including:
  - Receive queue classification based on traffic class
  - Round Robin ETS schedule (rx and tx).
  - Priority flow control
To make the testpmd and ethdev lib more generic on DCB feature, this
patch set also
  - adds a new API to get DCB related information on NICs.
  - changes the DCB test forwarding in testpmd to be on traffic class.
  - moves specific validation from the lib and application to the drivers.
Additionally, this patch set also corrects some coding style issues.

v2 changes:
 - add a command in testpmd to display dcb info
 - update testpmd guide and release note

v3 changes:
 - add API change in release note
 - add new function in rte_ether_version.map
 - rebase doc update to the same commit with code change

Jingjing Wu (9):
  ethdev: rename dcb_queue to dcb_tc in dcb config struct
  ethdev: move the multi-queue checking to specific drivers
  i40e: enable DCB feature on FVL
  ixgbe: enable DCB+RSS multi-queue mode
  ethdev: new API to get dcb related information
  ixgbe: get_dcb_info ops implement
  i40e: get_dcb_info ops implement
  app/testpmd: set up DCB forwarding based on traffic class
  app/testpmd: add command to display DCB info

 app/test-pmd/cmdline.c                      |  54 ++-
 app/test-pmd/config.c                       | 202 +++++-----
 app/test-pmd/testpmd.c                      | 151 ++++----
 app/test-pmd/testpmd.h                      |  24 +-
 doc/guides/rel_notes/release_2_2.rst        |   6 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  12 +-
 drivers/net/e1000/igb_ethdev.c              |  84 +++-
 drivers/net/i40e/i40e_ethdev.c              | 574 +++++++++++++++++++++++++++-
 drivers/net/i40e/i40e_ethdev.h              |  14 +
 drivers/net/i40e/i40e_rxtx.c                |  32 +-
 drivers/net/i40e/i40e_rxtx.h                |   2 +
 drivers/net/ixgbe/ixgbe_ethdev.c            | 250 ++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h            |   3 +
 drivers/net/ixgbe/ixgbe_rxtx.c              |  58 +--
 examples/vmdq_dcb/main.c                    |   4 +-
 lib/librte_ether/rte_ethdev.c               | 217 +----------
 lib/librte_ether/rte_ethdev.h               |  68 +++-
 lib/librte_ether/rte_ether_version.map      |   7 +
 18 files changed, 1309 insertions(+), 453 deletions(-)

-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [dpdk-dev] [PATCH v3 1/9] ethdev: rename dcb_queue to dcb_tc in dcb config struct
  2015-10-31 15:57   ` [dpdk-dev] [PATCH v3 0/9] " Jingjing Wu
@ 2015-10-31 15:57     ` Jingjing Wu
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 2/9] ethdev: move the multi-queue checking to specific drivers Jingjing Wu
                       ` (8 subsequent siblings)
  9 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-31 15:57 UTC (permalink / raw)
  To: dev

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 app/test-pmd/testpmd.c               |  8 ++++----
 doc/guides/rel_notes/release_2_2.rst |  4 ++++
 drivers/net/ixgbe/ixgbe_rxtx.c       | 10 +++++-----
 examples/vmdq_dcb/main.c             |  4 ++--
 lib/librte_ether/rte_ethdev.h        | 14 +++++++-------
 5 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 3cd3cd0..4c6aec6 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1878,8 +1878,8 @@ get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
 			vmdq_rx_conf.pool_map[i].pools = 1 << (i % vmdq_rx_conf.nb_queue_pools);
 		}
 		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-			vmdq_rx_conf.dcb_queue[i] = i;
-			vmdq_tx_conf.dcb_queue[i] = i;
+			vmdq_rx_conf.dcb_tc[i] = i;
+			vmdq_tx_conf.dcb_tc[i] = i;
 		}
 
 		/*set DCB mode of RX and TX of multiple queues*/
@@ -1909,8 +1909,8 @@ get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
 		tx_conf.nb_tcs = dcb_conf->num_tcs;
 
 		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
-			rx_conf.dcb_queue[i] = i;
-			tx_conf.dcb_queue[i] = i;
+			rx_conf.dcb_tc[i] = i;
+			tx_conf.dcb_tc[i] = i;
 		}
 		eth_conf->rxmode.mq_mode = ETH_MQ_RX_DCB;
 		eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 116162e..1857e1d 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -138,6 +138,10 @@ API Changes
 
 * The devargs union field virtual is renamed to virt for C++ compatibility.
 
+* The dcb_queue field is renamed to dcb_tc in the following DCB configuration
+  structures: rte_eth_dcb_rx_conf, rte_eth_vmdq_dcb_tx_conf,
+  rte_eth_dcb_tx_conf, rte_eth_vmdq_dcb_conf.
+
 
 ABI Changes
 -----------
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 1158562..6a62d67 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2928,7 +2928,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
 		 */
-		queue_mapping |= ((cfg->dcb_queue[i] & 0x07) << (i * 3));
+		queue_mapping |= ((cfg->dcb_tc[i] & 0x07) << (i * 3));
 
 	IXGBE_WRITE_REG(hw, IXGBE_RTRUP2TC, queue_mapping);
 
@@ -3063,7 +3063,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 	/* User Priority to Traffic Class mapping */
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-		j = vmdq_rx_conf->dcb_queue[i];
+		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap =
 						(uint8_t)(1 << j);
@@ -3091,7 +3091,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 
 	/* User Priority to Traffic Class mapping */
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-		j = vmdq_tx_conf->dcb_queue[i];
+		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap =
 						(uint8_t)(1 << j);
@@ -3113,7 +3113,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
 
 	/* User Priority to Traffic Class mapping */
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-		j = rx_conf->dcb_queue[i];
+		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap =
 						(uint8_t)(1 << j);
@@ -3134,7 +3134,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
 
 	/* User Priority to Traffic Class mapping */
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-		j = tx_conf->dcb_queue[i];
+		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap =
 						(uint8_t)(1 << j);
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index c31c2ce..b90ac28 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -107,7 +107,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 			.default_pool = 0,
 			.nb_pool_maps = 0,
 			.pool_map = {{0, 0},},
-			.dcb_queue = {0},
+			.dcb_tc = {0},
 		},
 	},
 };
@@ -144,7 +144,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf, enum rte_eth_nb_pools num_pools)
 		conf.pool_map[i].pools = 1 << (i % num_pools);
 	}
 	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
-		conf.dcb_queue[i] = (uint8_t)(i % (NUM_QUEUES/num_pools));
+		conf.dcb_tc[i] = (uint8_t)(i % (NUM_QUEUES/num_pools));
 	}
 	(void)(rte_memcpy(eth_conf, &vmdq_dcb_conf_default, sizeof(*eth_conf)));
 	(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_dcb_conf, &conf,
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 8a8c82b..377da6a 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -543,20 +543,20 @@ enum rte_eth_nb_pools {
 /* This structure may be extended in future. */
 struct rte_eth_dcb_rx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */
-	uint8_t dcb_queue[ETH_DCB_NUM_USER_PRIORITIES];
-	/**< Possible DCB queue,4 or 8. */
+	/** Traffic class each UP mapped to. */
+	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_dcb_tx_conf {
 	enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */
-	uint8_t dcb_queue[ETH_DCB_NUM_USER_PRIORITIES];
-	/**< Possible DCB queue,4 or 8. */
+	/** Traffic class each UP mapped to. */
+	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_dcb_tx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */
-	uint8_t dcb_queue[ETH_DCB_NUM_USER_PRIORITIES];
-	/**< Possible DCB queue,4 or 8. */
+	/** Traffic class each UP mapped to. */
+	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_tx_conf {
@@ -583,7 +583,7 @@ struct rte_eth_vmdq_dcb_conf {
 		uint16_t vlan_id; /**< The vlan id of the received frame */
 		uint64_t pools;   /**< Bitmask of pools for packet rx */
 	} pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
-	uint8_t dcb_queue[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
 	/**< Selects a queue in a pool */
 };
 
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread
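
To make the rename concrete for application writers: code that previously
filled dcb_queue[] now fills dcb_tc[], and the field is explicitly a
user-priority-to-traffic-class mapping. A minimal sketch of a DCB RX
configuration after this patch (values are illustrative only):

	struct rte_eth_conf conf;
	struct rte_eth_dcb_rx_conf *rx_conf;
	uint8_t i;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.mq_mode = ETH_MQ_RX_DCB;
	rx_conf = &conf.rx_adv_conf.dcb_rx_conf;
	rx_conf->nb_tcs = ETH_4_TCS;
	/* map user priority i to traffic class (i % 4) */
	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
		rx_conf->dcb_tc[i] = i % ETH_4_TCS;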

* [dpdk-dev] [PATCH v3 2/9] ethdev: move the multi-queue checking to specific drivers
  2015-10-31 15:57   ` [dpdk-dev] [PATCH v3 0/9] " Jingjing Wu
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 1/9] ethdev: rename dcb_queue to dcb_tc in dcb config struct Jingjing Wu
@ 2015-10-31 15:57     ` Jingjing Wu
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 3/9] i40e: enable DCB feature on FVL Jingjing Wu
                       ` (7 subsequent siblings)
  9 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-31 15:57 UTC (permalink / raw)
  To: dev

Different NICs have their own specific constraints on the multi-queue
configuration, so move the checking from the ethdev lib to the drivers.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/e1000/igb_ethdev.c   |  84 ++++++++++++++++-
 drivers/net/ixgbe/ixgbe_ethdev.c | 171 +++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |   3 +
 lib/librte_ether/rte_ethdev.c    | 199 ---------------------------------------
 4 files changed, 257 insertions(+), 200 deletions(-)

diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 3ab082e..7a8fa93 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -865,16 +865,98 @@ rte_igbvf_pmd_init(const char *name __rte_unused, const char *params __rte_unuse
 }
 
 static int
+igb_check_mq_mode(struct rte_eth_dev *dev)
+{
+	enum rte_eth_rx_mq_mode rx_mq_mode = dev->data->dev_conf.rxmode.mq_mode;
+	enum rte_eth_tx_mq_mode tx_mq_mode = dev->data->dev_conf.txmode.mq_mode;
+	uint16_t nb_rx_q = dev->data->nb_rx_queues;
+	uint16_t nb_tx_q = dev->data->nb_tx_queues;
+
+	if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
+	    tx_mq_mode == ETH_MQ_TX_DCB ||
+	    tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
+		return -EINVAL;
+	}
+	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
+		/* Check multi-queue mode.
+		 * To not break existing software, we accept ETH_MQ_RX_NONE as
+		 * this might be used to turn off the VLAN filter.
+		 */
+
+		if (rx_mq_mode == ETH_MQ_RX_NONE ||
+		    rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+			RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+		} else {
+			/* Only support one queue on VFs.
+			 * RSS together with SRIOV is not supported.
+			 */
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" wrong mq_mode rx %d.",
+					rx_mq_mode);
+			return -EINVAL;
+		}
+		/* TX mode is not used here, so the mode might be ignored. */
+		if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+			/* SRIOV only works in VMDq enable mode */
+			PMD_INIT_LOG(WARNING, "SRIOV is active,"
+					" TX mode %d is not supported. "
+					" Driver will behave as %d mode.",
+					tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+		}
+
+		/* check valid queue number */
+		if ((nb_rx_q > 1) || (nb_tx_q > 1)) {
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" only support one queue on VFs.");
+			return -EINVAL;
+		}
+	} else {
+		/* To not break software that sets an invalid mode, only
+		 * display a warning if an invalid mode is used.
+		 */
+		if (rx_mq_mode != ETH_MQ_RX_NONE &&
+		    rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
+		    rx_mq_mode != ETH_MQ_RX_RSS) {
+			/* RSS together with VMDq not supported*/
+			PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
+				     rx_mq_mode);
+			return -EINVAL;
+		}
+
+		if (tx_mq_mode != ETH_MQ_TX_NONE &&
+		    tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+			PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
+					" Due to txmode is meaningless in this"
+					" driver, just ignore.",
+					tx_mq_mode);
+		}
+	}
+	return 0;
+}
+
+static int
 eth_igb_configure(struct rte_eth_dev *dev)
 {
 	struct e1000_interrupt *intr =
 		E1000_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
+	int ret;
 
 	PMD_INIT_FUNC_TRACE();
+
+	/* multiple queue mode checking */
+	ret  = igb_check_mq_mode(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "igb_check_mq_mode fails with %d.",
+			    ret);
+		return ret;
+	}
+
 	intr->flags |= E1000_FLAG_NEED_LINK_UPDATE;
 	PMD_INIT_FUNC_TRACE();
 
-	return (0);
+	return 0;
 }
 
 static int
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 4373661..ece2e73 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1640,14 +1640,185 @@ ixgbe_vmdq_vlan_hw_filter_enable(struct rte_eth_dev *dev)
 }
 
 static int
+ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
+{
+	switch (nb_rx_q) {
+	case 1:
+	case 2:
+		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		break;
+	case 4:
+		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = nb_rx_q;
+	RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx = dev->pci_dev->max_vfs * nb_rx_q;
+
+	return 0;
+}
+
+static int
+ixgbe_check_mq_mode(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+	uint16_t nb_rx_q = dev->data->nb_rx_queues;
+	uint16_t nb_tx_q = dev->data->nb_tx_queues;
+
+	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
+		/* check multi-queue mode */
+		switch (dev_conf->rxmode.mq_mode) {
+		case ETH_MQ_RX_VMDQ_DCB:
+		case ETH_MQ_RX_VMDQ_DCB_RSS:
+			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
+			PMD_INIT_LOG(ERR, "SRIOV active,"
+					" unsupported mq_mode rx %d.",
+					dev_conf->rxmode.mq_mode);
+			return -EINVAL;
+		case ETH_MQ_RX_RSS:
+		case ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
+				if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
+					PMD_INIT_LOG(ERR, "SRIOV is active,"
+						" invalid queue number"
+						" for VMDQ RSS, allowed"
+						" value are 1, 2 or 4.");
+					return -EINVAL;
+				}
+			break;
+		case ETH_MQ_RX_VMDQ_ONLY:
+		case ETH_MQ_RX_NONE:
+			/* if no mq mode is configured, use the default scheme */
+			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+			if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
+				RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+			break;
+		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+			/* SRIOV only works in VMDq enable mode */
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" wrong mq_mode rx %d.",
+					dev_conf->rxmode.mq_mode);
+			return -EINVAL;
+		}
+
+		switch (dev_conf->txmode.mq_mode) {
+		case ETH_MQ_TX_VMDQ_DCB:
+			/* DCB VMDQ in SRIOV mode, not implement yet */
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" unsupported VMDQ mq_mode tx %d.",
+					dev_conf->txmode.mq_mode);
+			return -EINVAL;
+		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+			break;
+		}
+
+		/* check valid queue number */
+		if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
+		    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
+			PMD_INIT_LOG(ERR, "SRIOV is active,"
+					" queue number must be less than or equal to %d.",
+					RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
+			return -EINVAL;
+		}
+	} else {
+		/* check configuration for VMDq+DCB mode */
+		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+			const struct rte_eth_vmdq_dcb_conf *conf;
+
+			if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
+				PMD_INIT_LOG(ERR, "VMDQ+DCB, nb_rx_q != %d.",
+						IXGBE_VMDQ_DCB_NB_QUEUES);
+				return -EINVAL;
+			}
+			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
+			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
+			       conf->nb_queue_pools == ETH_32_POOLS)) {
+				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
+						" nb_queue_pools must be %d or %d.",
+						ETH_16_POOLS, ETH_32_POOLS);
+				return -EINVAL;
+			}
+		}
+		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+			const struct rte_eth_vmdq_dcb_tx_conf *conf;
+
+			if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
+				PMD_INIT_LOG(ERR, "VMDQ+DCB, nb_tx_q != %d",
+						 IXGBE_VMDQ_DCB_NB_QUEUES);
+				return -EINVAL;
+			}
+			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
+			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
+			       conf->nb_queue_pools == ETH_32_POOLS)) {
+				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
+						" nb_queue_pools must be"
+						" %d or %d.",
+						ETH_16_POOLS, ETH_32_POOLS);
+				return -EINVAL;
+			}
+		}
+
+		/* For DCB mode check our configuration before we go further */
+		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+			const struct rte_eth_dcb_rx_conf *conf;
+
+			if (nb_rx_q != IXGBE_DCB_NB_QUEUES) {
+				PMD_INIT_LOG(ERR, "DCB selected, nb_rx_q != %d.",
+						 IXGBE_DCB_NB_QUEUES);
+				return -EINVAL;
+			}
+			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
+			if (!(conf->nb_tcs == ETH_4_TCS ||
+			       conf->nb_tcs == ETH_8_TCS)) {
+				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs must"
+						" be %d or %d.",
+						ETH_4_TCS, ETH_8_TCS);
+				return -EINVAL;
+			}
+		}
+
+		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+			const struct rte_eth_dcb_tx_conf *conf;
+
+			if (nb_tx_q != IXGBE_DCB_NB_QUEUES) {
+				PMD_INIT_LOG(ERR, "DCB, nb_tx_q != %d.",
+						 IXGBE_DCB_NB_QUEUES);
+				return -EINVAL;
+			}
+			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
+			if (!(conf->nb_tcs == ETH_4_TCS ||
+			       conf->nb_tcs == ETH_8_TCS)) {
+				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs must"
+						" be %d or %d.",
+						ETH_4_TCS, ETH_8_TCS);
+				return -EINVAL;
+			}
+		}
+	}
+	return 0;
+}
+
+static int
 ixgbe_dev_configure(struct rte_eth_dev *dev)
 {
 	struct ixgbe_interrupt *intr =
 		IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
 	struct ixgbe_adapter *adapter =
 		(struct ixgbe_adapter *)dev->data->dev_private;
+	int ret;
 
 	PMD_INIT_FUNC_TRACE();
+	/* multiple queue mode checking */
+	ret  = ixgbe_check_mq_mode(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "ixgbe_check_mq_mode fails with %d.",
+			    ret);
+		return ret;
+	}
 
 	/* set flag to update link status after init */
 	intr->flags |= IXGBE_FLAG_NEED_LINK_UPDATE;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index f75c6dd..ccef592 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -57,6 +57,9 @@
 #define IXGBE_VFTA_SIZE 128
 #define IXGBE_VLAN_TAG_SIZE 4
 #define IXGBE_MAX_RX_QUEUE_NUM	128
+#define IXGBE_VMDQ_DCB_NB_QUEUES     IXGBE_MAX_RX_QUEUE_NUM
+#define IXGBE_DCB_NB_QUEUES          IXGBE_MAX_RX_QUEUE_NUM
+
 #ifndef NBBY
 #define NBBY	8	/* number of bits in a byte */
 #endif
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index f593f6e..c7247c3 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -880,197 +880,6 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 	return 0;
 }
 
-static int
-rte_eth_dev_check_vf_rss_rxq_num(uint8_t port_id, uint16_t nb_rx_q)
-{
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-
-	switch (nb_rx_q) {
-	case 1:
-	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active =
-			ETH_64_POOLS;
-		break;
-	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active =
-			ETH_32_POOLS;
-		break;
-	default:
-		return -EINVAL;
-	}
-
-	RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = nb_rx_q;
-	RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx =
-		dev->pci_dev->max_vfs * nb_rx_q;
-
-	return 0;
-}
-
-static int
-rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
-			  const struct rte_eth_conf *dev_conf)
-{
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-
-	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
-		/* check multi-queue mode */
-		if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) ||
-		    (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB_RSS) ||
-		    (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB)) {
-			/* SRIOV only works in VMDq enable mode */
-			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
-					" SRIOV active, "
-					"wrong VMDQ mq_mode rx %u tx %u\n",
-					port_id,
-					dev_conf->rxmode.mq_mode,
-					dev_conf->txmode.mq_mode);
-			return -EINVAL;
-		}
-
-		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
-			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
-					" SRIOV active, "
-					"unsupported VMDQ mq_mode rx %u\n",
-					port_id, dev_conf->rxmode.mq_mode);
-			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
-					" SRIOV active, "
-					"Rx mq mode is changed from:"
-					"mq_mode %u into VMDQ mq_mode %u\n",
-					port_id,
-					dev_conf->rxmode.mq_mode,
-					dev->data->dev_conf.rxmode.mq_mode);
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
-			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
-				if (rte_eth_dev_check_vf_rss_rxq_num(port_id, nb_rx_q) != 0) {
-					PMD_DEBUG_TRACE("ethdev port_id=%d"
-							" SRIOV active, invalid queue"
-							" number for VMDQ RSS, allowed"
-							" value are 1, 2 or 4\n",
-							port_id);
-					return -EINVAL;
-				}
-			break;
-		default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
-			/* if nothing mq mode configure, use default scheme */
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
-			if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
-				RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
-			break;
-		}
-
-		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			/* DCB VMDQ in SRIOV mode, not implement yet */
-			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
-					" SRIOV active, "
-					"unsupported VMDQ mq_mode tx %u\n",
-					port_id, dev_conf->txmode.mq_mode);
-			return -EINVAL;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
-			/* if nothing mq mode configure, use default scheme */
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
-			break;
-		}
-
-		/* check valid queue number */
-		if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
-		    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
-			PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
-					"queue number must less equal to %d\n",
-					port_id,
-					RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
-			return -EINVAL;
-		}
-	} else {
-		/* For vmdb+dcb mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
-			const struct rte_eth_vmdq_dcb_conf *conf;
-
-			if (nb_rx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB, nb_rx_q "
-						"!= %d\n",
-						port_id, ETH_VMDQ_DCB_NUM_QUEUES);
-				return -EINVAL;
-			}
-			conf = &(dev_conf->rx_adv_conf.vmdq_dcb_conf);
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			      conf->nb_queue_pools == ETH_32_POOLS)) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB selected, "
-						"nb_queue_pools must be %d or %d\n",
-						port_id, ETH_16_POOLS, ETH_32_POOLS);
-				return -EINVAL;
-			}
-		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
-			const struct rte_eth_vmdq_dcb_tx_conf *conf;
-
-			if (nb_tx_q != ETH_VMDQ_DCB_NUM_QUEUES) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB, nb_tx_q "
-						"!= %d\n",
-						port_id, ETH_VMDQ_DCB_NUM_QUEUES);
-				return -EINVAL;
-			}
-			conf = &(dev_conf->tx_adv_conf.vmdq_dcb_tx_conf);
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			      conf->nb_queue_pools == ETH_32_POOLS)) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d VMDQ+DCB selected, "
-						"nb_queue_pools != %d or nb_queue_pools "
-						"!= %d\n",
-						port_id, ETH_16_POOLS, ETH_32_POOLS);
-				return -EINVAL;
-			}
-		}
-
-		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
-			const struct rte_eth_dcb_rx_conf *conf;
-
-			if (nb_rx_q != ETH_DCB_NUM_QUEUES) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_rx_q "
-						"!= %d\n",
-						port_id, ETH_DCB_NUM_QUEUES);
-				return -EINVAL;
-			}
-			conf = &(dev_conf->rx_adv_conf.dcb_rx_conf);
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			      conf->nb_tcs == ETH_8_TCS)) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d DCB selected, "
-						"nb_tcs != %d or nb_tcs "
-						"!= %d\n",
-						port_id, ETH_4_TCS, ETH_8_TCS);
-				return -EINVAL;
-			}
-		}
-
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
-			const struct rte_eth_dcb_tx_conf *conf;
-
-			if (nb_tx_q != ETH_DCB_NUM_QUEUES) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d DCB, nb_tx_q "
-						"!= %d\n",
-						port_id, ETH_DCB_NUM_QUEUES);
-				return -EINVAL;
-			}
-			conf = &(dev_conf->tx_adv_conf.dcb_tx_conf);
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			      conf->nb_tcs == ETH_8_TCS)) {
-				PMD_DEBUG_TRACE("ethdev port_id=%d DCB selected, "
-						"nb_tcs != %d or nb_tcs "
-						"!= %d\n",
-						port_id, ETH_4_TCS, ETH_8_TCS);
-				return -EINVAL;
-			}
-		}
-	}
-	return 0;
-}
-
 int
 rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
@@ -1182,14 +991,6 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 							ETHER_MAX_LEN;
 	}
 
-	/* multiple queue mode checking */
-	diag = rte_eth_dev_check_mq_mode(port_id, nb_rx_q, nb_tx_q, dev_conf);
-	if (diag != 0) {
-		PMD_DEBUG_TRACE("port%d rte_eth_dev_check_mq_mode = %d\n",
-				port_id, diag);
-		return diag;
-	}
-
 	/*
 	 * Setup new number of RX/TX queues and reconfigure device.
 	 */
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread
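
With validation relocated into each PMD, rte_eth_dev_configure() now fails
only when the driver itself rejects the mode. A minimal sketch of a
non-SRIOV DCB configuration that satisfies the new ixgbe_check_mq_mode()
above is shown below; the port id and the 128-queue count
(IXGBE_DCB_NB_QUEUES) are assumptions taken from the hunk, not part of
the patch.

/* Sketch only: a DCB config accepted by ixgbe_check_mq_mode() in
 * non-SRIOV mode. Port 0 and 128 queues are assumed. */
#include <string.h>
#include <rte_ethdev.h>

static int
configure_dcb_port(uint8_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.mq_mode = ETH_MQ_RX_DCB;
	conf.rx_adv_conf.dcb_rx_conf.nb_tcs = ETH_4_TCS; /* 4 or 8 only */
	conf.txmode.mq_mode = ETH_MQ_TX_DCB;
	conf.tx_adv_conf.dcb_tx_conf.nb_tcs = ETH_4_TCS;

	/* nb_rx_q/nb_tx_q must equal IXGBE_DCB_NB_QUEUES (128),
	 * otherwise the check returns -EINVAL */
	return rte_eth_dev_configure(port_id, 128, 128, &conf);
}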

* [dpdk-dev] [PATCH v3 3/9] i40e: enable DCB feature on FVL
  2015-10-31 15:57   ` [dpdk-dev] [PATCH v3 0/9] " Jingjing Wu
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 1/9] ethdev: rename dcb_queue to dcb_tc in dcb config struct Jingjing Wu
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 2/9] ethdev: move the multi-queue checking to specific drivers Jingjing Wu
@ 2015-10-31 15:57     ` Jingjing Wu
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 4/9] ixgbe: enable DCB+RSS multi-queue mode Jingjing Wu
                       ` (6 subsequent siblings)
  9 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-31 15:57 UTC (permalink / raw)
  To: dev

This patch enables DCB feature on Intel XL710/X710 NICs. It includes:
  Receive queue classification based on traffic class
  Round Robin ETS schedule (rx and tx)
  Priority flow control

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/rel_notes/release_2_2.rst |   2 +
 drivers/net/i40e/i40e_ethdev.c       | 532 ++++++++++++++++++++++++++++++++++-
 drivers/net/i40e/i40e_ethdev.h       |  14 +
 drivers/net/i40e/i40e_rxtx.c         |  32 ++-
 drivers/net/i40e/i40e_rxtx.h         |   2 +
 5 files changed, 568 insertions(+), 14 deletions(-)

diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 1857e1d..ddfd322 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -45,6 +45,8 @@ New Features
 
 * **Added port hotplug support to xenvirt.**
 
+* **Added DCB support on PF to the i40e driver.**
+
 
 Resolved Issues
 ---------------
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 016838a..ce4efb2 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -56,6 +56,7 @@
 #include "base/i40e_adminq_cmd.h"
 #include "base/i40e_type.h"
 #include "base/i40e_register.h"
+#include "base/i40e_dcb.h"
 #include "i40e_ethdev.h"
 #include "i40e_rxtx.h"
 #include "i40e_pf.h"
@@ -134,6 +135,10 @@
 #define I40E_PRTTSYN_TSYNENA  0x80000000
 #define I40E_PRTTSYN_TSYNTYPE 0x0e000000
 
+#define I40E_MAX_PERCENT            100
+#define I40E_DEFAULT_DCB_APP_NUM    1
+#define I40E_DEFAULT_DCB_APP_PRIO   3
+
 static int eth_i40e_dev_init(struct rte_eth_dev *eth_dev);
 static int eth_i40e_dev_uninit(struct rte_eth_dev *eth_dev);
 static int i40e_dev_configure(struct rte_eth_dev *dev);
@@ -189,6 +194,8 @@ static int i40e_pf_parameter_init(struct rte_eth_dev *dev);
 static int i40e_pf_setup(struct i40e_pf *pf);
 static int i40e_dev_rxtx_init(struct i40e_pf *pf);
 static int i40e_vmdq_setup(struct rte_eth_dev *dev);
+static int i40e_dcb_init_configure(struct rte_eth_dev *dev, bool sw_dcb);
+static int i40e_dcb_setup(struct rte_eth_dev *dev);
 static void i40e_stat_update_32(struct i40e_hw *hw, uint32_t reg,
 		bool offset_loaded, uint64_t *offset, uint64_t *stat);
 static void i40e_stat_update_48(struct i40e_hw *hw,
@@ -517,11 +524,6 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
 		     ((hw->nvm.version >> 4) & 0xff),
 		     (hw->nvm.version & 0xf), hw->nvm.eetrack);
 
-	/* Disable LLDP */
-	ret = i40e_aq_stop_lldp(hw, true, NULL);
-	if (ret != I40E_SUCCESS) /* Its failure can be ignored */
-		PMD_INIT_LOG(INFO, "Failed to stop lldp");
-
 	/* Clear PXE mode */
 	i40e_clear_pxe_mode(hw);
 
@@ -642,6 +644,13 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
 	/* initialize mirror rule list */
 	TAILQ_INIT(&pf->mirror_list);
 
+	/* Init dcb to sw mode by default */
+	ret = i40e_dcb_init_configure(dev, TRUE);
+	if (ret != I40E_SUCCESS) {
+		PMD_INIT_LOG(INFO, "Failed to init dcb.");
+		pf->flags &= ~I40E_FLAG_DCB;
+	}
+
 	return 0;
 
 err_mac_alloc:
@@ -728,7 +737,7 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	enum rte_eth_rx_mq_mode mq_mode = dev->data->dev_conf.rxmode.mq_mode;
-	int ret;
+	int i, ret;
 
 	/* Initialize to TRUE. If any of Rx queues doesn't meet the
 	 * bulk allocation or vector Rx preconditions we will reset it.
@@ -773,8 +782,27 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 		if (ret)
 			goto err;
 	}
+
+	if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+		ret = i40e_dcb_setup(dev);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "failed to configure DCB.");
+			goto err_dcb;
+		}
+	}
+
 	return 0;
+
+err_dcb:
+	/* need to release vmdq resource if it exists */
+	for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
+		i40e_vsi_release(pf->vmdq[i].vsi);
+		pf->vmdq[i].vsi = NULL;
+	}
+	rte_free(pf->vmdq);
+	pf->vmdq = NULL;
 err:
+	/* need to release fdir resource if it exists */
 	i40e_fdir_teardown(pf);
 	return ret;
 }
@@ -2517,6 +2545,9 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
 		 */
 	}
 
+	if (hw->func_caps.dcb)
+		pf->flags |= I40E_FLAG_DCB;
+
 	if (sum_vsis > pf->max_num_vsi ||
 		sum_queues > hw->func_caps.num_rx_qp) {
 		PMD_INIT_LOG(ERR, "VSI/QUEUE setting can't be satisfied");
@@ -2922,7 +2953,7 @@ i40e_vsi_config_tc_queue_mapping(struct i40e_vsi *vsi,
 				 struct i40e_aqc_vsi_properties_data *info,
 				 uint8_t enabled_tcmap)
 {
-	int ret, total_tc = 0, i;
+	int ret, i, total_tc = 0;
 	uint16_t qpnum_per_tc, bsf, qp_idx;
 
 	ret = validate_tcmap_parameter(vsi, enabled_tcmap);
@@ -5479,11 +5510,6 @@ i40e_pf_config_mq_rx(struct i40e_pf *pf)
 	int ret = 0;
 	enum rte_eth_rx_mq_mode mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
 
-	if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
-		PMD_INIT_LOG(ERR, "i40e doesn't support DCB yet");
-		return -ENOTSUP;
-	}
-
 	/* RSS setup */
 	if (mq_mode & ETH_MQ_RX_RSS_FLAG)
 		ret = i40e_pf_config_rss(pf);
@@ -6508,3 +6534,485 @@ i40e_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
 
 	return  0;
 }
+
+/*
+ * i40e_parse_dcb_configure - parse dcb configure from user
+ * @dev: the device being configured
+ * @dcb_cfg: pointer of the result of parse
+ * @*tc_map: bit map of enabled traffic classes
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_parse_dcb_configure(struct rte_eth_dev *dev,
+			 struct i40e_dcbx_config *dcb_cfg,
+			 uint8_t *tc_map)
+{
+	struct rte_eth_dcb_rx_conf *dcb_rx_conf;
+	uint8_t i, tc_bw, bw_lf;
+
+	memset(dcb_cfg, 0, sizeof(struct i40e_dcbx_config));
+
+	dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
+	if (dcb_rx_conf->nb_tcs > I40E_MAX_TRAFFIC_CLASS) {
+		PMD_INIT_LOG(ERR, "number of TCs exceeds the max.");
+		return -EINVAL;
+	}
+
+	/* assume each tc has the same bw */
+	tc_bw = I40E_MAX_PERCENT / dcb_rx_conf->nb_tcs;
+	for (i = 0; i < dcb_rx_conf->nb_tcs; i++)
+		dcb_cfg->etscfg.tcbwtable[i] = tc_bw;
+	/* distribute the remainder so the sum of tcbw equals 100 */
+	bw_lf = I40E_MAX_PERCENT % dcb_rx_conf->nb_tcs;
+	for (i = 0; i < bw_lf; i++)
+		dcb_cfg->etscfg.tcbwtable[i]++;
+
+	/* assume each tc has the same Transmission Selection Algorithm */
+	for (i = 0; i < dcb_rx_conf->nb_tcs; i++)
+		dcb_cfg->etscfg.tsatable[i] = I40E_IEEE_TSA_ETS;
+
+	for (i = 0; i < I40E_MAX_USER_PRIORITY; i++)
+		dcb_cfg->etscfg.prioritytable[i] =
+				dcb_rx_conf->dcb_tc[i];
+
+	/* FW needs one App to configure HW */
+	dcb_cfg->numapps = I40E_DEFAULT_DCB_APP_NUM;
+	dcb_cfg->app[0].selector = I40E_APP_SEL_ETHTYPE;
+	dcb_cfg->app[0].priority = I40E_DEFAULT_DCB_APP_PRIO;
+	dcb_cfg->app[0].protocolid = I40E_APP_PROTOID_FCOE;
+
+	if (dcb_rx_conf->nb_tcs == 0)
+		*tc_map = 1; /* tc0 only */
+	else
+		*tc_map = RTE_LEN2MASK(dcb_rx_conf->nb_tcs, uint8_t);
+
+	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+		dcb_cfg->pfc.willing = 0;
+		dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
+		dcb_cfg->pfc.pfcenable = *tc_map;
+	}
+	return 0;
+}
+
+/*
+ * i40e_vsi_get_bw_info - Query VSI BW Information
+ * @vsi: the VSI being queried
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_vsi_get_bw_info(struct i40e_vsi *vsi)
+{
+	struct i40e_aqc_query_vsi_ets_sla_config_resp bw_ets_config = {0};
+	struct i40e_aqc_query_vsi_bw_config_resp bw_config = {0};
+	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	int i, ret;
+	uint32_t tc_bw_max;
+
+	/* Get the VSI level BW configuration */
+	ret = i40e_aq_query_vsi_bw_config(hw, vsi->seid, &bw_config, NULL);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			 "couldn't get PF vsi bw config, err %s aq_err %s\n",
+			 i40e_stat_str(hw, ret),
+			 i40e_aq_str(hw, hw->aq.asq_last_status));
+		return -EINVAL;
+	}
+
+	/* Get the VSI level BW configuration per TC */
+	ret = i40e_aq_query_vsi_ets_sla_config(hw, vsi->seid, &bw_ets_config,
+						  NULL);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			 "couldn't get PF vsi ets bw config, err %s aq_err %s\n",
+			 i40e_stat_str(hw, ret),
+			 i40e_aq_str(hw, hw->aq.asq_last_status));
+		return -EINVAL;
+	}
+
+	if (bw_config.tc_valid_bits != bw_ets_config.tc_valid_bits) {
+		PMD_INIT_LOG(WARNING,
+			 "Enabled TCs mismatch when querying VSI BW info"
+			 " 0x%08x 0x%08x\n", bw_config.tc_valid_bits,
+			 bw_ets_config.tc_valid_bits);
+		/* Still continuing */
+	}
+
+	vsi->bw_info.bw_limit = rte_le_to_cpu_16(bw_config.port_bw_limit);
+	vsi->bw_info.bw_max_quanta = bw_config.max_bw;
+	tc_bw_max = rte_le_to_cpu_16(bw_ets_config.tc_bw_max[0]) |
+		    (rte_le_to_cpu_16(bw_ets_config.tc_bw_max[1]) << 16);
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		vsi->bw_info.bw_ets_share_credits[i] =
+				bw_ets_config.share_credits[i];
+		vsi->bw_info.bw_ets_limit_credits[i] =
+				rte_le_to_cpu_16(bw_ets_config.credits[i]);
+		/* 3 bits out of 4 for each TC */
+		vsi->bw_info.bw_ets_max_quanta[i] =
+			(uint8_t)((tc_bw_max >> (i * 4)) & 0x7);
+		PMD_INIT_LOG(DEBUG,
+			 "%s: vsi seid = %d, TC = %d, qset = 0x%x\n",
+			 __func__, vsi->seid, i, bw_config.qs_handles[i]);
+	}
+
+	return 0;
+}
+
+static int
+i40e_vsi_update_queue_mapping(struct i40e_vsi *vsi,
+			      struct i40e_aqc_vsi_properties_data *info,
+			      uint8_t enabled_tcmap)
+{
+	int ret, i, total_tc = 0;
+	uint16_t qpnum_per_tc, bsf, qp_idx;
+	struct rte_eth_dev_data *dev_data = I40E_VSI_TO_DEV_DATA(vsi);
+
+	ret = validate_tcmap_parameter(vsi, enabled_tcmap);
+	if (ret != I40E_SUCCESS)
+		return ret;
+
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (enabled_tcmap & (1 << i))
+			total_tc++;
+	}
+	if (total_tc == 0)
+		total_tc = 1;
+	vsi->enabled_tc = enabled_tcmap;
+
+	qpnum_per_tc = dev_data->nb_rx_queues / total_tc;
+	/* Number of queues per enabled TC */
+	if (qpnum_per_tc == 0) {
+		PMD_INIT_LOG(ERR, "number of queues is less than number of TCs.");
+		return I40E_ERR_INVALID_QP_ID;
+	}
+	qpnum_per_tc = RTE_MIN(i40e_align_floor(qpnum_per_tc),
+				I40E_MAX_Q_PER_TC);
+	bsf = rte_bsf32(qpnum_per_tc);
+
+	/**
+	 * Configure TC and queue mapping parameters: for each enabled TC,
+	 * allocate qpnum_per_tc queues to that traffic class. A disabled TC
+	 * is served by the default queue.
+	 */
+	qp_idx = 0;
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (vsi->enabled_tc & (1 << i)) {
+			info->tc_mapping[i] = rte_cpu_to_le_16((qp_idx <<
+					I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT) |
+				(bsf << I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT));
+			qp_idx += qpnum_per_tc;
+		} else
+			info->tc_mapping[i] = 0;
+	}
+
+	/* Associate queue number with VSI, Keep vsi->nb_qps unchanged */
+	if (vsi->type == I40E_VSI_SRIOV) {
+		info->mapping_flags |=
+			rte_cpu_to_le_16(I40E_AQ_VSI_QUE_MAP_NONCONTIG);
+		for (i = 0; i < vsi->nb_qps; i++)
+			info->queue_mapping[i] =
+				rte_cpu_to_le_16(vsi->base_queue + i);
+	} else {
+		info->mapping_flags |=
+			rte_cpu_to_le_16(I40E_AQ_VSI_QUE_MAP_CONTIG);
+		info->queue_mapping[0] = rte_cpu_to_le_16(vsi->base_queue);
+	}
+	info->valid_sections |=
+		rte_cpu_to_le_16(I40E_AQ_VSI_PROP_QUEUE_MAP_VALID);
+
+	return I40E_SUCCESS;
+}
+
+/*
+ * i40e_vsi_config_tc - Configure VSI tc setting for given TC map
+ * @vsi: VSI to be configured
+ * @tc_map: enabled TC bitmap
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_vsi_config_tc(struct i40e_vsi *vsi, u8 tc_map)
+{
+	struct i40e_aqc_configure_vsi_tc_bw_data bw_data;
+	struct i40e_vsi_context ctxt;
+	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	int ret = 0;
+	int i;
+
+	/* Check if enabled_tc is same as existing or new TCs */
+	if (vsi->enabled_tc == tc_map)
+		return ret;
+
+	/* configure tc bandwidth */
+	memset(&bw_data, 0, sizeof(bw_data));
+	bw_data.tc_valid_bits = tc_map;
+	/* Enable ETS TCs with equal BW Share for now across all VSIs */
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (tc_map & BIT_ULL(i))
+			bw_data.tc_bw_credits[i] = 1;
+	}
+	ret = i40e_aq_config_vsi_tc_bw(hw, vsi->seid, &bw_data, NULL);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "AQ command Config VSI BW allocation"
+			" per TC failed = %d",
+			hw->aq.asq_last_status);
+		goto out;
+	}
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
+		vsi->info.qs_handle[i] = bw_data.qs_handles[i];
+
+	/* Update Queue Pairs Mapping for currently enabled UPs */
+	ctxt.seid = vsi->seid;
+	ctxt.pf_num = hw->pf_id;
+	ctxt.vf_num = 0;
+	ctxt.uplink_seid = vsi->uplink_seid;
+	ctxt.info = vsi->info;
+	i40e_get_cap(hw);
+	ret = i40e_vsi_update_queue_mapping(vsi, &ctxt.info, tc_map);
+	if (ret)
+		goto out;
+
+	/* Update the VSI after updating the VSI queue-mapping information */
+	ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure "
+			    "TC queue mapping = %d",
+			    hw->aq.asq_last_status);
+		goto out;
+	}
+	/* update the local VSI info with updated queue map */
+	(void)rte_memcpy(&vsi->info.tc_mapping, &ctxt.info.tc_mapping,
+					sizeof(vsi->info.tc_mapping));
+	(void)rte_memcpy(&vsi->info.queue_mapping,
+			&ctxt.info.queue_mapping,
+		sizeof(vsi->info.queue_mapping));
+	vsi->info.mapping_flags = ctxt.info.mapping_flags;
+	vsi->info.valid_sections = 0;
+
+	/* Update current VSI BW information */
+	ret = i40e_vsi_get_bw_info(vsi);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			 "Failed updating vsi bw info, err %s aq_err %s",
+			 i40e_stat_str(hw, ret),
+			 i40e_aq_str(hw, hw->aq.asq_last_status));
+		goto out;
+	}
+
+	vsi->enabled_tc = tc_map;
+
+out:
+	return ret;
+}
+
+/*
+ * i40e_dcb_hw_configure - program the dcb setting to hw
+ * @pf: pf the configuration is taken on
+ * @new_cfg: new configuration
+ * @tc_map: enabled TC bitmap
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static enum i40e_status_code
+i40e_dcb_hw_configure(struct i40e_pf *pf,
+		      struct i40e_dcbx_config *new_cfg,
+		      uint8_t tc_map)
+{
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+	struct i40e_dcbx_config *old_cfg = &hw->local_dcbx_config;
+	struct i40e_vsi *main_vsi = pf->main_vsi;
+	struct i40e_vsi_list *vsi_list;
+	int i, ret;
+	uint32_t val;
+
+	/* Use the FW API only if FW >= v4.4 */
+	if (!((hw->aq.fw_maj_ver == 4) && (hw->aq.fw_min_ver >= 4))) {
+		PMD_INIT_LOG(ERR, "FW < v4.4, cannot use FW LLDP API"
+				  " to configure DCB");
+		return I40E_ERR_FIRMWARE_API_VERSION;
+	}
+
+	/* Check if need reconfiguration */
+	if (!memcmp(new_cfg, old_cfg, sizeof(struct i40e_dcbx_config))) {
+		PMD_INIT_LOG(ERR, "No change in DCB config required.");
+		return I40E_SUCCESS;
+	}
+
+	/* Copy the new config to the current config */
+	*old_cfg = *new_cfg;
+	old_cfg->etsrec = old_cfg->etscfg;
+	ret = i40e_set_dcb_config(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR,
+			 "Set DCB Config failed, err %s aq_err %s\n",
+			 i40e_stat_str(hw, ret),
+			 i40e_aq_str(hw, hw->aq.asq_last_status));
+		return ret;
+	}
+	/* set receive Arbiter to RR mode and ETS scheme by default */
+	for (i = 0; i <= I40E_PRTDCB_RETSTCC_MAX_INDEX; i++) {
+		val = I40E_READ_REG(hw, I40E_PRTDCB_RETSTCC(i));
+		val &= ~(I40E_PRTDCB_RETSTCC_BWSHARE_MASK     |
+			 I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK |
+			 I40E_PRTDCB_RETSTCC_ETSTC_SHIFT);
+		val |= ((uint32_t)old_cfg->etscfg.tcbwtable[i] <<
+			I40E_PRTDCB_RETSTCC_BWSHARE_SHIFT) &
+			 I40E_PRTDCB_RETSTCC_BWSHARE_MASK;
+		val |= ((uint32_t)1 << I40E_PRTDCB_RETSTCC_UPINTC_MODE_SHIFT) &
+			 I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK;
+		val |= ((uint32_t)1 << I40E_PRTDCB_RETSTCC_ETSTC_SHIFT) &
+			 I40E_PRTDCB_RETSTCC_ETSTC_MASK;
+		I40E_WRITE_REG(hw, I40E_PRTDCB_RETSTCC(i), val);
+	}
+	/* get local mib to check whether it is configured correctly */
+	/* IEEE mode */
+	hw->local_dcbx_config.dcbx_mode = I40E_DCBX_MODE_IEEE;
+	/* Get Local DCB Config */
+	i40e_aq_get_dcb_config(hw, I40E_AQ_LLDP_MIB_LOCAL, 0,
+				     &hw->local_dcbx_config);
+
+	/* Update each VSI */
+	i40e_vsi_config_tc(main_vsi, tc_map);
+	if (main_vsi->veb) {
+		TAILQ_FOREACH(vsi_list, &main_vsi->veb->head, list) {
+			/* Besides the main VSI, only enable the
+			 * default TC for other VSIs
+			 */
+			ret = i40e_vsi_config_tc(vsi_list->vsi,
+						I40E_DEFAULT_TCMAP);
+			if (ret)
+				PMD_INIT_LOG(WARNING,
+					 "Failed configuring TC for VSI seid=%d\n",
+					 vsi_list->vsi->seid);
+			/* continue */
+		}
+	}
+	return I40E_SUCCESS;
+}
+
+/*
+ * i40e_dcb_init_configure - initial dcb config
+ * @dev: device being configured
+ * @sw_dcb: indicate whether dcb is sw configured or hw offload
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_dcb_init_configure(struct rte_eth_dev *dev, bool sw_dcb)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret = 0;
+
+	if ((pf->flags & I40E_FLAG_DCB) == 0) {
+		PMD_INIT_LOG(ERR, "HW doesn't support DCB");
+		return -ENOTSUP;
+	}
+
+	/* DCB initialization:
+	 * Update DCB configuration from the Firmware and configure
+	 * LLDP MIB change event.
+	 */
+	if (sw_dcb == TRUE) {
+		ret = i40e_aq_stop_lldp(hw, TRUE, NULL);
+		if (ret != I40E_SUCCESS)
+			PMD_INIT_LOG(DEBUG, "Failed to stop lldp");
+
+		ret = i40e_init_dcb(hw);
+		/* if sw_dcb, the lldp agent is stopped, so we expect
+		 * i40e_init_dcb to fail with an I40E_AQ_RC_EPERM
+		 * adminq status.
+		 */
+		if (ret != I40E_SUCCESS &&
+		    hw->aq.asq_last_status == I40E_AQ_RC_EPERM) {
+			memset(&hw->local_dcbx_config, 0,
+				sizeof(struct i40e_dcbx_config));
+			/* set dcb default configuration */
+			hw->local_dcbx_config.etscfg.willing = 0;
+			hw->local_dcbx_config.etscfg.maxtcs = 0;
+			hw->local_dcbx_config.etscfg.tcbwtable[0] = 100;
+			hw->local_dcbx_config.etscfg.tsatable[0] =
+						I40E_IEEE_TSA_ETS;
+			hw->local_dcbx_config.etsrec =
+				hw->local_dcbx_config.etscfg;
+			hw->local_dcbx_config.pfc.willing = 0;
+			hw->local_dcbx_config.pfc.pfccap =
+						I40E_MAX_TRAFFIC_CLASS;
+			/* FW needs one App to configure HW */
+			hw->local_dcbx_config.numapps = 1;
+			hw->local_dcbx_config.app[0].selector =
+						I40E_APP_SEL_ETHTYPE;
+			hw->local_dcbx_config.app[0].priority = 3;
+			hw->local_dcbx_config.app[0].protocolid =
+						I40E_APP_PROTOID_FCOE;
+			ret = i40e_set_dcb_config(hw);
+			if (ret) {
+				PMD_INIT_LOG(ERR, "setting default dcb config failed."
+					" err = %d, aq_err = %d.", ret,
+					  hw->aq.asq_last_status);
+				return -ENOSYS;
+			}
+		} else {
+			PMD_INIT_LOG(ERR, "DCBX configuration failed, err = %d,"
+					  " aq_err = %d.", ret,
+					  hw->aq.asq_last_status);
+			return -ENOTSUP;
+		}
+	} else {
+		ret = i40e_aq_start_lldp(hw, NULL);
+		if (ret != I40E_SUCCESS)
+			PMD_INIT_LOG(DEBUG, "Failed to start lldp");
+
+		ret = i40e_init_dcb(hw);
+		if (!ret) {
+			if (hw->dcbx_status == I40E_DCBX_STATUS_DISABLED) {
+				PMD_INIT_LOG(ERR, "HW doesn't support"
+						  " DCBX offload.");
+				return -ENOTSUP;
+			}
+		} else {
+			PMD_INIT_LOG(ERR, "DCBX configuration failed, err = %d,"
+					  " aq_err = %d.", ret,
+					  hw->aq.asq_last_status);
+			return -ENOTSUP;
+		}
+	}
+	return 0;
+}
+
+/*
+ * i40e_dcb_setup - setup dcb related config
+ * @dev: device being configured
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int
+i40e_dcb_setup(struct rte_eth_dev *dev)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_dcbx_config dcb_cfg;
+	uint8_t tc_map = 0;
+	int ret = 0;
+
+	if ((pf->flags & I40E_FLAG_DCB) == 0) {
+		PMD_INIT_LOG(ERR, "HW doesn't support DCB");
+		return -ENOTSUP;
+	}
+
+	if (pf->vf_num != 0 ||
+	    (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+		PMD_INIT_LOG(DEBUG, "DCB only works on the main vsi.");
+
+	ret = i40e_parse_dcb_configure(dev, &dcb_cfg, &tc_map);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "invalid dcb config");
+		return -EINVAL;
+	}
+	ret = i40e_dcb_hw_configure(pf, &dcb_cfg, tc_map);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "dcb sw configure failed");
+		return -ENOSYS;
+	}
+	return 0;
+}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 832c036..0c9c78c 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -204,6 +204,19 @@ struct i40e_macvlan_filter {
 	uint16_t vlan_id;
 };
 
+/* Bandwidth limit information */
+struct i40e_bw_info {
+	uint16_t bw_limit;      /* BW Limit (0 = disabled) */
+	uint8_t  bw_max_quanta; /* Max Quanta when BW limit is enabled */
+
+	/* Relative TC credits across VSIs */
+	uint8_t  bw_ets_share_credits[I40E_MAX_TRAFFIC_CLASS];
+	/* TC BW limit credits within VSI */
+	uint8_t  bw_ets_limit_credits[I40E_MAX_TRAFFIC_CLASS];
+	/* TC BW limit max quanta within VSI */
+	uint8_t  bw_ets_max_quanta[I40E_MAX_TRAFFIC_CLASS];
+};
+
 /*
  * Structure that defines a VSI, associated with a adapter.
  */
@@ -249,6 +262,7 @@ struct i40e_vsi {
 	uint16_t vsi_id;
 	uint16_t msix_intr; /* The MSIX interrupt binds to VSI */
 	uint8_t enabled_tc; /* The traffic class enabled */
+	struct i40e_bw_info bw_info; /* VSI bandwidth information */
 };
 
 struct pool_entry {
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 8731712..e7af655 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2110,7 +2110,8 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	struct i40e_rx_queue *rxq;
 	const struct rte_memzone *rz;
 	uint32_t ring_size;
-	uint16_t len;
+	uint16_t len, i;
+	uint16_t base, bsf, tc_mapping;
 	int use_def_burst_func = 1;
 
 	if (hw->mac.type == I40E_MAC_VF) {
@@ -2231,6 +2232,19 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		ad->rx_bulk_alloc_allowed = false;
 	}
 
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (!(vsi->enabled_tc & (1 << i)))
+			continue;
+		tc_mapping = rte_le_to_cpu_16(vsi->info.tc_mapping[i]);
+		base = (tc_mapping & I40E_AQ_VSI_TC_QUE_OFFSET_MASK) >>
+			I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT;
+		bsf = (tc_mapping & I40E_AQ_VSI_TC_QUE_NUMBER_MASK) >>
+			I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT;
+
+		if (queue_idx >= base && queue_idx < (base + BIT(bsf)))
+			rxq->dcb_tc = i;
+	}
+
 	return 0;
 }
 
@@ -2323,6 +2337,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	const struct rte_memzone *tz;
 	uint32_t ring_size;
 	uint16_t tx_rs_thresh, tx_free_thresh;
+	uint16_t i, base, bsf, tc_mapping;
 
 	if (hw->mac.type == I40E_MAC_VF) {
 		struct i40e_vf *vf =
@@ -2492,6 +2507,19 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	/* Use a simple TX queue without offloads or multi segs if possible */
 	i40e_set_tx_function_flag(dev, txq);
 
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (!(vsi->enabled_tc & (1 << i)))
+			continue;
+		tc_mapping = rte_le_to_cpu_16(vsi->info.tc_mapping[i]);
+		base = (tc_mapping & I40E_AQ_VSI_TC_QUE_OFFSET_MASK) >>
+			I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT;
+		bsf = (tc_mapping & I40E_AQ_VSI_TC_QUE_NUMBER_MASK) >>
+			I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT;
+
+		if (queue_idx >= base && queue_idx < (base + BIT(bsf)))
+			txq->dcb_tc = i;
+	}
+
 	return 0;
 }
 
@@ -2704,7 +2732,7 @@ i40e_tx_queue_init(struct i40e_tx_queue *txq)
 #ifdef RTE_LIBRTE_IEEE1588
 	tx_ctx.timesync_ena = 1;
 #endif
-	tx_ctx.rdylist = rte_le_to_cpu_16(vsi->info.qs_handle[0]);
+	tx_ctx.rdylist = rte_le_to_cpu_16(vsi->info.qs_handle[txq->dcb_tc]);
 	if (vsi->type == I40E_VSI_FDIR)
 		tx_ctx.fd_ena = TRUE;
 
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 39cb95a..c35828c 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -133,6 +133,7 @@ struct i40e_rx_queue {
 	bool q_set; /**< indicate if rx queue has been configured */
 	bool rx_deferred_start; /**< don't start this queue in dev start */
 	uint16_t rx_using_sse; /**<flag indicate the usage of vPMD for rx */
+	uint8_t dcb_tc;         /**< Traffic class of rx queue */
 };
 
 struct i40e_tx_entry {
@@ -173,6 +174,7 @@ struct i40e_tx_queue {
 	uint16_t tx_next_rs;
 	bool q_set; /**< indicate if tx queue has been configured */
 	bool tx_deferred_start; /**< don't start this queue in dev start */
+	uint8_t dcb_tc;         /**< Traffic class of tx queue */
 };
 
 /** Offload features */
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread
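
A note on the ETS arithmetic in i40e_parse_dcb_configure() above: the
bandwidth table is filled with equal shares of I40E_MAX_PERCENT and the
division remainder is handed to the lowest-numbered TCs, so the table
always sums to exactly 100. A standalone sketch of that logic (the helper
name is hypothetical):

/* Equal-share ETS bandwidth split, mirroring i40e_parse_dcb_configure() */
static void
split_ets_bw(uint8_t nb_tcs, uint8_t bwtable[])
{
	uint8_t i;
	uint8_t tc_bw = 100 / nb_tcs;	/* I40E_MAX_PERCENT == 100 */
	uint8_t bw_lf = 100 % nb_tcs;

	for (i = 0; i < nb_tcs; i++)
		bwtable[i] = tc_bw;
	for (i = 0; i < bw_lf; i++)	/* remainder to lowest TCs */
		bwtable[i]++;
	/* e.g. nb_tcs == 3 yields 34/33/33 */
}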

* [dpdk-dev] [PATCH v3 4/9] ixgbe: enable DCB+RSS multi-queue mode
  2015-10-31 15:57   ` [dpdk-dev] [PATCH v3 0/9] " Jingjing Wu
                       ` (2 preceding siblings ...)
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 3/9] i40e: enable DCB feature on FVL Jingjing Wu
@ 2015-10-31 15:57     ` Jingjing Wu
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 5/9] ethdev: new API to get dcb related information Jingjing Wu
                       ` (5 subsequent siblings)
  9 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-31 15:57 UTC (permalink / raw)
  To: dev

This patch enables DCB+RSS multi-queue mode, and also fixes some
coding style issues.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ixgbe/ixgbe_rxtx.c | 48 +++++++++++++++++++++++++-----------------
 1 file changed, 29 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 6a62d67..ad66b09 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3169,9 +3169,13 @@ ixgbe_dcb_rx_hw_config(struct ixgbe_hw *hw,
 				reg = (reg & ~IXGBE_MRQC_MRQE_MASK) |
 					IXGBE_MRQC_VMDQRT4TCEN;
 			else {
+				/* whether the mode is DCB or DCB_RSS, just
+				 * set MRQE to RTRSSXTCEN; RSS is controlled
+				 * by the RSS_FIELD bits
+				 */
 				IXGBE_WRITE_REG(hw, IXGBE_VT_CTL, 0);
 				reg = (reg & ~IXGBE_MRQC_MRQE_MASK) |
-					IXGBE_MRQC_RT4TCEN;
+					IXGBE_MRQC_RTRSS4TCEN;
 			}
 		}
 		if (dcb_config->num_tcs.pg_tcs == 8) {
@@ -3181,7 +3185,7 @@ ixgbe_dcb_rx_hw_config(struct ixgbe_hw *hw,
 			else {
 				IXGBE_WRITE_REG(hw, IXGBE_VT_CTL, 0);
 				reg = (reg & ~IXGBE_MRQC_MRQE_MASK) |
-					IXGBE_MRQC_RT8TCEN;
+					IXGBE_MRQC_RTRSS8TCEN;
 			}
 		}
 
@@ -3286,16 +3290,17 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			 *get dcb and VT rx configuration parameters
 			 *from rte_eth_conf
 			 */
-			ixgbe_vmdq_dcb_rx_config(dev,dcb_config);
+			ixgbe_vmdq_dcb_rx_config(dev, dcb_config);
 			/*Configure general VMDQ and DCB RX parameters*/
 			ixgbe_vmdq_dcb_configure(dev);
 		}
 		break;
 	case ETH_MQ_RX_DCB:
+	case ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
-		ixgbe_dcb_rx_config(dev,dcb_config);
+		ixgbe_dcb_rx_config(dev, dcb_config);
 		/*Configure general DCB RX parameters*/
 		ixgbe_dcb_rx_hw_config(hw, dcb_config);
 		break;
@@ -3317,7 +3322,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/*get DCB TX configuration parameters from rte_eth_conf*/
-		ixgbe_dcb_tx_config(dev,dcb_config);
+		ixgbe_dcb_tx_config(dev, dcb_config);
 		/*Configure general DCB TX parameters*/
 		ixgbe_dcb_tx_hw_config(hw, dcb_config);
 		break;
@@ -3458,14 +3463,15 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
 
 	/* check support mq_mode for DCB */
 	if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB))
+	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
+	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
 		return;
 
 	if (dev->data->nb_rx_queues != ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
-	ixgbe_dcb_hw_configure(dev,dcb_cfg);
+	ixgbe_dcb_hw_configure(dev, dcb_cfg);
 
 	return;
 }
@@ -3707,21 +3713,25 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-			case ETH_MQ_RX_RSS:
-				ixgbe_rss_configure(dev);
-				break;
+		case ETH_MQ_RX_RSS:
+		case ETH_MQ_RX_DCB_RSS:
+		case ETH_MQ_RX_VMDQ_RSS:
+			ixgbe_rss_configure(dev);
+			break;
 
-			case ETH_MQ_RX_VMDQ_DCB:
-				ixgbe_vmdq_dcb_configure(dev);
-				break;
+		case ETH_MQ_RX_VMDQ_DCB:
+			ixgbe_vmdq_dcb_configure(dev);
+			break;
 
-			case ETH_MQ_RX_VMDQ_ONLY:
-				ixgbe_vmdq_rx_hw_configure(dev);
-				break;
+		case ETH_MQ_RX_VMDQ_ONLY:
+			ixgbe_vmdq_rx_hw_configure(dev);
+			break;
 
-			case ETH_MQ_RX_NONE:
-				/* if mq_mode is none, disable rss mode.*/
-			default: ixgbe_rss_disable(dev);
+		case ETH_MQ_RX_NONE:
+		default:
+			/* if mq_mode is none, disable rss mode. */
+			ixgbe_rss_disable(dev);
+			break;
 		}
 	} else {
 		/*
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread
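
From the application side, the newly accepted mode is requested through
rxmode.mq_mode. A hedged configuration sketch follows; the choice of
ETH_RSS_IP as the hash profile is an assumption for illustration, not
something this patch prescribes:

/* Sketch: DCB with RSS inside each traffic class. The driver then
 * programs MRQC with RTRSS4TCEN/RTRSS8TCEN as in the hunk above. */
struct rte_eth_conf conf;

memset(&conf, 0, sizeof(conf));
conf.rxmode.mq_mode = ETH_MQ_RX_DCB_RSS;
conf.rx_adv_conf.dcb_rx_conf.nb_tcs = ETH_4_TCS;
conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;	/* assumed hash fields */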

* [dpdk-dev] [PATCH v3 5/9] ethdev: new API to get dcb related information
  2015-10-31 15:57   ` [dpdk-dev] [PATCH v3 0/9] " Jingjing Wu
                       ` (3 preceding siblings ...)
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 4/9] ixgbe: enable DCB+RSS multi-queue mode Jingjing Wu
@ 2015-10-31 15:57     ` Jingjing Wu
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 6/9] ixgbe: get_dcb_info ops implement Jingjing Wu
                       ` (4 subsequent siblings)
  9 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-31 15:57 UTC (permalink / raw)
  To: dev

This patch adds one new API to get dcb related info.
  rte_eth_dev_get_dcb_info

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 lib/librte_ether/rte_ethdev.c          | 18 ++++++++++++
 lib/librte_ether/rte_ethdev.h          | 54 ++++++++++++++++++++++++++++++++++
 lib/librte_ether/rte_ether_version.map |  7 +++++
 3 files changed, 79 insertions(+)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index c7247c3..721cef6 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -3143,3 +3143,21 @@ rte_eth_dev_set_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info)
 	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_eeprom, -ENOTSUP);
 	return (*dev->dev_ops->set_eeprom)(dev, info);
 }
+
+int
+rte_eth_dev_get_dcb_info(uint8_t port_id,
+			     struct rte_eth_dcb_info *dcb_info)
+{
+	struct rte_eth_dev *dev;
+
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		return -ENODEV;
+	}
+
+	dev = &rte_eth_devices[port_id];
+	memset(dcb_info, 0, sizeof(struct rte_eth_dcb_info));
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->get_dcb_info, -ENOTSUP);
+	return (*dev->dev_ops->get_dcb_info)(dev, dcb_info);
+}
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 377da6a..2e05189 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -854,6 +854,38 @@ struct rte_eth_xstats {
 	uint64_t value;
 };
 
+#define ETH_DCB_NUM_TCS    8
+#define ETH_MAX_VMDQ_POOL  64
+
+/**
+ * A structure used to get the information of queue and
+ * TC mapping on both TX and RX paths.
+ */
+struct rte_eth_dcb_tc_queue_mapping {
+	/** rx queues assigned to tc per Pool */
+	struct {
+		uint8_t base;
+		uint8_t nb_queue;
+	} tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+	/** tx queues assigned to tc per Pool */
+	struct {
+		uint8_t base;
+		uint8_t nb_queue;
+	} tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+};
+
+/**
+ * A structure used to get the DCB information.
+ * It includes the UP-to-TC mapping and the TC-to-queue mapping.
+ */
+struct rte_eth_dcb_info {
+	uint8_t nb_tcs;        /**< number of TCs */
+	uint8_t prio_tc[ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
+	uint8_t tc_bws[ETH_DCB_NUM_TCS]; /**< TX BW percentage for each TC */
+	/** rx and tx queues assigned to each tc */
+	struct rte_eth_dcb_tc_queue_mapping tc_queue;
+};
+
 struct rte_eth_dev;
 
 struct rte_eth_dev_callback;
@@ -1207,6 +1239,10 @@ typedef int (*eth_filter_ctrl_t)(struct rte_eth_dev *dev,
 				 void *arg);
 /**< @internal Take operations to assigned filter type on an Ethernet device */
 
+typedef int (*eth_get_dcb_info)(struct rte_eth_dev *dev,
+				 struct rte_eth_dcb_info *dcb_info);
+/**< @internal Get dcb information on an Ethernet device */
+
 /**
  * @internal A structure containing the functions exported by an Ethernet driver.
  */
@@ -1312,6 +1348,9 @@ struct eth_dev_ops {
 	eth_timesync_read_rx_timestamp_t timesync_read_rx_timestamp;
 	/** Read the IEEE1588/802.1AS TX timestamp. */
 	eth_timesync_read_tx_timestamp_t timesync_read_tx_timestamp;
+
+	/** Get DCB information */
+	eth_get_dcb_info get_dcb_info;
 };
 
 /**
@@ -3321,6 +3360,21 @@ int rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type,
 			enum rte_filter_op filter_op, void *arg);
 
 /**
+ * Get DCB information on an Ethernet device.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param dcb_info
+ *   A pointer to a structure to be filled with the port's DCB information.
+ * @return
+ *   - (0) if successful.
+ *   - (-ENODEV) if port identifier is invalid.
+ *   - (-ENOTSUP) if the hardware doesn't support it.
+ */
+int rte_eth_dev_get_dcb_info(uint8_t port_id,
+			     struct rte_eth_dcb_info *dcb_info);
+
+/**
  * Add a callback to be called on packet RX on a given port and queue.
  *
  * This API configures a function to be called for each burst of
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 8345a6c..3c4dc2d 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -127,3 +127,10 @@ DPDK_2.1 {
 	rte_eth_timesync_read_tx_timestamp;
 
 } DPDK_2.0;
+
+DPDK_2.2 {
+	global:
+
+	rte_eth_dev_get_dcb_info;
+
+} DPDK_2.1;
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread
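
A minimal caller of the new API could look like the sketch below (the port
id and the use of pool 0 are assumptions; the field names come from the
structures added above):

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the per-TC RX queue ranges and TX bandwidth shares of pool 0 */
static void
dump_dcb_info(uint8_t port_id)
{
	struct rte_eth_dcb_info info;
	uint8_t tc;

	if (rte_eth_dev_get_dcb_info(port_id, &info) != 0)
		return;
	for (tc = 0; tc < info.nb_tcs; tc++)
		printf("TC%u: rxq base %u, %u queue(s), tx bw %u%%\n",
		       tc,
		       info.tc_queue.tc_rxq[0][tc].base,
		       info.tc_queue.tc_rxq[0][tc].nb_queue,
		       info.tc_bws[tc]);
}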

* [dpdk-dev] [PATCH v3 6/9] ixgbe: get_dcb_info ops implement
  2015-10-31 15:57   ` [dpdk-dev] [PATCH v3 0/9] " Jingjing Wu
                       ` (4 preceding siblings ...)
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 5/9] ethdev: new API to get dcb related information Jingjing Wu
@ 2015-10-31 15:57     ` Jingjing Wu
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 7/9] i40e: " Jingjing Wu
                       ` (3 subsequent siblings)
  9 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-31 15:57 UTC (permalink / raw)
  To: dev

This patch implements the get_dcb_info ops in ixgbe driver.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 79 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index ece2e73..e9ce466 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -304,6 +304,8 @@ static int ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu);
 static int ixgbe_dev_set_mc_addr_list(struct rte_eth_dev *dev,
 				      struct ether_addr *mc_addr_set,
 				      uint32_t nb_mc_addr);
+static int ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
+				   struct rte_eth_dcb_info *dcb_info);
 
 static int ixgbe_get_reg_length(struct rte_eth_dev *dev);
 static int ixgbe_get_regs(struct rte_eth_dev *dev,
@@ -465,6 +467,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.get_eeprom_length    = ixgbe_get_eeprom_length,
 	.get_eeprom           = ixgbe_get_eeprom,
 	.set_eeprom           = ixgbe_set_eeprom,
+	.get_dcb_info         = ixgbe_dev_get_dcb_info,
 };
 
 /*
@@ -5734,6 +5737,82 @@ ixgbe_rss_update_sp(enum ixgbe_mac_type mac_type) {
 	}
 }
 
+static int
+ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
+			struct rte_eth_dcb_info *dcb_info)
+{
+	struct ixgbe_dcb_config *dcb_config =
+			IXGBE_DEV_PRIVATE_TO_DCB_CFG(dev->data->dev_private);
+	struct ixgbe_dcb_tc_config *tc;
+	uint8_t i, j;
+
+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
+	else
+		dcb_info->nb_tcs = 1;
+
+	if (dcb_config->vt_mode) { /* vt is enabled */
+		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
+				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
+		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
+		for (i = 0; i < vmdq_rx_conf->nb_queue_pools; i++) {
+			for (j = 0; j < dcb_info->nb_tcs; j++) {
+				dcb_info->tc_queue.tc_rxq[i][j].base =
+						i * dcb_info->nb_tcs + j;
+				dcb_info->tc_queue.tc_rxq[i][j].nb_queue = 1;
+				dcb_info->tc_queue.tc_txq[i][j].base =
+						i * dcb_info->nb_tcs + j;
+				dcb_info->tc_queue.tc_txq[i][j].nb_queue = 1;
+			}
+		}
+	} else { /* vt is disabled */
+		struct rte_eth_dcb_rx_conf *rx_conf =
+				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
+		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
+		if (dcb_info->nb_tcs == ETH_4_TCS) {
+			for (i = 0; i < dcb_info->nb_tcs; i++) {
+				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
+				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
+			}
+			dcb_info->tc_queue.tc_txq[0][0].base = 0;
+			dcb_info->tc_queue.tc_txq[0][1].base = 64;
+			dcb_info->tc_queue.tc_txq[0][2].base = 96;
+			dcb_info->tc_queue.tc_txq[0][3].base = 112;
+			dcb_info->tc_queue.tc_txq[0][0].nb_queue = 64;
+			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
+			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
+			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
+		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+			for (i = 0; i < dcb_info->nb_tcs; i++) {
+				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
+				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
+			}
+			dcb_info->tc_queue.tc_txq[0][0].base = 0;
+			dcb_info->tc_queue.tc_txq[0][1].base = 32;
+			dcb_info->tc_queue.tc_txq[0][2].base = 64;
+			dcb_info->tc_queue.tc_txq[0][3].base = 80;
+			dcb_info->tc_queue.tc_txq[0][4].base = 96;
+			dcb_info->tc_queue.tc_txq[0][5].base = 104;
+			dcb_info->tc_queue.tc_txq[0][6].base = 112;
+			dcb_info->tc_queue.tc_txq[0][7].base = 120;
+			dcb_info->tc_queue.tc_txq[0][0].nb_queue = 32;
+			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
+			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
+			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
+			dcb_info->tc_queue.tc_txq[0][4].nb_queue = 8;
+			dcb_info->tc_queue.tc_txq[0][5].nb_queue = 8;
+			dcb_info->tc_queue.tc_txq[0][6].nb_queue = 8;
+			dcb_info->tc_queue.tc_txq[0][7].nb_queue = 8;
+		}
+	}
+	for (i = 0; i < dcb_info->nb_tcs; i++) {
+		tc = &dcb_config->tc_config[i];
+		dcb_info->tc_bws[i] = tc->path[IXGBE_DCB_TX_CONFIG].bwg_percent;
+	}
+	return 0;
+}
 
 static struct rte_driver rte_ixgbe_driver = {
 	.type = PMD_PDEV,
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread
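
The fixed TX layouts reported above tile the 128 ixgbe queues exactly: for
8 TCs the bases are 0/32/64/80/96/104/112/120 with sizes 32/32/16/16/8/8/8/8,
which sum to 128 with no gaps. A small self-check sketch of that property:

/* Sketch: verify the 8-TC TX queue layout is contiguous over 128 queues */
static int
txq_layout_is_contiguous(void)
{
	static const uint8_t base[8] = {0, 32, 64, 80, 96, 104, 112, 120};
	static const uint8_t nb[8] = {32, 32, 16, 16, 8, 8, 8, 8};
	int i, total = 0;

	for (i = 0; i < 8; i++) {
		if (base[i] != total)
			return 0;	/* gap or overlap */
		total += nb[i];
	}
	return total == 128;	/* covers every queue exactly once */
}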

* [dpdk-dev] [PATCH v3 7/9] i40e: get_dcb_info ops implement
  2015-10-31 15:57   ` [dpdk-dev] [PATCH v3 0/9] " Jingjing Wu
                       ` (5 preceding siblings ...)
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 6/9] ixgbe: get_dcb_info ops implement Jingjing Wu
@ 2015-10-31 15:57     ` Jingjing Wu
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 8/9] app/testpmd: set up DCB forwarding based on traffic class Jingjing Wu
                       ` (2 subsequent siblings)
  9 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-31 15:57 UTC (permalink / raw)
  To: dev

This patch implements the get_dcb_info ops in i40e driver.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index ce4efb2..480dd57 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -247,6 +247,8 @@ static int i40e_dev_filter_ctrl(struct rte_eth_dev *dev,
 				enum rte_filter_type filter_type,
 				enum rte_filter_op filter_op,
 				void *arg);
+static int i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
+				  struct rte_eth_dcb_info *dcb_info);
 static void i40e_configure_registers(struct i40e_hw *hw);
 static void i40e_hw_init(struct i40e_hw *hw);
 static int i40e_config_qinq(struct i40e_hw *hw, struct i40e_vsi *vsi);
@@ -320,6 +322,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
 	.timesync_disable             = i40e_timesync_disable,
 	.timesync_read_rx_timestamp   = i40e_timesync_read_rx_timestamp,
 	.timesync_read_tx_timestamp   = i40e_timesync_read_tx_timestamp,
+	.get_dcb_info                 = i40e_dev_get_dcb_info,
 };
 
 static struct eth_driver rte_i40e_pmd = {
@@ -7016,3 +7019,42 @@ i40e_dcb_setup(struct rte_eth_dev *dev)
 	}
 	return 0;
 }
+
+static int
+i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
+		      struct rte_eth_dcb_info *dcb_info)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct i40e_vsi *vsi = pf->main_vsi;
+	struct i40e_dcbx_config *dcb_cfg = &hw->local_dcbx_config;
+	uint16_t bsf, tc_mapping;
+	int i;
+
+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+		dcb_info->nb_tcs = rte_bsf32(vsi->enabled_tc + 1);
+	else
+		dcb_info->nb_tcs = 1;
+	for (i = 0; i < I40E_MAX_USER_PRIORITY; i++)
+		dcb_info->prio_tc[i] = dcb_cfg->etscfg.prioritytable[i];
+	for (i = 0; i < dcb_info->nb_tcs; i++)
+		dcb_info->tc_bws[i] = dcb_cfg->etscfg.tcbwtable[i];
+
+	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+		if (vsi->enabled_tc & (1 << i)) {
+			tc_mapping = rte_le_to_cpu_16(vsi->info.tc_mapping[i]);
+			/* only the main vsi supports multiple TCs */
+			dcb_info->tc_queue.tc_rxq[0][i].base =
+				(tc_mapping & I40E_AQ_VSI_TC_QUE_OFFSET_MASK) >>
+				I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT;
+			dcb_info->tc_queue.tc_txq[0][i].base =
+				dcb_info->tc_queue.tc_rxq[0][i].base;
+			bsf = (tc_mapping & I40E_AQ_VSI_TC_QUE_NUMBER_MASK) >>
+				I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT;
+			dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 1 << bsf;
+			dcb_info->tc_queue.tc_txq[0][i].nb_queue =
+				dcb_info->tc_queue.tc_rxq[0][i].nb_queue;
+		}
+	}
+	return 0;
+}
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread
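
The decode used above packs, per TC, a queue offset and a power-of-two
queue count into one 16-bit tc_mapping word; a standalone sketch of the
decode (mask and shift names as used in the hunk):

/* Decode one VSI tc_mapping word into (base, nb_queue); the queue
 * count is stored as a power-of-two exponent (bsf) */
static void
decode_tc_mapping(uint16_t tc_mapping, uint16_t *base, uint16_t *nb_queue)
{
	uint16_t bsf;

	*base = (tc_mapping & I40E_AQ_VSI_TC_QUE_OFFSET_MASK) >>
		I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT;
	bsf = (tc_mapping & I40E_AQ_VSI_TC_QUE_NUMBER_MASK) >>
		I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT;
	*nb_queue = 1 << bsf;
}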

* [dpdk-dev] [PATCH v3 8/9] app/testpmd: set up DCB forwarding based on traffic class
  2015-10-31 15:57   ` [dpdk-dev] [PATCH v3 0/9] " Jingjing Wu
                       ` (6 preceding siblings ...)
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 7/9] i40e: " Jingjing Wu
@ 2015-10-31 15:57     ` Jingjing Wu
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 9/9] app/testpmd: add command to display DCB info Jingjing Wu
  2015-11-01 13:53     ` [dpdk-dev] [PATCH v3 0/9] enable DCB feature on Intel XL710/X710 NIC Thomas Monjalon
  9 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-31 15:57 UTC (permalink / raw)
  To: dev

This patch changes the testpmd DCB forwarding stream setup to be
based on traffic class.
It also fixes some coding style issues.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 app/test-pmd/cmdline.c |  39 +++++++-----
 app/test-pmd/config.c  | 159 +++++++++++++++++++++----------------------------
 app/test-pmd/testpmd.c | 151 +++++++++++++++++++++++++---------------------
 app/test-pmd/testpmd.h |  23 +------
 4 files changed, 176 insertions(+), 196 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index b3c36f3..0254628 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1999,37 +1999,46 @@ cmd_config_dcb_parsed(void *parsed_result,
                         __attribute__((unused)) void *data)
 {
 	struct cmd_config_dcb *res = parsed_result;
-	struct dcb_config dcb_conf;
 	portid_t port_id = res->port_id;
 	struct rte_port *port;
+	uint8_t pfc_en;
+	int ret;
 
 	port = &ports[port_id];
 	/** Check if the port is not started **/
 	if (port->port_status != RTE_PORT_STOPPED) {
-		printf("Please stop port %d first\n",port_id);
+		printf("Please stop port %d first\n", port_id);
 		return;
 	}
 
-	dcb_conf.num_tcs = (enum rte_eth_nb_tcs) res->num_tcs;
-	if ((dcb_conf.num_tcs != ETH_4_TCS) && (dcb_conf.num_tcs != ETH_8_TCS)){
-		printf("The invalid number of traffic class,only 4 or 8 allowed\n");
+	if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+		printf("Invalid number of traffic classes,"
+			" only 4 or 8 allowed.\n");
 		return;
 	}
 
-	/* DCB in VT mode */
-	if (!strncmp(res->vt_en, "on",2))
-		dcb_conf.dcb_mode = DCB_VT_ENABLED;
+	if (nb_fwd_lcores < res->num_tcs) {
+		printf("nb_cores shouldn't be less than the number of TCs.\n");
+		return;
+	}
+	if (!strncmp(res->pfc_en, "on", 2))
+		pfc_en = 1;
 	else
-		dcb_conf.dcb_mode = DCB_ENABLED;
+		pfc_en = 0;
 
-	if (!strncmp(res->pfc_en, "on",2)) {
-		dcb_conf.pfc_en = 1;
-	}
+	/* DCB in VT mode */
+	if (!strncmp(res->vt_en, "on", 2))
+		ret = init_port_dcb_config(port_id, DCB_VT_ENABLED,
+				(enum rte_eth_nb_tcs)res->num_tcs,
+				pfc_en);
 	else
-		dcb_conf.pfc_en = 0;
+		ret = init_port_dcb_config(port_id, DCB_ENABLED,
+				(enum rte_eth_nb_tcs)res->num_tcs,
+				pfc_en);
+
 
-	if (init_port_dcb_config(port_id,&dcb_conf) != 0) {
-		printf("Cannot initialize network ports\n");
+	if (ret != 0) {
+		printf("Cannot initialize network ports.\n");
 		return;
 	}
 
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 1ec6a77..ef87581 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1128,113 +1128,92 @@ rss_fwd_config_setup(void)
 	}
 }
 
-/*
- * In DCB and VT on,the mapping of 128 receive queues to 128 transmit queues.
- */
-static void
-dcb_rxq_2_txq_mapping(queueid_t rxq, queueid_t *txq)
-{
-	if(dcb_q_mapping == DCB_4_TCS_Q_MAPPING) {
-
-		if (rxq < 32)
-			/* tc0: 0-31 */
-			*txq = rxq;
-		else if (rxq < 64) {
-			/* tc1: 64-95 */
-			*txq =  (uint16_t)(rxq + 32);
-		}
-		else {
-			/* tc2: 96-111;tc3:112-127 */
-			*txq =  (uint16_t)(rxq/2 + 64);
-		}
-	}
-	else {
-		if (rxq < 16)
-			/* tc0 mapping*/
-			*txq = rxq;
-		else if (rxq < 32) {
-			/* tc1 mapping*/
-			 *txq = (uint16_t)(rxq + 16);
-		}
-		else if (rxq < 64) {
-			/*tc2,tc3 mapping */
-			*txq =  (uint16_t)(rxq + 32);
-		}
-		else {
-			/* tc4,tc5,tc6 and tc7 mapping */
-			*txq =  (uint16_t)(rxq/2 + 64);
-		}
-	}
-}
-
 /**
- * For the DCB forwarding test, each core is assigned on every port multi-transmit
- * queue.
+ * For the DCB forwarding test, each core is assigned to one traffic class.
  *
  * Each core is assigned a multi-stream, each stream being composed of
  * a RX queue to poll on a RX port for input messages, associated with
- * a TX queue of a TX port where to send forwarded packets.
- * All packets received on the RX queue of index "RxQj" of the RX port "RxPi"
- * are sent on the TX queue "TxQl" of the TX port "TxPk" according to the two
- * following rules:
- * In VT mode,
- *    - TxPk = (RxPi + 1) if RxPi is even, (RxPi - 1) if RxPi is odd
- *    - TxQl = RxQj
- * In non-VT mode,
- *    - TxPk = (RxPi + 1) if RxPi is even, (RxPi - 1) if RxPi is odd
- *    There is a mapping of RxQj to TxQl to be required,and the mapping was implemented
- *    in dcb_rxq_2_txq_mapping function.
+ * a TX queue of a TX port to which forwarded packets are sent. All RX and
+ * TX queues of a stream map to the same traffic class.
+ * If VMDQ and DCB co-exist, a given traffic class across the different
+ * pools is handled by the same core.
  */
 static void
 dcb_fwd_config_setup(void)
 {
-	portid_t   rxp;
-	portid_t   txp;
-	queueid_t  rxq;
-	queueid_t  nb_q;
+	struct rte_eth_dcb_info rxp_dcb_info, txp_dcb_info;
+	portid_t txp, rxp = 0;
+	queueid_t txq, rxq = 0;
 	lcoreid_t  lc_id;
-	uint16_t sm_id;
-
-	nb_q = nb_rxq;
+	uint16_t nb_rx_queue, nb_tx_queue;
+	uint16_t i, j, k, sm_id = 0;
+	uint8_t tc = 0;
 
 	cur_fwd_config.nb_fwd_lcores = (lcoreid_t) nb_fwd_lcores;
 	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
 	cur_fwd_config.nb_fwd_streams =
-		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
+		(streamid_t) (nb_rxq * cur_fwd_config.nb_fwd_ports);
 
 	/* reinitialize forwarding streams */
 	init_fwd_streams();
+	sm_id = 0;
+	if ((rxp & 0x1) == 0)
+		txp = (portid_t) (rxp + 1);
+	else
+		txp = (portid_t) (rxp - 1);
+	/* get the dcb info on the first RX and TX ports */
+	(void)rte_eth_dev_get_dcb_info(fwd_ports_ids[rxp], &rxp_dcb_info);
+	(void)rte_eth_dev_get_dcb_info(fwd_ports_ids[txp], &txp_dcb_info);
 
-	setup_fwd_config_of_each_lcore(&cur_fwd_config);
-	rxp = 0; rxq = 0;
 	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
-		/* a fwd core can run multi-streams */
-		for (sm_id = 0; sm_id < fwd_lcores[lc_id]->stream_nb; sm_id++)
-		{
-			struct fwd_stream *fs;
-			fs = fwd_streams[fwd_lcores[lc_id]->stream_idx + sm_id];
-			if ((rxp & 0x1) == 0)
-				txp = (portid_t) (rxp + 1);
-			else
-				txp = (portid_t) (rxp - 1);
-			fs->rx_port = fwd_ports_ids[rxp];
-			fs->rx_queue = rxq;
-			fs->tx_port = fwd_ports_ids[txp];
-			if (dcb_q_mapping == DCB_VT_Q_MAPPING)
-				fs->tx_queue = rxq;
-			else
-				dcb_rxq_2_txq_mapping(rxq, &fs->tx_queue);
-			fs->peer_addr = fs->tx_port;
-			rxq = (queueid_t) (rxq + 1);
-			if (rxq < nb_q)
-				continue;
-			rxq = 0;
-			if (numa_support && (nb_fwd_ports <= (nb_ports >> 1)))
-				rxp = (portid_t)
-					(rxp + ((nb_ports >> 1) / nb_fwd_ports));
-			else
-				rxp = (portid_t) (rxp + 1);
+		fwd_lcores[lc_id]->stream_nb = 0;
+		fwd_lcores[lc_id]->stream_idx = sm_id;
+		for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+			/* if nb_queue is zero, this TC is not
+			 * enabled on the pool
+			 */
+			if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0)
+				break;
+			k = fwd_lcores[lc_id]->stream_nb +
+				fwd_lcores[lc_id]->stream_idx;
+			rxq = rxp_dcb_info.tc_queue.tc_rxq[i][tc].base;
+			txq = txp_dcb_info.tc_queue.tc_txq[i][tc].base;
+			nb_rx_queue = txp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue;
+			nb_tx_queue = txp_dcb_info.tc_queue.tc_txq[i][tc].nb_queue;
+			for (j = 0; j < nb_rx_queue; j++) {
+				struct fwd_stream *fs;
+
+				fs = fwd_streams[k + j];
+				fs->rx_port = fwd_ports_ids[rxp];
+				fs->rx_queue = rxq + j;
+				fs->tx_port = fwd_ports_ids[txp];
+				fs->tx_queue = txq + j % nb_tx_queue;
+				fs->peer_addr = fs->tx_port;
+			}
+			fwd_lcores[lc_id]->stream_nb +=
+				rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue;
 		}
+		sm_id = (streamid_t) (sm_id + fwd_lcores[lc_id]->stream_nb);
+
+		tc++;
+		if (tc < rxp_dcb_info.nb_tcs)
+			continue;
+		/* Restart from TC 0 on next RX port */
+		tc = 0;
+		if (numa_support && (nb_fwd_ports <= (nb_ports >> 1)))
+			rxp = (portid_t)
+				(rxp + ((nb_ports >> 1) / nb_fwd_ports));
+		else
+			rxp++;
+		if (rxp >= nb_fwd_ports)
+			return;
+		/* get the dcb information on next RX and TX ports */
+		if ((rxp & 0x1) == 0)
+			txp = (portid_t) (rxp + 1);
+		else
+			txp = (portid_t) (rxp - 1);
+		rte_eth_dev_get_dcb_info(fwd_ports_ids[rxp], &rxp_dcb_info);
+		rte_eth_dev_get_dcb_info(fwd_ports_ids[txp], &txp_dcb_info);
 	}
 }
 
@@ -1354,10 +1333,6 @@ pkt_fwd_config_display(struct fwd_config *cfg)
 void
 fwd_config_display(void)
 {
-	if((dcb_config) && (nb_fwd_lcores == 1)) {
-		printf("In DCB mode,the nb forwarding cores should be larger than 1\n");
-		return;
-	}
 	fwd_config_setup();
 	pkt_fwd_config_display(&cur_fwd_config);
 }
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4c6aec6..2e302bb 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -182,9 +182,6 @@ uint8_t dcb_config = 0;
 /* Whether the dcb is in testing status */
 uint8_t dcb_test = 0;
 
-/* DCB on and VT on mapping is default */
-enum dcb_queue_mapping_mode dcb_q_mapping = DCB_VT_Q_MAPPING;
-
 /*
  * Configurable number of RX/TX queues.
  */
@@ -1852,115 +1849,131 @@ const uint16_t vlan_tags[] = {
 };
 
 static  int
-get_eth_dcb_conf(struct rte_eth_conf *eth_conf, struct dcb_config *dcb_conf)
+get_eth_dcb_conf(struct rte_eth_conf *eth_conf,
+		 enum dcb_mode_enable dcb_mode,
+		 enum rte_eth_nb_tcs num_tcs,
+		 uint8_t pfc_en)
 {
-        uint8_t i;
+	uint8_t i;
 
 	/*
 	 * Builds up the correct configuration for dcb+vt based on the vlan tags array
 	 * given above, and the number of traffic classes available for use.
 	 */
-	if (dcb_conf->dcb_mode == DCB_VT_ENABLED) {
-		struct rte_eth_vmdq_dcb_conf vmdq_rx_conf;
-		struct rte_eth_vmdq_dcb_tx_conf vmdq_tx_conf;
+	if (dcb_mode == DCB_VT_ENABLED) {
+		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
+				&eth_conf->rx_adv_conf.vmdq_dcb_conf;
+		struct rte_eth_vmdq_dcb_tx_conf *vmdq_tx_conf =
+				&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 
 		/* VMDQ+DCB RX and TX configurations */
-		vmdq_rx_conf.enable_default_pool = 0;
-		vmdq_rx_conf.default_pool = 0;
-		vmdq_rx_conf.nb_queue_pools =
-			(dcb_conf->num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
-		vmdq_tx_conf.nb_queue_pools =
-			(dcb_conf->num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
-
-		vmdq_rx_conf.nb_pool_maps = sizeof( vlan_tags )/sizeof( vlan_tags[ 0 ]);
-		for (i = 0; i < vmdq_rx_conf.nb_pool_maps; i++) {
-			vmdq_rx_conf.pool_map[i].vlan_id = vlan_tags[ i ];
-			vmdq_rx_conf.pool_map[i].pools = 1 << (i % vmdq_rx_conf.nb_queue_pools);
+		vmdq_rx_conf->enable_default_pool = 0;
+		vmdq_rx_conf->default_pool = 0;
+		vmdq_rx_conf->nb_queue_pools =
+			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+		vmdq_tx_conf->nb_queue_pools =
+			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+
+		vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
+		for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
+			vmdq_rx_conf->pool_map[i].vlan_id = vlan_tags[i];
+			vmdq_rx_conf->pool_map[i].pools =
+				1 << (i % vmdq_rx_conf->nb_queue_pools);
 		}
 		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
-			vmdq_rx_conf.dcb_tc[i] = i;
-			vmdq_tx_conf.dcb_tc[i] = i;
+			vmdq_rx_conf->dcb_tc[i] = i;
+			vmdq_tx_conf->dcb_tc[i] = i;
 		}
 
-		/*set DCB mode of RX and TX of multiple queues*/
+		/* set DCB mode of RX and TX of multiple queues */
 		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB;
 		eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
-		if (dcb_conf->pfc_en)
-			eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT|ETH_DCB_PFC_SUPPORT;
-		else
-			eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
-
-		(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_dcb_conf, &vmdq_rx_conf,
-                                sizeof(struct rte_eth_vmdq_dcb_conf)));
-		(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &vmdq_tx_conf,
-                                sizeof(struct rte_eth_vmdq_dcb_tx_conf)));
-	}
-	else {
-		struct rte_eth_dcb_rx_conf rx_conf;
-		struct rte_eth_dcb_tx_conf tx_conf;
-
-		/* queue mapping configuration of DCB RX and TX */
-		if (dcb_conf->num_tcs == ETH_4_TCS)
-			dcb_q_mapping = DCB_4_TCS_Q_MAPPING;
-		else
-			dcb_q_mapping = DCB_8_TCS_Q_MAPPING;
-
-		rx_conf.nb_tcs = dcb_conf->num_tcs;
-		tx_conf.nb_tcs = dcb_conf->num_tcs;
-
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
-			rx_conf.dcb_tc[i] = i;
-			tx_conf.dcb_tc[i] = i;
+	} else {
+		struct rte_eth_dcb_rx_conf *rx_conf =
+				&eth_conf->rx_adv_conf.dcb_rx_conf;
+		struct rte_eth_dcb_tx_conf *tx_conf =
+				&eth_conf->tx_adv_conf.dcb_tx_conf;
+
+		rx_conf->nb_tcs = num_tcs;
+		tx_conf->nb_tcs = num_tcs;
+
+		for (i = 0; i < num_tcs; i++) {
+			rx_conf->dcb_tc[i] = i;
+			tx_conf->dcb_tc[i] = i;
 		}
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_DCB;
+		eth_conf->rxmode.mq_mode = ETH_MQ_RX_DCB_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = rss_hf;
 		eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
-		if (dcb_conf->pfc_en)
-			eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT|ETH_DCB_PFC_SUPPORT;
-		else
-			eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
-
-		(void)(rte_memcpy(&eth_conf->rx_adv_conf.dcb_rx_conf, &rx_conf,
-                                sizeof(struct rte_eth_dcb_rx_conf)));
-		(void)(rte_memcpy(&eth_conf->tx_adv_conf.dcb_tx_conf, &tx_conf,
-                                sizeof(struct rte_eth_dcb_tx_conf)));
 	}
 
+	if (pfc_en)
+		eth_conf->dcb_capability_en =
+				ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+	else
+		eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+
 	return 0;
 }
 
 int
-init_port_dcb_config(portid_t pid,struct dcb_config *dcb_conf)
+init_port_dcb_config(portid_t pid,
+		     enum dcb_mode_enable dcb_mode,
+		     enum rte_eth_nb_tcs num_tcs,
+		     uint8_t pfc_en)
 {
 	struct rte_eth_conf port_conf;
+	struct rte_eth_dev_info dev_info;
 	struct rte_port *rte_port;
 	int retval;
-	uint16_t nb_vlan;
 	uint16_t i;
 
-	/* rxq and txq configuration in dcb mode */
-	nb_rxq = 128;
-	nb_txq = 128;
+	rte_eth_dev_info_get(pid, &dev_info);
+
+	/* If dev_info.vmdq_pool_base is greater than 0,
+	 * the queue ids of the VMDQ pools start after the PF queues.
+	 */
+	if (dcb_mode == DCB_VT_ENABLED && dev_info.vmdq_pool_base > 0) {
+		printf("VMDQ_DCB multi-queue mode is nonsensical"
+			" for port %d.", pid);
+		return -1;
+	}
+
+	/* Assume the ports in testpmd have the same dcb capability
+	 * and have the same number of rxq and txq in DCB mode
+	 */
+	if (dcb_mode == DCB_VT_ENABLED) {
+		nb_rxq = dev_info.max_rx_queues;
+		nb_txq = dev_info.max_tx_queues;
+	} else {
+		/* if VT is disabled, use all PF queues */
+		if (dev_info.vmdq_pool_base == 0) {
+			nb_rxq = dev_info.max_rx_queues;
+			nb_txq = dev_info.max_tx_queues;
+		} else {
+			nb_rxq = (queueid_t)num_tcs;
+			nb_txq = (queueid_t)num_tcs;
+
+		}
+	}
 	rx_free_thresh = 64;
 
-	memset(&port_conf,0,sizeof(struct rte_eth_conf));
+	memset(&port_conf, 0, sizeof(struct rte_eth_conf));
 	/* Enter DCB configuration status */
 	dcb_config = 1;
 
-	nb_vlan = sizeof( vlan_tags )/sizeof( vlan_tags[ 0 ]);
 	/*set configuration of DCB in vt mode and DCB in non-vt mode*/
-	retval = get_eth_dcb_conf(&port_conf, dcb_conf);
+	retval = get_eth_dcb_conf(&port_conf, dcb_mode, num_tcs, pfc_en);
 	if (retval < 0)
 		return retval;
 
 	rte_port = &ports[pid];
-	memcpy(&rte_port->dev_conf, &port_conf,sizeof(struct rte_eth_conf));
+	memcpy(&rte_port->dev_conf, &port_conf, sizeof(struct rte_eth_conf));
 
 	rxtx_port_config(rte_port);
 	/* VLAN filter */
 	rte_port->dev_conf.rxmode.hw_vlan_filter = 1;
-	for (i = 0; i < nb_vlan; i++){
+	for (i = 0; i < RTE_DIM(vlan_tags); i++)
 		rx_vft_set(pid, vlan_tags[i], 1);
-	}
 
 	rte_eth_macaddr_get(pid, &rte_port->eth_addr);
 	map_port_queue_stats_mapping_registers(pid, rte_port);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f925df7..3661755 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -255,25 +255,6 @@ enum dcb_mode_enable
 	DCB_ENABLED
 };
 
-/*
- * DCB general config info
- */
-struct dcb_config {
-	enum dcb_mode_enable dcb_mode;
-	uint8_t vt_en;
-	enum rte_eth_nb_tcs num_tcs;
-	uint8_t pfc_en;
-};
-
-/*
- * In DCB io FWD mode, 128 RX queue to 128 TX queue mapping
- */
-enum dcb_queue_mapping_mode {
-	DCB_VT_Q_MAPPING = 0,
-	DCB_4_TCS_Q_MAPPING,
-	DCB_8_TCS_Q_MAPPING
-};
-
 #define MAX_TX_QUEUE_STATS_MAPPINGS 1024 /* MAX_PORT of 32 @ 32 tx_queues/port */
 #define MAX_RX_QUEUE_STATS_MAPPINGS 4096 /* MAX_PORT of 32 @ 128 rx_queues/port */
 
@@ -536,7 +517,9 @@ void dev_set_link_down(portid_t pid);
 void init_port_config(void);
 void set_port_slave_flag(portid_t slave_pid);
 void clear_port_slave_flag(portid_t slave_pid);
-int init_port_dcb_config(portid_t pid,struct dcb_config *dcb_conf);
+int init_port_dcb_config(portid_t pid, enum dcb_mode_enable dcb_mode,
+		     enum rte_eth_nb_tcs num_tcs,
+		     uint8_t pfc_en);
 int start_port(portid_t pid);
 void stop_port(portid_t pid);
 void close_port(portid_t pid);
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [dpdk-dev] [PATCH v3 9/9] app/testpmd: add command to display DCB info
  2015-10-31 15:57   ` [dpdk-dev] [PATCH v3 0/9] " Jingjing Wu
                       ` (7 preceding siblings ...)
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 8/9] app/testpmd: set up DCB forwarding based on traffic class Jingjing Wu
@ 2015-10-31 15:57     ` Jingjing Wu
  2015-11-01 13:53     ` [dpdk-dev] [PATCH v3 0/9] enable DCB feature on Intel XL710/X710 NIC Thomas Monjalon
  9 siblings, 0 replies; 40+ messages in thread
From: Jingjing Wu @ 2015-10-31 15:57 UTC (permalink / raw)
  To: dev

This patch adds a command to display the DCB information of ports.
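
For example, on a port configured with 4 TCs and 16 queues per TC
(all figures below are illustrative):

	testpmd> show port dcb_tc 0

	  ================ DCB info for port 0   ================
	  TC NUMBER: 4

	  TC :           0       1       2       3
	  Priority :     0       1       2       3
	  BW percent :  25%     25%     25%     25%
	  RXQ base :     0      16      32      48
	  RXQ number :  16      16      16      16
	  TXQ base :     0      16      32      48
	  TXQ number :  16      16      16      16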

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 app/test-pmd/cmdline.c                      | 15 ++++++----
 app/test-pmd/config.c                       | 43 +++++++++++++++++++++++++++++
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 12 ++++----
 4 files changed, 61 insertions(+), 10 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 0254628..410f149 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -182,7 +182,7 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"Display:\n"
 			"--------\n\n"
 
-			"show port (info|stats|xstats|fdir|stat_qmap) (port_id|all)\n"
+			"show port (info|stats|xstats|fdir|stat_qmap|dcb_tc) (port_id|all)\n"
 			"    Display information for port_id, or all.\n\n"
 
 			"show port X rss reta (size) (mask0,mask1,...)\n"
@@ -5235,6 +5235,9 @@ static void cmd_showportall_parsed(void *parsed_result,
 	else if (!strcmp(res->what, "stat_qmap"))
 		FOREACH_PORT(i, ports)
 			nic_stats_mapping_display(i);
+	else if (!strcmp(res->what, "dcb_tc"))
+		FOREACH_PORT(i, ports)
+			port_dcb_info_display(i);
 }
 
 cmdline_parse_token_string_t cmd_showportall_show =
@@ -5244,13 +5247,13 @@ cmdline_parse_token_string_t cmd_showportall_port =
 	TOKEN_STRING_INITIALIZER(struct cmd_showportall_result, port, "port");
 cmdline_parse_token_string_t cmd_showportall_what =
 	TOKEN_STRING_INITIALIZER(struct cmd_showportall_result, what,
-				 "info#stats#xstats#fdir#stat_qmap");
+				 "info#stats#xstats#fdir#stat_qmap#dcb_tc");
 cmdline_parse_token_string_t cmd_showportall_all =
 	TOKEN_STRING_INITIALIZER(struct cmd_showportall_result, all, "all");
 cmdline_parse_inst_t cmd_showportall = {
 	.f = cmd_showportall_parsed,
 	.data = NULL,
-	.help_str = "show|clear port info|stats|xstats|fdir|stat_qmap all",
+	.help_str = "show|clear port info|stats|xstats|fdir|stat_qmap|dcb_tc all",
 	.tokens = {
 		(void *)&cmd_showportall_show,
 		(void *)&cmd_showportall_port,
@@ -5288,6 +5291,8 @@ static void cmd_showport_parsed(void *parsed_result,
 		 fdir_get_infos(res->portnum);
 	else if (!strcmp(res->what, "stat_qmap"))
 		nic_stats_mapping_display(res->portnum);
+	else if (!strcmp(res->what, "dcb_tc"))
+		port_dcb_info_display(res->portnum);
 }
 
 cmdline_parse_token_string_t cmd_showport_show =
@@ -5297,14 +5302,14 @@ cmdline_parse_token_string_t cmd_showport_port =
 	TOKEN_STRING_INITIALIZER(struct cmd_showport_result, port, "port");
 cmdline_parse_token_string_t cmd_showport_what =
 	TOKEN_STRING_INITIALIZER(struct cmd_showport_result, what,
-				 "info#stats#xstats#fdir#stat_qmap");
+				 "info#stats#xstats#fdir#stat_qmap#dcb_tc");
 cmdline_parse_token_num_t cmd_showport_portnum =
 	TOKEN_NUM_INITIALIZER(struct cmd_showport_result, portnum, UINT8);
 
 cmdline_parse_inst_t cmd_showport = {
 	.f = cmd_showport_parsed,
 	.data = NULL,
-	.help_str = "show|clear port info|stats|xstats|fdir|stat_qmap X (X = port number)",
+	.help_str = "show|clear port info|stats|xstats|fdir|stat_qmap|dcb_tc X (X = port number)",
 	.tokens = {
 		(void *)&cmd_showport_show,
 		(void *)&cmd_showport_port,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ef87581..1b0d5d5 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2285,3 +2285,46 @@ mcast_addr_remove(uint8_t port_id, struct ether_addr *mc_addr)
 	mcast_addr_pool_remove(port, i);
 	eth_port_multicast_addr_list_set(port_id);
 }
+
+void
+port_dcb_info_display(uint8_t port_id)
+{
+	struct rte_eth_dcb_info dcb_info;
+	uint16_t i;
+	int ret;
+	static const char *border = "================";
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN))
+		return;
+
+	ret = rte_eth_dev_get_dcb_info(port_id, &dcb_info);
+	if (ret) {
+		printf("\n Failed to get dcb infos on port %-2d\n",
+			port_id);
+		return;
+	}
+	printf("\n  %s DCB infos for port %-2d  %s\n", border, port_id, border);
+	printf("  TC NUMBER: %d\n", dcb_info.nb_tcs);
+	printf("\n  TC :        ");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d", i);
+	printf("\n  Priority :  ");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d", dcb_info.prio_tc[i]);
+	printf("\n  BW percent :");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d%%", dcb_info.tc_bws[i]);
+	printf("\n  RXQ base :  ");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d", dcb_info.tc_queue.tc_rxq[0][i].base);
+	printf("\n  RXQ number :");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d", dcb_info.tc_queue.tc_rxq[0][i].nb_queue);
+	printf("\n  TXQ base :  ");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d", dcb_info.tc_queue.tc_txq[0][i].base);
+	printf("\n  TXQ number :");
+	for (i = 0; i < dcb_info.nb_tcs; i++)
+		printf("\t%4d", dcb_info.tc_queue.tc_txq[0][i].nb_queue);
+	printf("\n");
+}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 3661755..ecb411d 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -557,6 +557,7 @@ int tx_queue_id_is_invalid(queueid_t txq_id);
 /* Functions to manage the set of filtered Multicast MAC addresses */
 void mcast_addr_add(uint8_t port_id, struct ether_addr *mc_addr);
 void mcast_addr_remove(uint8_t port_id, struct ether_addr *mc_addr);
+void port_dcb_info_display(uint8_t port_id);
 
 enum print_warning {
 	ENABLED_WARN = 0,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 71d831b..b7659d0 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -50,10 +50,10 @@ If you type a partial command and hit ``<TAB>`` you get a list of the available
 
    testpmd> show port <TAB>
 
-       info [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap X
-       info [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap all
-       stats [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap X
-       stats [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap all
+       info [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap|dcb_tc X
+       info [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap|dcb_tc all
+       stats [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap|dcb_tc X
+       stats [Mul-choice STRING]: show|clear port info|stats|fdir|stat_qmap|dcb_tc all
        ...
 
 
@@ -128,7 +128,7 @@ show port
 
 Display information for a given port or all ports::
 
-   testpmd> show port (info|stats|fdir|stat_qmap) (port_id|all)
+   testpmd> show port (info|stats|fdir|stat_qmap|dcb_tc) (port_id|all)
 
 The available information categories are:
 
@@ -140,6 +140,8 @@ The available information categories are:
 
 * ``stat_qmap``: Queue statistics mapping.
 
+* ``dcb_tc``: DCB information such as TC mapping.
+
 For example:
 
 .. code-block:: console
-- 
2.4.0

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/9] enable DCB feature on Intel XL710/X710 NIC
  2015-10-31 15:57   ` [dpdk-dev] [PATCH v3 0/9] " Jingjing Wu
                       ` (8 preceding siblings ...)
  2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 9/9] app/testpmd: add command to display DCB info Jingjing Wu
@ 2015-11-01 13:53     ` Thomas Monjalon
  9 siblings, 0 replies; 40+ messages in thread
From: Thomas Monjalon @ 2015-11-01 13:53 UTC (permalink / raw)
  To: Jingjing Wu; +Cc: dev

2015-10-31 23:57, Jingjing Wu:
> The patch set enables DCB feature on Intel XL710/X710 NICs, including:
>   - Receive queue classification based on traffic class
>   - Round Robin ETS schedule (rx and tx).
>   - Priority flow control
> To make the testpmd and ethdev lib more generic on DCB feature, this
> patch set also
>   - adds a new API to get DCB related information on NICs.
>   - changes the DCB test forwarding in testpmd to be on traffic class.
>   - move specific validation from lib and application to drivers.
> Additionally, this patch set also corrects some coding style issues.
> 
> v2 changes:
>  - add a command in testpmd to display dcb info
>  - update testpmd guide and release note
> 
> v3 changes:
>  - add API change in release note
>  - add new function in rte_ether_version.map
>  - rebase doc update to the same commit with code change

Applied with acks from previous version, thanks.
Please do not hesitate to keep acks when changes are minor.

^ permalink raw reply	[flat|nested] 40+ messages in thread

Thread overview: 40+ messages
2015-09-24  6:03 [dpdk-dev] [PATCH 0/8] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
2015-09-24  6:03 ` [dpdk-dev] [PATCH 1/8] ethdev: rename dcb_queue to dcb_tc in dcb config struct Jingjing Wu
2015-09-24  6:03 ` [dpdk-dev] [PATCH 2/8] ethdev: move the multi-queue checking to specific drivers Jingjing Wu
2015-09-24  6:03 ` [dpdk-dev] [PATCH 3/8] i40e: enable DCB feature on FVL Jingjing Wu
2015-09-24  6:03 ` [dpdk-dev] [PATCH 4/8] ixgbe: enable DCB+RSS multi-queue mode Jingjing Wu
2015-09-24  6:03 ` [dpdk-dev] [PATCH 5/8] ethdev: new API to get dcb related information Jingjing Wu
2015-09-24  6:03 ` [dpdk-dev] [PATCH 6/8] ixgbe: get_dcb_info ops implement Jingjing Wu
2015-09-24  6:03 ` [dpdk-dev] [PATCH 7/8] i40e: " Jingjing Wu
2015-10-22  7:10   ` Liu, Jijiang
2015-10-26  7:38     ` Wu, Jingjing
2015-09-24  6:03 ` [dpdk-dev] [PATCH 8/8] app/testpmd: set up DCB forwarding based on traffic class Jingjing Wu
2015-10-28  1:46   ` Liu, Jijiang
2015-10-28  2:04     ` Wu, Jingjing
2015-10-29  8:53 ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Jingjing Wu
2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 01/10] ethdev: rename dcb_queue to dcb_tc in dcb config struct Jingjing Wu
2015-10-30 10:22     ` Thomas Monjalon
2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 02/10] ethdev: move the multi-queue checking to specific drivers Jingjing Wu
2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 03/10] i40e: enable DCB feature on FVL Jingjing Wu
2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 04/10] ixgbe: enable DCB+RSS multi-queue mode Jingjing Wu
2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 05/10] ethdev: new API to get dcb related information Jingjing Wu
2015-10-30 11:16     ` Thomas Monjalon
2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 06/10] ixgbe: get_dcb_info ops implement Jingjing Wu
2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 07/10] i40e: " Jingjing Wu
2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 08/10] app/testpmd: set up DCB forwarding based on traffic class Jingjing Wu
2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 09/10] app/testpmd: add command to display DCB info Jingjing Wu
2015-10-29  8:53   ` [dpdk-dev] [PATCH v2 10/10] doc: update testpmd guide and release note Jingjing Wu
2015-10-30 10:26     ` Thomas Monjalon
2015-10-30  1:29   ` [dpdk-dev] [PATCH v2 00/10] enable DCB feature on Intel XL710/X710 NIC Liu, Jijiang
2015-10-30  2:21   ` Zhang, Helin
2015-10-31 15:57   ` [dpdk-dev] [PATCH v3 0/9] " Jingjing Wu
2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 1/9] ethdev: rename dcb_queue to dcb_tc in dcb config struct Jingjing Wu
2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 2/9] ethdev: move the multi-queue checking to specific drivers Jingjing Wu
2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 3/9] i40e: enable DCB feature on FVL Jingjing Wu
2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 4/9] ixgbe: enable DCB+RSS multi-queue mode Jingjing Wu
2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 5/9] ethdev: new API to get dcb related information Jingjing Wu
2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 6/9] ixgbe: get_dcb_info ops implement Jingjing Wu
2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 7/9] i40e: " Jingjing Wu
2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 8/9] app/testpmd: set up DCB forwarding based on traffic class Jingjing Wu
2015-10-31 15:57     ` [dpdk-dev] [PATCH v3 9/9] app/testpmd: add command to display DCB info Jingjing Wu
2015-11-01 13:53     ` [dpdk-dev] [PATCH v3 0/9] enable DCB feature on Intel XL710/X710 NIC Thomas Monjalon
